Re: [openstack-dev] [nova][heat][keystone] RFC: introducing request identification

2013-11-27 Thread haruka tanizawa
Hi all,
I've summarized the mails in this thread so far.


How to implement request identification?
===
1. Enable the user to provide his/her own request identification within the API
request.
   e.g. using instance-tasks-api
2. Use the correlation_id in oslo as the request identification.
3. Enable keystone to generate the request identification (we could call it
'request-token', for example).
   Similar to how Keystone identifies a 'User'.
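
As a rough sketch of option 1 (the header name and client code are illustrative
assumptions, not an agreed API), the client would generate its own identifier and
resend the same value on a retry so the server side can detect duplicates:

# Hedged sketch of option 1: the client supplies its own request id and
# reuses it on retry. The 'X-Client-Request-Id' header name is hypothetical.
import uuid

import requests

def create_server(endpoint, token, body, request_id=None):
    # Reusing the same request_id on a retry lets the API layer treat the
    # second call as a duplicate instead of creating a second instance.
    request_id = request_id or str(uuid.uuid4())
    headers = {
        'X-Auth-Token': token,
        'X-Client-Request-Id': request_id,  # hypothetical header
    }
    resp = requests.post('%s/servers' % endpoint, json=body, headers=headers)
    return request_id, resp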

Feedback
===
There are already these blueprints.
* cross-service-request-id
* delegation-workplans
* instance-tasks-api

My conclusion
===
However, the existing blueprints fail to resolve the problem I want to solve,
which has these requirements:
1. Cross-component
2. The user can know the identifier before sending the API request
3. The user can see how their create request is going

* cross-service-request-id
- Misses 'the user can know the identifier before sending the API request'.
- Or am I missing something? Is there a plan for this?
* delegation-workplans
- Misses 'the user can see how their create request is going'.
- Does it return the state of 'requests'?
* instance-tasks-api
- Misses 'the user can know the identifier before sending the API request'.
- How do we know the task_id if nova-api goes down before we get the task_id?

So I think the 'Idempotency for OpenStack API' blueprint [0] is still valid for
these requirements.
(Yes, I think the word 'Idempotency' is appropriate...)

If you have any thoughts on this, please share them with me.


Sincerely, Haruka Tanizawa

[0] https://blueprints.launchpad.net/nova/+spec/idempotentcy-client-token


2013/11/27 Takahiro Shida shida.takah...@gmail.com

 Hi all,

 I'm also interested in this issue.

  Create a unified request identifier
  https://blueprints.launchpad.net/nova/+spec/cross-service-request-id

 I checked this BP and the following review.
 https://review.openstack.org/#/c/29480/

 There are many comments. In the end, this review looks to have been rejected
 because a user-specified correlation-id was considered useless and insecure.


  3. Enable keystone to generate request identification (we can call it
 'request-token', for example).
  -2

 So, this idea would be helpful in solving the cross-service-request-id
 problem, because the correlation-id would be specified by keystone.

 What do the nova and keystone folks think?





[openstack-dev] [ceilometer][horizon] The meaning of Network Duration

2013-11-27 Thread Ying Chun Guo

Hello,

While translating the Horizon web UI, I'm a little confused by Network
Duration,
Port Duration, and Router Duration in the Resources Usage statistics
table.

What does Duration mean here?
If I translate it literally as "duration", my customers won't
understand it.
Is it equivalent to usage time?

Regards
Ying Chun Guo (Daisy)


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Thomas Spatzier
Excerpts from Tim Schnell's message on 27.11.2013 00:44:04:
 From: Tim Schnell tim.schn...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 27.11.2013 00:47
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap



snip

 That is not the use case that I'm attempting to make; let me try again.
 For what it's worth, I agree that in the use case where I want a mechanism to
 tag particular versions of templates, your solution makes sense and will
 probably be necessary as the requirements for the template catalog start
 to become defined.

 What I am attempting to explain is actually much simpler than that. There
 are 2 times in a UI that I would be interested in the keywords of the
 template. When I am initially browsing the catalog to create a new stack,
 I expect the stacks to be searchable and/or organized by these keywords
 AND when I am viewing the stack-list page I should be able to sort my
 existing stacks by keywords.

 In this second case I am suggesting that the end-user, not the Template
 Catalog Moderator, should have control over the keywords that are defined
 in his instantiated stack. So if he does a Stack Update, he is not
 committing a new version of the template back to a git repository, he is
 just updating the definition of the stack. If the user decides that the
 original template defined the keyword as "wordpress" and he wants to
 revise the keyword to "tims wordpress" then he can do that without the
 original template moderator knowing or caring about it.

 This could be useful to an end-user whose business is managing client
 stacks on one account, maybe. So he could have "tims wordpress", "tims
 drupal", "angus wordpress", "angus drupal"; the way that he updates the
 keywords after the stack has been instantiated is up to him. Then he can
 sort or search his stacks on his own custom keywords.


For me this all sounds like really user-specific tagging, so something that
should really be done outside the template file itself, in the template
catalog service. The use case seems to be about a role that organizes templates
(or later stacks) by some means, which is fine, but then everything is a
decision of the person organizing the templates, and not necessarily a
decision of the template author. So it would be cleaner to keep this
tagging outside the template IMO.

 I agree that the template versioning is a separate use case.

 Tim
 
 


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Thomas Spatzier
Excerpts from Steve Baker's message on 26.11.2013 23:29:06:

 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 26.11.2013 23:32
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap

 On 11/27/2013 11:02 AM, Christopher Armstrong wrote:
 On Tue, Nov 26, 2013 at 3:24 PM, Tim Schnell tim.schn...@rackspace.com
  wrote:

snip

 Good point, I'd like to revise my previous parameter-groups example:
 parameter-groups:
 - name: db
   description: Database configuration options
   parameters: [db_name, db_username, db_password]
 - name: web_node
   description: Web server configuration
   parameters: [web_node_name, keypair]
 parameters:
   # as above, but without requiring any order or group attributes

+1 on this approach. Very clean, and it gives you all the information you would
need for the UI use case.
And as you say, does not have any impact on the current way parameters are
processed by Heat.


 Here, parameter-groups is a list which implies the order, and
 parameters are specified in the group as a list, so we get the order
 from that too. This means a new parameter-groups section which
 contains anything required to build a good parameters form, and no
 modifications required to the parameters section at all.
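
As a minimal sketch of the UI side (assuming the template is parsed into a Python
dict, e.g. with PyYAML; the helper name is illustrative), the ordered form fields
fall straight out of the two lists:

# Hedged sketch: derive an ordered list of (group, parameter, definition)
# tuples from a parameter-groups section like the example above.
import yaml

def ordered_form_fields(template_yaml):
    tpl = yaml.safe_load(template_yaml)
    fields = []
    for group in tpl.get('parameter-groups', []):
        for name in group.get('parameters', []):
            # The parameter definitions themselves stay in 'parameters',
            # untouched by the grouping.
            definition = tpl.get('parameters', {}).get(name, {})
            fields.append((group['name'], name, definition))
    return fields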



Re: [openstack-dev] tenant or project

2013-11-27 Thread Thierry Carrez
Tim Bell wrote:
 Can we get a TC policy that 'project' is the standard and that all projects 
 using 'tenant' should plan a smooth migration path to 'project', along with the 
 timescales for implementation and retirement of 'tenant'?

Feel free to propose such policy for TC review. Rules of engagement at:

https://wiki.openstack.org/wiki/Governance/TechnicalCommittee#Presenting_a_motion_before_the_TC

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] Proposal to re-add Dan Prince to nova-core

2013-11-27 Thread Bob Ball
 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 26 November 2013 19:33
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Nova] Proposal to re-add Dan Prince to nova-core
 
 I would like to propose that we re-add Dan Prince to the nova-core
 review team.

+1



[openstack-dev] [Neutron][LBaaS] Thursday subteam meeting

2013-11-27 Thread Eugene Nikanorov
Hi Neutron folks,

The LBaaS subteam meeting will be on Thursday, the 27th, at 14:00 UTC as usual.
We'll discuss current progress and continue with feature design.

Thanks,
Eugene.


Re: [openstack-dev] [Solum] Definition feedback

2013-11-27 Thread Alex Heneveld


Personally, "Application" gets my vote as the conceptual top-level 
unit, with the best combination of some meaning and not too much 
ambiguity.  At least so far.  As Tom notes there is some ambiguity ... 
not sure we can avoid that altogether, but it is worth some brainstorming.


"Project" is what I had originally proposed to avoid the confusion with 
running app instances (assemblies), but that is even less descriptive and 
has meanings elsewhere.


"Package" feels too low level, I agree.

"Product" is perhaps an alternative, though also not ideal.

--A


On 27/11/2013 06:31, Clayton Coleman wrote:



On Nov 26, 2013, at 11:10 PM, Adrian Otto adrian.o...@rackspace.com wrote:




On Nov 26, 2013, at 4:20 PM, Clayton Coleman ccole...@redhat.com wrote:




On Nov 26, 2013, at 6:30 PM, Adrian Otto adrian.o...@rackspace.com wrote:

Tom,

On Nov 26, 2013, at 12:09 PM, Tom Deckers (tdeckers) tdeck...@cisco.com
wrote:


Hi All,

Few comments on the Definitions blueprint [1]:

1. I'd propose to alter the term 'Application' to either 'Application Package' 
or 'Package'.  'Application' isn't very descriptive and can be confused by some 
with the actual deployed instance, etc.

I think that's a sensible suggestion. I'm open to using Package, as that's an 
accurate description of how an Application is currently conceived.

Package is a bit fraught given its overlap with other programming concepts:

Python Dev: How do I get my django app up in production?
Admin: You can create a new package for it.
Python Dev: You mean with an __init__.py file?

Admin: Go create your package in horizon so you can deploy it.
Java Dev: Ok, I ran Create Package from eclipse
(Hours of humorous miscommunication ensue)

Solum Admin: Go update the package for Bob's app.
Other Admin: I ran yum update but nothing happened...

If application is generic, that might be a good thing.  I'm not sure there are 
too many words that can accurately describe a Java WAR, a Ruby on Rails site, a 
Jenkins server, a massive service oriented architecture, or a worker queue 
processing log data at the same time.  Server and site are too specific or in 
use in openstack already, program is too singular.

At the end of the day someone has to explain these terms to a large number of 
end users (developers) - would hope we can pick a name that is recognizable to 
most of them immediately, because they're going to pick the option that looks 
the most familiar to them and try it first.

All good points. This is why I like having these discussions with such a 
diverse group. I am still open to considering different terminology, accepting 
that whatever we pick to call things it will feel like a compromise for some of 
us. Any other thoughts on revisiting this name, or should we stick with 
application for now, and address this with more documentation to further 
clarify the meaning of the various abstracts?

I think Tom's point on this is valid - the app resource is more of a factory 
or template for your app at first.  However, I could easily imagine interaction patterns 
that imply a cohesive unit over time, but those are hard to argue about until we've got 
more direct use cases and client interaction drawn up.

For instance, if someone starts by creating an assembly right off the bat the 
application might not really be relevant, but if the client forces people to 
develop a plan first (either by helping them build it or pulling from operator 
templates) and then iterate on that plan directly (deploy app to env), a user 
might feel like the app is a stronger concept.


2. It should be possible for the package to be self-contained, in order to 
distribute application definitions.   As such, instead of using a repo, source 
code might come with the package itself.  Has this been considered as a 
blueprint: deploy code/binaries that are in a zip, rather than a repo?  Maybe 
Artifact serves this purpose?

The API should allow you to deploy something directly from a source code repo 
without packaging anything up. It should also allow you to present some static 
deployment artifacts (container image, zip file, etc.) for code that has 
already been built/tested.


3. Artifact has not been called out as a top-level noun.  It probably should 
and get a proper definition.

Good idea, I will add a definition for it.


4. Plan is described as deployment plan, but then it's also referenced in the 
description of 'Build'.  Plan seems to have a dual meaning, which is fine, but 
that should be called out explicitly.  Plan is not synonymous with deployment 
plan, rather we have a deployment plan and a build plan.  Those two together 
can be 'the plan'.

Currently Plan does have a dual meaning, but it may make sense to split each 
out if they are different enough. I'm open to considering ideas on this.


5. Operations.  The definition states that definitions can be added to a 
Service too.  Since the Service is provided by the platform, I assume it 
already comes with operations predefined.

[openstack-dev] [diagnostic] Diagnostic/instrumentation API specification v0.1

2013-11-27 Thread Oleg Gelbukh
Greetings, fellow stackers

I'm happy to announce that the Rubick team has completed a draft of the Diagnostic
API v0.1 spec [1] and started implementation of that API [2]. This draft
provides hardware and operating system configuration data from the physical
nodes of the cluster.
If you're interested in instrumentation of OpenStack installations, please
raise your voice and tell us whether the coverage of this specification is
sufficient from your standpoint (probably not - so tell us what else you need
to see!). We're eager to hear from operators as well as from developers and
QA folks.

[1] https://blueprints.launchpad.net/rubick/+spec/diagnostic-api-spec
[2]
https://blueprints.launchpad.net/rubick/+spec/instumentation-and-diagnostic-rest-api

--
Best regards,
Oleg Gelbukh
Mirantis Labs


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2013-11-27 Thread Flavio Percoco

On 26/11/13 22:54 +, Mark McLoughlin wrote:

On Fri, 2013-11-22 at 12:39 -0500, Doug Hellmann wrote:

On Fri, Nov 22, 2013 at 4:11 AM, Flavio Percoco fla...@redhat.com wrote:
1) Store the commit sha from which the module was copied.
Every project using oslo currently keeps the list of modules it
is using in `openstack-common.conf` in a `module` parameter. We
could store, along with the module name, the sha of the commit it
was last synced from:

module=log,commit

or

module=log
log=commit


The second form will be easier to manage. Humans edit the module field and
the script will edit the others.


How about adding it as a comment at the end of the python files
themselves and leaving openstack-common.conf for human editing?


I think having the commit sha will give us a starting point from which
to update that module. It will mostly help with
getting a diff for that module and the short commit messages in which it
was modified.

Here's a pseudo-buggy-algorithm for the update process:

   (1) Get current sha for $module
   (2) Get list of new commits for $module
   (3) for each commit of $module:
   (3.1) for each modified_module in $commit
   (3.1.1) Update those modules up to $commit (1)(modified_module)
   (3.2) Copy the new file
   (3.3) Update openstack-common with the latest sha

This trusts the granularity and isolation of the patches proposed in
oslo-incubator. However, in cases like 'remove vim mode lines' it'll
end up assuming that updating every module is necessary - which is true
from a git standpoint.

The above will make update.py way smarter - and more complex - than it is
today, but given the number of projects - new and existing - we have
now, I think it's worth it.
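
For illustration, a rough Python sketch of that algorithm (the helper names and
the incubator path are hypothetical; a real update.py would need error handling):

# Hedged sketch of the update algorithm above.
import subprocess

def new_commits(incubator_dir, module_path, since_sha):
    # Commits touching this module since the recorded sha, oldest first.
    out = subprocess.check_output(
        ['git', 'log', '--reverse', '--format=%H',
         '%s..HEAD' % since_sha, '--', module_path],
        cwd=incubator_dir)
    return out.decode().split()

def update_module(incubator_dir, module, since_sha, copy_file, record_sha):
    module_path = 'openstack/common/%s.py' % module
    for commit in new_commits(incubator_dir, module_path, since_sha):
        # A commit may touch several modules; a smarter version would first
        # recurse into those (step 3.1 above).
        copy_file(incubator_dir, module_path, commit)
        record_sha(module, commit)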

Thoughts?

FF

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2013-11-27 Thread Mark McLoughlin
On Wed, 2013-11-27 at 11:50 +0100, Flavio Percoco wrote:
 On 26/11/13 22:54 +, Mark McLoughlin wrote:
 On Fri, 2013-11-22 at 12:39 -0500, Doug Hellmann wrote:
  On Fri, Nov 22, 2013 at 4:11 AM, Flavio Percoco fla...@redhat.com wrote:
  1) Store the commit sha from which the module was copied from.
  Every project using oslo, currently keeps the list of modules it
  is using in `openstack-modules.conf` in a `module` parameter. We
  could store, along with the module name, the sha of the commit it
  was last synced from:
  
  module=log,commit
  
  or
  
  module=log
  log=commit
  
 
  The second form will be easier to manage. Humans edit the module field and
  the script will edit the others.
 
 How about adding it as a comment at the end of the python files
 themselves and leaving openstack-common.conf for human editing?
 
 I think having the commit sha will give us a starting point from which
 we could start updating that module from. 

Sure, my only point was about where the commit sha comes from - i.e.
whether it's from a comment at the end of the python module itself or in
openstack-common.conf

 It will mostly help with
 getting a diff for that module and the short commit messages where it
 was modified.
 
 Here's a pseudo-buggy-algorithm for the update process:
 
 (1) Get current sha for $module
 (2) Get list of new commits for $module
 (3) for each commit of $module:
 (3.1) for each modified_module in $commit
 (3.1.1) Update those modules up to $commit (1)(modified_module)
 (3.2) Copy the new file
 (3.3) Update openstack-common with the latest sha
 
 This trusts the granularity and isolation of the patches proposed in
 oslo-incubator. However, in cases like 'remove vim mode lines' it'll
 fail assuming that updating every module is necessary - which is true
 from a git stand point.

This is another variant of the kind of inter-module dependency smarts
that update.py already has ... I'd be inclined to just omit those smarts
and just require the caller to explicitly list the modules they want to
include.

Maybe update.py could include some reporting to help with that choice,
like "module foo depends on modules bar and blaa, maybe you want to
include them too" and "commit XXX modified module foo, but also modules
bar and blaa, maybe you want to include them too".

Mark.




Re: [openstack-dev] [Solum] Definition feedback

2013-11-27 Thread Tom Deckers (tdeckers)
Looks like most terms would have some ambiguity to them.  Regardless of what we 
decide on - maybe Application can stay after all - we need a crisp definition 
of the term, preferably with an example that illustrates the terminology and 
puts Application in the context of different versions of the same application and 
instances of the application.  

Great discussion!

Regards,
Tom.

-Original Message-
From: Alex Heneveld [mailto:alex.henev...@cloudsoftcorp.com] 
Sent: Wednesday, November 27, 2013 11:32
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Definition feedback


Personally Application gets my vote as the conceptual top level 
unit, with the best combination of some meaning and not too much ambiguity.  At 
least so far.  As Tom notes there is some ambiguity ... 
not sure we can avoid that altogether but worth some brainstorming.

Project is what I had originally proposed to avoid the confusion with running 
app instances (assemblies) but that is even less descriptive and has meanings 
elsewhere.

Package feels too low level, I agree.

Product is perhaps an alternative though also not ideal.

--A


On 27/11/2013 06:31, Clayton Coleman wrote:

 On Nov 26, 2013, at 11:10 PM, Adrian Otto adrian.o...@rackspace.com wrote:



 On Nov 26, 2013, at 4:20 PM, Clayton Coleman ccole...@redhat.com wrote:



 On Nov 26, 2013, at 6:30 PM, Adrian Otto adrian.o...@rackspace.com wrote:

 Tom,

 On Nov 26, 2013, at 12:09 PM, Tom Deckers (tdeckers) 
 tdeck...@cisco.com
 wrote:

 Hi All,

 Few comments on the Definitions blueprint [1]:

 1. I'd propose to alter the term 'Application' to either 'Application 
 Package' or 'Package'.  Application isn't very descriptive and can be 
 confusing to some with the actual deployed instance, etc.
 I think that's a sensible suggestion. I'm open to using Package, as that's 
 an accurate description of what an Application is currently conceived of.
 Package is a bit fraught given its overlap with other programming concepts:

 Python Dev: How do I get my django app up in production?
 Admin: You can create a new package for it.
 Python Dev: You mean with an __init__.py file?

 Admin: Go create your package in horizon so you can deploy it.
 Java Dev: Ok, I ran Create Package from eclipse (Hours of humorous 
 miscommunication ensue)

 Solum Admin: Go update the package for Bob's app.
 Other Admin: I ran yum update but nothing happened...

 If application is generic, that might be a good thing.  I'm not sure there 
 are too many words that can accurately describe a Java WAR, a Ruby on Rails 
 site, a Jenkins server, a massive service oriented architecture, or a 
 worker queue processing log data at the same time.  Server and site are too 
 specific or in use in openstack already, program is too singular.

 At the end of the day someone has to explain these terms to a large number 
 of end users (developers) - would hope we can pick a name that is 
 recognizable to most of them immediately, because they're going to pick the 
 option that looks the most familiar to them and try it first.
 All good points. This is why I like having these discussions with such a 
 diverse group. I am still open to considering different terminology, 
 accepting that whatever we pick to call things it will feel like a 
 compromise for some of us. Any other thoughts on revisiting this name, or 
 should we stick with application for now, and address this with more 
 documentation to further clarify the meaning of the various abstracts?
 I think Tom's point on this is valid - the app resource is more of a 
 factory or template for your app at first.  However, I could easily imagine 
 interaction patterns that imply a cohesive unit over time, but those are hard 
 to argue about until we've got more direct use cases and client interaction 
 drawn up.

 For instance, if someone starts by creating an assembly right off the bat the 
 application might not really be relevant, but if the client forces people to 
 develop a plan first (either by helping them build it or pulling from 
 operator templates) and then iterate on that plan directly (deploy app to 
 env), a user might feel like the app is a stronger concept.

 2. It should be possible for the package to be self-contained, in order 
 to distribute application definitions.   As such, instead of using a 
 repo, source code might come with the package itself.  Has this been 
 considered as a blueprint: deploy code/binaries that are in a zip, rather 
 than a repo?  Maybe Artifact serves this purpose?
 The API should allow you to deploy something directly from a source code 
 repo without packaging anything up. It should also allow you to present 
 some static deployment artifacts (container image, zip file, etc.) for 
 code that has already been built/tested.

 3. Artifact has not been called out as a top-level noun.  It probably 
 should and get a proper definition.
 Good idea, I will add a definition for 

Re: [openstack-dev] [Solum] Definition feedback

2013-11-27 Thread Tom Deckers (tdeckers)


-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: Wednesday, November 27, 2013 0:28
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Definition feedback

Tom,

On Nov 26, 2013, at 12:09 PM, Tom Deckers (tdeckers)
tdeck...@cisco.com
 wrote:

 Hi All,

 Few comments on the Definitions blueprint [1]:

 1. I'd propose to alter the term 'Application' to either 'Application 
 Package'
or 'Package'.  Application isn't very descriptive and can be confusing to some
with the actual deployed instance, etc.

I think that's a sensible suggestion. I'm open to using Package, as that's an
accurate description of what an Application is currently conceived of.

 2. It should be possible for the package to be self-contained, in order to
distribute application definitions.   As such, instead of using a repo, source
code might come with the package itself.  Has this been considered as a
blueprint: deploy code/binaries that are in a zip, rather than a repo?  Maybe
Artifact serves this purpose?

The API should allow you to deploy something directly from a source code
repo without packaging anything up. It should also allow you to present some
static deployment artifacts (container image, zip file, etc.) for code that has
already been built/tested.

 3. Artifact has not been called out as a top-level noun.  It probably should
and get a proper definition.

Good idea, I will add a definition for it.

 4. Plan is described as deployment plan, but then it's also referenced in the
description of 'Build'.  Plan seems to have a dual meaning, which is fine, but
that should be called out explicitly.  Plan is not synonymous with deployment
plan, rather we have a deployment plan and a build plan.  Those two together
can be 'the plan'.

Currently Plan does have a dual meaning, but it may make sense to split each
out if they are different enough. I'm open to considering ideas on this.

 5. Operations.  The definition states that definitions can be added to a
Service too.  Since the Service is provided by the platform, I assume it 
already
comes with operations predefined.

Yes, the service provider owns services that are provided by the Platform, and
can extend them, where users may not. However, a user may register his/her
own Services within the context of a given tenant account, and those can be
extended and managed. In that case, you can actually connect Operations to
Services as a tenant. So this is really a question of what scope the Service
belongs to.

 6. Operations. A descriptor should exist that can be used for registration of
the deployed assembly into a catalog.  The descriptor should contain basic
information about the exposed functional API and management API (e.g.
Operations too).

An Assembly is a running group of cloud resources (a whole networked
application). A listing of those is exposed through the Assemblies resource.

A Plan is a rubber stamp for producing new Assemblies, and can also be listed
through the Plans resource. Any plan can be easily converted to an Assembly
with an API call.

Were you thinking that we should have a catalog beyond these listings?
Please elaborate on what you have in mind. I agree that any catalog should
provide a way to resolve through to a resources registered Operations. If the
current design prohibits this in any way, then I'd like to revise that.

I understand that an Assembly can be a larger group of components.  However, 
those together exist to provide a capability which we want to capture in some 
catalog so the capability becomes discoverable.  I'm not sure how the 'listing' 
mechanism works out in practice.  If this can be used in an enterprise 
ecosystem to discover services then that's fine.  We should capture a work item to 
flesh out discoverability of both Applications and Assemblies.  I make that 
distinction because both scenarios should be provided.
As a service consumer, I should be able to look at the 'Applications' listed in 
the OpenStack environment and provision them.  In that case, we should also 
support flavors of the service.  Depending on the consumer-provider 
relationship, we might want to provide different configurations of the same 
Application (e.g. gold-silver-bronze tiering).  I believe this is covered by 
the 'listing' you mentioned.
Once deployed, there should also be a mechanism to discover the deployed 
assemblies.  One example of such a deployed Assembly is a persistence service 
that can in its turn be used as a Service in another Assembly.  The specific 
details of the capability provided by the Assembly need to be discoverable in 
order to allow successful auto-wiring (I've seen a comment about this elsewhere 
in the project - I believe in the last meeting).



 This leads to the next point:

 7. Package blueprint.  The Application Package might require its own
blueprint to define how it's composed.  I can see how the Package is used to
distribute/share an 

Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2013-11-27 Thread Flavio Percoco

On 27/11/13 10:59 +, Mark McLoughlin wrote:

On Wed, 2013-11-27 at 11:50 +0100, Flavio Percoco wrote:

On 26/11/13 22:54 +, Mark McLoughlin wrote:
On Fri, 2013-11-22 at 12:39 -0500, Doug Hellmann wrote:
 On Fri, Nov 22, 2013 at 4:11 AM, Flavio Percoco fla...@redhat.com wrote:
 1) Store the commit sha from which the module was copied from.
 Every project using oslo, currently keeps the list of modules it
 is using in `openstack-modules.conf` in a `module` parameter. We
 could store, along with the module name, the sha of the commit it
 was last synced from:
 
 module=log,commit
 
 or
 
 module=log
 log=commit
 

 The second form will be easier to manage. Humans edit the module field and
 the script will edit the others.

How about adding it as a comment at the end of the python files
themselves and leaving openstack-common.conf for human editing?

I think having the commit sha will give us a starting point from which
we could start updating that module from.


Sure, my only point was about where the commit sha comes from - i.e.
whether it's from a comment at the end of the python module itself or in
openstack-common.conf


And, indeed you said 'at the end of the python files'. Don't ask me
how the heck I misread that.

The benefit I see from having them in openstack-common.conf is
that we can register a `StrOpt` for each module dynamically and get
the sha using oslo.config. If we put it as a comment at the end of the
python file, we'll have to read it and 'parse' it, I guess.
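
Something like this minimal sketch (the file layout follows the second form
discussed earlier in the thread, i.e. one 'modulename=sha' entry next to the
'module' entries; the details are still open):

# Hedged sketch: register one StrOpt per module listed in openstack-common.conf
# and read back the recorded sha with oslo.config.
from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opt(cfg.MultiStrOpt('module', default=[]))

def load_module_shas(conf_file):
    CONF(args=[], default_config_files=[conf_file])
    # One 'modulename=sha' option per synced module, e.g. log=<commit sha>.
    CONF.register_opts([cfg.StrOpt(name) for name in CONF.module])
    return dict((name, getattr(CONF, name)) for name in CONF.module)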




It will mostly help with
getting a diff for that module and the short commit messages where it
was modified.

Here's a pseudo-buggy-algorithm for the update process:

(1) Get current sha for $module
(2) Get list of new commits for $module
(3) for each commit of $module:
(3.1) for each modified_module in $commit
(3.1.1) Update those modules up to $commit (1)(modified_module)
(3.2) Copy the new file
(3.3) Update openstack-common with the latest sha

This trusts the granularity and isolation of the patches proposed in
oslo-incubator. However, in cases like 'remove vim mode lines' it'll
fail assuming that updating every module is necessary - which is true
from a git stand point.


This is another variant of the kind of inter-module dependency smarts
that update.py already has ... I'd be inclined to just omit those smarts
and just require the caller to explicitly list the modules they want to
include.

Maybe update.py could include some reporting to help with that choice
like module foo depends on modules bar and blaa, maybe you want to
include them too and commit XXX modified module foo, but also module
bar and blaa, maybe you want to include them too.


But, if we get to the point of suggesting that the user update module
foo because it was modified in commit XXX, we'd have everything needed
to make it recursive and update those modules as well.

I agree with you on making it explicit, though. What about making it
interactive then? update.py could ask users if they want to update
module foo because it was modified in commit XXX and do it right away,
which is not very different from updating module foo, printing a report
and letting the user choose afterwards.

(/me feels like Gollum now)

I prefer the interactive way though; at least it doesn't require the
user to run update several times, once per module. We could also add a
`--no-stop` flag that does exactly what you suggested.

Cheers,
FF

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [savanna] summit wrap-up: Heat integration

2013-11-27 Thread Sergey Lukjanov
Hi Steve,

Thank you for comments and help.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Nov 26, 2013, at 3:21 AM, Steve Baker sba...@redhat.com wrote:

 On 11/26/2013 06:21 AM, Sergey Lukjanov wrote:
 Hi guys,
 
 There was the Design Summit session in Hong Kong about Heat integration and 
 Savanna scalability [0]. We discussed some details about it, approved 
 integration plan and decided to use guest agents.
 
 First of all, for the Icehouse release cycle, we’ll implement resource 
 orchestration using Heat by creating a YAML template generator; see the 
 blueprints [1][2] and the PoC [3]. It’ll be done by implementing an extension 
 mechanism for provisioning, without removing the current orchestration 
 solution, so we can transparently replace the current code with the new 
 Heat-based approach. As the first step, all resources (VMs, volumes, IPs) will 
 be provisioned by Heat using a template generated by Savanna. Hadoop 
 configuration will be done by Savanna, and especially by the corresponding plugins.
 The second step in improving the provisioning code will be to implement a guest 
 agent for Savanna (we’re looking at the unified agent [4][5] implementation due 
 to the growing number of projects interested in it). Guest agents will allow 
 Savanna plugins to configure software by interacting with vendor-specific 
 management console APIs. The main goal of implementing agents in Savanna is 
 to get rid of direct ssh and http access to VMs.
 
 For the early J release cycle we’re planning to enable Heat by default and 
 then completely remove our current direct provisioning. We’ll contribute a 
 Savanna resource to Heat; it’ll be something like “Data Processing Cluster” 
 or just “Hadoop Cluster” at the beginning. I’ll start a discussion on it 
 separately.
 
 There are some problems that we’ll try to solve to support all current 
 Savanna features:
 
 * anti-affinity support (currently implemented using scheduler hints ‘not on 
 the specific hosts’, and stack provisioning is simultaneous in this case); 
 there are two possible solutions - use Nova’s Group API (when it’s 
 ready) or add support for it to Heat; 
 OS::Nova::Server has the scheduler_hints property, so you could always 
 continue with the current approach in the interim.
Yep, that’s our plan for now (see the sketch after this list).
 * partially active stacks and/or optional and mandatory resources; the 
 easiest way to explain this is with an example - we provision 100 
 nodes with the same role (data nodes of a Hadoop cluster) and only one is down, 
 so we can say that the cluster is partially active and then rebuild the failed 
 nodes.
 Some combination of our new autoscaling and stack convergence should help 
 here.
We’ll try to research it.
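
For the anti-affinity point above, a rough sketch of what a generated resource
could look like in the interim, expressed as the dict a template generator might
emit (the hint name and values are illustrative; nova's DifferentHostFilter-style
hints take instance ids):

# Hedged sketch: a HOT-style OS::Nova::Server resource carrying a scheduler
# hint for anti-affinity, as a template generator might emit it.
def data_node_resource(name, flavor, image, avoid_instances):
    return {
        name: {
            'type': 'OS::Nova::Server',
            'properties': {
                'flavor': flavor,
                'image': image,
                # 'not on the same host as these instances' style hint
                'scheduler_hints': {'different_host': avoid_instances},
            },
        }
    }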
 To summarize, we’re currently finishing the PoC version of Heat-based 
 provisioning and we’ll merge it into the codebase soon.
 
 [0] https://etherpad.openstack.org/p/savanna-icehouse-architecture
 [1] 
 https://blueprints.launchpad.net/savanna/+spec/heat-backed-resources-provisioning
 [2] 
 https://blueprints.launchpad.net/savanna/+spec/infra-provisioning-extensions
 [3] https://review.openstack.org/#/c/55978
 [4] 
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/018276.html
 [5] https://etherpad.openstack.org/p/UnifiedAgents
 
 
 Nice, I've just added some comments to 
 https://review.openstack.org/#/c/55978/ . Feel free to add me as a reviewer 
 to any others.
Really appreciate your comments, we’ll share all new reviews with you.


[openstack-dev] [nova] Maintaining backwards compatibility for RPC calls

2013-11-27 Thread Day, Phil
Hi Folks,

I'm a bit confused about the expectation that a manager class should be able to 
receive and process messages from a previous RPC version.   I thought the 
objective was to always make changes such that the manager can process any 
previous version of the call that could come from the last release.  For 
example, Icehouse code should be able to receive any version that could be 
generated by a Havana deployment.   Generally, of course, that means new 
parameters have to have a default value.

I'm struggling, then, to see why we've now removed the default values from, 
for example, terminate_instance() as part of moving the RPC version to 3.0:

def terminate_instance(self, context, instance, bdms=None, reservations=None):

def terminate_instance(self, context, instance, bdms, reservations):

https://review.openstack.org/#/c/54493/

Doesn't this mean that you can't deploy Icehouse (3.0) code into a Havana 
system while leaving the RPC version pinned at Havana until all of the code has 
been updated?
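
For comparison, the usual compatibility pattern looks roughly like this (a hedged
sketch; the helper names are hypothetical, not nova's actual code):

# Hedged sketch of the pattern described above: new or changed parameters keep
# defaults so older callers that omit them still work.
def terminate_instance(self, context, instance, bdms=None, reservations=None):
    if bdms is None:
        # An older caller did not send bdms; look them up locally instead.
        bdms = self._lookup_block_device_mappings(context, instance)  # hypothetical helper
    self._do_terminate(context, instance, bdms, reservations)  # hypothetical helper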

Phil


Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-11-27 Thread Sean Dague
The problem is you can't really support both: iso8601 was dormant for
years, and the revived version isn't compatible with the old version.
So supporting both basically means forking iso8601 and maintaining your
own version of it, monkey patched into your own tree.

On Wed, Nov 27, 2013 at 1:58 AM, Yaguang Tang
yaguang.t...@canonical.com wrote:
 after update to iso8601=0.1.8, it breaks stable/neutron jenkins tests,
 because stable/glance requires  iso8601=0.1.4, log info
 https://jenkins02.openstack.org/job/periodic-tempest-devstack-vm-neutron-stable-grizzly/43/console,
 I have filed a bug to track this
 https://bugs.launchpad.net/glance/+bug/1255419.


 2013/11/26 Thomas Goirand z...@debian.org

 I'm sorry to restart this topic.

 I don't mind if we upgrade to 0.1.8, but then I will need to have
 patches for Havana to support version 0.1.8. Otherwise, it's going to be
 very difficult on the packaging side: I will need to upload 0.1.8 for
 Icehouse, but then it will break everything else (eg: Havana) that is
 currently in Sid.

 Was there some patches already for that? If so, please point to them so
 that I can cherry-pick them, and carry the patches in the Debian
 packages (it doesn't have to be backported to the Havana branch, I'm
 fine keeping the patches in the packages, if at least they are
 identified).

 Is there a way that I can grep all commits in Gerrit, to see if there
 was such patches committed recently?

 Cheers,

 Thomas Goirand (zigo)

 On 10/24/2013 09:37 PM, Morgan Fainberg wrote:
  It seems like adopting 0.1.8 is the right approach. If it doesn't work
  with other projects, we should work to help those projects get updated
  to work with it.
 
  --Morgan
 
  On Thursday, October 24, 2013, Zhi Yan Liu wrote:
 
  Hi all,
 
  Adopt 0.1.8 as iso8601 minimum version:
  https://review.openstack.org/#/c/53567/
 
  zhiyan
 
  On Thu, Oct 24, 2013 at 4:09 AM, Dolph Mathews
   dolph.math...@gmail.com wrote:
  
   On Wed, Oct 23, 2013 at 2:30 PM, Robert Collins
   robe...@robertcollins.net
   wrote:
  
   On 24 October 2013 07:34, Mark Washenberger
    mark.washenber...@markwash.net wrote:
Hi folks!
   
1) Adopt 0.1.8 as the minimum version in
  openstack-requirements.
2) Do nothing (i.e. let Glance behavior depend on iso8601 in
  this way,
and
just fix the tests so they don't care about these extra
  formats)
3) Make Glance work with the added formats even if 0.1.4 is
  installed.
  
   I think we should do (1) because both (2) will permit surprising,
   nonobvious changes in behaviour and (3) is just nasty
  engineering.
   Alternatively, add a (4) which is (2) with whinge on startup if
  0.1.4
   is installed to make identifying this situation easy.
  
  
   I'm in favor of (1), unless there's a reason why 0.1.8 not viable
  for
   another project or packager, in which case, I've never heard the
  term
   whinge before so there should definitely be some of that.
  
  
  
   The last thing a new / upgraded deployment wants is something
  like
   nova, or a third party API script failing in nonobvious ways with
  no
   breadcrumbs to lead them to 'upgrade iso8601' as an answer.
  
   -Rob
  
   --
    Robert Collins rbtcoll...@hp.com
   Distinguished Technologist
   HP Converged Cloud
  
  
  
  
   --
  
   -Dolph
  




 --
 Tang Yaguang

 Canonical Ltd. | www.ubuntu.com | www.canonical.com
 Mobile:  +86 152 1094 6968
 gpg key: 0x187F664F






-- 
Sean Dague
http://dague.net


[openstack-dev] [neutron] [ml2] Today's ML2 meeting is canceled

2013-11-27 Thread Kyle Mestery (kmestery)
Some of the key people cannot make the meeting today, so
we're canceling it for today. I would ask that people review
the following two patches this week and we'll discuss them
in detail next week if they haven't merged by then:

https://review.openstack.org/#/c/21946/
https://review.openstack.org/#/c/37893/

Thanks!
Kyle



Re: [openstack-dev] [Nova] Proposal to re-add Dan Prince to nova-core

2013-11-27 Thread Andrew Laski

+1

On 11/26/13 at 02:32pm, Russell Bryant wrote:

Greetings,

I would like to propose that we re-add Dan Prince to the nova-core
review team.

Dan Prince has been involved with Nova since early in OpenStack's
history (Bexar timeframe).  He was a member of the nova-core review team
from May 2011 to June 2013.  He has since picked back up with nova
reviews [1].  We always say that when people leave nova-core, we would
love to have them back if they are able to commit the time in the
future.  I think this is a good example of that.

Please respond with +1s or any concerns.

Thanks,

[1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt

--
Russell Bryant



Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-27 Thread Jaromir Coufal


On 2013/27/11 00:00, Robert Collins wrote:

On 26 November 2013 07:41, Jaromir Coufal jcou...@redhat.com wrote:

Hey Rob,

can we add 'Slick Overcloud deployment through the UI' to the list? There
was no session about that, but we discussed it afterwords and agreed that it
is high priority for Icehouse as well.

I just want to keep it on the list, so we are aware of that.

Certainly. Please add a blueprint for that and I'll mark it up appropriately.

I will do.


Related to that, we had a long chat on IRC that I was to follow up on here, so - ...

Tuskar is refocusing on getting the basics really right - a slick basic
install, and then working up from there. At the same time, just about every nova
person I've spoken to (a /huge/ sample of three, but meh :)) has
expressed horror that Tuskar is doing its own scheduling, and
confusion about the need to manage flavors in such detail.
So the discussion on IRC was about getting back to basics - a clean
core design, and not being left with technical debt that
we need to eliminate in order to move forward - which the scheduler
stuff would be.

So: my question/proposal was this: lets set a couple of MVPs.

0: slick install homogeneous nodes:
  - ask about nodes and register them with nova baremetal / Ironic (can
use those APIs directly)
  - apply some very simple heuristics to turn that into a cloud:
- 1 machine - all in one
- 2 machines - separate hypervisor and the rest
- 3 machines - two hypervisors and the rest
- 4 machines - two hypervisors, HA the rest
- 5 + scale out hypervisors
  - so total forms needed = 1 gather hw details
  - internals: heat template with one machine flavor used

1: add support for heterogeneous nodes:
  - for each service (storage compute etc) supply a list of flavors
we're willing to have that run on
  - pass that into the heat template
  - teach heat to deal with flavor specific resource exhaustion by
asking for a different flavor (or perhaps have nova accept multiple
flavors and 'choose one that works'): details to be discussed with
heat // nova at the right time.

2: add support for anti-affinity for HA setups:
  - here we get into the question about short term deliverables vs long
term desire, but at least we'll have a polished installer already.

-Rob
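
As a rough illustration of the MVP 0 heuristics above (the role names are
illustrative, not Tuskar's actual terms):

# Hedged sketch: map a node count to role assignments per the MVP 0 list.
def assign_roles(node_count):
    if node_count == 1:
        return {'all-in-one': 1}
    if node_count == 2:
        return {'hypervisor': 1, 'everything-else': 1}
    if node_count == 3:
        return {'hypervisor': 2, 'everything-else': 1}
    if node_count == 4:
        return {'hypervisor': 2, 'everything-else-ha': 2}
    # 5+: keep the HA control plane and scale out hypervisors.
    return {'hypervisor': node_count - 2, 'everything-else-ha': 2}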


The important point here is that we agree on starting with the very basics 
and growing from there. Which is great.


The whole deployment workflow (not just the UI) is all about user experience, 
which is built on top of TripleO's approach. Here I see two important 
factors:

- There are *users* who have certain *needs and expectations*.
- There is the underlying *concept of TripleO*, which we are using for 
*implementing* features which satisfy those needs.


We are circling around and trying to approach the problem from the wrong end 
- the implementation point of view (how to avoid doing our own scheduling).


Let's try to get out of the box and start by thinking about our audience 
first - what they expect, what they need. Then we go back, put our 
implementation thinking hat on and find out how we are going to re-use 
OpenStack components to achieve our goals. In the end we have a detailed plan.



=== Users ===

I would like to start with our target audience first - without 
milestones, without implementation details.


I think this is the main point where I disagree and which leads to 
different approaches. I don't think that a user of TripleO cares *only* 
about deploying infrastructure without any knowledge of where things 
go. That is the overcloud user's approach - 'I want a VM and I don't care 
where it runs'. Those are self-service users / cloud users. I know we 
are OpenStack on OpenStack, but we shouldn't go so far as to expect the 
same behavior from undercloud users. I can give you various examples of 
why the operator will care about where the image goes and what runs on a 
specific node.


/One quick example:/
I have three racks of homogeneous hardware and I want to design it 
so that I have one control node in each, 3 storage nodes and the 
rest compute. With that 'smart' deployment, I'll never know what my rack 
contains in the end. But if I have control over things, I can say that 
this node is a controller, those three are storage and those are compute - 
and I am happy from the very beginning.


Our target audience is sysadmins and operators. They hate 'magic'. They 
want to have control over the things they are doing. If we put a 
workflow in front of them where they click one button and get a cloud 
installed, they will be horrified.


That's why I am convinced that we need to give the user the ability to 
control things - which node has which role. We can 
be smart, suggest and advise, but not hide this functionality from the 
user. Otherwise, I am afraid that we may fail.


Furthermore, if we put lots of restrictions (like homogeneous hardware) 
in front of users from the very beginning, we are discouraging people 
from using TripleO-UI. We are 

Re: [openstack-dev] [Nova] Proposal to re-add Dan Prince to nova-core

2013-11-27 Thread Gary Kotton
+1

On 11/27/13 3:29 PM, Andrew Laski andrew.la...@rackspace.com wrote:

+1

On 11/26/13 at 02:32pm, Russell Bryant wrote:
Greetings,

I would like to propose that we re-add Dan Prince to the nova-core
review team.

Dan Prince has been involved with Nova since early in OpenStack's
history (Bexar timeframe).  He was a member of the nova-core review team
from May 2011 to June 2013.  He has since picked back up with nova
reviews [1].  We always say that when people leave nova-core, we would
love to have them back if they are able to commit the time in the
future.  I think this is a good example of that.

Please respond with +1s or any concerns.

Thanks,

[1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt

-- 
Russell Bryant



Re: [openstack-dev] [nova] Core pinning

2013-11-27 Thread Tuomas Paappanen

On 19.11.2013 20:18, yunhong jiang wrote:

On Tue, 2013-11-19 at 12:52 +, Daniel P. Berrange wrote:

On Wed, Nov 13, 2013 at 02:46:06PM +0200, Tuomas Paappanen wrote:

Hi all,

I would like to hear your thoughts about core pinning in OpenStack.
Currently nova (with qemu-kvm) supports a cpu set of PCPUs
that can be used by instances. I didn't find a blueprint, but I think
this feature is meant to isolate the cpus used by the host from the cpus
used by instances (VCPUs).

But from a performance point of view it is better to exclusively
dedicate PCPUs to VCPUs and the emulator. In some cases you may want to
guarantee that only one instance (and its VCPUs) is using certain
PCPUs.  By using core pinning you can optimize instance performance
based on e.g. cache sharing, NUMA topology, interrupt handling, PCI
passthrough (SR-IOV) in multi-socket hosts, etc.

We have already implemented a feature like this (a PoC with limitations)
on the Nova Grizzly version and would like to hear your opinion about
it.

The current implementation consists of three main parts:
- Definition of pcpu-vcpu maps for instances and instance spawning
- (optional) Compute resource and capability advertising including
free pcpus and NUMA topology.
- (optional) Scheduling based on free cpus and NUMA topology.

The implementation is quite simple:

(additional/optional parts)
Nova-computes advertise free pcpus and NUMA topology in the same
manner as host capabilities. Instances are scheduled based on this
information.

(core pinning)
the admin can set PCPUs for VCPUs and for the emulator process, or select a
NUMA cell for the instance's vcpus, by adding key:value pairs to the flavor's
extra specs.

EXAMPLE:
instance has 4 vcpus
key:value
vcpus:1,2,3,4 --> vcpu0 pinned to pcpu1, vcpu1 pinned to pcpu2...
emulator:5 --> emulator pinned to pcpu5
or
numacell:0 --> all vcpus are pinned to pcpus in numa cell 0.

In nova-compute, core pinning information is read from the extra specs
and added to the domain xml in the same way as the cpu quota values (cputune).

<cputune>
   <vcpupin vcpu='0' cpuset='1'/>
   <vcpupin vcpu='1' cpuset='2'/>
   <vcpupin vcpu='2' cpuset='3'/>
   <vcpupin vcpu='3' cpuset='4'/>
   <emulatorpin cpuset='5'/>
</cputune>
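
For illustration, a minimal sketch of how the extra specs above could be turned
into that XML (standalone, using ElementTree rather than nova's own config
classes):

# Hedged sketch: build a libvirt <cputune> element from flavor extra specs
# like vcpus:1,2,3,4 and emulator:5 (as in the example above).
from xml.etree import ElementTree as ET

def build_cputune(extra_specs):
    cputune = ET.Element('cputune')
    pcpus = [p.strip() for p in extra_specs.get('vcpus', '').split(',') if p.strip()]
    for vcpu, pcpu in enumerate(pcpus):
        # vcpu 0 -> first pcpu in the list, vcpu 1 -> second, and so on.
        ET.SubElement(cputune, 'vcpupin', vcpu=str(vcpu), cpuset=pcpu)
    if 'emulator' in extra_specs:
        ET.SubElement(cputune, 'emulatorpin', cpuset=extra_specs['emulator'])
    return ET.tostring(cputune)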

What do you think? Implementation alternatives? Is this worth of
blueprint? All related comments are welcome!

I think there are several use cases mixed up in your descriptions
here which should likely be considered independently

  - pCPU/vCPU pinning

I don't really think this is a good idea as a general purpose
feature in its own right. It tends to lead to fairly inefficient
use of CPU resources when you consider that a large % of guests
will be mostly idle most of the time. It has a fairly high
administrative burden to maintain explicit pinning too. This
feels like a data center virt use case rather than cloud use
case really.

  - Dedicated CPU reservation

The ability of an end user to request that their VM (or their
group of VMs) gets assigned a dedicated host CPU set to run on.
This is obviously something that would have to be controlled
at a flavour level, and in a commercial deployment would carry
a hefty pricing premium.

I don't think you want to expose explicit pCPU/vCPU placement
for this though. Just request the high level concept and allow
the virt host to decide actual placement
I think pcpu/vcpu pinning could be considered an extension of the 
dedicated cpu reservation feature. And I agree that if we exclusively 
dedicate pcpus to VMs it is inefficient from a cloud point of view, but 
in some cases an end user may want to be sure (and ready to pay) that their 
VMs have resources available, e.g. for sudden load peaks.


So, here is my proposal for how dedicated cpu reservation would function at a 
high level:


When an end user wants a VM with nn vcpus running on a dedicated 
host cpu set, the admin could enable it by setting a new dedicate_pcpu 
parameter in a flavor (e.g. an optional flavor parameter). By default, 
the number of pcpus and vcpus could be the same. And as an option, explicit 
vcpu/pcpu pinning could be done by defining vcpu/pcpu relations in the 
flavor's extra specs (vcpupin:0 0...).


In the virt driver there are two alternatives for how to do the pcpu sharing: 
1. all dedicated pcpus are shared with all vcpus (the default case), or 2. 
each vcpu has a dedicated pcpu (vcpu 0 will be pinned to the first pcpu in 
the cpu set, vcpu 1 to the second pcpu, and so on). The vcpu/pcpu pinning 
option could be used to extend the latter case.


In any case, before a VM with or without dedicated pcpus is launched, the 
virt driver must ensure that the dedicated pcpus are excluded from 
existing VMs and from new VMs, and that there are enough free pcpus for 
placement. And I think the minimum number of pcpus for VMs without dedicated 
pcpus must be configurable somewhere.


Comments?

Br, Tuomas



  - Host NUMA placement.

By not taking NUMA into account currently the libvirt driver
at least is badly wasting resources. Having 

Re: [openstack-dev] [Nova] Proposal to re-add Dan Prince to nova-core

2013-11-27 Thread Christopher Yeoh
+1

On Wed, Nov 27, 2013 at 6:02 AM, Russell Bryant rbry...@redhat.com wrote:

 Greetings,

 I would like to propose that we re-add Dan Prince to the nova-core
 review team.

 Dan Prince has been involved with Nova since early in OpenStack's
 history (Bexar timeframe).  He was a member of the nova-core review team
 from May 2011 to June 2013.  He has since picked back up with nova
 reviews [1].  We always say that when people leave nova-core, we would
 love to have them back if they are able to commit the time in the
 future.  I think this is a good example of that.

 Please respond with +1s or any concerns.

 Thanks,

 [1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-11-27 Thread Zhi Yan Liu
Yes, agreed with Sean: making the code compatible with both iso8601 versions is overcomplicated.
This is my abandoned try: https://review.openstack.org/#/c/53186/

zhiyan

On Wed, Nov 27, 2013 at 8:49 PM, Sean Dague s...@dague.net wrote:
 The problem is you can't really support both: iso8601 was dormant for
 years, and the revived version isn't compatible with the old version.
 So supporting both means basically forking iso8601 and maintaining your
 own version of it, monkey patched in your own tree.

 On Wed, Nov 27, 2013 at 1:58 AM, Yaguang Tang
 yaguang.t...@canonical.com wrote:
 after update to iso8601=0.1.8, it breaks stable/neutron jenkins tests,
 because stable/glance requires  iso8601=0.1.4, log info
 https://jenkins02.openstack.org/job/periodic-tempest-devstack-vm-neutron-stable-grizzly/43/console,
 I have filed a bug to track this
 https://bugs.launchpad.net/glance/+bug/1255419.


 2013/11/26 Thomas Goirand z...@debian.org

 I'm sorry to restart this topic.

 I don't mind if we upgrade to 0.1.8, but then I will need to have
 patches for Havana to support version 0.1.8. Otherwise, it's going to be
 very difficult on the packaging side: I will need to upload 0.1.8 for
 Icehouse, but then it will break everything else (eg: Havana) that is
 currently in Sid.

 Was there some patches already for that? If so, please point to them so
 that I can cherry-pick them, and carry the patches in the Debian
 packages (it doesn't have to be backported to the Havana branch, I'm
 fine keeping the patches in the packages, if at least they are
 identified).

 Is there a way that I can grep all commits in Gerrit, to see if there
 was such patches committed recently?

 Cheers,

 Thomas Goirand (zigo)

 On 10/24/2013 09:37 PM, Morgan Fainberg wrote:
  It seems like adopting 0.1.8 is the right approach. If it doesn't work
  with other projects, we should work to help those projects get updated
  to work with it.
 
  --Morgan
 
  On Thursday, October 24, 2013, Zhi Yan Liu wrote:
 
  Hi all,
 
  Adopt 0.1.8 as iso8601 minimum version:
  https://review.openstack.org/#/c/53567/
 
  zhiyan
 
  On Thu, Oct 24, 2013 at 4:09 AM, Dolph Mathews
  dolph.math...@gmail.com javascript:; wrote:
  
   On Wed, Oct 23, 2013 at 2:30 PM, Robert Collins
  robe...@robertcollins.net javascript:;
   wrote:
  
   On 24 October 2013 07:34, Mark Washenberger
   mark.washenber...@markwash.net javascript:; wrote:
Hi folks!
   
1) Adopt 0.1.8 as the minimum version in
  openstack-requirements.
2) Do nothing (i.e. let Glance behavior depend on iso8601 in
  this way,
and
just fix the tests so they don't care about these extra
  formats)
3) Make Glance work with the added formats even if 0.1.4 is
  installed.
  
   I think we should do (1) because both (2) will permit surprising,
   nonobvious changes in behaviour and (3) is just nasty
  engineering.
   Alternatively, add a (4) which is (2) with whinge on startup if
  0.1.4
   is installed to make identifying this situation easy.
  
  
   I'm in favor of (1), unless there's a reason why 0.1.8 not viable
  for
   another project or packager, in which case, I've never heard the
  term
   whinge before so there should definitely be some of that.
  
  
  
   The last thing a new / upgraded deployment wants is something
  like
   nova, or a third party API script failing in nonobvious ways with
  no
   breadcrumbs to lead them to 'upgrade iso8601' as an answer.
  
   -Rob
  
   --
   Robert Collins rbtcoll...@hp.com javascript:;
   Distinguished Technologist
   HP Converged Cloud
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org javascript:;
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
  
   --
  
   -Dolph
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org javascript:;
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org javascript:;
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Tang Yaguang

 Canonical Ltd. | www.ubuntu.com | www.canonical.com
 Mobile:  +86 152 1094 6968
 gpg key: 0x187F664F


 

Re: [openstack-dev] tenant or project

2013-11-27 Thread Steven Hardy
On Tue, Nov 26, 2013 at 10:17:56PM +1030, Christopher Yeoh wrote:
 On Mon, Nov 25, 2013 at 7:50 PM, Flavio Percoco fla...@redhat.com wrote:
  On 24/11/13 12:47 -0500, Doug Hellmann wrote:
 
  On Sun, Nov 24, 2013 at 12:08 AM, Morgan Fainberg m...@metacloud.com
  wrote:
 
 In all honesty it doesn't matter which term we go with.  As long as we
  are
 consistent and define the meaning.  I think we can argue intuitive vs
 non-intuitive in this case unto the ground.  I prefer project to
  tenant,
 but beyond being a bit of an overloaded term, I really don't think
  anyone
 will really notice one way or another as long as everything is using
  the
 same terminology.  We could call it grouping-of-openstack-things if
  we
 wanted to (though I might have to pull some hair out if we go to that
 terminology).  However, with all that in mind, we have made the
  choice to move toward
 project (horizon, keystone, OSC, keystoneclient) and have some momentum
 behind that push (plus newer projects already use the project
 nomenclature).   Making a change back to tenant might prove a worse UX
  than
 moving everything else in line (nova I think is the one real major
  hurdle
 to get converted over, and deprecation of keystone v2 API).
 
  FWIW, ceilometer also uses project in our API (although some of our docs
  use
  the terms interchangeably).
 
 
  And, FWIW, Marconi uses project as well.
 
 
 Well project seems to be the way everyone is heading long term.  So we'll
 do this for the Nova
 V3 API.  As others have mentioned, I think the most important thing is that
 we all end up using
 the same terminology (though with the long life of APIs we're stuck with
 both for a few years
 at least).

So, Heat has some work to do as we're still using tenant in various places.

However, I've been thinking: why do API requests have to contain the
project ID at all?  Isn't that something we derive from the token in
auth_token (setting X-Project-Id, which we use to set the project in the
request context)?
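
(For illustration, a minimal sketch of what I mean, assuming the usual
auth_token behaviour of setting X-Project-Id/X-Tenant-Id and friends on
the incoming request; the context class here is made up, not Heat's
actual RequestContext:)

class RequestContext(object):
    def __init__(self, project_id, user_id, auth_token):
        self.project_id = project_id
        self.user_id = user_id
        self.auth_token = auth_token


def context_from_headers(headers):
    # Trust the headers set by the keystone auth_token middleware rather
    # than any project/tenant id supplied in the path or request body.
    project_id = headers.get('X-Project-Id') or headers.get('X-Tenant-Id')
    return RequestContext(project_id=project_id,
                          user_id=headers.get('X-User-Id'),
                          auth_token=headers.get('X-Auth-Token'))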

Maybe I'm missing something obvious, but at the moment, when you create a
heat stack, you specify the tenant ID three times, once in the path, once
in the request body, and again in the context.  I'm wondering why, and if
we can kill the first two?

Clearly this is different for keystone, where the top level of request
granularity is Domain not Project, but for all other services, every
request is scoped to a Project is it not?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tenant or project

2013-11-27 Thread Anne Gentle
Hi Steve,

There was a long thread about dropping project ID/tenant ID from the URL at
http://openstack.markmail.org/thread/c2wi2uwdsye32z7f

Looking back through it, it looks like nova v3 has it removed
https://blueprints.launchpad.net/nova/+spec/v3-api-remove-project-id

Maybe something in there will lend itself to reuse in heat.
Anne


On Wed, Nov 27, 2013 at 8:12 AM, Steven Hardy sha...@redhat.com wrote:

 On Tue, Nov 26, 2013 at 10:17:56PM +1030, Christopher Yeoh wrote:
  On Mon, Nov 25, 2013 at 7:50 PM, Flavio Percoco fla...@redhat.com
 wrote:
   On 24/11/13 12:47 -0500, Doug Hellmann wrote:
  
   On Sun, Nov 24, 2013 at 12:08 AM, Morgan Fainberg m...@metacloud.com
   wrote:
  
  In all honesty it doesn't matter which term we go with.  As long
 as we
   are
  consistent and define the meaning.  I think we can argue intuitive
 vs
  non-intuitive in this case unto the ground.  I prefer project to
   tenant,
  but beyond being a bit of an overloaded term, I really don't
 think
   anyone
  will really notice one way or another as long as everything is
 using
   the
  same terminology.  We could call it grouping-of-openstack-things
 if
   we
  wanted to (though I might have to pull some hair out if we go to
 that
  terminology).  However, with all that in mind, we have made the
   choice to move toward
  project (horizon, keystone, OSC, keystoneclient) and have some
 momentum
  behind that push (plus newer projects already use the project
  nomenclature).   Making a change back to tenant might prove a
 worse UX
   than
  moving everything else in line (nova I think is the one real major
   hurdle
  to get converted over, and deprecation of keystone v2 API).
  
   FWIW, ceilometer also uses project in our API (although some of our
 docs
   use
   the terms interchangeably).
  
  
   And, FWIW, Marconi uses project as well.
  
  
  Well project seems to be the way everyone is heading long term.  So we'll
  do this for the Nova
  V3 API.  As others have mentioned, I think the most important thing is
 that
  we all end up using
  the same terminology (though with the long life of APIs we're stuck with
  both for a few years
  at least).

 So, Heat has some work to do as we're still using tenant in various places.

 However, I've been thinking, why do the APIs requests have to contain the
 project ID at all?  Isn't that something we derive from the token in
 auth_token (setting X-Project-Id, which we use to set the project in the
 request context)?

 Maybe I'm missing something obvious, but at the moment, when you create a
 heat stack, you specify the tenant ID three times, once in the path, once
 in the request body, and again in the context.  I'm wondering why, and if
 we can kill the first two?

 Clearly this is different for keystone, where the top level of request
 granularity is Domain not Project, but for all other services, every
 request is scoped to a Project is it not?

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tenant or project

2013-11-27 Thread Steven Hardy
On Wed, Nov 27, 2013 at 08:17:59AM -0600, Anne Gentle wrote:
 Hi Steve,
 
 There was a long thread about dropping project ID/tenant ID from the URL at
 http://openstack.markmail.org/thread/c2wi2uwdsye32z7f
 
 Looking back through it, it looks like nova v3 has it removed
 https://blueprints.launchpad.net/nova/+spec/v3-api-remove-project-id
 
 Maybe something in there will lend itself to reuse in heat.
 Anne

Thanks Anne, I knew Nova had removed the ID from the path, but hadn't
spotted that they've also removed it from the request body, which is what
I'm thinking may make sense for Heat:

https://github.com/openstack/nova/blob/master/doc/v3/api_samples/servers/server-post-req.json

Looks like we can take a similar path for a future v2 Heat API.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Core pinning

2013-11-27 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 03:50:47PM +0200, Tuomas Paappanen wrote:
 On Tue, 2013-11-19 at 12:52 +, Daniel P. Berrange wrote:
 I think there are several use cases mixed up in your descriptions
 here which should likely be considered independently
 
   - pCPU/vCPU pinning
 
 I don't really think this is a good idea as a general purpose
 feature in its own right. It tends to lead to fairly inefficient
 use of CPU resources when you consider that a large % of guests
 will be mostly idle most of the time. It has a fairly high
 administrative burden to maintain explicit pinning too. This
 feels like a data center virt use case rather than cloud use
 case really.
 
   - Dedicated CPU reservation
 
 The ability of an end user to request that their VM (or their
 group of VMs) gets assigned a dedicated host CPU set to run on.
 This is obviously something that would have to be controlled
 at a flavour level, and in a commercial deployment would carry
 a hefty pricing premium.
 
 I don't think you want to expose explicit pCPU/vCPU placement
 for this though. Just request the high level concept and allow
 the virt host to decide actual placement
 I think pcpu/vcpu pinning could be considered like an extension for
 dedicated cpu reservation feature. And I agree that if we
 exclusively dedicate pcpus for VMs it is inefficient from cloud
 point of view, but in some case, end user may want to be sure(and
 ready to pay) that their VMs have resources available e.g. for
 sudden load peaks.
 
 So, here is my proposal how dedicated cpu reservation would function
 on high level:
 
 When an end user wants VM with nn vcpus which are running on
 dedicated host cpu set, admin could enable it by setting a new
 dedicate_pcpu parameter in a flavor(e.g. optional flavor
 parameter). By default, amount of pcpus and vcpus could be same. And
 as option, explicit vcpu/pcpu pinning could be done by defining
 vcpu/pcpu relations to flavors extra specs(vcpupin:0 0...).
 
 In the virt driver there is two alternatives how to do the pcpu
 sharing 1. all dedicated pcpus are shared with all vcpus(default
 case) or 2. each vcpu has dedicated pcpu(vcpu 0 will be pinned to
 the first pcpu in a cpu set, vcpu 1 to the second pcpu and so on).
 Vcpu/pcpu pinning option could be used to extend the latter case.
 
 In any case, before VM with or without dedicated pcpus is launched
 the virt driver must take care of that the dedicated pcpus are
 excluded from existing VMs and from a new VMs and that there are
 enough free pcpus for placement. And I think minimum amount of pcpus
 for VMs without dedicated pcpus must be configurable somewhere.
 
 Comments?

I still don't believe that vcpu:pcpu pinning is something we want
to do, even with dedicated CPUs. There are always threads in the
host doing work on behalf of the VM that are not related to vCPUs.
For example the main QEMU emulator thread, the QEMU I/O threads,
kernel threads. Other hypervisors have similar behaviour. It is
better to let the kernel / hypervisor scheduler decide how to
balance the competing workloads than to force a fixed and suboptimally
performing vcpu:pcpu mapping. The only time I've seen fixed pinning
give a consistent benefit is when you have NUMA involved and want to
prevent a VM spanning NUMA nodes. Even then you'd be best pinning
to the set of CPUs in a node and then letting the vCPUs float amongst
the pCPUs in that node.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Definition feedback

2013-11-27 Thread Clayton Coleman


- Original Message -
 
 Personally Application gets my vote as the conceptual top level
 unit, with the best combination of some meaning and not too much
 ambiguity.  At least so far.  As Tom notes there is some ambiguity ...
 not sure we can avoid that altogether but worth some brainstorming.
 
 Project is what I had originally proposed to avoid the confusion with
 running app instances (assemblies) but that is even less descriptive and
 has meanings elsewhere.

And for those not following openstack-dev, it looks like there is consensus
developing around tenant -> project in the REST APIs (not yet closed, but I
did not see many objections, if any).

 
 Package feels too low level, I agree.
 
 Product is perhaps an alternative though also not ideal.
 
 --A
 
 
 On 27/11/2013 06:31, Clayton Coleman wrote:
 
  On Nov 26, 2013, at 11:10 PM, Adrian Otto adrian.o...@rackspace.com
  wrote:
 
 
 
  On Nov 26, 2013, at 4:20 PM, Clayton Coleman ccole...@redhat.com
  wrote:
 
 
 
  On Nov 26, 2013, at 6:30 PM, Adrian Otto adrian.o...@rackspace.com
  wrote:
 
  Tom,
 
  On Nov 26, 2013, at 12:09 PM, Tom Deckers (tdeckers)
  tdeck...@cisco.com
  wrote:
 
  Hi All,
 
  Few comments on the Definitions blueprint [1]:
 
  1. I'd propose to alter the term 'Application' to either 'Application
  Package' or 'Package'.  Application isn't very descriptive and can be
  confusing to some with the actual deployed instance, etc.
  I think that's a sensible suggestion. I'm open to using Package, as
  that's an accurate description of what an Application is currently
  conceived of.
  Package is a bit fraught given its overlap with other programming
  concepts:
 
  Python Dev: How do I get my django app up in production?
  Admin: You can create a new package for it.
  Python Dev: You mean with an __init__.py file?
 
  Admin: Go create your package in horizon so you can deploy it.
  Java Dev: Ok, I ran Create Package from eclipse
  (Hours of humorous miscommunication ensue)
 
  Solum Admin: Go update the package for Bob's app.
  Other Admin: I ran yum update but nothing happened...
 
  If application is generic, that might be a good thing.  I'm not sure
  there are too many words that can accurately describe a Java WAR, a Ruby
  on Rails site, a Jenkins server, a massive service oriented
  architecture, or a worker queue processing log data at the same time.
  Server and site are too specific or in use in openstack already,
  program is too singular.
 
  At the end of the day someone has to explain these terms to a large
  number of end users (developers) - would hope we can pick a name that is
  recognizable to most of them immediately, because they're going to pick
  the option that looks the most familiar to them and try it first.
  All good points. This is why I like having these discussions with such a
  diverse group. I am still open to considering different terminology,
  accepting that whatever we pick to call things it will feel like a
  compromise for some of us. Any other thoughts on revisiting this name, or
  should we stick with application for now, and address this with more
  documentation to further clarify the meaning of the various abstracts?
  I think Tom's point on this is valid - the app resource is more of a
  factory or template for your app at first.  However, I could easily
  imagine interaction patterns that imply a cohesive unit over time, but
  those are hard to argue about until we've got more direct use cases and
  client interaction drawn up.
 
  For instance, if someone starts by creating an assembly right off the bat
  the application might not really be relevant, but if the client forces
  people to develop a plan first (either by helping them build it or pulling
  from operator templates) and then iterate on that plan directly (deploy
  app to env), a user might feel like the app is a stronger concept.
 
  2. It should be possible for the package to be self-contained, in order
  to distribute application definitions.   As such, instead of using a
  repo, source code might come with the package itself.  Has this been
  considered as a blueprint: deploy code/binaries that are in a zip,
  rather than a repo?  Maybe Artifact serves this purpose?
  The API should allow you to deploy something directly from a source code
  repo without packaging anything up. It should also allow you to present
  some static deployment artifacts (container image, zip file, etc.) for
  code that has already been built/tested.
 
  3. Artifact has not been called out as a top-level noun.  It probably
  should and get a proper definition.
  Good idea, I will add a definition for it.
 
  4. Plan is described as deployment plan, but then it's also referenced
  in the description of 'Build'.  Plan seems to have a dual meaning,
  which is fine, but that should be called out explicitly.  Plan is not
  synonymous with deployment plan, rather we have a deployment plan and
  a build plan.  Those two together can be 'the plan'.
 

Re: [openstack-dev] [qa] Major change to tempest.conf.sample coming

2013-11-27 Thread Matthew Treinish
On Wed, Nov 13, 2013 at 02:42:34PM -0500, David Kranz wrote:
 This is a heads up that soon we will be auto-generating the
 tempest.conf.sample from the tempest code that uses oslo.config.
 Following in the footsteps of nova, this should reduce bugs around
 failures to keep the config code and the sample conf file in sync
 manually. So when you add a new item to the config code you will no
 longer have to make a corresponding change to the sample conf file.
 This change, along with some tooling, is in
 https://review.openstack.org/#/c/53870/ which is currently blocked
 on a rebase. Obviously once this merges, any pending patches with
 changes to the conf file will have to be updated to remove the
 changes to the conf file.
 

So this change was merged earlier this week, meaning that any proposed commit
that changes a config option (including adding one) will need to regenerate
the sample config file. There was some initial confusion about the procedure
for doing this. If you need to regenerate the tempest sample config, just run:

tools/generate_sample.sh

from the tempest root dir and that will regenerate the sample config file.

Also, the tox pep8 job will check that the config sample is up to date so you
can run the check to see if you need to regenerate the sample config with:

tox -epep8

Thanks,

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Edward Hope-Morley
Moving this to the ml as requested, would appreciate
comments/thoughts/feedback.

So, I recently proposed a small patch to the oslo rpc code (initially in
oslo-incubator then moved to oslo.messaging) which extends the existing
support for limiting the rpc thread pool so that concurrent requests can
be limited based on type/method. The blueprint and patch are here:

https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control

The basic idea is that if you have a server with limited resources you may
want to restrict operations that would impact those resources, e.g. live
migrations on a specific hypervisor or volume formatting on a particular
volume node. This patch allows you, admittedly in a very crude way, to
apply a fixed limit to a set of rpc methods. I would like to know
whether or not people think this sort of thing would be useful, or
whether it alludes to a more fundamental issue that should be dealt with
in a different manner.
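
To make the idea concrete, here is a rough sketch of the kind of
per-method cap I mean (illustrative only, not the actual oslo patch; the
method names and limits are made up):

import functools

from eventlet import semaphore


def limit_concurrency(limits):
    # limits maps rpc method name -> max concurrent calls,
    # e.g. {'live_migration': 2, 'create_volume': 4}
    semaphores = dict((name, semaphore.Semaphore(n))
                      for name, n in limits.items())

    def decorator(func):
        sem = semaphores.get(func.__name__)
        if sem is None:
            return func  # no limit configured for this method

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with sem:  # block until one of the N slots is free
                return func(*args, **kwargs)
        return wrapper
    return decorator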

Thoughts?

Ed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [Neutron] [Fuel] Implementing Elastic Applications

2013-11-27 Thread Serg Melikyan
I have added the Neutron and Fuel teams to this e-mail thread: guys, what are
your thoughts on the subject?

We see three possible ways to implement Elastic Applications in Murano:
using Heat  Neutron LBaaS, Heat  AWS::ElasticLoadBalancing::LoadBalancer
resource and own solution using HAProxy directly (see more details in the
mail-thread).

Previously we were using Heat and the AWS::ElasticLoadBalancing::LoadBalancer
resource, but this approach has certain limitations.

Does the Fuel team have plans to implement support for Neutron LBaaS any time
soon?

The Heat guys suggest Neutron LBaaS as the best long-term solution. Neutron
team - what are your thoughts?


On Fri, Nov 15, 2013 at 6:53 PM, Thomas Hervé the...@gmail.com wrote:

 On Fri, Nov 15, 2013 at 12:56 PM, Serg Melikyan smelik...@mirantis.com
 wrote:
  Murano has several applications which support scaling via load-balancing,
  this applications (Internet Information Services Web Farm, ASP.NET
  Application Web Farm) currently are based on Heat, particularly on
 resource
  called AWS::ElasticLoadBalancing::LoadBalancer, that currently does not
  support specification of any network related parameters.
 
  Inability to specify network related params leads to incorrect behavior
  during deployment in tenants with advanced Quantum deployment
 configuration,
  like Per-tenant Routers with Private Networks and this makes deployment
 of
  our * Web Farm applications to fail.
 
  We need to resolve issues with our * Web Farm, and make this
 applications to
  be reference implementation for elastic applications in Murano.
 
  This issue may be resolved in three ways: via extending configuration
  capabilities of AWS::ElasticLoadBalancing::LoadBalancer, using another
  implementation of load balancing in Heat - OS::Neutron::LoadBalancer or
 via
  implementing own load balancing application (that going to balance other
  apllications), for example based on HAProxy (as all previous ones).
 
  Please, respond with your thoughts on the question: Which
 implementation we
  should use to resolve issue with our Web Farm applications and why?.
 Below
  you can find more details about each of the options.
 
  AWS::ElasticLoadBalancing::LoadBalancer
 
  AWS::ElasticLoadBalancing::LoadBalancer is Amazon Cloud Formation
 compatible
  resource that implements load balancer via hard-coded nested stack that
  deploys and configures HAProxy. This resource requires specific image
 with
  CFN Tools and specific name F17-x86_64-cfntools available in Glance. It's
  look like we miss implementation of only one property in this resource -
  Subnets.
 
  OS::Neutron::LoadBalancer
 
  OS::Neutron::LoadBalancer is another Heat resource that implements load
  balancer. This resource is based on Load Balancer as a Service feature in
  Neutron. OS::Neutron::LoadBalancer is much more configurable and
  sophisticated but underlying implementation makes usage of this resource
  quite complex.
  LBaaS is a set of services installed and configured as a part of Neutron.
  Fuel does not support LBaaS; Devstack has support for LBaaS, but LBaaS
 not
  installed by default with Neutron.
 
  Own, Based on HAProxy
 
  We may implement load-balancer as a regular application in Murano using
  HAProxy. This service may look like our Active Directory application with
  almost same user-expirience. User may create load-balancer inside of the
  environment and join any web-application (with any number of instances)
  directly to load-balancer.
  Load-balancer may be also implemented on Conductor workflows level, this
  implementation strategy not going to change user-experience (in fact we
  changing only underlying implementation details for our * Web Farm
  applications, without introducing new ones).

 Hi,

 I would strongly encourage using OS::Neutron::LoadBalancer. The AWS
 resource is supposed to mirror Amazon capabilities, so any extension,
 while not impossible, is frowned upon. On the other hand the Neutron
 load balancer can be extended to your needs, and being able to use an
 API gives you much more flexibility. It is also in active development and
 will get more interesting features in the future.

 If you're having concerns about deploying Neutron LBaaS, you should
 bring it up with the team, and I'm sure they can improve the
 situation. My limited experience with it in devstack has been really
 good.

 Cheers,

 --
 Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tenant or project

2013-11-27 Thread Dolph Mathews
On Wed, Nov 27, 2013 at 8:12 AM, Steven Hardy sha...@redhat.com wrote:

 On Tue, Nov 26, 2013 at 10:17:56PM +1030, Christopher Yeoh wrote:
  On Mon, Nov 25, 2013 at 7:50 PM, Flavio Percoco fla...@redhat.com
 wrote:
   On 24/11/13 12:47 -0500, Doug Hellmann wrote:
  
   On Sun, Nov 24, 2013 at 12:08 AM, Morgan Fainberg m...@metacloud.com
   wrote:
  
  In all honesty it doesn't matter which term we go with.  As long
 as we
   are
  consistent and define the meaning.  I think we can argue intuitive
 vs
  non-intuitive in this case unto the ground.  I prefer project to
   tenant,
  but beyond being a bit of an overloaded term, I really don't
 think
   anyone
  will really notice one way or another as long as everything is
 using
   the
  same terminology.  We could call it grouping-of-openstack-things
 if
   we
  wanted to (though I might have to pull some hair out if we go to
 that
  terminology).  However, with all that in mind, we have made the
   choice to move toward
  project (horizon, keystone, OSC, keystoneclient) and have some
 momentum
  behind that push (plus newer projects already use the project
  nomenclature).   Making a change back to tenant might prove a
 worse UX
   than
  moving everything else in line (nova I think is the one real major
   hurdle
  to get converted over, and deprecation of keystone v2 API).
  
   FWIW, ceilometer also uses project in our API (although some of our
 docs
   use
   the terms interchangeably).
  
  
   And, FWIW, Marconi uses project as well.
  
  
  Well project seems to be the way everyone is heading long term.  So we'll
  do this for the Nova
  V3 API.  As others have mentioned, I think the most important this is
 that
  we all end up using
  the same terminology (though with the long life of APIs we're stuck with
  the both for a few years
  at least).

 So, Heat has some work to do as we're still using tenant in various places.

 However, I've been thinking, why do the APIs requests have to contain the
 project ID at all?  Isn't that something we derive from the token in
 auth_token (setting X-Project-Id, which we use to set the project in the
 request context)?


+1



 Maybe I'm missing something obvious, but at the moment, when you create a
 heat stack, you specify the tenant ID three times, once in the path, once
 in the request body, and again in the context.  I'm wondering why, and if
 we can kill the first two?


Unless they have different reasons for being, stick with the one in the
request context. Users shouldn't have to care about multi-tenancy once
they've obtained a scoped token, it should just happen.



 Clearly this is different for keystone, where the top level of request
 granularity is Domain not Project, but for all other services, every
 request is scoped to a Project is it not?

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Tim Schnell
Ok, I just re-read my example and that was a terrible example. I'll try
and create the user story first and hopefully answer Clint's and Thomas's
concerns.


If the only use case for adding keywords to the template is to help
organize the template catalog then I would agree the keywords would go
outside of heat. The second purpose for keywords is why I think they
belong in the template so I'll cover that.

Let's assume that an end-user of Heat has spun up 20 stacks and has now
requested help from a Heat Support Operator. In this case, the end-user
did not have a solid naming convention for naming his stacks; they are all
named tim1, tim2, etc. Also, his request to the Support Operator
was really vague, like "My Wordpress stack is broken".

The first thing that the Support Operator would do would be to pull up the
end-user's stacks, either in Horizon or via the heat client API. In both
cases, at the moment, he would then have to either run stack-show on each
stack to look at its description, or ask the end-user for a
stack-id/stack-name. This currently gets the job done, but a better
experience would be for stack-list to already display some keywords about
each stack, so the Support Operator would have to do less digging.

In this case the end-user only has one Wordpress stack so he would have
been annoyed if the Support Operator requested more information from him.
(Or maybe he has more than one wordpress stack, but only one currently in
CREATE_FAILED state).

As a team, we have already encountered this exact situation just doing
team testing, so I imagine that others would find value in a consistent way
to determine at least the general purpose of a stack from the stack-list
page. Putting the stack-description in the stack-list table would take up
too much room from a design standpoint.

Once keywords have been added to the template, part of the blueprint
would be to return them with the stack-list information.

The previous example I attempted to explain is really more of an edge
case, so let's ignore it for now.

Thanks,
Tim

On 11/27/13 3:19 AM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote:

Excerpts from Tim Schnell's message on 27.11.2013 00:44:04:
 From: Tim Schnell tim.schn...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 27.11.2013 00:47
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap



snip

 That is not the use case that I'm attempting to make, let me try again.
 For what it's worth I agree, that in this use case I want a mechanism
to
 tag particular versions of templates your solution makes sense and will
 probably be necessary as the requirements for the template catalog start
 to become defined.

 What I am attempting to explain is actually much simpler than that.
There
 are 2 times in a UI that I would be interested in the keywords of the
 template. When I am initially browsing the catalog to create a new
stack,
 I expect the stacks to be searchable and/or organized by these keywords
 AND when I am viewing the stack-list page I should be able to sort my
 existing stacks by keywords.

 In this second case I am suggesting that the end-user, not the Template
 Catalog Moderator should have control over the keywords that are defined
 in his instantiated stack. So if he does a Stack Update, he is not
 committing a new version of the template back to a git repository, he is
 just updating the definition of the stack. If the user decides that the
 original template defined the keyword as wordpress and he wants to
 revise the keyword to tims wordpress then he can do that without the
 original template moderator knowing or caring about it.

 This could be useful to an end-user who's business is managing client
 stacks on one account maybe. So he could have tims wordpress, tims
 drupal, angus wordpress, angus drupal the way that he updates the
 keywords after the stack has been instantiated is up to him. Then he can
 sort or search his stacks on his own custom keywords.


For me this all sounds like really user specific tagging, so something
that
should really be done outside the template file itself in the template
catalog service. The use case seems about a role that organizes templates
(or later stacks) by some means, which is fine, but then everything is a
decision of the person organizing the templates, and not necessarily a
decision of the template author. So it would be cleaner to keep this
tagging outside the template IMO.

 I agree that the template versioning is a separate use case.

 Tim
 
 
 Tim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 

[openstack-dev] [heat] Is it time for a v2 Heat API?

2013-11-27 Thread Steven Hardy
Hi all,

Recently we've been skirting around the issue of an API version bump in
various reviews, so I thought I'd start a thread where we can discuss the
way forward.

My view is that we probably should look at creating a v2 API during
Icehouse, but we need to discuss what changes make sense, and if we want to
attempt adopting pecan/wsme or just clone what we have and modify the
interfaces.

I've raised this BP:

https://blueprints.launchpad.net/heat/+spec/v2api

And started a WIP wiki page where we can all work on refining what changes
need to be made:

https://wiki.openstack.org/wiki/Heat/Blueprints/V2API

Feel free to add items to the list at the top so we can capture the various
changes folks want.

Thoughts?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Mark McLoughlin
Hi,

On Wed, 2013-11-27 at 14:45 +, Edward Hope-Morley wrote:
 Moving this to the ml as requested, would appreciate
 comments/thoughts/feedback.

Thanks, I too would appreciate input from others.

 So, I recently proposed a small patch to the oslo rpc code (initially in
 oslo-incubator then moved to oslo.messaging) which extends the existing
 support for limiting the rpc thread pool so that concurrent requests can
 be limited based on type/method. The blueprint and patch are here:
 
 https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control
 
 The basic idea is that if you have server with limited resources you may
 want restrict operations that would impact those resources e.g. live
 migrations on a specific hypervisor or volume formatting on particular
 volume node. This patch allows you, admittedly in a very crude way, to
 apply a fixed limit to a set of rpc methods. I would like to know
 whether or not people think this is sort of thing would be useful or
 whether it alludes to a more fundamental issue that should be dealt with
 in a different manner.

Just to be clear for everyone what we're talking about. Your patch means
that if an operator sees that requests to the 'foo' and 'bar' RPC
methods for a given service are overwhelming the capacity of the
machine, you can throttle them by adding e.g.

  concurrency_control_enabled = true
  concurrency_control_actions = foo,bar
  concurrency_control_limit = 2

to the service's configuration file.

If you accept the premise of what's required here, I think you really
want to have e.g. a json policy file which can control the concurrency
limit on each method individually:

{
    compute: {
        baseapi: {
            ping: 10
        },
        : {
            foo: 1,
            bar: 2
        }
    }
}

but that starts feeling pretty ridiculous.

My concern is that we're avoiding addressing a more fundamental issue
here. From IRC:

 markmc avoid specific concurrent operations from consuming too many
  system resources and starving other less resource intensive
  actions
 markmc I'd like us to think about whether we can come up with a
  solution that fixes the problem for people, without them
  having to mess with this type of configuration
 markmc but yeah ... if we can't figure out a way of doing that, there
  is an argument for giving operators and interim workaround
 markmc I wouldn't be in favour of an interim fix without first
  exploring the options for a more fundamental fix
 markmc this isn't easily removable later, because once people start
  to rely on it we would need to put it through a deprecation
  period to remove it
 markmc also, an interim solution like this takes away the pressure on
  us to find a more fundamental solution ... and we may wind up
  never doing that


So, I guess my first question is ... what specific RPC methods have you
seen issues with and feel you need to throttle?

Thanks,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-27 Thread James Slagle
On Wed, Nov 27, 2013 at 8:39 AM, Jaromir Coufal jcou...@redhat.com wrote:

 V0: basic slick installer - flexibility and control first
 - enable user to auto-discover (or manual register) nodes
 - let user decide, which node is going to be controller, which is going to
 be compute or storage
 - associate images with these nodes
 - deploy


I think you've made some good points about the user experience helping drive
the design of what Tuskar is targeting.  I think the conversation around how
to design letting the user pick what to deploy where should continue.  I
wonder, though: would it be possible to not have that in a V0?

Basically, make your V0 above even smaller (eliminating the middle 2
bullets), and just let nova figure it out, the same as what happens now when
we run heat stack-create from the CLI.

I see 2 possible reasons for trying this:
- Gets us to something people can try even sooner
- It may turn out we want this option in the long run ... a "figure it all
  out for me" type of approach, so it wouldn't be wasted effort.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Infra] Support for PCI Passthrough

2013-11-27 Thread Jeremy Stanley
On 2013-11-27 11:18:46 +0800 (+0800), yongli he wrote:
[...]
 if you post -1, you should post testing log somewhere for people
 to debug it, so does third party testing can post testing log to
 the infra log server?

Not at the moment--the infra log server is just an Apache
name-based virtual host on the static.openstack.org VM using
mod_autoindex to serve log files out of the DocumentRoot (plus a
custom filter CGI Sean Dague wrote recently), and our Jenkins has a
shell account it can use to SCP files onto it. We can't really scale
that access control particularly safely to accommodate third
parties, nor do we have an unlimited amount of space on that machine
(we currently only preserve 6 months of test logs, and even with
compression the limit on how much Cinder block storage we can attach
to the VM is coming into sight).

There has been recent discussion about designing a more scalable
build/test artifact publication system backed by Swift object
storage, and suggestion that once it's working we might consider
support for handing out authorization to third-party-specific
containers for the purpose you describe. Until we have developed
something like that, however, you'll need to provide your own place
to publish your logs (something like we use--bog standard Apache on
a public VM--should work fine I'd think?).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-11-27 Thread Adam Young



On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:

Hi Adam,

Based on our discussion over IRC, I have updated the etherpad below with a
proposal for nested role definitions.


Updated.  I made my changes Green.  It isn't easy being green.



https://etherpad.openstack.org/p/service-scoped-role-definition

Please take a look at Proposal (Ayoung) - Nested role definitions; I am sorry
if I did not capture your idea correctly.

Feel free to update the etherpad.

Regards,
Arvind


-Original Message-
From: Tiwari, Arvind
Sent: Tuesday, November 26, 2013 4:08 PM
To: David Chadwick; OpenStack Development Mailing List
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi David,

Thanks for your time and valuable comments. I have replied to your comments and
tried to explain why I am advocating for this BP.

Let me know your thoughts, please feel free to update below etherpad
https://etherpad.openstack.org/p/service-scoped-role-definition

Thanks again,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
Sent: Monday, November 25, 2013 12:12 PM
To: Tiwari, Arvind; OpenStack Development Mailing List
Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi Arvind

I have just added some comments to your blueprint page

regards

David


On 19/11/2013 00:01, Tiwari, Arvind wrote:

Hi,

  


Based on our discussion in design summit , I have redone the service_id
binding with roles BP
https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.
I have added a new BP (link below) along with detailed use case to
support this BP.

https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition

Below etherpad link has some proposals for Role REST representation and
pros and cons analysis

  


https://etherpad.openstack.org/p/service-scoped-role-definition

  


Please take look and let me know your thoughts.

  


It would be awesome if we can discuss it in tomorrow's meeting.

  


Thanks,

Arvind


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 02:45:22PM +, Edward Hope-Morley wrote:
 Moving this to the ml as requested, would appreciate
 comments/thoughts/feedback.
 
 So, I recently proposed a small patch to the oslo rpc code (initially in
 oslo-incubator then moved to oslo.messaging) which extends the existing
 support for limiting the rpc thread pool so that concurrent requests can
 be limited based on type/method. The blueprint and patch are here:
 
 https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control
 
 The basic idea is that if you have server with limited resources you may
 want restrict operations that would impact those resources e.g. live
 migrations on a specific hypervisor or volume formatting on particular
 volume node. This patch allows you, admittedly in a very crude way, to
 apply a fixed limit to a set of rpc methods. I would like to know
 whether or not people think this is sort of thing would be useful or
 whether it alludes to a more fundamental issue that should be dealt with
 in a different manner.

Based on this description of the problem I have some observations

 - I/O load from the guest OS itself is just as important to consider
   as I/O load from management operations Nova does for a guest. Both
   have the capability to impose denial-of-service on a host. IIUC, the
   flavour specs have the ability to express resource constraints for
   the virtual machines to prevent a guest OS initiated DOS-attack

 - I/O load from live migration is attributable to the running
   virtual machine. As such I'd expect that any resource controls
   associated with the guest (from the flavour specs) should be
   applied to control the load from live migration.

   Unfortunately life isn't quite this simple with KVM/libvirt
   currently. For networking we've associated each virtual TAP
   device with traffic shaping filters. For migration you have
   to set a bandwidth cap explicitly via the API. For network
   based storage backends, you don't directly control network
   usage, but instead I/O operations/bytes. Ultimately though
   there should be a way to enforce limits on anything KVM does,
   similarly I expect other hypervisors can do the same

 - I/O load from operations that Nova does on behalf of a guest
   that may be running, or may yet to be launched. These are not
   directly known to the hypervisor, so existing resource limits
   won't apply. Nova however should have some capability for
   applying resource limits to I/O intensive things it does and
   somehow associate them with the flavour limits  or some global
   per user cap perhaps.

 Thoughts?

Overall I think that trying to apply caps on the number of API calls
that can be made is not really a credible way to avoid users inflicting
a DOS attack on the host OS, not least because it does nothing to control
what a guest OS itself may do. If you do caps based on the number of API calls
in a time period, you end up having to do an extremely pessimistic
calculation - you basically have to consider the worst case for any single
API call, even if most don't hit the worst case. This is going to hurt the
scalability of the system as a whole IMHO.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Zane Bitter

On 26/11/13 22:24, Tim Schnell wrote:

Use Case #1
I see valid value in being able to group templates based on a type or


+1, me too.


keyword. This would allow any client, Horizon or a Template Catalog
service, to better organize and handle display options for an end-user.


I believe these are separate use cases and deserve to be elaborated as 
such. If one feature can help with both that's great, but we're putting 
the cart before the horse if we jump in and implement the feature 
without knowing why.


Let's consider first a catalog of operator-provided templates as 
proposed (IIUC) by Keith. It seems clear to me in that instance the 
keywords are a property of the template's position in the catalog, and 
not of the template itself.


Horizon is a slightly different story. Do we even allow people to upload 
a bunch of templates and store them in Horizon? If not then there 
doesn't seem much point in this feature for current Horizon users. (And 
if we do, which would surprise me greatly, then the proposed 
implementation doesn't seem that practical - would we want to retrieve 
and parse every template to get the keyword?)


In the longer term, there seems to be a lot of demand for some sort of 
template catalog service, like Glance for templates. (I disagree with 
Clint that it should actually _be_ Glance the project as we know it, for 
the reasons Steve B mentioned earlier, but the concept is right.) And 
this brings us back to a very similar situation to the operator-provided 
template catalog (indeed, that use case would likely be subsumed by this 
one).



I believe that Ladislav initially proposed a solution that will work here.
So I will second a proposal that we add a new top-level field to the HOT
specification called keywords that contains this template type.

keywords: wordpress, mysql, etc.


+1. If we decide that the template is the proper place for these tags 
then this is the perfect way to do it IMO (assuming that it's actually a 
list, not a comma-separated string). It's a standard format that we can 
document and any tool can recognise; the name keywords describes 
exactly what it does, and there's no confusion with tags in Nova and EC2.



Use Case #2
The template author should also be able to explicitly define a help string
that is distinct and separate from the description of an individual


This is not a use case, it's a specification. There seems to be a lot of 
confusion about the difference, so let me sum it up:


Why - Use Case
What - Specification
How - Design Document (i.e. Code)

I know this all sounds very top-down, and believe me that's not my 
philosophy. But design is essentially a global optimisation problem - we 
need to see the whole picture to properly evaluate any given design (or, 
indeed, to find an alternate design), and you're only giving us one 
small piece near the very bottom.


A use case is something that a user of Heat needs to do.

An example of a use case would be: The user needs to see two types of 
information in Horizon that are styled differently/shown at different 
times/other (please specify) so that they can __.


I'm confident that a valid use case _does_ actually exist here, but you 
haven't described it yet.



parameter. An example where this use case originated was with Nova
Keypairs. The description of a keypair parameter might be something like,
"This is the name of a nova key pair that will be used to ssh to the
compute instance." A help string for this same parameter would be, "To
learn more about nova keypairs click on this help article."


It's not at all clear to me that these are different pieces of 
information. They both describe the parameter and they're both there to 
help the user. It would be easier to figure out what the right thing 
would be if you gave an example of what you had in mind for how Horizon 
should display these. Even without that, though, it seems to me that the 
help is just adding more detail to the description.


One idea I suggested in the review comments is to just interpret the 
first paragraph as the description and any subsequent paragraphs as the 
help. There is ample precedent for that kind of interpretation in things 
like Python docstrings and Git commit messages.
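
A naive sketch of that interpretation, just to make it concrete (splitting
on the first blank line, docstring-style; this is not proposing an actual
implementation):

def split_description(text):
    # First paragraph -> description, everything after the first blank
    # line -> help (or None if there is no second paragraph).
    first, _, rest = text.strip().partition('\n\n')
    description = ' '.join(first.split())
    help_text = ' '.join(rest.split()) or None
    return description, help_text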



I propose adding an additional field to the parameter definition:

Parameters:
  parameter name:
    description: This is the name of a nova key pair that will be used to
      ssh to the compute instance.
    help: To learn more about nova key pairs click on this
      <a href="/some/url/">help article</a>.


(Side note: you're seriously going to let users stick HTML in the 
template and then have the dashboard display it?  Yikes.)



Use Case #3
Grouping parameters would help the client make smarter decisions about how
to display the parameters for input to the end-user. This is so that all
parameters related to some database resource can be intelligently 

Re: [openstack-dev] [Nova] Proposal to re-add Dan Prince to nova-core

2013-11-27 Thread Russell Bryant
On 11/26/2013 02:32 PM, Russell Bryant wrote:
 Greetings,
 
 I would like to propose that we re-add Dan Prince to the nova-core
 review team.
 
 Dan Prince has been involved with Nova since early in OpenStack's
 history (Bexar timeframe).  He was a member of the nova-core review team
 from May 2011 to June 2013.  He has since picked back up with nova
 reviews [1].  We always say that when people leave nova-core, we would
 love to have them back if they are able to commit the time in the
 future.  I think this is a good example of that.
 
 Please respond with +1s or any concerns.
 
 Thanks,
 
 [1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
 

Thanks for the feedback, everyone,  Consider him fast-tracked back on to
nova-core.  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Definition feedback

2013-11-27 Thread Adrian Otto

On Nov 27, 2013, at 3:23 AM, Tom Deckers (tdeckers) tdeck...@cisco.com
 wrote:

 -Original Message-
 From: Adrian Otto [mailto:adrian.o...@rackspace.com]
 Sent: Wednesday, November 27, 2013 0:28
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Solum] Definition feedback
 
 Tom,
 
 On Nov 26, 2013, at 12:09 PM, Tom Deckers (tdeckers)
 tdeck...@cisco.com
 wrote:
 
 Hi All,
 
 Few comments on the Definitions blueprint [1]:
 
 1. I'd propose to alter the term 'Application' to either 'Application 
 Package'
 or 'Package'.  Application isn't very descriptive and can be confusing to 
 some
 with the actual deployed instance, etc.
 
 I think that's a sensible suggestion. I'm open to using Package, as that's an
 accurate description of what an Application is currently conceived of.
 
 2. It should be possible for the package to be self-contained, in order to
 distribute application definitions.   As such, instead of using a repo, 
 source
 code might come with the package itself.  Has this been considered as a
 blueprint: deploy code/binaries that are in a zip, rather than a repo?  Maybe
 Artifact serves this purpose?
 
 The API should allow you to deploy something directly from a source code
 repo without packaging anything up. It should also allow you to present some
 static deployment artifacts (container image, zip file, etc.) for code that 
 has
 already been built/tested.
 
 3. Artifact has not been called out as a top-level noun.  It probably should
 and get a proper definition.
 
 Good idea, I will add a definition for it.
 
 4. Plan is described as deployment plan, but then it's also referenced in 
 the
 description of 'Build'.  Plan seems to have a dual meaning, which is fine, 
 but
 that should be called out explicitly.  Plan is not synonymous with deployment
 plan, rather we have a deployment plan and a build plan.  Those two together
 can be 'the plan'.
 
 Currently Plan does have a dual meaning, but it may make sense to split each
 out if they are different enough. I'm open to considering ideas on this.
 
 5. Operations.  The definition states that definitions can be added to a
 Service too.  Since the Service is provided by the platform, I assume it 
 already
 comes with operations predefined.
 
 Yes, the service provider owns services that are provided by the Platform, 
 and
 can extend them, where users may not. However, a user may register his/her
 own Services within the context of a given tenant account, and those can be
 extended and managed. In that case, you can actually connect Operations to
 Services as a tenant. So this is really a question of what scope the Service
 belongs to.
 
 6. Operations. A descriptor should exist that can be used for registration 
 of
 the deployed assembly into a catalog.  The descriptor should contain basic
 information about the exposed functional API and management API (e.g.
 Operations too).
 
 An Assembly is a running group of cloud resources (a whole networked
 application). A listing of those is exposed through the Assemblies resource.
 
 A Plan is a rubber stamp for producing new Assemblies, and can also be listed
 through the Plans resource. Any plan can be easily converted to an Assembly
 with an API call.
 
 Were you thinking that we should have a catalog beyond these listings?
 Please elaborate on what you have in mind. I agree that any catalog should
 provide a way to resolve through to a resources registered Operations. If the
 current design prohibits this in any way, then I'd like to revise that.
 
 I understand the an Assembly can be a larger group of components.  However, 
 those together exist to provide a capability which we want to capture in some 
 catalog so the capability becomes discoverable.  

I think I understand what you are getting at now. This is where the concept of 
Services come in. The Services resource is a collection of Service resources, 
each of which is some service/thing/capability that either the platform 
provider is offering, or that the tenant/user has created. For example, suppose 
I have an Adrian API that I create a Plan for, and launch into an Assembly. 
Now I can take that Assembly, and list its connection specifics in a Service. 
Now that Assembly can be found/used by other Assemblies. So the next Blah App 
that comes along can call for a service named Adrian API and will be wired up 
to that existing running assembly. If I added support for multi-tenancy to 
Adrian App, I might actually get a slice of that service using a particular 
tenant/app id.

 I'm not sure how the 'listing' mechanism works out in practice.  If this can 
 be used in an enterprise ecosystem to discover services then that's fine.  We 
 should capture a work item to flesh out discoverability of both Applications and 
 Assemblies.  I make that distinction because both scenarios should be 
 provided.

The Plans resource is where you could browse Applications, and the Services 
resource 

Re: [openstack-dev] [Nova] Proposal to add Matt Riedemann to nova-core

2013-11-27 Thread Russell Bryant
On 11/22/2013 03:53 PM, Russell Bryant wrote:
 Greetings,
 
 I would like to propose adding Matt Riedemann to the nova-core review team.
 
 Matt has been involved with nova for a long time, taking on a wide range
 of tasks.  He writes good code.  He's very engaged with the development
 community.  Most importantly, he provides good code reviews and has
 earned the trust of other members of the review team.
 
 https://review.openstack.org/#/dashboard/6873
 https://review.openstack.org/#/q/owner:6873,n,z
 https://review.openstack.org/#/q/reviewer:6873,n,z
 
 Please respond with +1/-1, or any further comments.

Thanks for the feedback, everyone.  Matt has been added to the team.
Welcome!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Maintaining backwards compatibility for RPC calls

2013-11-27 Thread yunhong jiang
On Wed, 2013-11-27 at 12:38 +, Day, Phil wrote:
 Doesn’t this mean that you can’t deploy Icehouse (3.0) code into a
 Havana system but leave the RPC version pinned at Havana until all of
 the code has been updated ?  

I think it's because this change is for the compute manager, not for
conductor or other service.

Thanks
--jyh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Tim Schnell

On 11/27/13 10:39 AM, Zane Bitter zbit...@redhat.com wrote:

On 26/11/13 23:44, Tim Schnell wrote:
 A template-validate call would need to return all the group and
ordering
 information, but otherwise heat can ignore this extra data.
 I agree with all of your modifications, although bringing up the
 template-validate call reminded me that the implementation of this use
 case should also imply a new REST endpoint specifically for returning
 parameters. It seems like the current implementation in Horizon is a bit
 hack-y by calling template-validate instead of something like
 get-parameters.

That's inherited from how the cfn-compatible API does it, but it doesn't
seem hacky to me. And it matches exactly how the UI works - you upload a
template, it validates it and gets back the list of parameters.

This raises a good point though - it ought to be Heat that determines
the order of the parameters and returns them in a list (like heat-cfn
does - though the list is currently randomly ordered). Clients need to
treat the template as a black box, since the format changes over time.

I would be fine with Heat returning the parameters already ordered
(with the ordering taking parameter grouping into account). I don't have a
strong opinion about the naming convention of the REST API call that I
have to make to get the parameters but I would also like to start
discussing including the current parameter values in whatever api call
this ends up being. For a Stack Update, I would like to populate the
parameters with previously entered values, although this has further
implications for values that are encrypted, like passwords.

Horizon currently does not implement a Stack Update and this has been one
of the major sticking points for me trying to figure out how to provide an
interface for it.

I would imagine that adding the parameter values to the template validate
response would make it more obvious that it may warrant a separate
endpoint. Then template_validate can (should?) simply return a boolean
value.
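
To make that concrete, the sort of shape I picture such a call returning is
roughly the following (purely illustrative, nothing about the field names is
settled):

    parameter_groups:
      - label: Database settings
        parameters: [db_name, db_password]
    parameters:
      db_name:
        type: string
        description: Database name
        value: wordpress
      db_password:
        type: string
        description: Database password
        value: '******'    # hidden/encrypted values would need masking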

Tim

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] [Security]

2013-11-27 Thread Paul Montgomery
I created some relatively high level security best practices that I
thought would apply to Solum.  I don't think it is ever too early to get
mindshare around security so that developers keep that in mind throughout
the project.  When a design decision point could easily go two ways,
perhaps these guidelines can sway direction towards a more secure path.

This is a living document, please contribute and let's discuss topics.
I've worn a security hat in various jobs so I'm always interested. :)
Also, I realize that many of these features may not directly be
encapsulated by Solum but rather components such as KeyStone or Horizon.

https://wiki.openstack.org/wiki/Solum/Security

I would like to build on this list and create blueprints or tasks based on
topics that the community agrees upon.  We will also need to start
thinking about timing of these features.

Is there an OpenStack standard for code comments that highlight potential
security issues to investigate at a later point?  If not, what would the
community think of making a standard for Solum?  I would like to identify
these areas early while the developer is still engaged/thinking about the
code.  It is always harder to go back later and find everything in my
experience.  Perhaps something like:

# (SECURITY) This exception may contain database field data which could
expose passwords to end users unless filtered.

Or

# (SECURITY) The admin password is read in plain text from a configuration
file.  We should fix this later.
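
In context, a marker like that might end up looking something like this (a
made-up snippet purely to illustrate the convention, not actual Solum code):

    import logging

    LOG = logging.getLogger(__name__)

    def read_admin_password(path):
        # (SECURITY) The admin password is read in plain text from a
        # configuration file.  We should fix this later.
        with open(path) as conf:
            return conf.readline().strip()

    def log_failure(exc):
        # (SECURITY) This exception may contain database field data which
        # could expose passwords to end users unless filtered.
        LOG.error('operation failed: %s', exc)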

Regards,
Paulmo


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Fox, Kevin M
This use case is sort of a provenance case. Where did the stack come from so I 
can find out more about it.

You could put a git commit field in the template itself but then it would be 
hard to keep updated.

How about the following:

Extend heat to support setting a scmcommit metadata item on stack create. 
Heat will ignore this but make it available for retrieval.

I'm guessing any catalog will have some sort of scm managing the templates. 
When you go and submit the stack, you can set the metadata and know exactly 
when and where and all the history of the stack by just referring to the git 
commit string.

This should tell you far more than a set of strings a user could set for their 
own use, confusing others.

Thanks,
Kevin


From: Tim Schnell [tim.schn...@rackspace.com]
Sent: Wednesday, November 27, 2013 7:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements  
roadmap

Ok, I just re-read my example and that was a terrible example. I'll try
and create the user story first and hopefully answer Clint's and Thomas's
concerns.


If the only use case for adding keywords to the template is to help
organize the template catalog then I would agree the keywords would go
outside of heat. The second purpose for keywords is why I think they
belong in the template so I'll cover that.

Let's assume that an end-user of Heat has spun up 20 stacks and has now
requested help from a Support Operator of heat. In this case, the end-user
did not have a solid naming convention for naming his stacks, they are all
named tim1, tim2, etc. And also his request to the Support Operator
was really vague, like My Wordpress stack is broken.

The first thing that the Support Operator would do, would be to pull up
end-user's stacks in either Horizon or via the heat client api. In both
cases, at the moment, he would then have to either stack-show on each
stack to look at the description of the stack or ask the end-user for a
stack-id/stack-name. This currently gets the job done but a better
experience would be for stack-list to already display some keywords about
each stack so the Support Operator would have to do less digging.

In this case the end-user only has one Wordpress stack so he would have
been annoyed if the Support Operator requested more information from him.
(Or maybe he has more than one wordpress stack, but only one currently in
CREATE_FAILED state).

As a team, we have already encountered this exact situation just doing
team testing so I imagine that others would find value in a consistent way
to determine at least a general purpose of a stack, from the stack-list
page. Putting the stack-description in the stack-list table would take up
too much room from a design standpoint.

Once keywords has been added to the template then part of the blueprint
would be to return it with the stack-list information.

The previous example I attempted to explain is really more of an edge
case, so let's ignore it for now.

Thanks,
Tim

On 11/27/13 3:19 AM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote:

Excerpts from Tim Schnell's message on 27.11.2013 00:44:04:
 From: Tim Schnell tim.schn...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 27.11.2013 00:47
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap



snip

 That is not the use case that I'm attempting to make, let me try again.
 For what it's worth I agree, that in this use case I want a mechanism
to
 tag particular versions of templates your solution makes sense and will
 probably be necessary as the requirements for the template catalog start
 to become defined.

 What I am attempting to explain is actually much simpler than that.
There
 are 2 times in a UI that I would be interested in the keywords of the
 template. When I am initially browsing the catalog to create a new
stack,
 I expect the stacks to be searchable and/or organized by these keywords
 AND when I am viewing the stack-list page I should be able to sort my
 existing stacks by keywords.

 In this second case I am suggesting that the end-user, not the Template
 Catalog Moderator should have control over the keywords that are defined
 in his instantiated stack. So if he does a Stack Update, he is not
 committing a new version of the template back to a git repository, he is
 just updating the definition of the stack. If the user decides that the
 original template defined the keyword as wordpress and he wants to
 revise the keyword to tims wordpress then he can do that without the
 original template moderator knowing or caring about it.

 This could be useful to an end-user who's business is managing client
 stacks on one account maybe. So he could have tims wordpress, tims
 drupal, angus wordpress, angus drupal the way that he updates the
 keywords after the stack has been 

Re: [openstack-dev] [heat] Is it time for a v2 Heat API?

2013-11-27 Thread Zane Bitter

On 27/11/13 16:27, Steven Hardy wrote:

I've raised this BP:

https://blueprints.launchpad.net/heat/+spec/v2api

And started a WIP wiki page where we can all work on refining what changes
need to be made:

https://wiki.openstack.org/wiki/Heat/Blueprints/V2API


The current (v1) API design is based around having the tenant in the URL 
and stack names that are unique per-tenant. So a URL of the form:


GET /v1/{tenant_id}/stacks/{stack_name}

uniquely identifies a stack (even though it redirects to a longer URL 
including the stack_id).


Your proposal on the wiki page retains this URL but removes the tenant ID:

   GET /v2/stacks/{stack_name}

This now no longer uniquely identifies a stack, and is therefore not ReST.

So if we drop the tenant_id then we should also start addressing stacks 
only by UUID:


GET v2/stacks/{stack_id}

and do lookups by name using stack list or something. However other 
projects do it.



This seems clearly worse than the API we have, but it would at least be 
consistent with other OpenStack projects. For them this API makes more 
sense, because they don't enforce unique names for things in a tenant 
and probably couldn't start if they wanted to. This was IMHO a mistake, 
and one that Heat avoided, but I accept that it's hard to escape gravity 
on this. I, for one, welcome our new UUID overlords.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Edmund Troche
You bring up a good point Thomas. I think some of the discussions are
mixing template and stack perspectives, they are not the same thing, stack
== instance of a template. There likely is room for tagging stacks, all
under the control of the user and meant for user consumption, vs the long
going discussion on template-level metadata. This may be yet another use
case ;-)

  
Edmund Troche
Senior Software Engineer
IBM Software Group | 11501 Burnet Rd. | Austin, TX 78758
+1.512.286.8977 | T/L 363.8977 | edmund.tro...@us.ibm.com


From:   Thomas Spatzier thomas.spatz...@de.ibm.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   11/27/2013 11:00 AM
Subject:Re: [openstack-dev] [heat][horizon]Heat UI related requirements
   roadmap



Thanks, that clarified the use case a bit. But looking at the use case now,
isn't this stack tagging instead of template tagging?
I.e. assume that for each stack a user creates, he/she can assign one or
more tags so you can do better queries to find stacks later?

Regards,
Thomas

Tim Schnell tim.schn...@rackspace.com wrote on 27.11.2013 16:24:18:
 From: Tim Schnell tim.schn...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 27.11.2013 16:28
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap

 Ok, I just re-read my example and that was a terrible example. I'll try
 and create the user story first and hopefully answer Clint's and Thomas's
 concerns.


 If the only use case for adding keywords to the template is to help
 organize the template catalog then I would agree the keywords would go
 outside of heat. The second purpose for keywords is why I think they
 belong in the template so I'll cover that.

 Let's assume that an end-user of Heat has spun up 20 stacks and has now
 requested help from a Support Operator of heat. In this case, the
end-user
 did not have a solid naming convention for naming his stacks, they are
all
 named tim1, tim2, etc. And also his request to the Support Operator
 was really vague, like My Wordpress stack is broken.

 The first thing that the Support Operator would do, would be to pull up
 end-user's stacks in either Horizon or via the heat client api. In both
 cases, at the moment, he would then have to either stack-show on each
 stack to look at the description of the stack or ask the end-user for a
 stack-id/stack-name. This currently gets the job done but a better
 experience would be for stack-list to already display some keywords about
 each stack so the Support Operator would have to do less digging.

 In this case the end-user only has one Wordpress stack so he would have
 been annoyed if the Support Operator requested more information from him.
 (Or maybe he has more than one wordpress stack, but only one currently in
 CREATE_FAILED state).

 As a team, we have already encountered this exact situation just doing
 team testing so I imagine that others would find value in a consistent
way
 to determine at least a general purpose of a stack, from the stack-list
 page. Putting the stack-description in the stack-list table would take up
 too much room from a design standpoint.

 Once keywords has been added to the template then part of the blueprint
 would be to return it with the stack-list information.

 The previous example I attempted to explain is really more of an edge
 case, so let's ignore it for now.

 Thanks,
 Tim

 On 11/27/13 3:19 AM, Thomas Spatzier thomas.spatz...@de.ibm.com
wrote:

 Excerpts from Tim Schnell's message on 27.11.2013 00:44:04:
  From: Tim Schnell tim.schn...@rackspace.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 27.11.2013 00:47
  Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
  requirements  roadmap
 
 
 
 snip
 
  That is not the use case that I'm attempting to make, let me try
again.
  For what it's worth I agree, that in this use case I want a mechanism
 to
  tag particular versions of templates your solution makes sense and
will
  probably be necessary as the requirements for the template catalog
start
  to become defined.
 
  What I am 

Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-11-27 Thread Salvatore Orlando
Thanks Maru,

This is something my team had on the backlog for a while.
I will push some patches to contribute towards this effort in the next few
days.

Let me know if you're already thinking of targeting the completion of this
job for a specific deadline.

Salvatore


On 27 November 2013 17:50, Maru Newby ma...@redhat.com wrote:

 Just a heads up, the console output for neutron gate jobs is about to get
 a lot noisier.  Any log output that contains 'ERROR' is going to be dumped
 into the console output so that we can identify and eliminate unnecessary
 error logging.  Once we've cleaned things up, the presence of unexpected
 (non-whitelisted) error output can be used to fail jobs, as per the
 following Tempest blueprint:

 https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors

 I've filed a related Neutron blueprint for eliminating the unnecessary
 error logging:


 https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error

 I'm looking for volunteers to help with this effort, please reply in this
 thread if you're willing to assist.

 Thanks,


 Maru
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2013-11-27 08:09:33 -0800:
 In the longer term, there seems to be a lot of demand for some sort of 
 template catalog service, like Glance for templates. (I disagree with 
 Clint that it should actually _be_ Glance the project as we know it, for 
 the reasons Steve B mentioned earlier, but the concept is right.) And 
 this brings us back to a very similar situation to the operator-provided 
 template catalog (indeed, that use case would likely be subsumed by this 
 one).
 

Could you provide a stronger link to Steve B's comments, I think I
missed them. Thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Tim Schnell

On 11/27/13 10:09 AM, Zane Bitter zbit...@redhat.com wrote:

On 26/11/13 22:24, Tim Schnell wrote:
 Use Case #1
 I see valid value in being able to group templates based on a type or

+1, me too.

 keyword. This would allow any client, Horizon or a Template Catalog
 service, to better organize and handle display options for an end-user.

I believe these are separate use cases and deserve to be elaborated as
such. If one feature can help with both that's great, but we're putting
the cart before the horse if we jump in and implement the feature
without knowing why.

Let's consider first a catalog of operator-provided templates as
proposed (IIUC) by Keith. It seems clear to me in that instance the
keywords are a property of the template's position in the catalog, and
not of the template itself.

Horizon is a slightly different story. Do we even allow people to upload
a bunch of templates and store them in Horizon? If not then there
doesn't seem much point in this feature for current Horizon users. (And
if we do, which would surprise me greatly, then the proposed
implementation doesn't seem that practical - would we want to retrieve
and parse every template to get the keyword?)

Correct, at the moment, Horizon has no concept of a template catalog of
any kind.

Here is my use case for including this in the template for Horizon:
(I'm going to start moving these to the wiki that Steve Baker setup)

Let's assume that an end-user of Heat has spun up 20 stacks and has now
requested help from a Support Operator of heat. In this case, the end-user
did not have a solid naming convention for naming his stacks, they are all
named tim1, tim2, etc. And also his request to the Support Operator
was really vague, like My Wordpress stack is broken.

The first thing that the Support Operator would do, would be to pull up
end-user's stacks in either Horizon or via the heat client cli. In both
cases, at the moment, he would then have to either stack-show on each
stack to look at the description of the stack or ask the end-user for a
stack-id/stack-name. This currently gets the job done but a better
experience would be for stack-list to already display some keywords about
each stack so the Support Operator would have to do less digging.

In this case the end-user only has one Wordpress stack so he would have
been annoyed if the Support Operator requested more information from him.
(Or maybe he has more than one wordpress stack, but only one currently in
CREATE_FAILED state).

As a team, we have already encountered this exact situation just doing
team testing so I imagine that others would find value in a consistent way
to determine at least a general purpose of a stack, from the stack-list
page. Putting the stack-description in the stack-list table would take up
too much room from a design standpoint.

Once keywords has been added to the template then part of the blueprint
would be to return it with the stack-list information.




In the longer term, there seems to be a lot of demand for some sort of
template catalog service, like Glance for templates. (I disagree with
Clint that it should actually _be_ Glance the project as we know it, for
the reasons Steve B mentioned earlier, but the concept is right.) And
this brings us back to a very similar situation to the operator-provided
template catalog (indeed, that use case would likely be subsumed by this
one).

 I believe that Ladislav initially proposed a solution that will work
here.
 So I will second a proposal that we add a new top-level field to the HOT
 specification called keywords that contains this template type.

  keywords: wordpress, mysql, etc.

+1. If we decide that the template is the proper place for these tags
then this is the perfect way to do it IMO (assuming that it's actually a
list, not a comma-separated string). It's a standard format that we can
document and any tool can recognise, the name keywords describes
exactly what it does and there's no confusion with tags in Nova and EC2.
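
i.e. something along these lines (illustrative only - keywords is of course
the proposed field, not part of HOT today):

    heat_template_version: 2013-05-23

    description: Single-node Wordpress with a local MySQL database.

    keywords:
      - wordpress
      - mysql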

 Use Case #2
 The template author should also be able to explicitly define a help
string
 that is distinct and separate from the description of an individual

This is not a use case, it's a specification. There seems to be a lot of
confusion about the difference, so let me sum it up:

Why - Use Case
What - Specification
How - Design Document (i.e. Code)

I know this all sounds very top-down, and believe me that's not my
philosophy. But design is essentially a global optimisation problem - we
need to see the whole picture to properly evaluate any given design (or,
indeed, to find an alternate design), and you're only giving us one
small piece near the very bottom.

A use case is something that a user of Heat needs to do.

An example of a use case would be: The user needs to see two types of
information in Horizon that are styled differently/shown at different
times/other (please specify) so that they can __.

I'm confident that a valid use case _does_ 

Re: [openstack-dev] [heat] Is it time for a v2 Heat API?

2013-11-27 Thread Jay Pipes

On 11/27/2013 12:02 PM, Zane Bitter wrote:

On 27/11/13 16:27, Steven Hardy wrote:

I've raised this BP:

https://blueprints.launchpad.net/heat/+spec/v2api

And started a WIP wiki page where we can all work on refining what
changes
need to be made:

https://wiki.openstack.org/wiki/Heat/Blueprints/V2API


The current (v1) API design is based around having the tenant in the URL
and stack names that are unique per-tenant. So a URL of the form:

 GET /v1/{tenant_id}/stacks/{stack_name}

uniquely identifies a stack (even though it redirects to a longer URL
including the stack_id).

Your proposal on the wiki page retains this URL but removes the tenant ID:

GET /v2/stacks/{stack_name}

This now no longer uniquely identifies a stack, and is therefore not ReST.


It would be along with a Vary: X-Project-Id header, no?


So if we drop the tenant_id then we should also start addressing stacks
only by UUID:

 GET v2/stacks/{stack_id}

and do lookups by name using stack list or something. However other
projects do it.


++ for this anyway :)


This seems clearly worse than the API we have, but it would at least be
consistent with other OpenStack projects. For them this API makes more
sense, because they don't enforce unique names for things in a tenant
and probably couldn't start if they wanted to. This was IMHO a mistake,
and one that Heat avoided, but I accept that it's hard to escape gravity
on this. I, for one, welcome our new UUID overlords.


LOL. Actually, in Nova and other places it's now moving to UNIQUE 
CONSTRAINT (name, project, deleted) ;)
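
Roughly like this in SQLAlchemy, where deleted gets set to the row's id on
soft delete so the (name, project) pair can be reused (my sketch, not a quote
from Nova's models):

    from sqlalchemy import Column, Integer, String, UniqueConstraint
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Stack(Base):
        __tablename__ = 'stack'
        __table_args__ = (
            UniqueConstraint('name', 'project_id', 'deleted',
                             name='uniq_stack_name_per_project'),
        )
        id = Column(Integer, primary_key=True)
        name = Column(String(255))
        project_id = Column(String(64))
        # 0 while live; set to the row id on soft delete so the name frees up
        deleted = Column(Integer, default=0)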


-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Tim Schnell
Yes, I guess you could phrase it as stack tagging, focusing on the
template was my attempt to solve 2 use cases with one solution but I'm
open to alternatives. Are you suggesting that we build the ability to add
tags to stacks that exist outside of the template? I guess add them
directly to Heat's database?

Tim Schnell
Software Developer
Rackspace





On 11/27/13 10:53 AM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote:

Thanks, that clarified the use case a bit. But looking at the use case
now,
isn't this stack tagging instead of template tagging?
I.e. assume that for each stack a user creates, he/she can assign one or
more tags so you can do better queries to find stacks later?

Regards,
Thomas

Tim Schnell tim.schn...@rackspace.com wrote on 27.11.2013 16:24:18:
 From: Tim Schnell tim.schn...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 27.11.2013 16:28
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap

 Ok, I just re-read my example and that was a terrible example. I'll try
 and create the user story first and hopefully answer Clint's and
Thomas's
 concerns.


 If the only use case for adding keywords to the template is to help
 organize the template catalog then I would agree the keywords would go
 outside of heat. The second purpose for keywords is why I think they
 belong in the template so I'll cover that.

 Let's assume that an end-user of Heat has spun up 20 stacks and has now
 requested help from a Support Operator of heat. In this case, the
end-user
 did not have a solid naming convention for naming his stacks, they are
all
 named tim1, tim2, etc. And also his request to the Support Operator
 was really vague, like My Wordpress stack is broken.

 The first thing that the Support Operator would do, would be to pull up
 end-user's stacks in either Horizon or via the heat client api. In both
 cases, at the moment, he would then have to either stack-show on each
 stack to look at the description of the stack or ask the end-user for a
 stack-id/stack-name. This currently gets the job done but a better
 experience would be for stack-list to already display some keywords
about
 each stack so the Support Operator would have to do less digging.

 In this case the end-user only has one Wordpress stack so he would have
 been annoyed if the Support Operator requested more information from
him.
 (Or maybe he has more than one wordpress stack, but only one currently
in
 CREATE_FAILED state).

 As a team, we have already encountered this exact situation just doing
 team testing so I imagine that others would find value in a consistent
way
 to determine at least a general purpose of a stack, from the stack-list
 page. Putting the stack-description in the stack-list table would take
up
 too much room from a design standpoint.

 Once keywords has been added to the template then part of the blueprint
 would be to return it with the stack-list information.

 The previous example I attempted to explain is really more of an edge
 case, so let's ignore it for now.

 Thanks,
 Tim

 On 11/27/13 3:19 AM, Thomas Spatzier thomas.spatz...@de.ibm.com
wrote:

 Excerpts from Tim Schnell's message on 27.11.2013 00:44:04:
  From: Tim Schnell tim.schn...@rackspace.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 27.11.2013 00:47
  Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
  requirements  roadmap
 
 
 
 snip
 
  That is not the use case that I'm attempting to make, let me try
again.
  For what it's worth I agree, that in this use case I want a
mechanism
 to
  tag particular versions of templates your solution makes sense and
will
  probably be necessary as the requirements for the template catalog
start
  to become defined.
 
  What I am attempting to explain is actually much simpler than that.
 There
  are 2 times in a UI that I would be interested in the keywords of the
  template. When I am initially browsing the catalog to create a new
 stack,
  I expect the stacks to be searchable and/or organized by these
keywords
  AND when I am viewing the stack-list page I should be able to sort my
  existing stacks by keywords.
 
  In this second case I am suggesting that the end-user, not the
Template
  Catalog Moderator should have control over the keywords that are
defined
  in his instantiated stack. So if he does a Stack Update, he is not
  committing a new version of the template back to a git repository, he
is
  just updating the definition of the stack. If the user decides that
the
  original template defined the keyword as wordpress and he wants to
  revise the keyword to tims wordpress then he can do that without
the
  original template moderator knowing or caring about it.
 
  This could be useful to an end-user who's business is managing client
  stacks on one account maybe. So he could have tims wordpress, tims
  drupal, angus 

Re: [openstack-dev] [heat] Is it time for a v2 Heat API?

2013-11-27 Thread Steven Hardy
On Wed, Nov 27, 2013 at 06:02:27PM +0100, Zane Bitter wrote:
 On 27/11/13 16:27, Steven Hardy wrote:
 I've raised this BP:
 
 https://blueprints.launchpad.net/heat/+spec/v2api
 
 And started a WIP wiki page where we can all work on refining what changes
 need to be made:
 
 https://wiki.openstack.org/wiki/Heat/Blueprints/V2API
 
 The current (v1) API design is based around having the tenant in the
 URL and stack names that are unique per-tenant. So a URL of the
 form:
 
 GET /v1/{tenant_id}/stacks/{stack_name}
 
 uniquely identifies a stack (even though it redirects to a longer
 URL including the stack_id).
 
 Your proposal on the wiki page retains this URL but removes the tenant ID:
 
GET /v2/stacks/{stack_name}
 
 This now no longer uniquely identifies a stack, and is therefore not ReST.

Well we still know the tenant ID, because it's a tenant/project scoped
request (keystone auth_token gives us the tenant ID and we set it in the
context).

So in theory we still have all the information we need to uniquely
reference a stack by name, it could just become a query parameter which
filters based on the name:

GET v2/stacks?stack_name=foo

 So if we drop the tenant_id then we should also start addressing
 stacks only by UUID:
 
 GET v2/stacks/{stack_id}
 
 and do lookups by name using stack list or something. However other
 projects do it.

Yes, AFAICT that is what the nova and keystone v3 APIs do, keystone does
the name-id lookup in keystoneclient, not sure how nova handles it yet.

Note the wiki is not a proposal, it's just a cut/paste from our API docs
which we can all hack on until it makes sense.  I'll update it to reflect
the above.

 This seems clearly worse than the API we have, but it would at least
 be consistent with other OpenStack projects. For them this API makes
 more sense, because they don't enforce unique names for things in a
 tenant and probably couldn't start if they wanted to. This was IMHO
 a mistake, and one that Heat avoided, but I accept that it's hard to
 escape gravity on this. I, for one, welcome our new UUID overlords.

Lol, well I suppose there are advantages to both.  I'm approaching this
from the perspective of trying to support non project-scoped requests (the
management-api stuff), which is impossible to do properly with the tenant
in the path.

I think consistency across projects is a good reason to do this, assuming
the way keystone and nova v3 APIs look is the way OpenStack APIs are
headed, which appears to be the case.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Tim Schnell
On 11/27/13 10:58 AM, Fox, Kevin M kevin@pnnl.gov wrote:


This use case is sort of a provenance case. Where did the stack come from
so I can find out more about it.

You could put a git commit field in the template itself but then it would
be hard to keep updated.

How about the following:

Extend heat to support setting a scmcommit metadata item on stack
create. Heat will ignore this but make it available for retrieval.

I'm guessing any catalog will have some sort of scm managing the
templates. When you go and submit the stack, you can set the metadata and
know exactly when and where and all the history of the stack by just
referring to the git commit string.

This should tell you far more than a set of strings a user could set for
their own use, confusing others.

Hi Kevin,

Yeah I think tying the keywords use case to the template catalog is what
is causing confusion. I agree that the above solution is a good way to
solve this problem for the template catalog. It would definitely be more
specific about what exactly the template represents. But if we remove the
template catalog from the equation and focus on the experience of
providing searchable keywords to the stack either in the template or in
the database. This would solve the use case I'm attempting to describe.

In my use case I'm referring to stacks that are being created with a user
generated template. Something that isn't necessarily a part of any
catalog, in Horizon you can provide a template in a direct input text area
or via file upload.

Tim

Thanks,
Kevin


From: Tim Schnell [tim.schn...@rackspace.com]
Sent: Wednesday, November 27, 2013 7:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements
 roadmap

Ok, I just re-read my example and that was a terrible example. I'll try
and create the user story first and hopefully answer Clint's and Thomas's
concerns.


If the only use case for adding keywords to the template is to help
organize the template catalog then I would agree the keywords would go
outside of heat. The second purpose for keywords is why I think they
belong in the template so I'll cover that.

Let's assume that an end-user of Heat has spun up 20 stacks and has now
requested help from a Support Operator of heat. In this case, the end-user
did not have a solid naming convention for naming his stacks, they are all
named tim1, tim2, etc. And also his request to the Support Operator
was really vague, like My Wordpress stack is broken.

The first thing that the Support Operator would do, would be to pull up
end-user's stacks in either Horizon or via the heat client api. In both
cases, at the moment, he would then have to either stack-show on each
stack to look at the description of the stack or ask the end-user for a
stack-id/stack-name. This currently gets the job done but a better
experience would be for stack-list to already display some keywords about
each stack so the Support Operator would have to do less digging.

In this case the end-user only has one Wordpress stack so he would have
been annoyed if the Support Operator requested more information from him.
(Or maybe he has more than one wordpress stack, but only one currently in
CREATE_FAILED state).

As a team, we have already encountered this exact situation just doing
team testing so I imagine that others would find value in a consistent way
to determine at least a general purpose of a stack, from the stack-list
page. Putting the stack-description in the stack-list table would take up
too much room from a design standpoint.

Once keywords has been added to the template then part of the blueprint
would be to return it with the stack-list information.

The previous example I attempted to explain is really more of an edge
case, so let's ignore it for now.

Thanks,
Tim

On 11/27/13 3:19 AM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote:

Excerpts from Tim Schnell's message on 27.11.2013 00:44:04:
 From: Tim Schnell tim.schn...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 27.11.2013 00:47
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap



snip

 That is not the use case that I'm attempting to make, let me try again.
 For what it's worth I agree, that in this use case I want a mechanism
to
 tag particular versions of templates your solution makes sense and
will
 probably be necessary as the requirements for the template catalog
start
 to become defined.

 What I am attempting to explain is actually much simpler than that.
There
 are 2 times in a UI that I would be interested in the keywords of the
 template. When I am initially browsing the catalog to create a new
stack,
 I expect the stacks to be searchable and/or organized by these keywords
 AND when I am viewing the stack-list page I should be able to sort my
 existing stacks by 

Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Tim Schnell

From: Edmund Troche edmund.tro...@us.ibm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, November 27, 2013 11:08 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements  
roadmap


You bring up a good point Thomas. I think some of the discussions are mixing 
template and stack perspectives, they are not the same thing, stack == instance 
of a template. There likely is room for tagging stacks, all under the control 
of the user and meant for user consumption, vs. the long-running discussion on 
template-level metadata. This may be yet another use case ;-)


+1 yes, thanks Edmund for clarifying we are definitely talking about separate 
use cases now.

Edmund Troche
Senior Software Engineer
IBM Software Group (http://www.ibm.com/) | 11501 Burnet Rd. | Austin, TX 78758
+1.512.286.8977 | T/L 363.8977 | edmund.tro...@us.ibm.com



From: Thomas Spatzier thomas.spatz...@de.ibm.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,
Date: 11/27/2013 11:00 AM
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements  
roadmap





Thanks, that clarified the use case a bit. But looking at the use case now,
isn't this stack tagging instead of template tagging?
I.e. assume that for each stack a user creates, he/she can assign one or
more tags so you can do better queries to find stacks later?

Regards,
Thomas

Tim Schnell tim.schn...@rackspace.com wrote on 27.11.2013 16:24:18:
 From: Tim Schnell tim.schn...@rackspace.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 27.11.2013 16:28
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements  roadmap

 Ok, I just re-read my example and that was a terrible example. I'll try
 and create the user story first and hopefully answer Clint's and Thomas's
 concerns.


 If the only use case for adding keywords to the template is to help
 organize the template catalog then I would agree the keywords would go
 outside of heat. The second purpose for keywords is why I think they
 belong in the template so I'll cover that.

 Let's assume that an end-user of Heat has spun up 20 stacks and has now
 requested help from a Support Operator of heat. In this case, the
end-user
 did not have a solid naming convention for naming his stacks, they are
all
 named tim1, tim2, etc. And also his request to the Support Operator
 was really vague, like My Wordpress stack is broken.

 The first thing that the Support Operator would do, would be to pull up
 end-user's stacks in either Horizon or via the heat client api. In both
 cases, at the moment, he would then have to either stack-show on each
 stack to look at the description of the stack or ask the end-user for a
 stack-id/stack-name. This currently gets the job done but a better
 experience would be for stack-list to already display some keywords about
 each stack so the Support Operator would have to do less digging.

 In this case the end-user only has one Wordpress stack so he would have
 been annoyed if the Support Operator requested more information from him.
 (Or maybe he has more than one wordpress stack, but only one currently in
 CREATE_FAILED state).

 As a team, we have already encountered this exact situation just doing
 team testing so I imagine that others would find value in a consistent
way
 to determine at least a general purpose of a stack, from the stack-list
 page. Putting the stack-description in the stack-list table would take up
 too much room from a design standpoint.

 Once keywords has been added to the template then part of the blueprint
 would be to return it with the stack-list information.

 The previous example I attempted to explain is really more of an edge
 case, so let's ignore it for now.

 Thanks,
 Tim

 On 11/27/13 3:19 AM, Thomas Spatzier 
 thomas.spatz...@de.ibm.com
wrote:

 Excerpts from Tim Schnell's message on 27.11.2013 00:44:04:
  From: Tim Schnell 
  tim.schn...@rackspace.com
  To: OpenStack Development Mailing List (not for usage questions)
  

[openstack-dev] [Ceilometer] storage driver testing

2013-11-27 Thread Sandy Walsh
Hey!

We've ballparked that we need to store a million events per day. To that end, 
we're flip-flopping between sql and no-sql solutions, hybrid solutions that 
include elastic search and other schemes. Seems every road we go down has some 
limitations. So, we've started working on test suite for load testing the 
ceilometer storage drivers. The intent is to have a common place to record our 
findings and compare with the efforts of others.

There's an etherpad where we're tracking our results [1] and a test suite that 
we're building out [2]. The test suite works against a fork of ceilometer where 
we can keep our experimental storage driver tweaks [3].

The test suite hits the storage drivers directly, bypassing the api, but still 
uses the ceilometer models. We've added support for dumping the results to 
statsd/graphite for charting of performance results in real-time.
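
The statsd hook itself is nothing fancy - conceptually it boils down to
something like this (a simplified sketch, not the actual code in the repos
below; the metric name and the driver call are placeholders):

    import socket
    import time

    def time_batch(record_batch, events, statsd_host='127.0.0.1',
                   statsd_port=8125, metric='ceilometer.insert_batch'):
        # record_batch is whatever callable drives the storage driver
        # under test for one batch of events.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        start = time.time()
        record_batch(events)
        elapsed_ms = int((time.time() - start) * 1000)
        # plain statsd wire format: <name>:<value>|ms
        sock.sendto('%s:%d|ms' % (metric, elapsed_ms),
                    (statsd_host, statsd_port))
        return elapsed_ms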

If you're interested in large scale deployments of ceilometer, we would welcome 
any assistance.

Thanks!
-Sandy

[1] https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
[2] https://github.com/rackerlabs/ceilometer-load-tests
[3] https://github.com/rackerlabs/instrumented-ceilometer

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Edward Hope-Morley
On 27/11/13 15:49, Daniel P. Berrange wrote:
 On Wed, Nov 27, 2013 at 02:45:22PM +, Edward Hope-Morley wrote:
 Moving this to the ml as requested, would appreciate
 comments/thoughts/feedback.

 So, I recently proposed a small patch to the oslo rpc code (initially in
 oslo-incubator then moved to oslo.messaging) which extends the existing
 support for limiting the rpc thread pool so that concurrent requests can
 be limited based on type/method. The blueprint and patch are here:

 https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control

 The basic idea is that if you have server with limited resources you may
 want restrict operations that would impact those resources e.g. live
 migrations on a specific hypervisor or volume formatting on particular
 volume node. This patch allows you, admittedly in a very crude way, to
 apply a fixed limit to a set of rpc methods. I would like to know
 whether or not people think this is sort of thing would be useful or
 whether it alludes to a more fundamental issue that should be dealt with
 in a different manner.
 Based on this description of the problem I have some observations

  - I/O load from the guest OS itself is just as important to consider
as I/O load from management operations Nova does for a guest. Both
have the capability to impose denial-of-service on a host. IIUC, the
flavour specs have the ability to express resource constraints for
the virtual machines to prevent a guest OS initiated DOS-attack

  - I/O load from live migration is attributable to the running
virtual machine. As such I'd expect that any resource controls
associated with the guest (from the flavour specs) should be
applied to control the load from live migration.

Unfortunately life isn't quite this simple with KVM/libvirt
currently. For networking we've associated each virtual TAP
device with traffic shaping filters. For migration you have
to set a bandwidth cap explicitly via the API. For network
based storage backends, you don't directly control network
usage, but instead I/O operations/bytes. Ultimately though
there should be a way to enforce limits on anything KVM does,
similarly I expect other hypervisors can do the same

  - I/O load from operations that Nova does on behalf of a guest
that may be running, or may yet to be launched. These are not
directly known to the hypervisor, so existing resource limits
won't apply. Nova however should have some capability for
applying resource limits to I/O intensive things it does and
somehow associate them with the flavour limits  or some global
per user cap perhaps.

 Thoughts?
 Overall I think that trying to apply caps on the number of API calls
 that can be made is not really a credible way to avoid users inflicting
 DOS attack on the host OS. Not least because it does nothing to control
 what a guest OS itself may do. If you do caps based on num of APIs calls
in a time period, you end up having to do an extremely pessimistic
 calculation - basically have to consider the worst case for any single
 API call, even if most don't hit the worst case. This is going to hurt
 scalability of the system as a whole IMHO.

 Regards,
 Daniel
Daniel, thanks for this, these are all valid points and essentially tie
with the fundamental issue of dealing with DOS attacks but for this bp I
actually want to stay away from this area i.e. this is not intended to
solve any tenant-based attack issues in the rpc layer (although that
definitely warrants a discussion e.g. how do we stop a single tenant
from consuming the entire thread pool with requests) but rather I'm
thinking more from a QOS perspective i.e. to allow an admin to account
for a resource bias e.g. slow RAID controller, on a given node (not
necessarily Nova/HV) which could be alleviated with this sort of crude
rate limiting. Of course one problem with this approach is that
blocked/limited requests still reside in the same pool as other requests
so if we did want to use this it may be worth considering offloading
blocked requests or giving them their own pool altogether.

...or maybe this is just pie in the sky after all.
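
For what it's worth, the crude version really is just a per-method semaphore,
roughly like the sketch below (a standalone illustration, not the actual oslo
patch):

    import functools
    import threading

    _limits = {}   # e.g. {'live_migration': threading.Semaphore(2)}

    def set_limit(method_name, max_concurrent):
        _limits[method_name] = threading.Semaphore(max_concurrent)

    def limited(method_name):
        # decorator applied to the rpc endpoint methods we want to cap
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                sem = _limits.get(method_name)
                if sem is None:
                    return func(*args, **kwargs)
                with sem:
                    return func(*args, **kwargs)
            return wrapper
        return decorator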

Ed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Is there a way for the VM to identify that it is getting booted in OpenStack

2013-11-27 Thread Chris Friesen

On 11/26/2013 07:48 PM, Vijay Venkatachalam wrote:

Hi,

 Is there a way for the VM to identify that it is
getting booted in OpenStack?

 As said in the below mail, once the VM knows it is
booting in OpenStack it will alter the boot sequence.


What does getting booted in OpenStack mean?  OpenStack supports 
multiple hypervisors, so you could have something coming up via 
kvm/vmware/Xen/baremetal/etc. but they're all getting booted in OpenStack.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 05:39:30PM +, Edward Hope-Morley wrote:
 On 27/11/13 15:49, Daniel P. Berrange wrote:
  On Wed, Nov 27, 2013 at 02:45:22PM +, Edward Hope-Morley wrote:
  Moving this to the ml as requested, would appreciate
  comments/thoughts/feedback.
 
  So, I recently proposed a small patch to the oslo rpc code (initially in
  oslo-incubator then moved to oslo.messaging) which extends the existing
  support for limiting the rpc thread pool so that concurrent requests can
  be limited based on type/method. The blueprint and patch are here:
 
  https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control
 
  The basic idea is that if you have server with limited resources you may
  want restrict operations that would impact those resources e.g. live
  migrations on a specific hypervisor or volume formatting on particular
  volume node. This patch allows you, admittedly in a very crude way, to
  apply a fixed limit to a set of rpc methods. I would like to know
  whether or not people think this is sort of thing would be useful or
  whether it alludes to a more fundamental issue that should be dealt with
  in a different manner.
  Based on this description of the problem I have some observations
 
   - I/O load from the guest OS itself is just as important to consider
 as I/O load from management operations Nova does for a guest. Both
 have the capability to impose denial-of-service on a host. IIUC, the
 flavour specs have the ability to express resource constraints for
 the virtual machines to prevent a guest OS initiated DOS-attack
 
   - I/O load from live migration is attributable to the running
 virtual machine. As such I'd expect that any resource controls
 associated with the guest (from the flavour specs) should be
 applied to control the load from live migration.
 
 Unfortunately life isn't quite this simple with KVM/libvirt
 currently. For networking we've associated each virtual TAP
 device with traffic shaping filters. For migration you have
 to set a bandwidth cap explicitly via the API. For network
 based storage backends, you don't directly control network
 usage, but instead I/O operations/bytes. Ultimately though
 there should be a way to enforce limits on anything KVM does,
 similarly I expect other hypervisors can do the same
 
   - I/O load from operations that Nova does on behalf of a guest
 that may be running, or may yet to be launched. These are not
 directly known to the hypervisor, so existing resource limits
 won't apply. Nova however should have some capability for
 applying resource limits to I/O intensive things it does and
 somehow associate them with the flavour limits  or some global
 per user cap perhaps.
 
  Thoughts?
  Overall I think that trying to apply caps on the number of API calls
  that can be made is not really a credible way to avoid users inflicting
  DOS attack on the host OS. Not least because it does nothing to control
  what a guest OS itself may do. If you do caps based on num of APIs calls
  in a time period, you end up having to do an extremely pessimistic
  calculation - basically have to consider the worst case for any single
  API call, even if most don't hit the worst case. This is going to hurt
  scalability of the system as a whole IMHO.
 
  Regards,
  Daniel
 Daniel, thanks for this, these are all valid points and essentially tie
 with the fundamental issue of dealing with DOS attacks but for this bp I
 actually want to stay away from this area i.e. this is not intended to
 solve any tenant-based attack issues in the rpc layer (although that
 definitely warrants a discussion e.g. how do we stop a single tenant
 from consuming the entire thread pool with requests) but rather I'm
 thinking more from a QOS perspective i.e. to allow an admin to account
 for a resource bias e.g. slow RAID controller, on a given node (not
 necessarily Nova/HV) which could be alleviated with this sort of crude
 rate limiting. Of course one problem with this approach is that
 blocked/limited requests still reside in the same pool as other requests
 so if we did want to use this it may be worth considering offloading
 blocked requests or giving them their own pool altogether.
 
 ...or maybe this is just pie in the sky after all.

I don't think it is valid to ignore tenant-based attacks in this. You
have a single resource here and it can be consumed by the tenant
OS, by the VM associated with the tenant or by Nova itself. As such,
IMHO adding rate limiting to Nova APIs alone is a non-solution because
you've still left it wide open to starvation by any number of other
routes which are arguably even more critical to address than the API
calls.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- 

Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-11-27 Thread Alan Pevec
2013/11/27 Sean Dague s...@dague.net:
 The problem is you can't really support both: iso8601 was dormant for
 years, and the revived version isn't compatible with the old version.
 So supporting both means basically forking iso8601 and maintaining your
 own version of it monkey-patched in your own tree.

Right, hence glance was added https://review.openstack.org/55998 to
unblock the previous gate failure.
Issue now is that stable/grizzly Tempest uses clients from git trunk,
which is not going to work since trunk will add more and more
incompatible dependencies, even if backward compatibility is preserved
against the old service APIs!

Solutions could be that Tempest installs clients into a separate venv to
avoid dependency conflicts, or that we establish stable/* branches for
clients [1] which are created around OpenStack release time.

Cheers,
Alan

[1] we have those for openstack client packages in Fedora/RDO
 e.g. https://github.com/redhat-openstack/python-novaclient/branches
 Here's a nice explanation by Jakub: http://openstack.redhat.com/Clients
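
For illustration, here is a minimal sketch of the kind of tolerant parsing
shim a project could carry while both iso8601 versions are in the wild. It
assumes only iso8601.parse_date and iso8601.ParseError, which exist in both
0.1.4 and 0.1.8, and it is not the actual oslo or glance code.

# Hypothetical compatibility helper, not the actual oslo/glance code.
import iso8601


def parse_isotime(timestr):
    """Parse an ISO 8601 string, normalising errors across versions."""
    try:
        return iso8601.parse_date(timestr)
    except (iso8601.ParseError, TypeError) as e:
        # older releases raise TypeError for some bad inputs
        raise ValueError(str(e))


def normalize_time(timestamp):
    """Return a naive UTC datetime regardless of which version parsed it."""
    offset = timestamp.utcoffset()
    if offset is None:
        return timestamp
    return timestamp.replace(tzinfo=None) - offset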

 On Wed, Nov 27, 2013 at 1:58 AM, Yaguang Tang
 yaguang.t...@canonical.com wrote:
 after update to iso8601=0.1.8, it breaks stable/neutron jenkins tests,
 because stable/glance requires  iso8601=0.1.4, log info
 https://jenkins02.openstack.org/job/periodic-tempest-devstack-vm-neutron-stable-grizzly/43/console,
 I have filed a bug to track this
 https://bugs.launchpad.net/glance/+bug/1255419.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Is it time for a v2 Heat API?

2013-11-27 Thread Zane Bitter

On 27/11/13 18:16, Jay Pipes wrote:

On 11/27/2013 12:02 PM, Zane Bitter wrote:

Your proposal on the wiki page retains this URL but removes the tenant
ID:

GET /v2/stacks/{stack_name}

This now no longer uniquely identifies a stack, and is therefore not
ReST.


It would be along with a Vary: X-Project-Id header, no?


Sure, but the point of HTTP is that a URI addresses exactly one 
resource. A resource may have multiple representations and the one 
returned can depend on the headers, but a URI should never address more 
than one resource.


For example, you could easily imagine a situation where you end up 
getting served data for another tenant from a cache somewhere, which 
would be Very Bad.



So if we drop the tenant_id then we should also start addressing stacks
only by UUID:

 GET v2/stacks/{stack_id}

and do lookups by name using stack list or something. However other
projects do it.


++ for this anyway :)


This seems clearly worse than the API we have, but it would at least be
consistent with other OpenStack projects. For them this API makes more
sense, because they don't enforce unique names for things in a tenant
and probably couldn't start if they wanted to. This was IMHO a mistake,
and one that Heat avoided, but I accept that it's hard to escape gravity
on this. I, for one, welcome our new UUID overlords.


LOL. Actually, in Nova and other places it's now moving to UNIQUE
CONSTRAINT (name, project, deleted) ;)


Oh, perfect :) Then we should just keep the tenant_id and wait for 
everyone else to circle back around :D


Even better would be if we had the keystone domain (instead of the 
tenant id) incorporated into the endpoint in the keystone catalog and 
then we could use the tenant^W project *name* in the URL and users would 
never have to deal with UUIDs and invisible headers again - your server 
is always at /my-project-name/servers/my-server-name. Where you might 
expect it to be.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] EC2 Filter

2013-11-27 Thread Joe Gordon
On Tue, Nov 26, 2013 at 1:56 AM, Sebastian Porombka 
porom...@uni-paderborn.de wrote:

 Oh hai!

  That's nice to hear! I would like to help by adapting this patch set to
  the latest code, and
  I will try to work out the tempest test cases.

 Is there any form of documentation for new developers?


http://git.openstack.org/cgit/openstack/nova/tree/CONTRIBUTING.rst



 Thanks in advance
 Sebastian

 --
 Sebastian Porombka, M.Sc.
 Zentrum für Informations- und Medientechnologien (IMT)
 Universität Paderborn

 E-Mail: porom...@uni-paderborn.de
 Tel.: 05251/60-5999
 Fax: 05251/60-48-5999
 Raum: N5.314

 
 Q: Why is this email five sentences or less?
 A: http://five.sentenc.es

 Please consider the environment before printing this email.




  From:  Joe Gordon joe.gord...@gmail.com
  Reply-To:  OpenStack Development Mailing List (not for usage
  questions) openstack-dev@lists.openstack.org
  Date:  Monday, 25 November 2013 20:03
  To:  OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Subject:  Re: [openstack-dev] EC2 Filter


 
 
 
 On Mon, Nov 25, 2013 at 6:22 AM, Sebastian Porombka
 porom...@uni-paderborn.de wrote:
 
 Hi Folks.
 
  I stumbled over the lack of EC2 API filter support in OpenStack and
  justinsb's (and the other people I forgot to mention here) attempt [1] to
  implement this against diablo.
  
  I'm highly interested in this feature and would like (to try) to implement
  this feature against the latest base. My problem is that I'm unable to find
  information about the rejection of this set of patches in 2011.
 
 
 
 
 I am also not sure why this didn't get merged, but I don't see why we
 would block this code today, especially if it comes with some tempests
 tests too.
 
 
 
 Can anybody help me?
 
 Greetings
   Sebastian
 
 [1]
 https://code.launchpad.net/~justin-fathomdb/nova/ec2-filters
 --
 Sebastian Porombka, M.Sc.
 Zentrum für Informations- und Medientechnologien (IMT)
 Universität Paderborn
 
 E-Mail: porom...@uni-paderborn.de
 Tel.: 05251/60-5999
 Fax: 05251/60-48-5999
 Raum: N5.314
 
 
 Q: Why is this email five sentences or less?
 A: http://five.sentenc.es
 
 Please consider the environment before printing this email.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Is there a way for the VM to identify that it is getting booted in OpenStack

2013-11-27 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 01:48:29AM +, Vijay Venkatachalam wrote:
 Hi,
 Is there a way for the VM to identify that it is getting 
 booted in OpenStack?

The mails you've forwarded below pretty much answer this
question already.

 As said in the below mail, once the VM knows it is booting in 
 OpenStack it will alter the boot sequence.

 AWS provides signatures in the BIOS strings.

As below, so does the KVM/QEMU driver in Nova. The XenAPI driver was planning
to follow this for HVM guests at least, though it doesn't help paravirt Xen
guests, which lack a BIOS.

 From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
 Sent: Tuesday, November 26, 2013 8:58 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] Are BIOS strings configured in Hyper-V &
 ESX similar to KVM instantiated VMs?
 
 It is basically used to tailor the boot sequence if the VM gets booted in 
 OpenStack. For ex. it could do the following
 
 
 1.   Get IP through DHCP if booted in OpenStack
 
 2.   Read config drive or contact metadata service and init the system if 
 booted in OpenStack

Instead of trying to figure out if booted under OpenStack, just look
for a config drive, or try to contact the metadata service directly
and handle the failure when they're not there. You need to have this
kind of failure handling anyway for reliability.

Likewise, your guest should just try to do DHCP regardless, if it
does not have any other way to configure its IP address statically.
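
A minimal guest-side sketch of that approach: probe the metadata service and
treat the DMI vendor string as a hint only, handling failure rather than
trying to prove the platform. The URL and sysfs path below are the
conventional ones, but treat this as an illustration, not a supported recipe.

# Guest-side sketch: prefer "try it and handle failure" over detection.
import urllib2  # Python 2, matching the era of this thread

METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'


def metadata_available(timeout=5):
    """Return True if the OpenStack metadata service answers."""
    try:
        urllib2.urlopen(METADATA_URL, timeout=timeout)
        return True
    except Exception:
        return False


def dmi_hints_openstack():
    """Weak hint only: Nova's KVM guests report 'OpenStack Foundation' here."""
    try:
        with open('/sys/class/dmi/id/sys_vendor') as f:
            return 'OpenStack' in f.read()
    except IOError:
        return False


if metadata_available() or dmi_hints_openstack():
    print('Metadata reachable (or DMI hint found); use it to init the system.')
else:
    print('No metadata found; fall back to the default boot sequence.')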

 From: Bob Ball [mailto:bob.b...@citrix.com]
 Sent: Tuesday, November 26, 2013 7:25 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] Are BIOS strings configured in Hyper-V &
 ESX similar to KVM instantiated VMs?
 
 It's not certain that this will be implemented for the XenAPI driver.
 
 Our view is that the BIOS strings shouldn't be relied upon - the hypervisor 
 can clearly set them to anything so it's not really a reliable way to 
 configure the application.  Also note that in some scenarios, such as PV 
 guests in Xen clouds, you will not have any BIOS to query.  Finally we're not 
  clear on the use case here - What's the use case for needing to know whether 
  your VM is running under OpenStack or not?
 
 Bob
 
 From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
 Sent: 26 November 2013 01:44
  To: OpenStack Development Mailing List 
  (openstack-dev@lists.openstack.org)
  Subject: [openstack-dev] [Nova] Are BIOS strings configured in Hyper-V & ESX 
  similar to KVM instantiated VMs?
 
 Hi,
 
  In a KVM instantiated VM the following signature is present in the BIOS of 
  the VM:
  The 'Manufacturer' field in the 'System Information' group is set to 'OpenStack 
  Foundation'.
 
 
 
 This helps the VM to identify that it is getting used in an OpenStack 
 environment.
 
 
 
 As far as I know XenServer is planning to set the BIOS Strings in IceHouse 
 release.
 
 
 
  Is this functionality available in other Hypervisors, especially Hyper-V & 
  ESX?
 


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Jason Dunsmore
On Wed, Nov 27 2013, Zane Bitter wrote:


   Parameters:
     db_name:
       group: db
       order: 0
     db_username:
       group: db
       order: 1
     db_password:
       group: db
       order: 2
     web_node_name:
       group: web_node
       order: 0
     keypair:
       group: web_node
       order: 1

 -2 this is horrible.

 Imagine how much work this is for the poor author! At least they don't
 have to maintain parallel hierarchies of matching key names like in
 the original proposal, but they still have to manually maintain
 multiple lists of orderings. What if you wanted to add another
 parameter at the beginning? Maybe we should encourage authors to
 number parameters with multiples of 10. Like BASIC programmers in the
 80s.

 And of course if you don't specify the order explicitly then you get
 random order again. Sigh.

 There's only one way that this is even remotely maintainable for a
 template author, and that's if they group and order stuff manually
 anyway (like you have in your example - people will do this
 automatically by themselves even if the syntax doesn't require them
 to). Since they have to do this, just display the parameters in the UI
 in the same order that they are defined in the file. This does the
 Right Thing even if the author doesn't know about it, unlike the
 explicit order thing which completely breaks down if the order is not
 explicitly stated. You probably won't even have to document it because
 literally 100% of people will either (a) not care, or (b) expect it to
 work that way anyway. In fact, you will almost certainly get bug
 reports if you don't display them in the same order as written.

+1 for implicit ordering.  I think this will be intuitive for users.
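
A minimal sketch of what implicit ordering implies for an implementation: the
template just needs to be parsed with an order-preserving mapping, so the UI
can show parameters in the order the author wrote them. This assumes PyYAML
and is not tied to Heat's actual template parser.

# Sketch: load a template so the parameters mapping keeps author order.
from collections import OrderedDict

import yaml


class OrderedLoader(yaml.SafeLoader):
    pass


def _construct_ordered_mapping(loader, node):
    loader.flatten_mapping(node)
    return OrderedDict(loader.construct_pairs(node))


OrderedLoader.add_constructor(
    yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
    _construct_ordered_mapping)


def parameters_in_author_order(template_text):
    tpl = yaml.load(template_text, Loader=OrderedLoader)
    return list(tpl.get('parameters', {}).keys())


example = """
parameters:
  db_name: {type: string}
  db_username: {type: string}
  keypair: {type: string}
"""
print(parameters_in_author_order(example))
# ['db_name', 'db_username', 'keypair']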

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Is it time for a v2 Heat API?

2013-11-27 Thread Chris Friesen

On 11/27/2013 11:50 AM, Zane Bitter wrote:


Even better would be if we had the keystone domain (instead of the
tenant id) incorporated into the endpoint in the keystone catalog and
then we could use the tenant^W project *name* in the URL and users would
never have to deal with UUIDs and invisible headers again - your server
is always at /my-project-name/servers/my-server-name. Where you might
expect it to be.


That sounds way too logical...

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Edward Hope-Morley
On 27/11/13 17:43, Daniel P. Berrange wrote:
 On Wed, Nov 27, 2013 at 05:39:30PM +, Edward Hope-Morley wrote:
 On 27/11/13 15:49, Daniel P. Berrange wrote:
 On Wed, Nov 27, 2013 at 02:45:22PM +, Edward Hope-Morley wrote:
 Moving this to the ml as requested, would appreciate
 comments/thoughts/feedback.

 So, I recently proposed a small patch to the oslo rpc code (initially in
 oslo-incubator then moved to oslo.messaging) which extends the existing
 support for limiting the rpc thread pool so that concurrent requests can
 be limited based on type/method. The blueprint and patch are here:

 https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control

 The basic idea is that if you have a server with limited resources you may
 want to restrict operations that would impact those resources e.g. live
 migrations on a specific hypervisor or volume formatting on particular
 volume node. This patch allows you, admittedly in a very crude way, to
 apply a fixed limit to a set of rpc methods. I would like to know
 whether or not people think this sort of thing would be useful or
 whether it alludes to a more fundamental issue that should be dealt with
 in a different manner.
 Based on this description of the problem I have some observations

  - I/O load from the guest OS itself is just as important to consider
as I/O load from management operations Nova does for a guest. Both
have the capability to impose denial-of-service on a host. IIUC, the
flavour specs have the ability to express resource constraints for
the virtual machines to prevent a guest OS initiated DOS-attack

  - I/O load from live migration is attributable to the running
virtual machine. As such I'd expect that any resource controls
associated with the guest (from the flavour specs) should be
applied to control the load from live migration.

Unfortunately life isn't quite this simple with KVM/libvirt
currently. For networking we've associated each virtual TAP
device with traffic shaping filters. For migration you have
to set a bandwidth cap explicitly via the API. For network
based storage backends, you don't directly control network
usage, but instead I/O operations/bytes. Ultimately though
there should be a way to enforce limits on anything KVM does,
similarly I expect other hypervisors can do the same

  - I/O load from operations that Nova does on behalf of a guest
that may be running, or may yet to be launched. These are not
directly known to the hypervisor, so existing resource limits
won't apply. Nova however should have some capability for
applying resource limits to I/O intensive things it does and
somehow associate them with the flavour limits  or some global
per user cap perhaps.

 Thoughts?
 Overall I think that trying to apply caps on the number of API calls
 that can be made is not really a credible way to avoid users inflicting
 DOS attack on the host OS. Not least because it does nothing to control
 what a guest OS itself may do. If you do caps based on num of APIs calls
 in a time period, you end up having to do an extremely pessimistic
 calculation - basically have to consider the worst case for any single
 API call, even if most don't hit the worst case. This is going to hurt
 scalability of the system as a whole IMHO.

 Regards,
 Daniel
 Daniel, thanks for this, these are all valid points and essentially tie
 with the fundamental issue of dealing with DOS attacks but for this bp I
 actually want to stay away from this area i.e. this is not intended to
 solve any tenant-based attack issues in the rpc layer (although that
 definitely warrants a discussion e.g. how do we stop a single tenant
 from consuming the entire thread pool with requests) but rather I'm
 thinking more from a QOS perspective i.e. to allow an admin to account
 for a resource bias e.g. slow RAID controller, on a given node (not
 necessarily Nova/HV) which could be alleviated with this sort of crude
 rate limiting. Of course one problem with this approach is that
 blocked/limited requests still reside in the same pool as other requests
 so if we did want to use this it may be worth considering offloading
 blocked requests or giving them their own pool altogether.

 ...or maybe this is just pie in the sky after all.
 I don't think it is valid to ignore tenant-based attacks in this. You
 have a single resource here and it can be consumed by the tenant
 OS, by the VM associated with the tenant or by Nova itself. As such,
 IMHO adding rate limiting to Nova APIs alone is a non-solution because
 you've still left it wide open to starvation by any number of other
 routes which are arguably even more critical to address than the API
 calls.

 Daniel
Daniel, maybe I have misunderstood you here but with this optional
extension I am (a) not intending to solve DOS issues and (b) not
ignoring DOS issues since I do not expect to be adding any beyond or
accentuating those that 

Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 06:10:47PM +, Edward Hope-Morley wrote:
 On 27/11/13 17:43, Daniel P. Berrange wrote:
  On Wed, Nov 27, 2013 at 05:39:30PM +, Edward Hope-Morley wrote:
  On 27/11/13 15:49, Daniel P. Berrange wrote:
  On Wed, Nov 27, 2013 at 02:45:22PM +, Edward Hope-Morley wrote:
  Moving this to the ml as requested, would appreciate
  comments/thoughts/feedback.
 
  So, I recently proposed a small patch to the oslo rpc code (initially in
  oslo-incubator then moved to oslo.messaging) which extends the existing
  support for limiting the rpc thread pool so that concurrent requests can
  be limited based on type/method. The blueprint and patch are here:
 
  https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control
 
   The basic idea is that if you have a server with limited resources you may
   want to restrict operations that would impact those resources e.g. live
  migrations on a specific hypervisor or volume formatting on particular
  volume node. This patch allows you, admittedly in a very crude way, to
  apply a fixed limit to a set of rpc methods. I would like to know
   whether or not people think this sort of thing would be useful or
  whether it alludes to a more fundamental issue that should be dealt with
  in a different manner.
  Based on this description of the problem I have some observations
 
   - I/O load from the guest OS itself is just as important to consider
 as I/O load from management operations Nova does for a guest. Both
 have the capability to impose denial-of-service on a host. IIUC, the
 flavour specs have the ability to express resource constraints for
 the virtual machines to prevent a guest OS initiated DOS-attack
 
   - I/O load from live migration is attributable to the running
 virtual machine. As such I'd expect that any resource controls
 associated with the guest (from the flavour specs) should be
 applied to control the load from live migration.
 
 Unfortunately life isn't quite this simple with KVM/libvirt
 currently. For networking we've associated each virtual TAP
 device with traffic shaping filters. For migration you have
 to set a bandwidth cap explicitly via the API. For network
 based storage backends, you don't directly control network
 usage, but instead I/O operations/bytes. Ultimately though
 there should be a way to enforce limits on anything KVM does,
 similarly I expect other hypervisors can do the same
 
   - I/O load from operations that Nova does on behalf of a guest
 that may be running, or may yet to be launched. These are not
 directly known to the hypervisor, so existing resource limits
 won't apply. Nova however should have some capability for
 applying resource limits to I/O intensive things it does and
 somehow associate them with the flavour limits  or some global
 per user cap perhaps.
 
  Thoughts?
  Overall I think that trying to apply caps on the number of API calls
  that can be made is not really a credible way to avoid users inflicting
  DOS attack on the host OS. Not least because it does nothing to control
  what a guest OS itself may do. If you do caps based on num of APIs calls
  in a time period, you end up having to do an extremely pessistic
  calculation - basically have to consider the worst case for any single
  API call, even if most don't hit the worst case. This is going to hurt
  scalability of the system as a whole IMHO.
 
  Regards,
  Daniel
  Daniel, thanks for this, these are all valid points and essentially tie
  with the fundamental issue of dealing with DOS attacks but for this bp I
  actually want to stay away from this area i.e. this is not intended to
  solve any tenant-based attack issues in the rpc layer (although that
  definitely warrants a discussion e.g. how do we stop a single tenant
  from consuming the entire thread pool with requests) but rather I'm
  thinking more from a QOS perspective i.e. to allow an admin to account
  for a resource bias e.g. slow RAID controller, on a given node (not
  necessarily Nova/HV) which could be alleviated with this sort of crude
  rate limiting. Of course one problem with this approach is that
  blocked/limited requests still reside in the same pool as other requests
  so if we did want to use this it may be worth considering offloading
  blocked requests or giving them their own pool altogether.
 
  ...or maybe this is just pie in the sky after all.
  I don't think it is valid to ignore tenant-based attacks in this. You
  have a single resource here and it can be consumed by the tenant
  OS, by the VM associated with the tenant or by Nova itself. As such,
  IMHO adding rate limiting to Nova APIs alone is a non-solution because
  you've still left it wide open to starvation by any number of other
  routes which are arguably even more critical to address than the API
  calls.
 
  Daniel
 Daniel, maybe I have misunderstood you here but with this optional
 

Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-27 Thread Rafał Jaworowski
On Mon, Nov 25, 2013 at 3:50 PM, Daniel P. Berrange berra...@redhat.com wrote:
 On Fri, Nov 22, 2013 at 10:46:19AM -0500, Russell Bryant wrote:
 On 11/22/2013 10:43 AM, Rafał Jaworowski wrote:
  Russell,
  First, thank you for the whiteboard input regarding the blueprint for
  FreeBSD hypervisor nova driver:
  https://blueprints.launchpad.net/nova/+spec/freebsd-compute-node
 
  We were considering libvirt support for bhyve hypervisor as well, only
  wouldn't want to do this as the first approach for FreeBSD+OpenStack
  integration. We'd rather bring bhyve bindings for libvirt later as
  another integration option.
 
  For FreeBSD host support a native hypervisor driver is important and
  desired long-term and we would like to have it anyways. Among things
  to consider are the following:
  - libvirt package is additional (non-OpenStack), external dependency
  (maintained in the 'ports' collection, not included in base system),
  while native API (via libvmmapi.so library) is integral part of the
  base system.
  - libvirt license is LGPL, which might be an important aspect for some 
  users.

 That's perfectly fine if you want to go that route as a first step.
 However, that doesn't mean it's appropriate for merging into Nova.
 Unless there are strong technical justifications for why this approach
 should be taken, I would probably turn down this driver until you were
 able to go the libvirt route.

 The idea of a FreeBSD bhyve driver for libvirt has been mentioned
 a few times. We've already got a FreeBSD port of libvirt being
 actively maintained to support QEMU (and possibly Xen, not 100% sure
 on that one), and we'd be more than happy to see further contributions
 such as a bhyve driver.

As mentioned, in general we like the idea of a libvirt bhyve driver, but
sometimes it may not fit the bill (licensing, additional external
dependency to keep track of) and hence we consider the native option.

 I am of course biased, as libvirt project maintainer, but I do agree
 that supporting bhyve via libvirt would make sense, since it opens up
 opportunities beyond just OpenStack. There are a bunch of applications
 built on libvirt that could be used to manage bhyve, and a fair few
 applications which have plugins using libvirt

Could you perhaps give some pointers on the libvirt development
process, how to contribute changes and so on?

Another quick question: for cases like this, how does Nova manage
syncing with the required libvirt codebase when a new hypervisor
driver is added or similar major updates happen?

 Taking on maint work for a new OpenStack driver is a non-trivial amount
 of work in itself. If the burden for OpenStack maintainers can be reduced
 by pushing work out to, or relying on support from, libvirt, that makes
 sense from OpenStack/Nova's POV.

The maintenance aspect and testing coverage are valid points, on the
other hand future changes would have to go a longer way for us: first
upstream to libvirt, then downstream to the FreeBSD ports collection
(+ perhaps some OpenStack code bits as well), which makes the process
more complicated.

Rafal

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Definition feedback

2013-11-27 Thread Jay Pipes

On 11/27/2013 06:23 AM, Tom Deckers (tdeckers) wrote:

 I understand that an Assembly can be a larger group of components.
However, those together exist to provide a capability which we want
to capture in some catalog so the capability becomes discoverable.
I'm not sure how the 'listing' mechanism works out in practice.  If
this can be used in an enterprise ecosystem to discover services then
 that's fine.  We should capture a work item to flesh out discoverability
of both Applications and Assemblies.  I make that distinction because
both scenarios should be provided. As a service consumer, I should be
able to look at the 'Applications' listed in the Openstack
environment and provision them.  In that case, we should also support
flavors of the service.  Depending on the consumer-provider
 relationship, we might want to provide different configurations of the
same Application. (e.g. gold-silver-bronze tiering).  I believe this
is covered by the 'listing' you mentioned. Once deployed, there
should also be a mechanism to discover the deployed assemblies.  One
example of such deployed Assembly is a persistence service that can
in its turn be used as a Service in another Assembly.  The specific
details of the capability provided by the Assembly needs to be
discoverable in order to allow successful auto-wiring (I've seen a
comment about this elsewhere in the project - I believe in last
meeting).


Another thought around the naming of Assembly... there's no reason why 
the API cannot just ditch the entire notion of an assembly, and just use 
Component in a self-referential way.


In other words, an Application (or whatever is agree on for that 
resource name) contains one or more Components. Components may further 
be composed of one or more (sub)Components, which themselves may be 
composed of further (sub)Components.


That way you keep the notion of a Component as generic and encompassing 
as possible and allow for an unlimited generic hierarchy of Component 
resources to comprise an Application.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Zane Bitter

On 27/11/13 18:12, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2013-11-27 08:09:33 -0800:

In the longer term, there seems to be a lot of demand for some sort of
template catalog service, like Glance for templates. (I disagree with
Clint that it should actually _be_ Glance the project as we know it, for
the reasons Steve B mentioned earlier, but the concept is right.) And
this brings us back to a very similar situation to the operator-provided
template catalog (indeed, that use case would likely be subsumed by this
one).



Could you provide a stronger link to Steve B's comments, I think I
missed them. Thanks!


Hrm, glance is for large binary data and has a metadata model only for 
OS images - the overlap with the needs of a template catalog doesn't seem 
very obvious to me.


It was longer and more detailed in my recollection of it :D Sorry!

- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Edward Hope-Morley
On 27/11/13 18:20, Daniel P. Berrange wrote:
 On Wed, Nov 27, 2013 at 06:10:47PM +, Edward Hope-Morley wrote:
 On 27/11/13 17:43, Daniel P. Berrange wrote:
 On Wed, Nov 27, 2013 at 05:39:30PM +, Edward Hope-Morley wrote:
 On 27/11/13 15:49, Daniel P. Berrange wrote:
 On Wed, Nov 27, 2013 at 02:45:22PM +, Edward Hope-Morley wrote:
 Moving this to the ml as requested, would appreciate
 comments/thoughts/feedback.

 So, I recently proposed a small patch to the oslo rpc code (initially in
 oslo-incubator then moved to oslo.messaging) which extends the existing
 support for limiting the rpc thread pool so that concurrent requests can
 be limited based on type/method. The blueprint and patch are here:

 https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control

 The basic idea is that if you have a server with limited resources you may
 want to restrict operations that would impact those resources e.g. live
 migrations on a specific hypervisor or volume formatting on particular
 volume node. This patch allows you, admittedly in a very crude way, to
 apply a fixed limit to a set of rpc methods. I would like to know
 whether or not people think this sort of thing would be useful or
 whether it alludes to a more fundamental issue that should be dealt with
 in a different manner.
 Based on this description of the problem I have some observations

  - I/O load from the guest OS itself is just as important to consider
as I/O load from management operations Nova does for a guest. Both
have the capability to impose denial-of-service on a host. IIUC, the
flavour specs have the ability to express resource constraints for
the virtual machines to prevent a guest OS initiated DOS-attack

  - I/O load from live migration is attributable to the running
virtual machine. As such I'd expect that any resource controls
associated with the guest (from the flavour specs) should be
applied to control the load from live migration.

Unfortunately life isn't quite this simple with KVM/libvirt
currently. For networking we've associated each virtual TAP
device with traffic shaping filters. For migration you have
to set a bandwidth cap explicitly via the API. For network
based storage backends, you don't directly control network
usage, but instead I/O operations/bytes. Ultimately though
there should be a way to enforce limits on anything KVM does,
similarly I expect other hypervisors can do the same

  - I/O load from operations that Nova does on behalf of a guest
that may be running, or may yet to be launched. These are not
directly known to the hypervisor, so existing resource limits
won't apply. Nova however should have some capability for
applying resource limits to I/O intensive things it does and
somehow associate them with the flavour limits  or some global
per user cap perhaps.

 Thoughts?
 Overall I think that trying to apply caps on the number of API calls
 that can be made is not really a credible way to avoid users inflicting
 DOS attack on the host OS. Not least because it does nothing to control
 what a guest OS itself may do. If you do caps based on num of APIs calls
 in a time period, you end up having to do an extremely pessimistic
 calculation - basically have to consider the worst case for any single
 API call, even if most don't hit the worst case. This is going to hurt
 scalability of the system as a whole IMHO.

 Regards,
 Daniel
 Daniel, thanks for this, these are all valid points and essentially tie
 with the fundamental issue of dealing with DOS attacks but for this bp I
 actually want to stay away from this area i.e. this is not intended to
 solve any tenant-based attack issues in the rpc layer (although that
 definitely warrants a discussion e.g. how do we stop a single tenant
 from consuming the entire thread pool with requests) but rather I'm
 thinking more from a QOS perspective i.e. to allow an admin to account
 for a resource bias e.g. slow RAID controller, on a given node (not
 necessarily Nova/HV) which could be alleviated with this sort of crude
 rate limiting. Of course one problem with this approach is that
 blocked/limited requests still reside in the same pool as other requests
 so if we did want to use this it may be worth considering offloading
 blocked requests or giving them their own pool altogether.

 ...or maybe this is just pie in the sky after all.
 I don't think it is valid to ignore tenant-based attacks in this. You
 have a single resource here and it can be consumed by the tenant
 OS, by the VM associated with the tenant or by Nova itself. As such,
 IMHO adding rate limiting to Nova APIs alone is a non-solution because
 you've still left it wide open to starvation by any number of other
 routes which are arguably even more critical to address than the API
 calls.

 Daniel
 Daniel, maybe I have misunderstood you here but with this optional
 extension I am (a) not intending to 

[openstack-dev] [Solum] API worker architecture

2013-11-27 Thread Murali Allada
Hi all,

I'm working on a new blueprint to define the API service architecture.

https://blueprints.launchpad.net/solum/+spec/api-worker-architecture

It is still a draft for now. I've proposed a simple architecture for us to get 
started with implementing a few useful use cases.

Please chime in on the mailing list with your thoughts.

Thanks,
Murali
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Why neutron-openvswitch-agent use linux-bridge?

2013-11-27 Thread George Shuklin

Good day.

I'm looking at the internals of the bridge layout of the openvswitch agent at 
http://docs.openstack.org/network-admin/admin/content/figures/2/figures/under-the-hood-scenario-1-ovs-compute.png
and wondering why this scheme is so complicated and why it uses a linux 
bridge and veths together with openvswitch. Why not just plug the tap device 
directly into the openvswitch bridge without the intermediate brctl bridge?


I guess that was caused by some important consideration, but I was unable to 
find any documents about this.


If someone knows the reasons for this complex construction with different 
bridges, please respond.


Thanks.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt] Gate bug 'libvirtError: Unable to read from monitor: Connection reset by peer'

2013-11-27 Thread Joe Gordon
Hi Daniel, all

This morning I noticed we are seeing a libvirt stacktrace in the gate
occasionally.

https://bugs.launchpad.net/nova/+bug/1255624

Since you are a libvirt maintainer I was hoping you could take a look at
this one.


best,
Joe

P.S.

We also have a second libvirt gate bug

https://bugs.launchpad.net/nova/+bug/1254872
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all project] Time to fail tempest gate jobs when new log errors appear

2013-11-27 Thread David Kranz
tl;dr Soon, perhaps next week, tempest gate jobs will start failing if 
there are any ERROR lines in the logs that are not matched by an entry 
in https://github.com/openstack/tempest/blob/master/etc/whitelist.yaml. 
There is an exception for neutron because more work needs to be done there 
for this to be feasible.

The whitelist file contains a lot of entries that look more like 
substantial bugs than incorrectly logging an ERROR due to bad
client data. I have been tracking this for a while and new things show 
up frequently. But since the tests pass no one looks at the logs and 
notices the bug indicators. We need to stop these bugs from getting 
merged. I have filed individual bugs for many of the items in the 
whitelist, but by no means all of them. The neutron team is taking on 
the task of getting rid of their errors and there are so many it is too 
much work to keep an up-to-date whitelist for neutron. So for the time 
being, neutron runs will dump all errors to the console but not fail.


In addition to the fact that these log errors indicate bugs, they make 
it more difficult to diagnose a problem when builds actually fail in the 
tempest tests because it can be hard to tell which log errors are 
known and which might be causing the failure. Hopefully some priority 
will be given to fixing these bugs and removing entries from the 
whitelist until it is driven to zero.


If any one has any comments or suggestions to improve this process, 
please speak up.


 -David
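
Roughly what the check amounts to, as a sketch: scan the service logs for
ERROR lines and fail the job unless every one matches a whitelisted pattern.
The whitelist format used here (a mapping of log file to a list of regexes)
is simplified and hypothetical; see tempest's etc/whitelist.yaml for the real
schema.

# Simplified sketch of the log check described above.
import re
import sys

import yaml


def unexpected_errors(log_path, patterns):
    """Return ERROR lines that no whitelisted pattern matches."""
    allowed = [re.compile(p) for p in patterns]
    bad = []
    with open(log_path) as f:
        for line in f:
            if ' ERROR ' in line and not any(p.search(line) for p in allowed):
                bad.append(line.rstrip())
    return bad


if __name__ == '__main__':
    # usage: python check_logs.py whitelist.yaml
    whitelist = yaml.safe_load(open(sys.argv[1]))
    failures = []
    for log_file, patterns in whitelist.items():
        failures.extend(unexpected_errors(log_file, patterns or []))
    if failures:
        print('\n'.join(failures))
        sys.exit(1)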


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why neutron-openvswitch-agent use linux-bridge?

2013-11-27 Thread Lorin Hochstein
Hi George:



On Wed, Nov 27, 2013 at 1:45 PM, George Shuklin george.shuk...@gmail.comwrote:

 Good day.

  I'm looking at the internals of the bridge layout of the openvswitch agent at
  http://docs.openstack.org/network-admin/admin/content/
  figures/2/figures/under-the-hood-scenario-1-ovs-compute.png
  and wondering why this scheme is so complicated and why it uses a linux
  bridge and veths together with openvswitch. Why not just plug the tap device
  directly into the openvswitch bridge without the intermediate brctl bridge?

  I guess that was caused by some important consideration, but I was unable to
  find any documents about this.

  If someone knows the reasons for this complex construction with different
  bridges, please respond.


If you look a little further down on the page with that figure, the
documentation reads

Ideally, the TAP device vnet0 would be connected directly to the
integration bridge, br-int. Unfortunately, this isn't possible because of
how OpenStack security groups are currently implemented. OpenStack uses
iptables rules on the TAP devices such as vnet0 to implement security
groups, and Open vSwitch is not compatible with iptables rules that are
applied directly on TAP devices that are connected to an Open vSwitch port.


Take care,

Lorin



-- 
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Jay Pipes

On 11/27/2013 01:28 AM, Clint Byrum wrote:

I propose adding an additional field to the parameter definition:

      Parameters:
        parameter name:
          description: This is the name of a nova key pair that will be used
            to ssh to the compute instance.
          help: To learn more about nova key pairs click on this
            <a href="/some/url/">help article</a>.


+1. A help string per parameter is a fantastic idea. Sounds like a nice
analog to doc-strings.


Agreed. The above is a nice, straightforward addition.

snip


+1 for grouping. Your use case is perfectly specified above. Love it.

However, your format feels very forced. Integers for ordering? We have
lists for that.

How about this:

parameters:
   db:
 type: group
 parameters:
   - db_name
   - db_username
   - db_password
   web_node:
 type: group
 parameters:
   - web_node_name
   - keypair
   db_name:
 type: string
   ...

This way we are just providing the groupings outside of the parameters
themselves. Using a list means the order is as it appears here.


Actually, even simpler than that...

parameters:
  db:
   - db_name:
 description: blah
 help: blah
   - db_username:
 description: blah
 help: blah

After all, can't we assume that if the parameter value is a list, then 
it is a group of parameters?


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove][savanna] summit wrap-up: clustering

2013-11-27 Thread Sergey Lukjanov
Hi folks,

There was a thread around the Clustering API [0] and we decided to continue it 
at the summit, so I’d like to summarize the things that were done before and at the 
summit on this topic.

First of all, thanks to Michael Basnight (hub_cap) and all the Trove folks for 
productive discussions at the Design Summit in Hong Kong. During it we agreed that 
there is no intersection between projects’ roadmaps and scopes right now and we 
could collaborate to coordinate our efforts on some common parts not to 
duplicate each other, for example, by extracting some common code to libs if so 
or working together on some infra stuff. The main point is that our Clustering 
APIs are very different and both of them are really domain-specific. Due to the 
fact that we already have some and it’s working ok for our needs, we’re trying 
to share our knowledge to help the Trove guys make a better API.

On the orchestration side, both projects will use only Heat for resource 
orchestration, so some common code used for Heat template generation 
could be extracted some day.

Trove already uses guest agents and we need agents for Savanna too, so we’re 
thinking about unified, extensible guest agents that could be used by any 
project; some discussions have already started [1][2].

One more common direction is that both Trove and Savanna need to make jobs 
that’ll build and publish guest images and I hope that we’ll collaborate on 
this topic to build a common way of doing it [3].

Thanks.

[0] 
http://lists.openstack.org/pipermail/openstack-dev/2013-September/thread.html#14958
[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/018276.html
[2] https://etherpad.openstack.org/p/UnifiedAgents
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2013-November/thread.html#19844

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-27 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 07:32:13PM +0100, Rafał Jaworowski wrote:
 On Mon, Nov 25, 2013 at 3:50 PM, Daniel P. Berrange berra...@redhat.com 
 wrote:
  I am of course biased, as libvirt project maintainer, but I do agree
  that supporting bhyve via libvirt would make sense, since it opens up
  opportunities beyond just OpenStack. There are a bunch of applications
  built on libvirt that could be used to manage bhyve, and a fair few
  applications which have plugins using libvirt
 
 Could you perhaps give some pointers on the libvirt development
 process, how to contribute changes and so on?

The libvirt development process follows a time-based release cycle.
We aim to release once a month, on/near the 1st of the month to ensure
there's always fast turn around for user delivery once a feature is
merged.

We of course use GIT for development but in contrast to OpenStack we
follow a traditional mailing list based workflow. We recommend that
people use 'git send-email' to submit patch(-series) against the
libvirt GIT master branch. We don't require any kind of CLA to be
signed - people can just jump right in and contribute. There is some
more guidance on general development practices here

  http://libvirt.org/hacking.html

Before starting on a major dev effort we'd recommend people join the
main dev mailing list and just outline what they want to achieve, so
we can give early feedback and/or assistance. 

  http://libvirt.org/contact.html

If you want to get into nitty-gritty of how to actually write a new
hypervisor driver in libvirt, we should probably move the conversation
to the libvirt list.

 Another quick question: for cases like this, how does Nova manage
 syncing with the required libvirt codebase when a new hypervisor
 driver is added or for similar major updates happen?

Nova declares a minimum required libvirt version, currently 0.9.6
but with a patch pending to increase this to 0.9.11. This is mostly
about ensuring a core feature set is available. Any given libvirt
hypervisor driver may decline to implement any feature it wishes.
So there are also version checks done for specific features, or a
feature may be blindly used and any exception reported is handled
in some way.

For the current Libvirt drivers (xen/lxc/qemu/kvm) we look at what
Linux distros have which versions when deciding what is a reasonable
version to mandate

  https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix

periodically we'll re-examine the distros to see if there's a reason
to update the min required libvirt. Currently we don't distinguish
different libvirt versions for different hypervisors, but there's
no reason we shouldn't do that.

So if you were to write a bhyve driver for libvirt, then we'd end
up adding a specific version check to Nova that mandates use of
a suitably new libvirt version with that HV platform.
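
As a rough sketch of the kind of gate that implies, the driver can compare
the version libvirt reports against a per-hypervisor minimum. The helper and
the minimum version constant below are illustrative, not Nova's actual code.

# Illustrative version gate; MIN_LIBVIRT_BHYVE_VERSION is made up.
MIN_LIBVIRT_BHYVE_VERSION = (1, 2, 2)


def version_to_tuple(version_int):
    """libvirt encodes versions as major*1000000 + minor*1000 + micro."""
    return (version_int // 1000000,
            (version_int // 1000) % 1000,
            version_int % 1000)


def has_min_version(conn, minimum):
    """True if the connected libvirt is at least `minimum`."""
    return version_to_tuple(conn.getLibVersion()) >= minimum


# usage (sketch):
#   if virt_type == 'bhyve' and not has_min_version(conn,
#                                                   MIN_LIBVIRT_BHYVE_VERSION):
#       raise RuntimeError('libvirt is too old for the bhyve driver')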

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all project] Time to fail tempest gate jobs when new log errors appear

2013-11-27 Thread Jay Pipes

On 11/27/2013 01:53 PM, David Kranz wrote:

tl;dr Soon, perhaps next week, tempest gate jobs will start failing if
there are any ERROR lines in the logs that are not matched by an entry
in https://github.com/openstack/tempest/blob/master/etc/whitelist.yaml.
There is an exception for neutron because
more work needs to be done there for this to be feasible.

The whitelist file contains a lot of entries that look more like
substantial bugs than incorrectly logging an ERROR due to bad
client data. I have been tracking this for a while and new things show
up frequently. But since the tests pass no one looks at the logs and
notices the bug indicators. We need to stop these bugs from getting
merged. I have filed individual bugs for many of the items in the
whitelist, but by no means all of them. The neutron team is taking on
the task of getting rid of their errors and there are so many it is too
much work to keep an up-to-date whitelist for neutron. So for the time
being, neutron runs will dump all errors to the console but not fail.

In addition to the fact that these log errors indicate bugs, they make
it more difficult to diagnose a problem when builds actually fail in the
tempest tests because it can be hard to tell which log errors are
known and which might be causing the failure. Hopefully some priority
will be given to fixing these bugs and removing entries from the
whitelist until it is driven to zero.

If any one has any comments or suggestions to improve this process,
please speak up.


Yay! \o/

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting *cancelled* Nov 28

2013-11-27 Thread Sergey Lukjanov
Hi folks,

The Savanna team meeting is cancelled this week due to the major US holiday. We’ll be 
having the IRC meeting as usual on Dec 5 at 1800 UTC.

Thanks.

P.S. Have a nice holiday!

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Definition feedback

2013-11-27 Thread Adrian Otto
Jay,

On Nov 27, 2013, at 10:36 AM, Jay Pipes jaypi...@gmail.com
 wrote:

 On 11/27/2013 06:23 AM, Tom Deckers (tdeckers) wrote:
  I understand that an Assembly can be a larger group of components.
 However, those together exist to provide a capability which we want
 to capture in some catalog so the capability becomes discoverable.
 I'm not sure how the 'listing' mechanism works out in practice.  If
 this can be used in an enterprise ecosystem to discover services then
  that's fine.  We should capture a work item to flesh out discoverability
 of both Applications and Assemblies.  I make that distinction because
 both scenarios should be provided. As a service consumer, I should be
 able to look at the 'Applications' listed in the Openstack
 environment and provision them.  In that case, we should also support
 flavors of the service.  Depending on the consumer-provider
  relationship, we might want to provide different configurations of the
 same Application. (e.g. gold-silver-bronze tiering).  I believe this
 is covered by the 'listing' you mentioned. Once deployed, there
 should also be a mechanism to discover the deployed assemblies.  One
 example of such deployed Assembly is a persistence service that can
 in its turn be used as a Service in another Assembly.  The specific
 details of the capability provided by the Assembly needs to be
 discoverable in order to allow successful auto-wiring (I've seen a
 comment about this elsewhere in the project - I believe in last
 meeting).
 
 Another thought around the naming of Assembly... there's no reason why the 
 API cannot just ditch the entire notion of an assembly, and just use 
 Component in a self-referential way.
 
 In other words, an Application (or whatever is agree on for that resource 
 name) contains one or more Components. Components may further be composed of 
 one or more (sub)Components, which themselves may be composed of further 
 (sub)Components.
 
 That way you keep the notion of a Component as generic and encompassing as 
 possible and allow for an unlimited generic hierarchy of Component resources 
 to comprise an Application.

As currently proposed, an Assembly (a top level grouping of Components) 
requires only one Component, but may contain many. The question is whether we 
should even have an Assembly. I admit that Assembly is a new term, and 
therefore requires definition, explanation, and examples. However, I think 
eliminating it and just using Components is getting a bit too abstract, and 
requires a bit too much explanation. 

I consider this subject analogous to the fundamentals concepts of Chemistry. 
Imagine trying to describe a molecule by only using the concept of an atom. 
Each atom can be different, and have more or less electrons etc. But if we did 
not have the concept of a molecule (a top level grouping of atoms), and tried 
to explain them as atoms contained within other atoms, Chemistry would get 
harder to teach.

We want this API to be understandable to Application Developers. I am afraid of 
simplifying matters too much, and making things a bit too abstract.

Thanks,

Adrian

 
 Best,
 -jay
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate broken right now

2013-11-27 Thread Robert Collins
On 27 November 2013 07:19, James E. Blair jebl...@openstack.org wrote:

 There is not, currently, though we would love to have more people on the
 infra-core (and infra-root) teams.  If you know someone, or you
 yourself, or your company are interested in contributing in this area,
 please get in contact and we can help.  There is a significant time
  commitment to working on infra due to the wide range of complex systems;
  however, a number of companies are working with these systems internally
 and should be able to spare some expertise in contributing to their
 upstream development and operation without undue burden.

I'm interested in contributing directly (to both -root and -core) -
but I'd need a more specific time estimate than 'significant' to be
able to decide if I can put that much time aside per week.

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-11-27 Thread Zane Bitter

On 27/11/13 18:16, Tim Schnell wrote:


On 11/27/13 10:09 AM, Zane Bitter zbit...@redhat.com wrote:


On 26/11/13 22:24, Tim Schnell wrote:

Use Case #1
I see valid value in being able to group templates based on a type or


+1, me too.


keyword. This would allow any client, Horizon or a Template Catalog
service, to better organize and handle display options for an end-user.


I believe these are separate use cases and deserve to be elaborated as
such. If one feature can help with both that's great, but we're putting
the cart before the horse if we jump in and implement the feature
without knowing why.

Let's consider first a catalog of operator-provided templates as
proposed (IIUC) by Keith. It seems clear to me in that instance the
keywords are a property of the template's position in the catalog, and
not of the template itself.

Horizon is a slightly different story. Do we even allow people to upload
a bunch of templates and store them in Horizon? If not then there
doesn't seem much point in this feature for current Horizon users. (And
if we do, which would surprise me greatly, then the proposed
implementation doesn't seem that practical - would we want to retrieve
and parse every template to get the keyword?)


Correct, at the moment, Horizon has no concept of a template catalog of
any kind.

Here is my use case for including this in the template for Horizon:
(I'm going to start moving these to the wiki that Steve Baker set up)

Let's assume that an end-user of Heat has spun up 20 stacks and has now
requested help from a Support Operator of heat. In this case, the end-user
did not have a solid naming convention for naming his stacks; they are all
named tim1, tim2, etc. And also his request to the Support Operator
was really vague, like "My Wordpress stack is broken".

The first thing that the Support Operator would do, would be to pull up
end-user's stacks in either Horizon or via the heat client cli. In both
cases, at the moment, he would then have to either stack-show on each
stack to look at the description of the stack or ask the end-user for a
stack-id/stack-name. This currently gets the job done but a better
experience would be for stack-list to already display some keywords about
each stack so the Support Operator would have to do less digging.

In this case the end-user only has one Wordpress stack so he would have
been annoyed if the Support Operator requested more information from him.
(Or maybe he has more than one wordpress stack, but only one currently in
CREATE_FAILED state).


This is a truly excellent example of a use case, btw. Kudos.


As a team, we have already encountered this exact situation just doing
team testing so I imagine that others would find value in a consistent way
to determine at least a general purpose of a stack, from the stack-list
page. Putting the stack-description in the stack-list table would take up
too much room from a design standpoint.

Once a keywords field has been added to the template, part of the blueprint
would be to return it with the stack-list information.


Other OpenStack APIs have user-defined 'tags' associated with resources. 
Maybe we should implement something like this for Heat stacks also. 
(i.e. it's in the API, not the template.) When the dashboard launches a 
template out of the template catalog, it could automatically populate 
the tags with the ones from the catalog metadata?



In the longer term, there seems to be a lot of demand for some sort of
template catalog service, like Glance for templates. (I disagree with
Clint that it should actually _be_ Glance the project as we know it, for
the reasons Steve B mentioned earlier, but the concept is right.) And
this brings us back to a very similar situation to the operator-provided
template catalog (indeed, that use case would likely be subsumed by this
one).


I believe that Ladislav initially proposed a solution that will work
here.
So I will second a proposal that we add a new top-level field to the HOT
specification called keywords that contains this template type.

keywords: wordpress, mysql, etc.


+1. If we decide that the template is the proper place for these tags
then this is the perfect way to do it IMO (assuming that it's actually a
list, not a comma-separated string). It's a standard format that we can
document and any tool can recognise, the name keywords describes
exactly what it does and there's no confusion with tags in Nova and EC2.
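
To make that concrete, this is roughly what I would expect tools to be able
to rely on (a sketch only - the template contents and the stack-list
behaviour below are illustrative, not an agreed spec):

import yaml

template = yaml.safe_load("""
heat_template_version: 2013-05-23
description: WordPress on a single instance
keywords:
  - wordpress
  - mysql
""")

keywords = template.get('keywords', [])
assert isinstance(keywords, list)   # a real list, not 'wordpress, mysql'
print(', '.join(keywords))          # e.g. what stack-list could display

The only parsing a client ever has to do is "read the keywords list"; no
splitting of comma-separated strings, no guessing from the description.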


Use Case #2
The template author should also be able to explicitly define a help
string
that is distinct and separate from the description of an individual


This is not a use case, it's a specification. There seems to be a lot of
confusion about the difference, so let me sum it up:

Why - Use Case
What - Specification
How - Design Document (i.e. Code)

I know this all sounds very top-down, and believe me that's not my
philosophy. But design is essentially a global optimisation problem - we
need to see the whole picture to properly 

Re: [openstack-dev] [nova][libvirt] Gate bug 'libvirtError: Unable to read from monitor: Connection reset by peer'

2013-11-27 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 10:50:12AM -0800, Joe Gordon wrote:
 Hi Daniel, all
 
 This morning I noticed we are seeing libvirt stacktrace in the gate
 occasionally.
 
 https://bugs.launchpad.net/nova/+bug/1255624
 
 Since you are a libvirt maintainer I was hoping you could take a look at
 this one.

I've commented on the bug with some hints - basically you'll need to
examine more log files to find out why QEMU is dying a horrible
death. Can we capture the /var/log/libvirt/ files on each run (and
obviously blow them away before the start of each cycle)?

 We also have a second libvirt gate bug
 
 https://bugs.launchpad.net/nova/+bug/1254872

Oh I hate this error message. It is another sign of bad stuff happening.
Made some comments there too.

It would be desirable if the gate logs included details of all software
package versions installed, e.g. so we can see what libvirt, qemu and
kernel are present. Also perhaps enable the libvirt logging by default
too per my comments in this bug.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Gate bug 'libvirtError: Unable to read from monitor: Connection reset by peer'

2013-11-27 Thread Jeremy Stanley
On 2013-11-27 19:18:12 + (+), Daniel P. Berrange wrote:
[...]
 It would be desirable if the gate logs included details of all software
 package versions installed. eg so we can see what libvirt, qemu and
 kernel are present.
[...]

It seems reasonable to me that we could run dpkg -l or rpm -qa at the
end of the job, similar to how we currently already do a pip freeze.
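
Something as simple as the following, run at the end of the job, would do
it (an illustrative sketch - the real job hooks would most likely just
shell out directly):

import subprocess

def dump(cmd, outfile):
    # Capture the command output next to the other job logs; skip quietly
    # when the tool isn't present on this node.
    try:
        output = subprocess.check_output(cmd)
    except (OSError, subprocess.CalledProcessError):
        return
    with open(outfile, 'wb') as f:
        f.write(output)

dump(['dpkg', '-l'], 'dpkg-l.txt')         # Debian/Ubuntu nodes
dump(['rpm', '-qa'], 'rpm-qa.txt')         # Fedora/RHEL nodes
dump(['pip', 'freeze'], 'pip-freeze.txt')
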
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Definition feedback

2013-11-27 Thread Jay Pipes

On 11/27/2013 02:03 PM, Adrian Otto wrote:

Jay,

On Nov 27, 2013, at 10:36 AM, Jay Pipes jaypi...@gmail.com wrote:


On 11/27/2013 06:23 AM, Tom Deckers (tdeckers) wrote:

I understand the an Assembly can be a larger group of
components. However, those together exist to provide a capability
which we want to capture in some catalog so the capability
becomes discoverable. I'm not sure how the 'listing' mechanism
works out in practice.  If this can be used in an enterprise
ecosystem to discover services then that's fine.  We should
capture a work item to flesh out discoverability of both
Applications and Assemblies.  I make that distinction because
both scenarios should be provided. As a service consumer, I
should be able to look at the 'Applications' listed in the
Openstack environment and provision them.  In that case, we
should also support flavors of the service.  Depending on the
consumer-provider relationship, we might want to provide
different configurations of the same Application (e.g.
gold-silver-bronze tiering).  I believe this is covered by the
'listing' you mentioned. Once deployed, there should also be a
mechanism to discover the deployed assemblies.  One example of
such a deployed Assembly is a persistence service that can in its
turn be used as a Service in another Assembly.  The specific
details of the capability provided by the Assembly need to be
discoverable in order to allow successful auto-wiring (I've seen
a comment about this elsewhere in the project - I believe in
the last meeting).


Another thought around the naming of Assembly... there's no
reason why the API cannot just ditch the entire notion of an
assembly, and just use Component in a self-referential way.

In other words, an Application (or whatever is agreed on for that
resource name) contains one or more Components. Components may
further be composed of one or more (sub)Components, which
themselves may be composed of further (sub)Components.

That way you keep the notion of a Component as generic and
encompassing as possible and allow for an unlimited generic
hierarchy of Component resources to comprise an Application.


As currently proposed, an Assembly (a top level grouping of
Components) requires only one Component, but may contain many. The
question is whether we should even have an Assembly. I admit that
Assembly is a new term, and therefore requires definition,
explanation, and examples. However, I think eliminating it and just
using Components is getting a bit too abstract, and requires a bit
too much explanation.

I consider this subject analogous to the fundamental concepts of
Chemistry. Imagine trying to describe a molecule by only using the
concept of an atom. Each atom can be different, and have more or less
electrons etc. But if we did not have the concept of a molecule (a
top level grouping of atoms), and tried to explain them as atoms
contained within other atoms, Chemistry would get harder to teach.

We want this API to be understandable to Application Developers. I am
afraid of simplifying matters too much, and making things a bit too
abstract.


Understood, but I actually think that the Component inside Component 
approach would work quite well with a simple component type attribute 
of the Component resource.


In your particle physics example, it would be the equivalent of saying 
that an Atom is composed of subatomic particles, with those subatomic 
particles having different types (hadrons, baryons, mesons, etc) and 
those subatomic particles being composed of zero or more subatomic 
particles of various types (neutrons, protons, fermions, bosons, etc).


In fact, particle physics has the concept of elementary particles -- 
those particles whose composition is unknown -- and composite particles 
-- those particles that are composed of other particles. The congruence 
between the taxonomy of particles and what I'm proposing is actually 
remarkable :)


Elementary particle is like a Component with no sub Components
Composite particle is like a Component with sub Components.
Each particle has a type, and each Component would also have a type.
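
A rough sketch of the shape I am describing, with names that are purely
illustrative rather than a proposed schema:

class Component(object):
    def __init__(self, name, component_type, components=None):
        self.name = name
        self.type = component_type          # e.g. 'application', 'service', 'artifact'
        self.components = components or []  # zero or more sub-Components

app = Component('my_app', 'application', components=[
    Component('web_tier', 'group', components=[
        Component('php_app', 'artifact'),
    ]),
    Component('db', 'service'),
])

The type attribute carries the distinction that Assembly was meant to
express, without adding another top-level resource.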

Other possibility:

Call an Assembly exactly what it is: ComponentGroup

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why neutron-openvswitch-agent use linux-bridge?

2013-11-27 Thread George Shuklin
 Thank you for the reply!
 
 A few more questions:
 
 AFAIK the bridge tools are not very fast (compared to OVS), so adding a
 Linux bridge between OVS and the tap device (instead of yet another OVS
 switch) slows everything down. Why not just use another openvswitch switch
 to connect the tap to the veth devices?
 
 Why iptables, and not internal openvswitch flow rules? Those rules allow
 filtering packets on L2-L4 headers and operate very fast. Are some
 iptables-only features used in ovs-agent?

Thanks.
On 27.11.2013 at 20:55, Lorin Hochstein lo...@nimbisservices.com
wrote:

 Hi George:



 On Wed, Nov 27, 2013 at 1:45 PM, George Shuklin 
 george.shuk...@gmail.comwrote:

 Good day.

 I am looking at the internals of the bridge layout of the openvswitch agent at
 http://docs.openstack.org/network-admin/admin/content/figures/2/figures/under-the-hood-scenario-1-ovs-compute.png
 and wondering why this scheme is so complicated and why it uses a linux
 bridge and veths together with openvswitch. Why not just plug the tap device
 directly into the openvswitch bridge without an intermediate brctl bridge?
 
 I guess that was caused by some important consideration, but I was unable to
 find any documents about this.
 
 If someone knows the reasons for this complex construction with different
 bridges, please respond.


 If you look a little further down on the page with that figure, the
 documentation reads

 Ideally, the TAP device vnet0 would be connected directly to the
 integration bridge, br-int. Unfortunately, this isn't possible because of
 how OpenStack security groups are currently implemented. OpenStack uses
 iptables rules on the TAP devices such as vnet0 to implement security
 groups, and Open vSwitch is not compatible with iptables rules that are
 applied directly on TAP devices that are connected to an Open vSwitch port.


 Take care,

 Lorin



 --
 Lorin Hochstein
 Lead Architect - Cloud Services
 Nimbis Services, Inc.
 www.nimbisservices.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware] VMwareAPI sub-team status update 2013-11-27

2013-11-27 Thread Shawn Hartsock
Greetings Stackers!

We have Icehouse-1 on December 5th! Here are the reviews that the VMwareAPI team 
has in flight, in order of fitness for review by core reviewers. It's also 
holiday season now, so please keep in mind folks may be in and out during this 
period. Let's see if we can't get some of these topmost patches merged for 
Icehouse-1. Thanks to everyone who is doing reviews, and a gentle reminder to 
some to step up your review output.


Ordered by fitness for review:

== needs one more +2/approval ==
* https://review.openstack.org/55505
title: 'VMware: Handle cases when there are no hosts in cluster'
votes: +2:2, +1:3, -1:0, -2:0. +20 days in progress, revision: 1 is 20 
days old 
* https://review.openstack.org/56746
title: 'VMware: Detach volume should not delete vmdk'
votes: +2:1, +1:2, -1:0, -2:0. +11 days in progress, revision: 1 is 9 
days old 
* https://review.openstack.org/47743
title: 'VMWare: bug fix for Vim exception handling'
votes: +2:1, +1:3, -1:0, -2:0. +66 days in progress, revision: 11 is 4 
days old 
* https://review.openstack.org/53109
title: 'VMware: enable driver to work with postgres database'
votes: +2:2, +1:9, -1:0, -2:0. +36 days in progress, revision: 2 is 36 
days old 
* https://review.openstack.org/55934
title: 'VMware: Always upload a snapshot as a preallocated disk'
votes: +2:1, +1:3, -1:0, -2:0. +16 days in progress, revision: 4 is 8 
days old 

== ready for core ==
* https://review.openstack.org/53990
title: 'VMware ESX: Boot from volume must not relocate vol'
votes: +2:0, +1:6, -1:0, -2:0. +32 days in progress, revision: 3 is 32 
days old 
* https://review.openstack.org/54361
title: 'VMware: fix datastore selection when token is returned'
votes: +2:0, +1:7, -1:0, -2:0. +29 days in progress, revision: 5 is 28 
days old 
* https://review.openstack.org/49692
title: 'VMware: iscsi target discovery fails while attaching volumes'
votes: +2:0, +1:6, -1:0, -2:0. +54 days in progress, revision: 9 is 15 
days old 

== needs review ==
* https://review.openstack.org/58705
title: 'VMware: bug fix for incomplete statistics/datastore reports'
votes: +2:0, +1:1, -1:0, -2:0. +0 days in progress, revision: 2 is 0 
days old 
* https://review.openstack.org/58598
title: 'VMware: deal with concurrent downloads'
votes: +2:0, +1:1, -1:0, -2:0. +1 days in progress, revision: 2 is 0 
days old 
* https://review.openstack.org/58703
title: 'VMware: fix snapshot failure when host in maintenance mode'
votes: +2:0, +1:1, -1:0, -2:0. +0 days in progress, revision: 3 is 0 
days old 
* https://review.openstack.org/52630
title: 'VMware: fix bug when more than one datacenter exists'
votes: +2:0, +1:2, -1:0, -2:0. +40 days in progress, revision: 14 is 3 
days old 
* https://review.openstack.org/52557
title: 'VMware Driver update correct disk usage stat'
votes: +2:0, +1:1, -1:0, -2:0. +41 days in progress, revision: 2 is 1 
days old 
* https://review.openstack.org/55070
title: 'VMware: fix rescue with disks are not hot-addable'
votes: +2:0, +1:3, -1:0, -2:0. +24 days in progress, revision: 2 is 3 
days old 
* https://review.openstack.org/58142
title: 'VMware: validate data returned by task prior to access'
votes: +2:0, +1:2, -1:0, -2:0. +3 days in progress, revision: 2 is 3 
days old 
* https://review.openstack.org/57519
title: 'VMware: use .get() to access 'summary.accessible''
votes: +2:0, +1:4, -1:0, -2:0. +7 days in progress, revision: 1 is 2 
days old 
* https://review.openstack.org/57376
title: 'VMware: delete vm snapshot after nova snapshot'
votes: +2:0, +1:4, -1:0, -2:0. +7 days in progress, revision: 4 is 2 
days old 
* https://review.openstack.org/55038
title: 'VMware: bug fix for VM rescue when config drive is config...'
votes: +2:0, +1:1, -1:0, -2:0. +25 days in progress, revision: 3 is 3 
days old 
* https://review.openstack.org/53648
title: 'VMware: fix image snapshot with attached volume'
votes: +2:0, +1:4, -1:0, -2:0. +34 days in progress, revision: 1 is 34 
days old 
* https://review.openstack.org/54808
title: 'VMware: fix bug for exceptions thrown in _wait_for_task'
votes: +2:0, +1:4, -1:0, -2:0. +27 days in progress, revision: 3 is 12 
days old 
* https://review.openstack.org/55509
title: 'VMware: fix VM resize bug'
votes: +2:0, +1:4, -1:0, -2:0. +20 days in progress, revision: 1 is 20 
days old 
* https://review.openstack.org/43270
title: 'vmware driver selection of 

Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-27 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 06:43:42PM +, Edward Hope-Morley wrote:
 On 27/11/13 18:20, Daniel P. Berrange wrote:
  On Wed, Nov 27, 2013 at 06:10:47PM +, Edward Hope-Morley wrote:
  On 27/11/13 17:43, Daniel P. Berrange wrote:
  On Wed, Nov 27, 2013 at 05:39:30PM +, Edward Hope-Morley wrote:
  On 27/11/13 15:49, Daniel P. Berrange wrote:
  On Wed, Nov 27, 2013 at 02:45:22PM +, Edward Hope-Morley wrote:
  Moving this to the ml as requested, would appreciate
  comments/thoughts/feedback.
 
  So, I recently proposed a small patch to the oslo rpc code (initially in
  oslo-incubator, then moved to oslo.messaging) which extends the existing
  support for limiting the rpc thread pool so that concurrent requests can
  be limited based on type/method. The blueprint and patch are here:
 
  https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control
 
  The basic idea is that if you have a server with limited resources you may
  want to restrict operations that would impact those resources, e.g. live
  migrations on a specific hypervisor or volume formatting on a particular
  volume node. This patch allows you, admittedly in a very crude way, to
  apply a fixed limit to a set of rpc methods. I would like to know
  whether or not people think this sort of thing would be useful or
  whether it alludes to a more fundamental issue that should be dealt
  with in a different manner.
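 
  To make this concrete, the sort of per-method cap I mean looks roughly
  like the following (an illustrative sketch only - not the actual
  oslo.messaging code, and the option/method names are made up):

import functools
import threading

# made-up config: cap live_migration at 1 concurrent call on this node
method_limits = {'live_migration': 1}
_semaphores = dict((name, threading.BoundedSemaphore(limit))
                   for name, limit in method_limits.items())

def concurrency_limited(func):
    # Block additional callers of this rpc method once its cap is reached.
    sem = _semaphores.get(func.__name__)
    if sem is None:
        return func

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with sem:
            return func(*args, **kwargs)
    return wrapper

  Whether this belongs in the dispatch path or somewhere else entirely is
  exactly the kind of feedback I am after.
 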
  Based on this description of the problem I have some observations
 
   - I/O load from the guest OS itself is just as important to consider
 as I/O load from management operations Nova does for a guest. Both
 have the capability to impose denial-of-service on a host. IIUC, the
 flavour specs have the ability to express resource constraints for
 the virtual machines to prevent a guest OS initiated DOS-attack
 
   - I/O load from live migration is attributable to the running
 virtual machine. As such I'd expect that any resource controls
 associated with the guest (from the flavour specs) should be
 applied to control the load from live migration.
 
 Unfortunately life isn't quite this simple with KVM/libvirt
 currently. For networking we've associated each virtual TAP
 device with traffic shaping filters. For migration you have
 to set a bandwidth cap explicitly via the API. For network
 based storage backends, you don't directly control network
 usage, but instead I/O operations/bytes. Ultimately though
 there should be a way to enforce limits on anything KVM does,
 similarly I expect other hypervisors can do the same
 
   - I/O load from operations that Nova does on behalf of a guest
 that may be running, or may yet to be launched. These are not
 directly known to the hypervisor, so existing resource limits
 won't apply. Nova however should have some capability for
 applying resource limits to I/O intensive things it does and
 somehow associate them with the flavour limits  or some global
 per user cap perhaps.
 
  Thoughts?
  Overall I think that trying to apply caps on the number of API calls
  that can be made is not really a credible way to avoid users inflicting
  a DOS attack on the host OS. Not least because it does nothing to control
  what a guest OS itself may do. If you do caps based on the number of API calls
  in a time period, you end up having to do an extremely pessimistic
  calculation - basically have to consider the worst case for any single
  API call, even if most don't hit the worst case. This is going to hurt
  scalability of the system as a whole IMHO.
 
  Regards,
  Daniel
  Daniel, thanks for this - these are all valid points and essentially tie
  in with the fundamental issue of dealing with DOS attacks, but for this bp I
  actually want to stay away from that area, i.e. this is not intended to
  solve any tenant-based attack issues in the rpc layer (although that
  definitely warrants a discussion, e.g. how do we stop a single tenant
  from consuming the entire thread pool with requests). Rather, I'm
  thinking more from a QOS perspective, i.e. allowing an admin to account
  for a resource bias, e.g. a slow RAID controller, on a given node (not
  necessarily Nova/HV), which could be alleviated with this sort of crude
  rate limiting. Of course, one problem with this approach is that
  blocked/limited requests still reside in the same pool as other requests,
  so if we did want to use this it may be worth considering offloading
  blocked requests or giving them their own pool altogether.
 
  ...or maybe this is just pie in the sky after all.
  I don't think it is valid to ignore tenant-based attacks in this. You
  have a single resource here and it can be consumed by the tenant
  OS, by the VM associated with the tenant or by Nova itself. As such,
  IMHO adding rate limiting to Nova APIs alone is a non-solution because
  you've still left it wide open to starvation by any number of other
  routes which are arguably 

Re: [openstack-dev] [Solum] API worker architecture

2013-11-27 Thread Roshan Agrawal
We should probably include support for asynchronous responses right from the 
beginning to handle calls that need a long time to process. Is this in line 
with what you were thinking? I am referring to your comment in the blueprint: 
"To start things off, we can implement workflow #1 shown above and make all 
requests synchronous".
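
Concretely, I am thinking of the usual 202-and-poll pattern, something like
this rough sketch (not tied to any particular framework; all names here are
illustrative):

import uuid

def create_assembly(request_body, work_queue, statuses):
    # Accept the request, hand the real work to a backend worker, and
    # immediately return 202 plus a URI the client can poll.
    req_id = str(uuid.uuid4())
    statuses[req_id] = 'PENDING'
    work_queue.put((req_id, request_body))
    return 202, {'uri': '/v1/assemblies/%s' % req_id, 'status': 'PENDING'}

def get_assembly(req_id, statuses):
    # Clients poll here until the status becomes READY (or ERROR).
    return 200, {'id': req_id, 'status': statuses.get(req_id, 'UNKNOWN')}

The synchronous workflow #1 could still be the first implementation; the
point is just to shape the API so a 202 plus a polling URI can be returned
later without breaking clients.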

From: Murali Allada [mailto:murali.all...@rackspace.com]
Sent: Wednesday, November 27, 2013 12:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] API worker architecture

Hi all,

I'm working on a new blueprint to define the API service architecture.

https://blueprints.launchpad.net/solum/+spec/api-worker-architecture

It is still a draft for now. I've proposed a simple architecture for us to get 
started with implementing a few useful use cases.

Please chime in on the mailing list with your thoughts.

Thanks,
Murali
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-11-27 Thread David Kranz

On 11/27/2013 12:46 PM, Alan Pevec wrote:

2013/11/27 Sean Dague s...@dague.net:

The problem is you can't really support both; iso8601 was dormant for
years, and the revived version isn't compatible with the old version.
So supporting both means basically forking iso8601 and maintaining your
own version of it monkey-patched in your own tree.

Right, hence https://review.openstack.org/55998 was added to glance to
unblock the previous gate failure.
The issue now is that stable/grizzly Tempest uses clients from git trunk,
which is not going to work since trunk will add more and more
incompatible dependencies, even if backward compatibility is preserved
against the old service APIs!
I think when we decided to unpin clients and said that current clients 
should work with older servers, we really wanted that to mean a current 
client can talk to the REST API of older servers so users don't have to 
deal with this, but we ended up with current clients can be installed 
in the same python env as older servers, and implemented the 
compatibility testing assuming the latter.


Solutions could be that Tempest installs clients into a separate venv to
avoid dependency conflicts, or that we establish stable/* branches for
clients[1] which are created around OpenStack release time.
We might be able to get the gate testing to work with a separate venv, 
but I don't know enough to be sure. Even if we could do that, users 
could have the same problem if they try to install a current library on 
a machine with an older server. It is a problem that we are stuck in the 
python version of http://en.wikipedia.org/wiki/DLL_Hell but where there 
is not even an attempt by some libraries to be compatible. Or am I 
missing something? Is there some way to support side-by-side libraries 
the way Microsoft eventually did to get out of this mess?


 -David


Cheers,
Alan

[1] we have those for openstack client packages in Fedora/RDO
  e.g. https://github.com/redhat-openstack/python-novaclient/branches
  Here's a nice explanation by Jakub: http://openstack.redhat.com/Clients


On Wed, Nov 27, 2013 at 1:58 AM, Yaguang Tang
yaguang.t...@canonical.com wrote:

Updating to iso8601=0.1.8 breaks the stable/neutron jenkins tests,
because stable/glance requires  iso8601=0.1.4; log info:
https://jenkins02.openstack.org/job/periodic-tempest-devstack-vm-neutron-stable-grizzly/43/console,
I have filed a bug to track this
https://bugs.launchpad.net/glance/+bug/1255419.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Planning for IPv6 Features

2013-11-27 Thread Collins, Sean (Contractor)
Moving this from the Gerrit review about change #58186 [1], to the
mailing list for discussion.

Let's start making some blueprints under the ipv6-feature-parity
blueprint[2]. I've already registered a blueprint for the work
that I've been doing, to get provider networking with V6 running,
which links to the bug reports that I've filed.

Short term, my goal is to get provider networks up and running, 
where instances can get RA's from an upstream router outside of
OpenStack and configure themselves.

Medium term, we want to make dnsmasq configuration more
flexible[3], which Dazhao Yu and I are working on.

More long term, I'd like to make it so that if there is an upstream
router doing RA's - Neutron should send a PD automatically on network
creation, and populate a subnet from the response given by the upstream
router.

[1] https://review.openstack.org/#/c/58186/
[2] https://blueprints.launchpad.net/neutron/+spec/ipv6-feature-parity
[3] https://blueprints.launchpad.net/neutron/+spec/dnsmasq-mode-keyword

On Wed, Nov 27, 2013 at 01:02:29PM -0500, Shixiong Shang wrote:
 Hi, Sean:
 
 I agree with your suggestion. Just read these two changes and they are in 
 good shape. Don’t want to re-invent the wheel.
 
 That being said, there are still a few outstanding issues. We plan to modify the 
 current submission to bridge the rest of the gaps. However, before we do 
 that, we would like to understand what has been done already and what other 
 efforts are going on, so we will not duplicate the efforts.
 
 Thoughts? Comments?
 
 Thanks!
 
 Shixiong
 
 
 
 
 
 
 On Nov 27, 2013, at 10:17 AM, Sean M. Collins (Code Review) 
 rev...@openstack.org wrote:
 
  Sean M. Collins has posted comments on this change.
  
  Change subject: Adds IPv6 SLAAC Support to Neutron
  ..
  
  
  Patch Set 2: I would prefer that you didn't merge this
  
  It may be worth rebasing your change on top of the following changes - 
  that'll at least reduce size of the patch and de-duplicate effort.
  
  https://review.openstack.org/#/c/53028/
  
  https://review.openstack.org/#/c/56184/
  
  --
  To view, visit https://review.openstack.org/58186
  To unsubscribe, visit https://review.openstack.org/settings
  
  Gerrit-MessageType: comment
  Gerrit-Change-Id: Ieb9b76fa106a9228d9d68e25e8d5c0152336d9b7
  Gerrit-PatchSet: 2
  Gerrit-Project: openstack/neutron
  Gerrit-Branch: master
  Gerrit-Owner: Shixiong Shang sparkofwisdom.cl...@gmail.com
  Gerrit-Reviewer: Brian Haley brian.ha...@hp.com
  Gerrit-Reviewer: Dazhao Yu d...@cn.ibm.com
  Gerrit-Reviewer: Jenkins
  Gerrit-Reviewer: Kyle Mestery kmest...@cisco.com
  Gerrit-Reviewer: Paul Michali p...@cisco.com
  Gerrit-Reviewer: Sean M. Collins sean_colli...@cable.comcast.com
  Gerrit-Reviewer: Shixiong Shang sparkofwisdom.cl...@gmail.com
  Gerrit-Reviewer: Yang Yu yuyan...@cn.ibm.com
 

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why neutron-openvswitch-agent use linux-bridge?

2013-11-27 Thread Kyle Mestery (kmestery)

On Nov 27, 2013, at 1:29 PM, George Shuklin george.shuk...@gmail.com wrote:

 Thank you for the reply!
 
 A few more questions:
 
 AFAIK the bridge tools are not very fast (compared to OVS), so adding a
 Linux bridge between OVS and the tap device (instead of yet another OVS
 switch) slows everything down. Why not just use another openvswitch switch
 to connect the tap to the veth devices?
 
 Why iptables, and not internal openvswitch flow rules? Those rules allow
 filtering packets on L2-L4 headers and operate very fast. Are some
 iptables-only features used in ovs-agent?
 
George:

There is work ongoing now to implement security groups using
OVS flow rules in the OVS agent with the ML2 plugin. That would
do what you're looking at above. Stay tuned on this, the authors
of these patches hope to have some WIP code available soon.

Thanks,
Kyle

 Thanks.
 
 On 27.11.2013 at 20:55, Lorin Hochstein lo...@nimbisservices.com 
 wrote:
 Hi George:
 
 
 
 On Wed, Nov 27, 2013 at 1:45 PM, George Shuklin george.shuk...@gmail.com 
 wrote:
 Good day.
 
 I am looking at the internals of the bridge layout of the openvswitch agent at 
 http://docs.openstack.org/network-admin/admin/content/figures/2/figures/under-the-hood-scenario-1-ovs-compute.png
 and wondering why this scheme is so complicated and why it uses a linux bridge 
 and veths together with openvswitch. Why not just plug the tap device directly into 
 the openvswitch bridge without an intermediate brctl bridge?
 
 I guess that was caused by some important consideration, but I was unable to find 
 any documents about this.
 
 If someone knows the reasons for this complex construction with different bridges, 
 please respond.
 
 
 If you look a little further down on the page with that figure, the 
 documentation reads
 
 Ideally, the TAP device vnet0 would be connected directly to the integration 
 bridge, br-int. Unfortunately, this isn't possible because of how OpenStack 
 security groups are currently implemented. OpenStack uses iptables rules on 
 the TAP devices such as vnet0 to implement security groups, and Open vSwitch 
 is not compatible with iptables rules that are applied directly on TAP 
 devices that are connected to an Open vSwitch port.
 
 
 Take care,
 
 Lorin
 
 
 
 -- 
 Lorin Hochstein
 Lead Architect - Cloud Services
 Nimbis Services, Inc.
 www.nimbisservices.com
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] [Security]

2013-11-27 Thread Nathan Kinder
On 11/27/2013 08:58 AM, Paul Montgomery wrote:
 I created some relatively high level security best practices that I
 thought would apply to Solum.  I don't think it is ever too early to get
 mindshare around security so that developers keep that in mind throughout
 the project.  When a design decision point could easily go two ways,
 perhaps these guidelines can sway direction towards a more secure path.
 
 This is a living document, please contribute and let's discuss topics.
 I've worn a security hat in various jobs so I'm always interested. :)
 Also, I realize that many of these features may not directly be
 encapsulated by Solum but rather components such as KeyStone or Horizon.
 
 https://wiki.openstack.org/wiki/Solum/Security

This is a great start.

I think we really need to work towards a set of overarching security
guidelines and best practices that can be applied to all of the
projects.  I know that each project may have unique security needs, but
it would be really great to have a central set of agreed upon
cross-project guidelines that a developer can follow.

This is a goal that we have in the OpenStack Security Group.  I am happy
to work on coordinating this.  For defining these guidelines, I think a
working group approach might be best, where we have an interested
representative from each project be involved.  Does this approach make
sense to others?

Thanks,
-NGK

 
 I would like to build on this list and create blueprints or tasks based on
 topics that the community agrees upon.  We will also need to start
 thinking about timing of these features.
 
 Is there an OpenStack standard for code comments that highlight potential
 security issues to investigate at a later point?  If not, what would the
 community think of making a standard for Solum?  I would like to identify
 these areas early while the developer is still engaged/thinking about the
 code.  It is always harder to go back later and find everything in my
 experience.  Perhaps something like:
 
 # (SECURITY) This exception may contain database field data which could
 expose passwords to end users unless filtered.
 
 Or
 
 # (SECURITY) The admin password is read in plain text from a configuration
 file.  We should fix this later.
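 
 To make the first example concrete, the kind of filtering that comment is
 flagging would look something like this (sketch only; names are
 illustrative):
 
import logging

LOG = logging.getLogger(__name__)

def safe_error_response(exc, request_id):
    # (SECURITY) Log the full exception server-side only; return a generic
    # message plus a correlation id so database field data (including
    # passwords) is never echoed back to the end user.
    LOG.error('request %s failed: %s', request_id, exc)
    return {'error': 'An internal error occurred', 'request_id': request_id}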
 
 Regards,
 Paulmo
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why neutron-openvswitch-agent use linux-bridge?

2013-11-27 Thread Collins, Sean (Contractor)
On Wed, Nov 27, 2013 at 09:29:16PM +0200, George Shuklin wrote:
 Why iptables, not internal openvswitch flow rules? Those rules allows to
 filter packets on L2-L4 headers and operates very fast. Is some
 iptables-only features used in ovs-agent?

I've seen a couple references floating around about a Security Group
driver, implemented using OpenFlow[1] as well as some mailing list
discussions[2]. Perhaps it is time for a blueprint to be registered?  

[1] https://wiki.openstack.org/wiki/Neutron/SecurityGroups#Implementations
[2] http://openstack.markmail.org/thread/gxzb2opgm7mvb7h4

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why neutron-openvswitch-agent use linux-bridge?

2013-11-27 Thread Amir Sadoughi
Hi George,

I’m working on a blueprint to implement OVS flows for security groups. 
https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver Currently, 
neutron only implements security groups with iptables even when Open vSwitch is 
used.
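
For the curious, the flavour of thing the blueprint is after is replacing an
iptables rule with an equivalent OVS flow match on L2-L4 headers; a rough,
purely illustrative sketch (not code from the blueprint):

def sg_rule_to_ovs_flow(vm_ofport):
    # e.g. allow outbound TCP/80 from the instance plugged into vm_ofport,
    # expressed as an ovs-ofctl style flow instead of an iptables rule
    return 'priority=100,in_port=%d,tcp,tp_dst=80,actions=NORMAL' % vm_ofport

# the agent would then install it with something along the lines of:
#   ovs-ofctl add-flow br-int "priority=100,in_port=5,tcp,tp_dst=80,actions=NORMAL"
print(sg_rule_to_ovs_flow(5))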

Amir

On Nov 27, 2013, at 1:29 PM, George Shuklin 
george.shuk...@gmail.com wrote:


Thank you for the reply!

A few more questions:

AFAIK the bridge tools are not very fast (compared to OVS), so adding a Linux 
bridge between OVS and the tap device (instead of yet another OVS switch) slows 
everything down. Why not just use another openvswitch switch to connect the tap 
to the veth devices?

Why iptables, and not internal openvswitch flow rules? Those rules allow filtering 
packets on L2-L4 headers and operate very fast. Are some iptables-only features 
used in ovs-agent?

Thanks.

On 27.11.2013 at 20:55, Lorin Hochstein 
lo...@nimbisservices.com wrote:
Hi George:



On Wed, Nov 27, 2013 at 1:45 PM, George Shuklin 
george.shuk...@gmail.com wrote:
Good day.

I am looking at the internals of the bridge layout of the openvswitch agent at 
http://docs.openstack.org/network-admin/admin/content/figures/2/figures/under-the-hood-scenario-1-ovs-compute.png
and wondering why this scheme is so complicated and why it uses a linux bridge 
and veths together with openvswitch. Why not just plug the tap device directly into 
the openvswitch bridge without an intermediate brctl bridge?

I guess that was caused by some important consideration, but I was unable to find 
any documents about this.

If someone knows the reasons for this complex construction with different bridges, 
please respond.


If you look a little further down on the page with that figure, the 
documentation reads

Ideally, the TAP device vnet0 would be connected directly to the integration 
bridge, br-int. Unfortunately, this isn't possible because of how OpenStack 
security groups are currently implemented. OpenStack uses iptables rules on the 
TAP devices such as vnet0 to implement security groups, and Open vSwitch is not 
compatible with iptables rules that are applied directly on TAP devices that 
are connected to an Open vSwitch port.


Take care,

Lorin



--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

