[openstack-dev] [Neutron][LBaaS] L7 data types

2014-02-16 Thread Avishay Balderman
Hi
There are 2 fields in the L7 model that are candidates for being a closed set 
(Enum).
I would like to hear your opinion.

Entity: L7Rule
Field: type
Description: this field holds the part of the request where we should look for
a value.
Possible values: URL, HEADER, BODY, (?)

Entity: L7Rule
Field: compare_type
Description: the way we compare the value against a given value.
Possible values: REG_EXP, EQ, GT, LT, EQ_IGNORE_CASE, (?)
Note: With REG_EXP we can cover the rest of the values.

In general, in an L7Rule one can express the following (example):
check if the value of the header named 'Jack' starts with 'X' - if it does,
the rule returns true.
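
For illustration only, a minimal sketch of the two closed sets as module-level
constants plus a validation helper (the names are placeholders, not actual
Neutron code):

# Illustrative sketch only; these constant and function names are
# hypothetical, not the actual Neutron LBaaS definitions.
L7_RULE_TYPES = ('URL', 'HEADER', 'BODY')
L7_COMPARE_TYPES = ('REG_EXP', 'EQ', 'GT', 'LT', 'EQ_IGNORE_CASE')

def validate_l7_rule(rule_type, compare_type):
    """Reject values outside the closed sets."""
    if rule_type not in L7_RULE_TYPES:
        raise ValueError("unknown L7Rule type: %s" % rule_type)
    if compare_type not in L7_COMPARE_TYPES:
        raise ValueError("unknown L7Rule compare_type: %s" % compare_type)

# e.g. the 'Jack' header example above would be expressed as
# type=HEADER, compare_type=REG_EXP with a pattern such as '^X.*'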


Thanks

Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] role of Domain in VPC definition

2014-02-16 Thread Salvatore Orlando
It seems this work item is made of several blueprints, some of which are
not yet approved. This is true at least for the Neutron blueprint regarding
policy extensions.

Since I first looked at this spec I've been wondering why nova has been
selected as an endpoint for network operations rather than Neutron, but
this is probably a design/implementation detail, whereas JC here is looking at
the general approach.

Nevertheless, my only point here is that it seems features like this
need an all-or-none approval.
For instance, could the VPC feature be considered functional if blueprint
[1] is implemented, but not [2] and [3]?

Salvatore

[1] https://blueprints.launchpad.net/nova/+spec/aws-vpc-support
[2]
https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
[3]
https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy


On 11 February 2014 21:45, Martin, JC jch.mar...@gmail.com wrote:

 Ravi,

 It seems that the following Blueprint
 https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support

 has been approved.

 However, I cannot find a discussion with regard to the merit of using
 project vs. domain, or other mechanism for the implementation.

 I have an issue with this approach as it prevents tenants within the same
 domain sharing the same VPC from having projects.

 As an example, if you are a large organization on AWS, it is likely that
 you have a large VPC that will be shared by multiple projects. With this
 proposal, we lose that capability, unless I missed something.

 JC

 On Dec 19, 2013, at 6:10 PM, Ravi Chunduru ravi...@gmail.com wrote:

  Hi,
We had some internal discussions on the role of Domains and VPCs. I would
 like to expand on them and understand the community's thinking on Keystone
 domains and VPCs.
 
  Is VPC equivalent to Keystone Domain?
 
  If so, as a public cloud provider - I create a Keystone domain and give
 it to an organization which wants a virtual private cloud.
 
 Now the question is: if that organization wants department-wise
 allocation of resources, it becomes difficult to visualize with the existing
 v3 keystone constructs.
 
  Currently, it looks like each department of an organization cannot have
 its own resource management within the organization VPC (LDAP-based
 user management, network management, dedicated computes, etc.). For us, an
 OpenStack project does not match the requirements of a department of an
 organization.
 
  I hope you guessed what we wanted - a Domain must have VPCs, and a VPC must
 have projects.
 
  I would like to know how the community sees the VPC model in OpenStack.
 
  Thanks,
  -Ravi.
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-16 Thread Robert Collins
On 16 February 2014 17:57, Jay Pipes jaypi...@gmail.com wrote:

 Personally, I believe having Gerrit and Jenkins as the default will turn
 more people off Solum than attract them to it.

 Just because we in the OpenStack community love our gating workflow and
 think it's all groovy does not mean that view is common, wanted, or
 understood by the vast majority of users of Heroku-like solutions.

There are quite a number of such solutions; do we want to write yet
another? What makes Solum special? Is it supporting OpenStack? [If so
- I'd argue that our success elsewhere will guarantee other such
platforms support OpenStack - and e.g. OpenShift already does AFAIK.]
Why are we writing Solum?

 Who is the audience here? It is not experienced developers who already
 understand things like Gerrit and Jenkins. It's developers who just want
 to simplify the process of pushing code up to some system other than
 Github or their workstation. Adding the awkwardness of Gerrit's code

If that's all, then I don't think we should be writing Solum. We should
just pick some of the existing tooling for that and say 'use that'.

 review system -- and the associated pain of trying to understand how to
 define Jenkins jobs -- is something that I don't think the *default*
 Solum experience should invite.

 The default experience should be a simple push code, run merge tests,

What are merge tests? How are they defined? What about that definition
makes it impossible for us to run them in Jenkins or similar? What
makes it impossible to run them before the merge?

 and deploy into the deployment unit (whatever that is called in Solum
 nowadays). There should be well-documented ways to add commit hooks into
 this workflow, but having a complex Gerrit and Jenkins gated workflow is
 just overkill for the default experience.

Even in small teams - 4-5 people - a gated workflow and CI/CD is a game
changer for velocity. If your point is 'doing the sophisticated thing
OpenStack does is hard and we shouldn't try to sell people on working
hard' - I agree. But your point currently sounds like 'there's no way
we can make this stuff easy, discoverable, and straightforward, so we
should focus on something much less capable'. Perhaps that's not what
you mean?

I think there is a spectrum here. We need to aim high, or we achieve
low. We also need to deliver incrementally, and we need to provide a
-really- gentle onramp for users. Make the first basic steps easy and
simple: but that's not in any way incompatible with gating by default.

It *might* be incompatible with Jenkins and zuul: but that's a slightly
different discussion - my +1, which you replied to, was about /gating/
by default, not about the implementation thereof.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bad default values in conf files

2014-02-16 Thread Robert Collins
On 15 February 2014 12:15, Dirk Müller d...@dmllr.de wrote:

 I agree, and changing defaults has a cost as well: every deployment
 solution out there has to detect the value change, update their config
 templates and potentially also migrate the setting from the old to the
 new default for existing deployments. Being in that situation, it has
 happened that we were surprised by default changes that had
 undesirable side effects, just because we chose to override a
 different default elsewhere.

 I'm totally on board with having production-ready defaults, but that
 also means they should seldom change, and change only for a very
 good, possibly documented reason.

Indeed! And in classic ironic fashion -
https://bugs.launchpad.net/keystone/+bug/1280692 was caused by
https://review.openstack.org/#/c/73621 - a patch of yours, changing
defaults and breaking anyone running with them!

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-16 Thread Adrian Otto

On Feb 16, 2014, at 2:31 AM, Robert Collins robe...@robertcollins.net
 wrote:

 On 16 February 2014 17:57, Jay Pipes jaypi...@gmail.com wrote:
 
 Personally, I believe having Gerrit and Jenkins as the default will turn
 more people off Solum than attract them to it.
 
 Just because we in the OpenStack community love our gating workflow and
 think it's all groovy does not mean that view is common, wanted, or
 understood by the vast majority of users of Heroku-like solutions.
 
 There are quite a number of such solutions; do we want to write yet
 another? What makes Solum special? Is it supporting OpenStack? [If so
 - I'd argue that our success elsewhere will guarantee other such
 platforms support OpenStack - and e.g. OpenShift already does AFAIK.]
 Why are we writing Solum?

Excellent focusing question. We are focused on the Application Developer 
persona as a class. We want clouds to be easier to use, and more accessible for 
them. We'd like to offer streamlined experiences that help them focus on 
application development, and worry less about how to get their applications to 
run on the cloud. We expect to be able to achieve this goal by letting the 
developers connect their code repositories with Solum enabled clouds, and use 
development best practices to automate the building, testing, and deploying of 
code.

Why? Because Application Developers have enough to worry about, without 
contemplating all of the above in a DIY approach. We'd like to address this 
focus area in a way that offers the developers a sense of insurance that when 
their application becomes successful one day (and they want to begin focusing 
on the details and potentially customizing the various aspects of how code is 
handled and deployed, and what environments it is run in), they will not 
need to forklift their code from one cloud to another, but can begin to use a 
wider variety of control system APIs to tap into what's already there.

Cloud operators (public and private alike) want to offer these streamlined user 
experiences to the Application Developers that use their clouds. They want to 
do so without bringing in the operational complexity and overhead of running a 
PaaS system that largely overlaps with numerous aspects of what's already 
offered in OpenStack. By selecting Solum, they leverage what's here already, 
and add in additional ease of use, and the ability to abstract and simplify a 
variety of complexity relating to the code-app lifecycle, and gradually add 
in new capabilities for managing apps once they are on the cloud.

 Who is the audience here? It is not experienced developers who already
 understand things like Gerrit and Jenkins. It's developers who just want
 to simplify the process of pushing code up to some system other than
 Github or their workstation. Adding the awkwardness of Gerrit's code
 
 If that's all, then I don't think we should be writing Solum. We should
 just pick some of the existing tooling for that and say 'use that'.

If we do a good job of offering a sensible set of defaults, balanced with a 
compelling set of features, then Solum will be very useful. We're not aiming to 
replace or re-invent every CI tool that was ever invented. On the contrary, we 
want to give sophisticated dev shops who already have investments in CI 
infrastructure an opportunity to use Solum as a CD tool (submit a plan 
file and a container image to the Solum API by using a simple Git post-commit 
hook trigger).
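
For illustration only, a hypothetical post-commit hook along those lines (the
endpoint, payload fields, and file names below are invented for the sketch,
not Solum's actual API) might be no more than:

#!/usr/bin/env python
# Hypothetical sketch of a .git/hooks/post-commit trigger; the Solum endpoint
# and payload shape are assumptions for illustration, not the real API.
import json
import subprocess
import urllib2

SOLUM_URL = 'http://solum.example.com:9777/v1/assemblies'  # assumed endpoint
commit = subprocess.check_output(['git', 'rev-parse', 'HEAD']).strip()
payload = {'plan': open('plan.yaml').read(),      # assumed plan file name
           'image': 'myapp:%s' % commit}          # assumed image reference
req = urllib2.Request(SOLUM_URL, json.dumps(payload),
                      {'Content-Type': 'application/json'})
urllib2.urlopen(req)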

However, we also want to provide developers who possibly have never seen a 
comprehensive CI/CD setup an opportunity to access one that works for general 
use, and a place where the existing tool makers can integrate and offer their 
value-added software and solutions to make those experiences even better. This 
is where we are right now, deciding how to take an iterative approach to making 
this default CI/CD setup useful and interesting.

 review system -- and the associated pain of trying to understand how to
 define Jenkins jobs -- is something that I don't think the *default*
 Solum experience should invite.
 
 The default experience should be a simple push code, run merge tests,
 
 What are merge tests? How are they defined? What about that definition
 makes it impossible for us to run them in Jenkins or similar? What
 makes it impossible to run them before the merge?

Excellent questions again. There's no reason you could not use any of a variety 
of existing applications to do this. There is value in allowing developers to 
start simple with reduced effort, and start to use more advanced CI features 
when they are ready to opt into them. Focusing on ease of use is key.

 and deploy into the deployment unit (whatever that is called in Solum
 nowadays). There should be well-documented ways to add commit hooks into
 this workflow, but having a complex Gerrit and Jenkins gated workflow is
 just overkill for the default experience.

We can all agree that we are not aiming to offer something 

[openstack-dev] [Neutron][LBaaS] L7 data types

2014-02-16 Thread Avishay Balderman
(removing extra space from the subject - let email clients apply their filters)

From: Avishay Balderman
Sent: Sunday, February 16, 2014 9:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] L7 data types

Hi
There are 2 fields in the L7 model that are candidates for being a closed set 
(Enum).
I would like to hear your opinion.

Entity: L7Rule
Field: type
Description: this field holds the part of the request where we should look for
a value.
Possible values: URL, HEADER, BODY, (?)

Entity: L7Rule
Field: compare_type
Description: the way we compare the value against a given value.
Possible values: REG_EXP, EQ, GT, LT, EQ_IGNORE_CASE, (?)
Note: With REG_EXP we can cover the rest of the values.

In general, in an L7Rule one can express the following (example):
check if the value of the header named 'Jack' starts with 'X' - if it does,
the rule returns true.


Thanks

Avishay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Do you think tenant_id should be verified

2014-02-16 Thread Dong Liu
Hi stackers:

I found that when creating networks, subnets and other resources, the attribute 
tenant_id can be set by the admin tenant. But we did not verify whether the 
tenant_id is real in keystone.

I know that we could use neutron without keystone, but do you think tenant_id 
should be verified when we use neutron with keystone?

thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][VMWare] VMwareVCDriver related to resize/cold migration

2014-02-16 Thread Jay Lau
Thanks Gary, clear now. ;-)


2014-02-16 21:40 GMT+08:00 Gary Kotton gkot...@vmware.com:

 Hi,
 There are two issues here.
 The first is a bug fix that is in review:
 - https://review.openstack.org/#/c/69209/ (this is where they have the
 same configuration)
 The second is WIP:
 - https://review.openstack.org/#/c/69262/ (we need to restore)
 Thanks
 Gary

 From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Sunday, February 16, 2014 6:39 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Nova][VMWare] VMwareVCDriver related to
 resize/cold migration

 Hey,

 I have one question related with OpenStack vmwareapi.VMwareVCDriver
 resize/cold migration.

 The following is my configuration:

  DC
 |
 |Cluster1
 |  |
 |  |9.111.249.56
 |
 |Cluster2
|
|9.111.249.49

 *Scenario 1:*
 I started two nova computes to manage the two clusters:
 1) nova-compute1.conf
 cluster_name=Cluster1

 2) nova-compute2.conf
 cluster_name=Cluster2

 3) Start up two nova computes on host1 and host2 separately
 4) Create one VM instance and the VM instance was booted on Cluster2 node
 9.111.249.49
 | OS-EXT-SRV-ATTR:host | host2 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  |
 domain-c16(Cluster2) |
 5) Cold migrate the VM instance
 6) After the migration finished, the VM goes to VERIFY_RESIZE status, and
 nova show indicates that the VM is now located on host1:Cluster1
 | OS-EXT-SRV-ATTR:host | host1 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  |
 domain-c12(Cluster1) |
 7) But the vSphere client indicates that the VM is still running on
 Cluster2
 8) Try to confirm the resize; the confirm will fail. The root cause is
 that nova compute on host2 has no knowledge of domain-c12(Cluster1)

 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2810, in
 do_confirm_resize
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 migration=migration)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2836, in
 _confirm_resize
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 network_info)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 420,
 in confirm_migration
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 _vmops = self._get_vmops_for_compute_node(instance['node'])
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 523,
 in _get_vmops_for_compute_node
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 resource = self._get_resource_for_node(nodename)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 515,
 in _get_resource_for_node
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 raise exception.NotFound(msg)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 NotFound: NV-3AB798A The resource domain-c12(Cluster1) does not exist
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp


 *Scenario 2:*

 1) Started two nova computes to manage the two clusters, but the two computes
 have the same nova conf.
 1) nova-compute1.conf
 cluster_name=Cluster1
 cluster_name=Cluster2

 2) nova-compute2.conf
 cluster_name=Cluster1
 cluster_name=Cluster2

 3) Then create and resize/cold migrate a VM, it can always succeed.


 *Questions:*
 For multi-cluster management, does vmware require all nova computes to have
 the same cluster configuration to make sure resize/cold migration can succeed?

 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][policy] Using network services with network policies

2014-02-16 Thread Mohammad Banikazemi

During the last IRC call we started talking about network services and how
they can be integrated into the Group Policy framework.

In particular, with the redirect action we need to think about how we can
specify the network services we want to redirect the traffic to/from. There
has been substantial work in the area of service chaining and service
insertion, and at the last summit advanced services in VMs were discussed.
I think the first step for us is to find out the status of those efforts
and then see how we can use them. Here are a few questions that come to
mind.
1- What is the status of service chaining, service insertion and advanced
services work?
2- How could we use a service chain? Would simply referring to it in the
action be enough? Are there considerations wrt creating a service chain
and/or a service VM for use with the Group Policy framework that need to be
taken into account?

Let's start the discussion on the ML before taking it to the next call.

Thanks,

Mohammad
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][VMWare] VMwareVCDriver related to resize/cold migration

2014-02-16 Thread Jay Lau
Hi Gary,

One more question: when using the VCDriver, I can use it in the following two
ways:
1) start up many nova computes where those nova computes manage the same
vcenter clusters.
2) start up many nova computes where those nova computes manage different
vcenter clusters.

Do we have some best practice for the above two scenarios, or can you
please provide some best practices for the VCDriver? I did not get much info
from the admin guide.

Thanks,

Jay


2014-02-16 23:01 GMT+08:00 Jay Lau jay.lau@gmail.com:

 Thanks Gary, clear now. ;-)


 2014-02-16 21:40 GMT+08:00 Gary Kotton gkot...@vmware.com:

 Hi,
 There are two issues here.
 The first is a bug fix that is in review:
 - https://review.openstack.org/#/c/69209/ (this is where they have the
 same configuration)
 The second is WIP:
 - https://review.openstack.org/#/c/69262/ (we need to restore)
 Thanks
 Gary

 From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Sunday, February 16, 2014 6:39 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 
 Subject: [openstack-dev] [Nova][VMWare] VMwareVCDriver related to
 resize/cold migration

 Hey,

 I have one question related with OpenStack vmwareapi.VMwareVCDriver
 resize/cold migration.

 The following is my configuration:

  DC
 |
 |Cluster1
 |  |
 |  |9.111.249.56
 |
 |Cluster2
|
|9.111.249.49

 *Scenario 1:*
 I started two nova computes to manage the two clusters:
 1) nova-compute1.conf
 cluster_name=Cluster1

 2) nova-compute2.conf
 cluster_name=Cluster2

 3) Start up two nova computes on host1 and host2 separately
 4) Create one VM instance and the VM instance was booted on Cluster2
 node  9.111.249.49
 | OS-EXT-SRV-ATTR:host | host2 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  |
 domain-c16(Cluster2) |
 5) Cold migrate the VM instance
 6) After the migration finished, the VM goes to VERIFY_RESIZE status, and
 nova show indicates that the VM is now located on host1:Cluster1
 | OS-EXT-SRV-ATTR:host | host1 |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  |
 domain-c12(Cluster1) |
 7) But the vSphere client indicates that the VM is still running on
 Cluster2
 8) Try to confirm the resize; the confirm will fail. The root cause is
 that nova compute on host2 has no knowledge of domain-c12(Cluster1)

 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2810, in
 do_confirm_resize
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 migration=migration)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2836, in
 _confirm_resize
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 network_info)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 420,
 in confirm_migration
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 _vmops = self._get_vmops_for_compute_node(instance['node'])
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 523,
 in _get_vmops_for_compute_node
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 resource = self._get_resource_for_node(nodename)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py, line 515,
 in _get_resource_for_node
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 raise exception.NotFound(msg)
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
 NotFound: NV-3AB798A The resource domain-c12(Cluster1) does not exist
 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp


 *Scenario 2:*

 1) Started two nova computes to manage the two clusters, but the two
 computes have the same nova conf.
 1) nova-compute1.conf
 cluster_name=Cluster1
 cluster_name=Cluster2

 2) nova-compute2.conf
 cluster_name=Cluster1
 cluster_name=Cluster2

 3) Then create and resize/cold migrate a VM, it can always succeed.


 *Questions:*
 For multi-cluster management, does vmware require all nova computes to have
 the same cluster configuration to make sure resize/cold migration can succeed?

 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [keystone] role of Domain in VPC definition

2014-02-16 Thread Harshad Nakil
Yes, [1] can be done without [2] and [3].
As you are well aware, [2] is now merged with the group policy discussions.
IMHO an all-or-nothing approach will not get us anywhere.
By the time we line up all our ducks in a row, new features/ideas/blueprints
will keep emerging.

Regards
-Harshad


On Feb 16, 2014, at 2:30 AM, Salvatore Orlando sorla...@nicira.com wrote:

It seems this work item is made of several blueprints, some of which are
not yet approved. This is true at least for the Neutron blueprint regarding
policy extensions.

Since I first looked at this spec I've been wondering why nova has been
selected as an endpoint for network operations rather than Neutron, but
this is probably a design/implementation detail, whereas JC here is looking at
the general approach.

Nevertheless, my only point here is that it seems features like this
need an all-or-none approval.
For instance, could the VPC feature be considered functional if blueprint
[1] is implemented, but not [2] and [3]?

Salvatore

[1] https://blueprints.launchpad.net/nova/+spec/aws-vpc-support
[2]
https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
[3]
https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy


On 11 February 2014 21:45, Martin, JC jch.mar...@gmail.com wrote:

 Ravi,

 It seems that the following Blueprint
 https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support

 has been approved.

 However, I cannot find a discussion with regard to the merit of using
 project vs. domain, or other mechanism for the implementation.

 I have an issue with this approach as it prevents tenants within the same
 domain sharing the same VPC from having projects.

 As an example, if you are a large organization on AWS, it is likely that
 you have a large VPC that will be shared by multiple projects. With this
 proposal, we lose that capability, unless I missed something.

 JC

 On Dec 19, 2013, at 6:10 PM, Ravi Chunduru ravi...@gmail.com wrote:

  Hi,
We had some internal discussions on the role of Domains and VPCs. I would
 like to expand on them and understand the community's thinking on Keystone
 domains and VPCs.
 
  Is VPC equivalent to Keystone Domain?
 
  If so, as a public cloud provider - I create a Keystone domain and give
 it to an organization which wants a virtual private cloud.
 
 Now the question is: if that organization wants department-wise
 allocation of resources, it becomes difficult to visualize with the existing
 v3 keystone constructs.
 
  Currently, it looks like each department of an organization cannot have
 its own resource management within the organization VPC (LDAP-based
 user management, network management, dedicated computes, etc.). For us, an
 OpenStack project does not match the requirements of a department of an
 organization.
 
  I hope you guessed what we wanted - a Domain must have VPCs, and a VPC must
 have projects.
 
  I would like to know how the community sees the VPC model in OpenStack.
 
  Thanks,
  -Ravi.
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-16 Thread Harshad Nakil
Comments Inline

Regards
-Harshad


On Sat, Feb 15, 2014 at 11:39 PM, Allamaraju, Subbu su...@subbu.org wrote:

 Harshad,

 Curious to know if there is a broad interest in an AWS compatible API in
 the community?


We started looking at this as some of our customers/partners were interested
in getting AWS API compatibility. We have had this blueprint and code review
pending for a long time now. We will know based on this thread whether the
community is interested. But I assumed that the community was interested, as the
blueprint was approved and the code review has had no -1(s) for a long time now.


 To clarify, a clear incremental path from an AWS compatible API to an
 OpenStack model is not clear.


In my mind an AWS compatible API does not need a new openstack model. As more
discussion happens on JC's proposal and the implementation becomes clearer, we
will know how incremental the path is. But at a high level there are two major
differences:
1. A new first class object will be introduced which affects all components.
2. More than one project can be supported within a VPC.
But it does not change the AWS API(s). So even in JC's model, if you want the AWS
API then we will have to keep the VPC to project mapping 1:1, since the API
will not take both a VPC ID and a project ID.

Users who want to migrate from AWS, or IaaS providers who want to compete
with AWS, should be interested in this compatibility.

There also seems to be a terminology issue here: what is the definition of VPC?
If we assume that what AWS implements is a VPC,
then what JC is proposing is more of a VOS or VDC (virtual openstack or virtual DC),
as all or most of the current openstack features are available to the user in this
new abstraction. I actually like this new abstraction.


 Subbu

 On Feb 15, 2014, at 10:04 PM, Harshad Nakil hna...@contrailsystems.com
 wrote:

 
  I agree with the problem as defined by you; it will require more fundamental
 changes.
  Meanwhile many users will benefit from AWS VPC API compatibility.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-16 Thread Allamaraju, Subbu
Harshad,

Thanks for clarifying.

 We started looking at this as some of our customers/partners were interested in 
 getting AWS API compatibility. We have had this blueprint and code review pending 
 for a long time now. We will know based on this thread whether the community is 
 interested. But I assumed that the community was interested, as the blueprint was 
 approved and the code review has had no -1(s) for a long time now.

Makes sense. I would leave it to others on this list to chime in if there is 
sufficient interest or not.

 To clarify, a clear incremental path from an AWS compatible API to an 
 OpenStack model is not clear.
  
 In my mind an AWS compatible API does not need a new openstack model. As more 
 discussion happens on JC's proposal and the implementation becomes clearer we will 
 know how incremental the path is. But at a high level there are two major 
 differences:
 1. A new first class object will be introduced which affects all components.
 2. More than one project can be supported within a VPC.
 But it does not change the AWS API(s). So even in JC's model if you want the AWS API 
 then we will have to keep the VPC to project mapping 1:1, since the API will not 
 take both a VPC ID and a project ID.
 
 Users who want to migrate from AWS, or IaaS providers who want to compete 
 with AWS, should be interested in this compatibility.

IMHO that's a tough sell. Though an AWS compatible API does not need an 
OpenStack abstraction, we would end up with two independent ways of doing 
similar things. That would be OpenStack repeating itself! 

Subbu



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-16 Thread Harshad Nakil
IMHO I don't see two implementations, since right now we have only
one. As a community, if we decide to add new abstractions then we will
have to change software in every component where the new abstraction
makes a difference. That's the normal software development process.
Regards
-Harshad


 On Feb 16, 2014, at 9:03 AM, Allamaraju, Subbu su...@subbu.org wrote:

 Harshad,

 Thanks for clarifying.

 We started looking at this as some of our customers/partners were interested in 
 getting AWS API compatibility. We have had this blueprint and code review pending 
 for a long time now. We will know based on this thread whether the community is 
 interested. But I assumed that the community was interested, as the blueprint was 
 approved and the code review has had no -1(s) for a long time now.

 Makes sense. I would leave it to others on this list to chime in if there is 
 sufficient interest or not.

 To clarify, a clear incremental path from an AWS compatible API to an 
 OpenStack model is not clear.

 In my mind an AWS compatible API does not need a new openstack model. As more 
 discussion happens on JC's proposal and the implementation becomes clearer we will 
 know how incremental the path is. But at a high level there are two major 
 differences:
 1. A new first class object will be introduced which affects all components.
 2. More than one project can be supported within a VPC.
 But it does not change the AWS API(s). So even in JC's model if you want the AWS 
 API then we will have to keep the VPC to project mapping 1:1, since the API will 
 not take both a VPC ID and a project ID.

 Users who want to migrate from AWS, or IaaS providers who want to compete 
 with AWS, should be interested in this compatibility.

 IMHO that's a tough sell. Though an AWS compatible API does not need an 
 OpenStack abstraction, we would end up with two independent ways of doing 
 similar things. That would be OpenStack repeating itself!

 Subbu



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-16 Thread Martin, JC
Harshad,

I tried to find some discussion around this blueprint.
Could you provide us with some notes or threads?

Also, about the code review you mention, which one are you talking about:
https://review.openstack.org/#/c/40071/
https://review.openstack.org/#/c/49470/
https://review.openstack.org/#/c/53171

because they are all abandoned.

Could you point me to the code, and update the BP because it seems that the 
links are not correct.

Thanks,

JC
On Feb 16, 2014, at 9:04 AM, Allamaraju, Subbu su...@subbu.org wrote:

 Harshad,
 
 Thanks for clarifying.
 
 We started looking at this as some of our customers/partners were interested in 
 getting AWS API compatibility. We have had this blueprint and code review pending 
 for a long time now. We will know based on this thread whether the community is 
 interested. But I assumed that the community was interested, as the blueprint was 
 approved and the code review has had no -1(s) for a long time now.
 
 Makes sense. I would leave it to others on this list to chime in if there is 
 sufficient interest or not.
 
 To clarify, a clear incremental path from an AWS compatible API to an 
 OpenStack model is not clear.
 
 In my mind an AWS compatible API does not need a new openstack model. As more 
 discussion happens on JC's proposal and the implementation becomes clearer we will 
 know how incremental the path is. But at a high level there are two major 
 differences:
 1. A new first class object will be introduced which affects all components.
 2. More than one project can be supported within a VPC.
 But it does not change the AWS API(s). So even in JC's model if you want the AWS 
 API then we will have to keep the VPC to project mapping 1:1, since the API will 
 not take both a VPC ID and a project ID.
 
 Users who want to migrate from AWS, or IaaS providers who want to compete 
 with AWS, should be interested in this compatibility.
 
 IMHO that's a tough sell. Though an AWS compatible API does not need an 
 OpenStack abstraction, we would end up with two independent ways of doing 
 similar things. That would be OpenStack repeating itself! 
 
 Subbu
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] role of Domain in VPC definition

2014-02-16 Thread Allamaraju, Subbu
Harshad,

This is great. At least there is consensus on what it is and what it is not. I 
would leave it to others to discuss the merits of an AWS compat VPC API for 
Icehouse.

Perhaps this is a good topic to discuss at the Juno design summit.

Subbu

On Feb 16, 2014, at 10:15 AM, Harshad Nakil hna...@contrailsystems.com wrote:

 As said, I am not disagreeing with you or Ravi or JC. I also agree that
 the Openstack VPC implementation will benefit from these proposals.
 What I am saying is that they are not required for AWS VPC API compatibility
 at this point, which is what our blueprint is all about. We are not
 defining THE VPC.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [TripleO] promoting devtest_seed and devtest_undercloud to voting, + experimental queue for nova/neutron etc.

2014-02-16 Thread Robert Collins
On 15 February 2014 09:58, Sean Dague s...@dague.net wrote:

 Lastly, I'm going to propose a merge to infra/config to put our
 undercloud story (which exercises the seed's ability to deploy via
 heat with bare metal) as a check experimental job on our dependencies
 (keystone, glance, nova, neutron) - if that's ok with those projects?

 -Rob


 My biggest concern with adding this to check experimental, is the
 experimental results aren't published back until all the experimental
 jobs are done.

If we add a new pipeline - https://review.openstack.org/#/c/73863/ -
then we can avoid that.

 We've seen really substantial delays, plus a 5 day complete outage a
 week ago, on the tripleo cloud. I'd like to see that much more proven
 before it starts to impact core projects, even in experimental.

I believe that with a new pipeline it won't impact core projects at all.

The outage, FWIW, was because I deleted the entire cloud, at the same
time that we had a firedrill with some other upstream-of-us issue (I
forget the exact one). The multi-region setup we're aiming for should
mitigate that substantially :)


-Rob


--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [TripleO] promoting devtest_seed and devtest_undercloud to voting, + experimental queue for nova/neutron etc.

2014-02-16 Thread Robert Collins
On 15 February 2014 12:21, James E. Blair jebl...@openstack.org wrote:

 You won't end up with -1's everywhere, you'll end up with jobs stuck in
 the queue indefinitely, as we saw when the tripleo cloud failed
 recently.  What's worse is that now that positive check results are
 required for enqueuing into the gate, you will also not be able to merge
 anything.

Ok. So the cost of voting [just in tripleo] would be that a) [tripleo]
infrastructure failures and b) breakage from other projects - both
things that can cause checks to fail - would stall all tripleo landings
until rectified, or until voting is turned off via a change to config,
which makes this infra's problem.

Hmm - so from a tripleo perspective, I think we're ok with this -
having a clear indication that 'this is ok' is probably more important
to us right now than the more opaque thing we have now where we have
to expand every jenkins comment to be sure.

But - will infra be ok if we end up having a firedrill 'please make
this non-voting' change to propose?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] role of Domain in VPC definition

2014-02-16 Thread Ravi Chunduru
I agree with JC that we need to pause and discuss the VPC model within
openstack before considering AWS compatibility. As Subbu said, we need to have
this discussion at the Juno summit and get consensus.

Thanks,
-Ravi.


On Sun, Feb 16, 2014 at 10:31 AM, Allamaraju, Subbu su...@subbu.org wrote:

 Harshad,

 This is great. At least there is consensus on what it is and what it is
 not. I would leave it to others to discuss merits of a an AWS compat VPC
 API for Icehouse.

 Perhaps this is a good topic to discuss at the Juno design summit.

 Subbu

 On Feb 16, 2014, at 10:15 AM, Harshad Nakil hna...@contrailsystems.com
 wrote:

  As said, I am not disagreeing with you or Ravi or JC. I also agree that
  the Openstack VPC implementation will benefit from these proposals.
  What I am saying is that they are not required for AWS VPC API compatibility
  at this point, which is what our blueprint is all about. We are not
  defining THE VPC.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-16 Thread Ravi Chunduru
IMO, a VPC means a managed set of resources, not just limited to
networks but also projects.
I feel it's not about incrementally starting with AWS compatibility, but
doing it right with AWS compatibility taken into consideration.

Thanks,
-Ravi.


On Sun, Feb 16, 2014 at 8:47 AM, Harshad Nakil
hna...@contrailsystems.com wrote:

 Comments Inline

 Regards
 -Harshad


 On Sat, Feb 15, 2014 at 11:39 PM, Allamaraju, Subbu su...@subbu.org wrote:

 Harshad,

 Curious to know if there is a broad interest in an AWS compatible API in
 the community?


 We started looking at this as some of our customers/partners were interested
 in getting AWS API compatibility. We have had this blueprint and code review
 pending for a long time now. We will know based on this thread whether the
 community is interested. But I assumed that the community was interested, as the
 blueprint was approved and the code review has had no -1(s) for a long time now.


 To clarify, a clear incremental path from an AWS compatible API to an
 OpenStack model is not clear.


 In my mind an AWS compatible API does not need a new openstack model. As more
 discussion happens on JC's proposal and the implementation becomes clearer we will
 know how incremental the path is. But at a high level there are two major
 differences:
 1. A new first class object will be introduced which affects all components.
 2. More than one project can be supported within a VPC.
 But it does not change the AWS API(s). So even in JC's model, if you want the AWS
 API then we will have to keep the VPC to project mapping 1:1, since the API
 will not take both a VPC ID and a project ID.





 Users who want to migrate from AWS, or IaaS providers who want to compete
 with AWS, should be interested in this compatibility.

 There also seems to be a terminology issue here: what is the definition of VPC?
 If we assume that what AWS implements is a VPC,
 then what JC is proposing is more of a VOS or VDC (virtual openstack or virtual DC),
 as all or most of the current openstack features are available to the user in this
 new abstraction. I actually like this new abstraction.


 Subbu

 On Feb 15, 2014, at 10:04 PM, Harshad Nakil hna...@contrailsystems.com
 wrote:

 
  I agree with the problem as defined by you; it will require more
 fundamental changes.
  Meanwhile many users will benefit from AWS VPC API compatibility.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-dev Digest, Vol 22, Issue 39

2014-02-16 Thread Vishvananda Ishaya

On Feb 15, 2014, at 4:36 AM, Vinod Kumar Boppanna 
vinod.kumar.boppa...@cern.ch wrote:

 
 Dear Vish,
 
 I completely agree with you. It's like a trade-off between getting 
 re-authenticated (when, in a hierarchy, the user has different roles at different 
 levels) or parsing the entire hierarchy down to the leaf and including all the 
 roles the user has at each level in the scope.
 
 I am ok with any one (both has some advantages and dis-advantages).
 
 But one point I didn't understand: why should we parse the tree above the 
 level where the user gets authenticated (as you specified in the reply)? That is, 
 if a user is authenticated at level 3, do we mean that the roles at level 
 2 and level 1 should also be passed?
 Why is this needed? I only see either passing only the role at the level where the 
 user is authenticated, or passing the roles from that level down to the 
 leaf.


This is needed because in my proposed model roles are inherited down the 
hierarchy. That means if you authenticate against ProjA.ProjA2 and you have a 
role like “netadmin” in ProjA, you will also have it in ProjA2. So it is 
necessary to walk up the tree to find the full list of roles.
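
A minimal sketch of that walk-up (assuming dotted project paths like
'ProjA.ProjA2' and a simple mapping from project path to directly-assigned
roles - both assumptions are just for illustration):

# Illustration only: collect roles inherited down a dotted project hierarchy.
def effective_roles(project_path, assignments):
    # assignments maps a project path ('ProjA', 'ProjA.ProjA2', ...) to the
    # set of roles granted directly at that level.
    roles = set()
    parts = project_path.split('.')
    for depth in range(len(parts)):
        ancestor = '.'.join(parts[:depth + 1])
        roles |= assignments.get(ancestor, set())
    return roles

# e.g. netadmin granted on ProjA is also effective when scoped to ProjA.ProjA2:
# effective_roles('ProjA.ProjA2', {'ProjA': set(['netadmin'])}) == set(['netadmin'])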

Vish

 
 Regards,
 Vinod Kumar Boppanna
 
 Message: 21
 Date: Fri, 14 Feb 2014 10:13:59 -0800
 From: Vishvananda Ishaya vishvana...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Hierarchicical Multitenancy Discussion
 Message-ID: 4508b18f-458b-4a3e-ba66-22f9fa47e...@gmail.com
 Content-Type: text/plain; charset=windows-1252
 
 Hi Vinod!
 
 I think you can simplify the roles in the hierarchical model by only passing 
 the roles for the authenticated project and above. All roles are then 
 inherited down. This means it isn't necessary to pass a scope along with each 
 role. The scope is just passed once with the token and the project-admin role 
 (for example) would be checking to see that the user has the project-admin 
 role and that the project_id prefix matches.
 
 There is only one case that this doesn't handle, and that is when the user 
 has one role (say member) in ProjA and project-admin in ProjA2. If the user 
 is authenticated to ProjA, he can't do project-adminy stuff for ProjA2 
 without reauthenticating. I think this is a reasonable sacrifice considering 
 how much easier it would be to just pass the parent roles instead of going 
 through all of the children.
 
 Vish
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [Ceilometer] Is the l3-metering-agent broken just for me?

2014-02-16 Thread Aurynn Shaw
Hey;

I'm trying to get the l3 metering agent included in Neutron to do...
something, and I suspect that I'm doing something terribly wrong.

I'm looking at the docs on
https://wiki.openstack.org/wiki/Neutron/Metering/Bandwidth

and trying to run it on Devstack (yes, I know), and I was wondering if
anyone else had hit any issues or got it working, or if it's Known
Broken on Havana, before I go off on a journey of
Oslo/Ceilometer/Neutron code spelunking.

Cheers!
~a

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heat run_tests.sh fails with one huge line of output

2014-02-16 Thread Robert Collins
On 17 February 2014 10:20, Mike Spreitzer mspre...@us.ibm.com wrote:
 Kevin, I changed no code, it was a fresh DevStack install.


 Thanks Robert, I tried following your leads but got nowhere, perhaps I need
 a few more clues.

 I am not familiar with bzr (nor baz), and it wasn't obvious to me how to fit
 that into my workflow --- which was:
 (1) install DevStack
 (2) install libmysqlclient-dev
 (3) install flake8
 (4) cd /opt/stack/heat
 (5) ./run_tests.sh

I would have expected run_tests.sh to use tox, which creates a venv, but
heat seems different. So you'll need to install testrepository via
your system tox, not one from a venv.

So the steps to install testrepository outside a venv will be:
$ bzr branch lp:testrepository
$ sudo pip install ./testrepository

 I guessed that your (A) would apply if I use a venv and go between (1) the
 `python tools/install_venv.py` inside run_tests.sh and (2) the invocation
 inside run_tests.sh of its run_tests function.

No - see above.

 So I manually invoked
 `python tools/install_venv.py`, then entered that venv, then issued your

I suspect heat hasn't kept up with the run_tests evolution from other
projects - install_venv was a pre-tox thing IIRC. I think it's
somewhat unhelpful, since AFAICT its main purpose is to confuse
people :).

 commands of (A) (discovered I needed to install bzr and did so), then exited
 that venv, then invoked heat's `run_tests -V -u` to use the venv thus
 constructed.  It still produced one huge line of output.  Here I attach a
 typescript of that:

The 'd' is because you're human-reading a binary protocol (which the
testrepository trunk fixes). Try the import without the 'd'. Also be
sure to be trying the import within the tox venv, which run_tests will
have triggered.
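
For example, from inside that venv, something like the following (using the
module name from your output, minus the 'd') will show the real traceback:

# Probe one of the modules listed after 'Failed imports'; run it with the
# venv's python so the same environment is used as in the test run.
import importlib
import traceback

try:
    importlib.import_module('heat.tests.test_neutron_firewall')
except Exception:
    traceback.print_exc()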

 You will see that the huge line still ends with something about import
 error, and now lists one additional package ---
 heat.tests.test_neutron_firewalld.  I then tried your (B), testing manual
 imports.   All worked except for the last, which failed because there is
 indeed no such thing (why is there a spurious 'd' at the end of the package
 name?).  Here is a typescript of that:

So, all the listed imports fail to import from the test run. It's odd
that they are succeeding manually.

You could try patching your testtools with
https://github.com/testing-cabal/testtools/pull/77 to get a direct
readout of the error.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heat run_tests.sh fails with one huge line of output

2014-02-16 Thread Alexander Gorodnev
Hi Mike,

Looks like you have a syntax error somewhere. Could you
please do the following:
0) install flake8;
1) run ./run_tests.sh -p > pep8.log in your Heat directory;
2) attach pep8.log to your reply and send it to the list.

Thank you,
Alexander



2014-02-17 1:20 GMT+04:00 Mike Spreitzer mspre...@us.ibm.com:

 Kevin, I changed no code, it was a fresh DevStack install.

 Robert Collins robe...@robertcollins.net wrote on 02/16/2014 05:33:59
 AM:
  A) [fixed in testrepository trunk] the output from subunit.run
  discover  --list is being shown verbatim when an error happens,
  rather than being machine processed and the test listings elided.
 
  To use trunk - in your venv:
  bzr branch lp:testrepository
  pip install testrepository
 
  B) If you look at the end of that wall of text you'll see 'Failed
  imports' in there, and the names after that are modules that failed
  to import - for each of those if you try to import it in python,
  you'll find the cause, and there's likely just one cause.

 Thanks Robert, I tried following your leads but got nowhere, perhaps I
 need a few more clues.

 I am not familiar with bzr (nor baz), and it wasn't obvious to me how to
 fit that into my workflow --- which was:
 (1) install DevStack
 (2) install libmysqlclient-dev
 (3) install flake8
 (4) cd /opt/stack/heat
 (5) ./run_tests.sh

 I guessed that your (A) would apply if I use a venv and go between (1) the
 `python tools/install_venv.py` inside run_tests.sh and (2) the invocation
 inside run_tests.sh of its run_tests function.  So I manually invoked
 `python tools/install_venv.py`, then entered that venv, then issued your
 commands of (A) (discovered I needed to install bzr and did so), then
 exited that venv, then invoked heat's `run_tests -V -u` to use the venv
 thus constructed.  It still produced one huge line of output.  Here I
 attach a typescript of that:



  You will see that the huge line still ends with something about import
  error, and now lists one additional package ---
  heat.tests.test_neutron_firewalld.  I then tried your (B), testing manual
  imports.   All worked except for the last, which failed because there is
  indeed no such thing (why is there a spurious 'd' at the end of the
  package name?).  Here is a typescript of that:



 Thanks,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Do you think tenant_id should be verified

2014-02-16 Thread Dolph Mathews
keystoneclient.middleware.auth_token passes a project ID (and name, for
convenience) to the underlying application through the WSGI environment,
and already ensures that this value cannot be manipulated by the end user.

Project IDs (redundantly) passed through other means, such as URLs, are up
to the service to independently verify against keystone (or equivalently,
against the WSGI environment), but can be directly manipulated by the end
user if no checks are in place.
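
For illustration only, a service-side check along those lines could look
something like this (the handler shape is invented here; the X-Tenant-Id /
X-Roles headers are the ones auth_token conventionally sets, surfaced in the
WSGI environ):

# Sketch only: verify a tenant_id carried in a request body against the
# project that auth_token authenticated (X-Tenant-Id header, visible as
# HTTP_X_TENANT_ID in the WSGI environ). Function shape is hypothetical.
def verify_tenant_id(environ, body):
    authed = environ.get('HTTP_X_TENANT_ID')
    roles = (environ.get('HTTP_X_ROLES') or '').split(',')
    requested = body.get('tenant_id', authed)
    if requested != authed and 'admin' not in roles:
        raise Exception("tenant_id %s does not match the token scope" % requested)
    return requested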

Without auth_token in place to manage multitenant authorization, I'd still
expect services to blindly trust the values provided in the environment
(useful for both debugging the service and alternative deployment
architectures).

On Sun, Feb 16, 2014 at 8:52 AM, Dong Liu willowd...@gmail.com wrote:

 Hi stackers:

  I found that when creating networks, subnets and other resources, the
  attribute tenant_id can be set by the admin tenant. But we did not verify
  whether the tenant_id is real in keystone.

  I know that we could use neutron without keystone, but do you think
  tenant_id should be verified when we use neutron with keystone?

 thanks
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] User Signup

2014-02-16 Thread Kieran Spear
On 15 February 2014 16:52, Soren Hansen so...@linux2go.dk wrote:
 On 15/02/2014 00.19, Adam Young ayo...@redhat.com wrote:


 Could you please spend 5 minutes on the blueprint
 https://blueprints.launchpad.net/horizon/+spec/user-registration and add
 your suggestions in the white board.
  Does it make sense for this to be in Keystone first, and then Horizon just
  consumes it? I would think that user-registration-request would be a
  reasonable Keystone extension. Then, you would add a role 'user-approver'
  for a specific domain to approve a user, which would trigger the create
  event.

 This makes perfect sense to me.

+1. It certainly is Keystone's domain, so an API extension sounds like
the right way to go.

Kieran


 /Soren


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heat run_tests.sh fails with one huge line of output

2014-02-16 Thread Mike Spreitzer
Robert Collins robe...@robertcollins.net wrote on 02/16/2014 05:26:50 
PM:
 I would have expected run_tests.sh to tox which creates a venv, but
 heat seems different. So you'll need to install testrepository via
 your system tox, not one from a venv.

I don't think I have a system tox.  `pip list | grep -i tox` comes up 
empty.  Should I be looking for a system tox in some other way?  I'm a 
little unclear on what you are saying here.


 The 'd' is because you're human-reading a binary protocol (which the
 testrepository trunk fixes). Try the import without the 'd'. Also be
 sure to be trying the import within the tox venv, which run_tests will
 have triggered.

If I take the 'd' off then that import succeeds like all the others,
when tried manually in the venv created by run_tests.sh -V.

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heat run_tests.sh fails with one huge line of output

2014-02-16 Thread Robert Collins
On 17 February 2014 13:50, Mike Spreitzer mspre...@us.ibm.com wrote:
 Robert Collins robe...@robertcollins.net wrote on 02/16/2014 05:26:50 PM:

 I would have expected run_tests.sh to tox which creates a venv, but
 heat seems different. So you'll need to install testrepository via
 your system tox, not one from a venv.


 I don't think I have a system tox.  `pip list | grep -i tox` comes up empty.
 Should I be looking for a system tox in some other way?  I'm a little
 unclear on what you are saying here.

s/system tox/system pip/.

 The 'd' is because you're human-reading a binary protocol (which the
 testrepository trunk fixes). Try the import without the 'd'. Also be
 sure to be trying the import within the tox venv, which run_tests will
 have triggered.


 If I take the 'd' off then that import succeeds like all the others,
 when tried manually in the venv created by run_tests.sh -V.

I'm unclear - are you running 'run_tests.sh' or 'run_tests.sh -V'?
*Either way*, you need to be using the matching pip to install
testrepository, and you need to be using the matching Python to test
imports.
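
For example (assuming run_tests.sh -V built its virtualenv under .venv, which
may differ on your checkout), the matching pip and Python would be used like this:

  .venv/bin/pip install testrepository
  .venv/bin/python -c 'import heat.tests.test_neutron_firewall'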

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][review] Please treat -1s on check-tripleo-*-precise as voting.

2014-02-16 Thread Robert Collins
Hi!

The nascent tripleo-gate jobs are now running on all tripleo repositories,
*and should pass*, but they are not yet voting. They aren't voting because
we cannot submit to the gate unless jenkins votes verified... *and* we
have no redundancy for the tripleo-ci cloud right now, so any glitch in the
current region would take out our ability to land changes.

We're working up the path to having two regions as fast as we can- and
once we do we should be up to check or perhaps even gate in short
order :).

Note: unless you *expand* the jenkins vote, you can't tell if a -1 occurred.

If, for some reason, we have an infrastructure failure that means
spurious -1's will be occurring, then we'll put that in the #tripleo
topic.

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Policy questions

2014-02-16 Thread Mohammad Banikazemi
Hi Carlos,

We have just started looking at the same issues you have mentioned. Please
follow the email thread started by me earlier today ([openstack-dev]
[neutron][policy] Using network services with network policies). We will be
spending more time on these topics during our upcoming weekly IRC calls as
well.

Best,

-Mohammad




From:   Carlos Gonçalves m...@cgoncalves.pt
To: openstack-dev@lists.openstack.org,
Date:   02/12/2014 01:43 PM
Subject:[openstack-dev] [Neutron] Group Policy questions



Hi,

I’ve a couple of questions regarding the ongoing work on Neutron Group
Policy proposed in [1].

1. One of the described actions is redirection to a service chain. How do
you see BPs [2] and [3] addressing service chaining? Will this BP implement
its own service chaining mechanism enforcing traffic steering or will it
make use of, and thus depending on, those BPs?

2. In the second use case presented in the BP document, “Tiered application
with service insertion/chaining”, do you consider that the two firewall
entities can represent the same firewall instance or two running and
independent instances? In case it’s a shared instance, how would it support
multiple chains? That is, HTTP(S) traffic from the Inet group would be
redirected to the firewall and then pass through the ADC; traffic from the
App group with destination DB group would also be redirected to the very
same firewall instance, although to a different destination group as the
chain differs.

Thanks.

Cheers,
Carlos Gonçalves

[1]
https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
[2]
https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
[3]
https://blueprints.launchpad.net/neutron/+spec/nfv-and-network-service-chain-implementation
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-16 Thread Wangpan
Hi sahid, 
I have tested `scp -l xxx src dst` (a local scp copy) and believe that the `-l` 
option has no effect in this situation;
it seems that `-l` is only valid for remote copies.


2014-02-17



Wangpan



From: sahid sahid.ferdja...@cloudwatt.com
Sent: 2014-02-14 17:58
Subject: Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in 
copy_image while creating new instance?
To: OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org
Cc:

It could be a good idea, but as Sylvain said, how would we configure this? Also, what 
about using scp instead of rsync for a local copy? 

- Original Message - 
From: Wangpan hzwang...@corp.netease.com 
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org 
Sent: Friday, February 14, 2014 4:52:20 AM 
Subject: [openstack-dev] [nova] Should we limit the disk IO bandwidth in
copy_image while creating new instance? 

Currently nova doesn't limit the disk IO bandwidth in the copy_image() method while 
creating a new instance, so the other instances on this host may be affected by 
this high disk-IO-consuming operation, and some time-sensitive business (e.g. an RDS 
instance with heartbeat) may be switched between master and slave. 

So can we use the `rsync --bwlimit=${bandwidth} src dst` command instead of `cp 
src dst` for copy_image in create_image() of the libvirt driver? The remote image 
copy operation can also be limited by `rsync --bwlimit=${bandwidth}` or `scp 
-l ${bandwidth}`. The parameter ${bandwidth} can be a new configuration option in 
nova.conf which allows the cloud admin to configure it; its default value is 0, which 
means no limitation, so the instances on this host will not be affected while 
a new instance with a not-yet-cached image is being created. 

the example code: 
nova/virt/libvirt/utils.py: 
diff --git a/nova/virt/libvirt/utils.py b/nova/virt/libvirt/utils.py 
index e926d3d..5d7c935 100644 
--- a/nova/virt/libvirt/utils.py 
+++ b/nova/virt/libvirt/utils.py 
@@ -473,7 +473,10 @@ def copy_image(src, dest, host=None): 
         # sparse files.  I.E. holes will not be written to DEST, 
         # rather recreated efficiently.  In addition, since 
         # coreutils 8.11, holes can be read efficiently too. 
-        execute('cp', src, dest) 
+        if CONF.mbps_in_copy_image > 0: 
+            execute('rsync', '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), 
+                    src, dest) 
+        else: 
+            execute('cp', src, dest) 
     else: 
         dest = "%s:%s" % (host, dest) 
         # Try rsync first as that can compress and create sparse dest files. 
@@ -484,11 +487,22 @@ def copy_image(src, dest, host=None): 
             # Do a relatively light weight test first, so that we 
             # can fall back to scp, without having run out of space 
             # on the destination for example. 
-            execute('rsync', '--sparse', '--compress', '--dry-run', src, dest) 
+            if CONF.mbps_in_copy_image > 0: 
+                execute('rsync', '--sparse', '--compress', '--dry-run', 
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), 
+                        src, dest) 
+            else: 
+                execute('rsync', '--sparse', '--compress', '--dry-run', 
+                        src, dest) 
         except processutils.ProcessExecutionError: 
-            execute('scp', src, dest) 
+            if CONF.mbps_in_copy_image > 0: 
+                execute('scp', '-l', '%s' % (CONF.mbps_in_copy_image * 1024 * 8), 
+                        src, dest) 
+            else: 
+                execute('scp', src, dest) 
         else: 
-            execute('rsync', '--sparse', '--compress', src, dest) 
+            if CONF.mbps_in_copy_image > 0: 
+                execute('rsync', '--sparse', '--compress', 
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024), 
+                        src, dest) 
+            else: 
+                execute('rsync', '--sparse', '--compress', src, dest) 
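
For completeness, a sketch of how the proposed option might be registered (the 
option name and default come from the proposal above; this is not existing nova 
code): 

    from oslo.config import cfg

    copy_image_opts = [
        cfg.IntOpt('mbps_in_copy_image',
                   default=0,
                   help='Disk bandwidth limit (MB/s) applied while copying an '
                        'image; 0 means no limit.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(copy_image_opts)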


2014-02-14 



Wangpan 
___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-16 Thread Wangpan
Hi yunhong,
I agree with you about taking I/O bandwidth as a resource, but it may not be 
so easy to implement.
Your other point about the launch time may not be so bad; only the 
first boot will be affected.
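
For reference, one way the cgroup idea could look on the host (an illustration 
only, assuming the cgroup v1 blkio controller and the libcgroup tools are 
available; the device numbers and paths are examples): 

  mkdir -p /sys/fs/cgroup/blkio/nova-copy-image
  # throttle writes on /dev/sda (major:minor 8:0) to ~50 MB/s
  echo "8:0 52428800" > /sys/fs/cgroup/blkio/nova-copy-image/blkio.throttle.write_bps_device
  # run the copy inside that cgroup
  cgexec -g blkio:nova-copy-image cp /path/to/image /path/to/dest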

2014-02-17



Wangpan



From: yunhong jiang yunhong.ji...@linux.intel.com
Sent: 2014-02-15 08:21
Subject: Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in 
copy_image while creating new instance?
To: OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org
Cc:

On Fri, 2014-02-14 at 10:22 +0100, Sylvain Bauza wrote: 
 Instead of limiting the consumed bandwidth by proposing a 
 configuration flag (yet another one, and which default value should be 
 set?), I would propose to only decrease the niceness of the process 
 itself, so that other processes would get I/O access first. 
 That's not perfect I assume, but that's a quick workaround limiting 
 the frustration. 
  
  
 -Sylvain 
  
Decreasing the niceness is good for the short term. Some small concerns: 
will that cause a long launch time if the host is I/O intensive? And if 
launch time is billed as well, that is not fair to the new instance either. 

I think the ideal world is I/O QoS, e.g. through cgroups: take I/O 
bandwidth as a resource, and treat copy_image as a consumer of 
the I/O bandwidth resource. 

Thanks 
--jyh 
  
 2014-02-14 4:52 GMT+01:00 Wangpan hzwang...@corp.netease.com: 
 Currently nova doesn't limit the disk IO bandwidth in 
 copy_image() method while creating a new instance, so the 
 other instances on this host may be affected by this high disk 
 IO consuming operation, and some time-sensitive business(e.g 
 RDS instance with heartbeat) may be switched between master 
 and slave. 
   
 So can we use the `rsync --bwlimit=${bandwidth} src dst` 
 command instead of `cp src dst` while copy_image in 
 create_image() of libvirt driver, the remote image copy 
 operation also can be limited by `rsync --bwlimit= 
 ${bandwidth}` or `scp -l=${bandwidth}`, this parameter 
 ${bandwidth} can be a new configuration in nova.conf which 
 allow cloud admin to config it, it's default value is 0 which 
 means no limitation, then the instances on this host will be 
 not affected while a new instance with not cached image is 
 creating. 
   
 the example codes: 
 nova/virt/libvit/utils.py: 
 diff --git a/nova/virt/libvirt/utils.py 
 b/nova/virt/libvirt/utils.py 
 index e926d3d..5d7c935 100644 
 --- a/nova/virt/libvirt/utils.py 
 +++ b/nova/virt/libvirt/utils.py 
 @@ -473,7 +473,10 @@ def copy_image(src, dest, host=None): 
  # sparse files.  I.E. holes will not be written to 
 DEST, 
  # rather recreated efficiently.  In addition, since 
  # coreutils 8.11, holes can be read efficiently too. 
 -execute('cp', src, dest) 
 +if CONF.mbps_in_copy_image > 0: 
 +execute('rsync', '--bwlimit=%s' % 
 CONF.mbps_in_copy_image * 1024, src, dest) 
 +else: 
 +execute('cp', src, dest) 
  else: 
  dest = %s:%s % (host, dest) 
  # Try rsync first as that can compress and create 
 sparse dest files. 
 @@ -484,11 +487,22 @@ def copy_image(src, dest, host=None): 
  # Do a relatively light weight test first, so 
 that we 
  # can fall back to scp, without having run out of 
 space 
  # on the destination for example. 
 -execute('rsync', '--sparse', '--compress', 
 '--dry-run', src, dest) 
 +if CONF.mbps_in_copy_image > 0: 
 +execute('rsync', '--sparse', '--compress', 
 '--dry-run', 
 +'--bwlimit=%s' % 
 CONF.mbps_in_copy_image * 1024, src, dest) 
 +else: 
 +execute('rsync', '--sparse', '--compress', 
 '--dry-run', src, dest) 
  except processutils.ProcessExecutionError: 
 -execute('scp', src, dest) 
 +if CONF.mbps_in_copy_image > 0: 
 +execute('scp', '-l', '%s' % 
 CONF.mbps_in_copy_image * 1024 * 8, src, dest) 
 +else: 
 +execute('scp', src, dest) 
  else: 
 -execute('rsync', '--sparse', '--compress', src, 
 dest) 
 +if CONF.mbps_in_copy_image > 0: 
 +execute('rsync', '--sparse', '--compress', 
 +'--bwlimit=%s' % 
 CONF.mbps_in_copy_image * 1024, src, dest) 
 +else: 
 +

Re: [openstack-dev] [keystone] role of Domain in VPC definition

2014-02-16 Thread Joe Gordon
On Sun, Feb 16, 2014 at 3:26 AM, Salvatore Orlando sorla...@nicira.com wrote:
 It seems this work item is made of several blueprints, some of which are not
 yet approved. This is true at least for the Neutron blueprint regarding
 policy extensions.

 Since I first looked at this spec I've been wondering why nova has been
 selected as an endpoint for network operations rather than Neutron, but this
 probably a design/implementation details whereas JC here is looking at the
 general approach.

[1] is only about AWS VPC support, not OpenStack API based network operations.


 Nevertheless, my only point here is that is seems that features like this
 need an all-or-none approval.
 For instance, could the VPC feature be considered functional if blueprint
 [1] is implemented, but not [2] and [3]?

 Salvatore

 [1] https://blueprints.launchpad.net/nova/+spec/aws-vpc-support
 [2]
 https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
 [3]
 https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy


 On 11 February 2014 21:45, Martin, JC jch.mar...@gmail.com wrote:

 Ravi,

 It seems that the following Blueprint
 https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support

 has been approved.

 However, I cannot find a discussion with regard to the merit of using
 project vs. domain, or other mechanism for the implementation.

 I have an issue with this approach as it prevents tenants within the same
 domain sharing the same VPC to have projects.

 As an example, if you are a large organization on AWS, it is likely that
 you have a large VPC that will be shred by multiple projects. With this
 proposal, we loose that capability, unless I missed something.

 JC

 On Dec 19, 2013, at 6:10 PM, Ravi Chunduru ravi...@gmail.com wrote:

  Hi,
We had some internal discussions on role of Domain and VPCs. I would
  like to expand and understand community thinking of Keystone domain and
  VPCs.
 
  Is VPC equivalent to Keystone Domain?
 
  If so, as a public cloud provider - I create a Keystone domain and give
  it to an organization which wants a virtual private cloud.
 
  Now the question is if that organization wants to have  departments wise
  allocation of resources it is becoming difficult to visualize with existing
  v3 keystone constructs.
 
  Currently, it looks like each department of an organization cannot have
  their own resource management with in the organization VPC ( LDAP based 
  user
  management, network management or dedicating computes etc.,) For us,
  Openstack Project does not match the requirements of a department of an
  organization.
 
  I hope you guessed what we wanted - Domain must have VPCs and VPC to
  have projects.
 
  I would like to know how community see the VPC model in Openstack.
 
  Thanks,
  -Ravi.
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Neutron] Urgent python-neutronclient release needed

2014-02-16 Thread Robert Collins
https://bugs.launchpad.net/neutron/+bug/1280941

I haven't done a chapter-and-verse trace of what happened, but it
looks like a latent bug in the neutronclient lib was tickled by some
neutron change, fixed in neutronclient, and then neutron started using
the feature without bumping requirements.txt (as is needed for all
hard dependencies).

As a result, TripleO clouds currently break, because we're meeting
requirements.txt, but requirements.txt is faulty.

If someone with Neutron can-do-release ACLs can do a release ASAP, I
think that would be the best fix. Alternatively, we could hunt down
and back out whatever Neutron change tickles this (but a client release
seems best overall to me).

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-16 Thread Armando M.
+1
On Feb 13, 2014 5:52 PM, Nachi Ueno na...@ntti3.com wrote:

 +1

 On Wednesday, 12 February 2014, Mayur Patil ram.nath241...@gmail.com wrote:

 +1

 *--*
 *Cheers,*
 *Mayur*


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Trove-Gate timeouts

2014-02-16 Thread Craig Vyvial
Trovesters,

One reason for the longer-running tests is that for the configuration
groups I added the creation of a new instance. This is to test that a new instance
will be created with a configuration group applied. This might be causing
the run to be a little longer, but I am surprised that it's still taking over an
hour to run through everything.

-Craig Vyvial


On Sun, Feb 16, 2014 at 12:25 AM, Mirantis dmako...@mirantis.com wrote:

 Hello, Mathew.

 I'm seeing the same issues with the gate.
 I also tried to find out why the gate job is failing. First I ran into an issue
 related to a cinder installation failure in devstack, but then I found the same
 problem as you described. The best option is to increase the job time limit.
 Thanks for such research. I hope the gate will be fixed in the easiest way and
 in the shortest period of time.

 Best regards
 Denis Makogon.
 Sent from an iPad

 On 16 Feb 2014, at 00:46, Lowery, Mathew mlow...@ebay.com wrote:

  Hi all,

  *Issue #1: Jobs that need more than one hour*

  Of the last 30 Trove-Gate 
 https://rdjenkins.dyndns.org/job/Trove-Gate/builds (spanning three days), 7 
 have failed due to a Jenkins job-level
 timeout (not a proboscis timeout). These jobs had no failed tests when the
 timeout occurred.

  Not having access to the job config to see what the job looks like, I
 used the console output to guess what was going on. It appears that a
 Jenkins plugin named 
 boot-hpcloud-vm
 (https://github.com/mrhoades/boot-hpcloud-vm/blob/2272770b0ce54752eabb84229dc8939d79b2be50/models/boot_vm_concurrent.rb#L181) is
 booting a VM and running the commands given, including redstack int-tests.
 From the console output, it states that it was supplied with an
 ssh_shell_timeout=7200. This is passed down to another library called
 net-ssh-simple (https://github.com/busyloop/net-ssh-simple/blob/e3834f259a47606bfb06a487ca701fc20dbad8a5/lib/net/ssh/simple.rb#L632).
 net-ssh-simple has two timeouts: an idle timeout and an operation timeout.

  In the latest 
 boot-hpcloud-vm (https://github.com/mrhoades/boot-hpcloud-vm/blob/2272770b0ce54752eabb84229dc8939d79b2be50/models/boot_vm_concurrent.rb#L182),
 ssh_shell_timeout is passed down to net-ssh-simple for both the idle
 timeout and the operation timeout. But in older versions of
 boot-hp-cloud-vm (https://github.com/mrhoades/boot-hpcloud-vm/blob/9260e957d6c54142c33dd9e9632b86e17fd5c02f/models/boot_vm_concurrent.rb#L141),
 ssh_shell_timeout is passed down to net-ssh-simple for only the idle
 timeout, leaving a default operation timeout of 3600. This is why I believe
 these jobs are failing after exactly one hour.

  FYI: Here are the jobs that failed due to the Jenkins job-level timeout
 (and had no test failures when the timeout occurred) along with their
 associated patch sets:
 https://rdjenkins.dyndns.org/job/Trove-Gate/2532/console (
 http://review.openstack.org/73786)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2530/console (
 http://review.openstack.org/73736)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2517/console (
 http://review.openstack.org/63789)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2514/console (
 https://review.openstack.org/50944)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2513/console (
 https://review.openstack.org/50944)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2504/console (
 https://review.openstack.org/73147)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2503/console (
 https://review.openstack.org/73147)

   *Suggested action items:*

- If it is acceptable to have jobs that run over one hour, then
install the latest boot-hpcloud-vm plugin for Jenkins, which will make
the operation timeout match the idle timeout.


  *Issue #2: The running time of all jobs is 1 hr 1 min*

  While the Jenkins job-level timeout will end the job after one hour, it
 also appears to keep every job running for a minimum of one hour.  To be
 more precise, the timeout (or minimum running time) occurs on the part of
 the Jenkins job that runs commands on the VM; the VM provision (which takes
 about one minute) is excluded from this timeout which is why the running
 time of all jobs is around 1 hr 1 
 min (https://rdjenkins.dyndns.org/job/Trove-Gate/buildTimeTrend).
 A sampling of console logs showing the time the int-tests completed and
 when the timeout kicks in:

  https://rdjenkins.dyndns.org/job/Trove-Gate/2531/console (00:01:03
 wasted)

 *04:51:12* COMMAND_0: echo refs/changes/36/73736/2

 ...

 *05:50:10* 335.41 proboscis.case.MethodTest 
 (test_instance_created)*05:50:10* 194.05 proboscis.case.MethodTest 
 (test_instance_returns_to_active_after_resize)*05:51:13* 
 ***05:51:13* ** STDERR-BEGIN **


  https://rdjenkins.dyndns.org/job/Trove-Gate/2521/console (00:06:44
 wasted)

 *21:11:44* COMMAND_0: echo refs/changes/89/63789/13

 ...

 *22:05:00* 195.11 proboscis.case.MethodTest 
 (test_instance_returns_to_active_after_resize)*22:05:00* 186.89 
 

Re: [openstack-dev] [neutron] [group-policy] Changing the meeting time

2014-02-16 Thread Isaku Yamahata
I'd like to make sure of this.
The following pages seem to still have the old time.
Which is correct: 1700 UTC or 1900 UTC on Thursday?

https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy
https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_Meeting

Thanks,

On Tue, Feb 11, 2014 at 02:25:12PM -0600,
Kyle Mestery mest...@siliconloons.com wrote:

 FYI, I’ve made the change on the meeting pages as well [1]. The Neutron
 Group Policy meeting is now at 1700UTC Thursday’s on #openstack-meeting-alt.
 
 Thanks!
 Kyle
 
 [1] 
 https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_Meeting
 
 On Feb 11, 2014, at 11:30 AM, Sumit Naiksatam sumitnaiksa...@gmail.com 
 wrote:
 
  Hi Kyle,
  
  The new time sounds good to me as well, thanks for initiating this.
  
  ~sumit.
  
  On Tue, Feb 11, 2014 at 9:02 AM, Stephen Wong s3w...@midokura.com wrote:
  Hi Kyle,
  
 Almost missed this - sounds good to me.
  
  Thanks,
  - Stephen
  
  
  
  On Mon, Feb 10, 2014 at 7:30 PM, Kyle Mestery mest...@siliconloons.com
  wrote:
  
  Folks:
  
  I'd like to propose moving the Neutron Group Policy meeting going
  forward, starting with this Thursday. The meeting has been at 1600
  UTC on Thursdays, I'd like to move this to 1700UTC Thursdays
  on #openstack-meeting-alt. If this is a problem for anyone who
  regularly attends this meeting, please reply here. If I don't hear
  any replies by Wednesday, I'll officially move the meeting.
  
  Thanks!
  Kyle
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Version Discovery Standardization

2014-02-16 Thread Jamie Lennox
On Thu, 2014-02-13 at 19:35 -0700, Christopher Yeoh wrote:
 On Thu, 13 Feb 2014 21:10:01 -0500
 Sean Dague s...@dague.net wrote:
  On 02/13/2014 08:28 PM, Christopher Yeoh wrote:
   On Thu, 13 Feb 2014 15:54:23 -0500
   Sean Dague s...@dague.net wrote:
 
   
   So one question I have around a global version is what happens when
   we have the following situation:
   
   - Extension (not core) A is bumped to version 3, global version
   bumped to 3.01
   - Extension B (not core) is bumped to version 6, global version
   bumped to 3.02
   
   but the deployer for $REASONS (perhaps stability/testing/whatever)
   really wants to deploy with version 2 of A but version 6 of B. 
   
   With versioning just on the extensions individually they're ok, but
   I don't think there's any real sane way to get a global micro
   version calculated for this scenario that makes sense to the end
   user.
  
  So there remains a question about extensions vs. global version. I
  think a big piece of this is anything which is a core extension,
 
 So to reduce confusion I've been trying to introduce the nomenclature of
 everything is a plugin. And then some plugins are compulsory (eg
 core) and others are optional (extensions)
 
  stops getting listed as an extension and instead is part of properly
  core and using the global version.
  
  How extensions impact global version is I think an open question. But
  Nova OS API is actually really weird if you think about it relative to
  other cloud APIs (ec2, gce, softlayer). We've defined it not as the
  Nova API, but as a small core compute API, and many dozens optional
  features, which every deployer makes decisions on what comes and goes.
  
  I agree we need to think through a few things. But I think that if we
  get to v3, only to have to do a ton more stuff for v4, and take 2 more
  years to get there, we're in a world of hurt. The current model of API
  revisions as giant big bangs isn't good for any one. A way to make an
  API be able to grow over time, in a backwards compatible way, and some
  mechanism to deprecate and remove a feature over time would be much
  more advantageous to our consumers.
  
 
 I agree we don't want to avoid another big bang version change for as
 long as we can. Given that we have extensions (and I know that some
 people really don't like that) however I'd be a lot more comfortable 
 if this minor global version was only bumped when there were changes to
 the core plugins or a plugin was added to the core (I don't think we
 can ever remove them from core within a major version). There should be
 a high bar on making any changes to core plugins (even though
 they are backwards compatible).
 
 I'm also fine with core plugins not appearing in the /v3/extensions
 list. Its a simple enough change and agree that it will reduce
 confusion over interoperability between openstack clouds. 
 
 Chris

Caveat: IANAND (I am not a nova dev) and i'm not familiar with nova's
plugins so i'm working with a standard definition of a plugin and define
for the rest of this email that: 
- An extension is something that adds paths or features to the API. 
- A plugin is code that is loaded to do some work.

To my mind talking of plugins is confusing the API with the
implementation. The sum total of the core plugins is the interface that
a project has to implement to be compatible with nova and is defined as
the core API. This API is the thing that we are versioning and that
should be done globally and semantically. If plugins are allowed to
directly define the API and some combination of plugins is loaded that
does not provide the API specified by the version, then the server is in
error advertising that it supports that API version. 

An extension provides additional methods or data to the API (whether
this is loaded via plugin or always available). Since extensions
are not covered by the API definition, they should provide their own
version discovery information. This means that an extension can define
an interface and that can co-exist with any version of the core API
(though I can definitely see the use case for 'this extension requires at
least this version of the API'). By definition anything in the core API
is not an extension and should not be listed as such.

Version discovery for extensions could be done as part of GET '/',
but I think it makes more sense for that to be part of the project-specific
API (e.g. GET /v3/extensions), as that information may need to change in
future API versions. Also, this is kind of outside the scope of what we
are talking about here. For now all we need is a way of determining core
API versions, and then extension discovery is a subset of that.
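
For illustration, today's nova-style extension listing already carries
per-extension metadata along these lines (field names here are only indicative;
the exact schema is what would need to be agreed for discovery purposes):

  GET /v2/extensions
  {
      "extensions": [
          {
              "name": "Keypairs",
              "alias": "os-keypairs",
              "updated": "2011-08-08T00:00:00+00:00",
              "description": "Keypair Support.",
              "links": []
          }
      ]
  }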


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list

Re: [openstack-dev] [Mistral] Notes on action YAML declaration and naming

2014-02-16 Thread Renat Akhmerov
On 15 Feb 2014, at 04:01, Nikolay Makhotkin nmakhot...@mirantis.com wrote:

 Dmitri, in our concerns under word 'input' we assume a block contains the 
 info about how the input data will be taken for corresponding task from 
 initial context. So, it will be a kind of expression (e.g. YAQL). 
 
 Renat, am I right?

Yes, basically we should keep in mind that we have 2 types of parameters for an 
action:
1. Parameters that specify the nature of the action itself (for example, “method: 
POST” for the REST_API action type).
2. Input parameters determined dynamically using a selector expression defined at 
the task level (it’s what we have as “input: $.people.[$.age > 30].address” in 
[0]). These define the real action input that is interesting from a workflow logic 
perspective.

[0] https://etherpad.openstack.org/p/mistral-poc

Renat___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Notes on action YAML declaration and naming

2014-02-16 Thread Renat Akhmerov
Dmitri,

Right now https://etherpad.openstack.org/p/mistral-poc is the only place where 
we described it. It shouldn’t be considered a specification, it was rather a 
playground where we tried to shape up our ideas. We’ll fix it using our latest 
ideas and changes captured in the code and create another etherpad for further 
long-term discussions.

Renat Akhmerov
@ Mirantis Inc.

On 15 Feb 2014, at 06:26, Dmitri Zimine d...@stackstorm.com wrote:

 Ok, I see.  
 
 Do we have a spec that describes this?
 Lets spell it out and describe the whole picture of input, output, 
 parameters, and result. 
 
 DZ 
 
 
 On Feb 14, 2014, at 1:01 PM, Nikolay Makhotkin nmakhot...@mirantis.com 
 wrote:
 
 Dmitri, in our concerns under word 'input' we assume a block contains the 
 info about how the input data will be taken for corresponding task from 
 initial context. So, it will be a kind of expression (e.g. YAQL). 
 
 Renat, am I right?
 
 
 On Fri, Feb 14, 2014 at 9:51 PM, Dmitri Zimine d...@stackstorm.com wrote:
 I like output, too. But it should go with 'input'
 In summary, there are two alternatives. 
 Note that I moved task-parameters under parameters. Ok with this?
 
 actions:
my-action
   input:
  foo: bar
  task-parameters: 
 flavor_id:
 image_id:
   output: 
   select: '$.server_id'  
   store_as: v1
 
 this maps to action(input, *output)
 
 actions:
my-action
   parameters:
  foo: bar
  task-parameters: 
 flavor_id:
 image_id:
   result: 
   select: '$.server_id'  
   store_as: v1
 
 this maps to result=action(parameters)
 
 
 On Feb 14, 2014, at 8:40 AM, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 “output” looks nice!
 
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 On 14 Feb 2014, at 20:26, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Current DSL snippet: 
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results' 
   select: '$.server_id'  
   store_as: v1
 
 'result' sounds better than 'response' and, I think, more fit to action 
 description.
 And I suggest for a new word - 'output'; actually, this block describe how 
 the output will be taken and stored.
 
 However, I agree that this block should be at action-property level:
 
 actions:
my-action
   result: 
  select: '$.server_id'  
  store_as: vm_id
   parameters:
  foo: bar
   
 
 
 On Fri, Feb 14, 2014 at 12:36 PM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 14 Feb 2014, at 15:02, Dmitri Zimine d...@stackstorm.com wrote:
 
 Current DSL snippet: 
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results’ 
 
 Just a note: “response” indentation here is not correct, it’s not a 
 parameter called “response” but rather a property of “my-action”.
 
   select: '$.server_id'  
   store_as: v1
 
 In the code, we refer to action.result_helper
 
 1) Note that response is not exactly a parameter. It doesn't 
 refer to data. It's (query, variable) pairs that are used to parse the 
 results and post data to the global context [1]. The terms response, or 
 result, do not reflect what is actually happening here. Suggestions? 
 Save? Publish? Result Handler? 
 
 For explicitness we can use something like “result-handler” and initially 
 I thought about this option. But I personally love conciseness and I think 
 name “result” would be ok for this section meaning it defines the 
 structure of the action result. “handler” is not 100% precise too because 
 we don’t actually handle a result here, we define the rules how to get 
 this result.
 
 I would appreciate to see other suggestions though.
 
 2) Whichever name we use for this output transformer, shall it be under 
 parameters?
 
 No, what we have in this section is like a return value type for a regular 
 method. Parameters define action input.
 
 3) how do we call action/task parameters? Think 'model' (which reflects 
 in yaml,  code, docs, talk, etc.)
input and output? (+1)
in and out? (-1)  
request and response? (-1) good for WebServices but not generic enough
parameters and results? (ok)
 
 Could you please clarify your questions here? Not sure I’m following...
 
 4) Syntax simplification: can we drop 'parameters' keyword? Anything 
 under action is action parameters, unless it's a reserved keyword, which 
 the implementation can parse out. 
 
 actions:
my-action
   foo: bar
   task-parameters: # not a keyword, specific to REST_API
   flavor_id:
   image_id:
   publish:  # keyword
   select: '$.server_id'  
   store_as: v1
 
 It will create problems like name ambiguity in case we need to have a 

Re: [openstack-dev] Glance plugin in Nova should switch to using buffered http

2014-02-16 Thread Sridevi K R Koushik
Hi John,

I have made the changes you suggested.
And, yes, we've tested this out in a live system.


On Fri, Feb 14, 2014 at 1:43 AM, John Garbutt
john.garb...@rackspace.co.ukwrote:

  Its too late for this cycle. We will have to wait until Juno for any
 blueprints.


 https://blueprints.launchpad.net/nova/+spec/use-buffered-http-in-glance-plugin

 Please rename to start with xenapi, just makes it a bit easier to spot.

  Most of the issues we have are when the token expires, from what I
 remember.
 Do we have logs of this error and tested to see if your idea will help?


 https://blueprints.launchpad.net/nova/+spec/should-use-100-continue-header
 This is a bug really. Please close the blueprint and raise a bug. Unless I
 am missing something?

  Have you got a patch and tested it with a live system yet?

  Thanks,
 John

  Moving Russell to BCC, to save him some email traffic.

   From: Sridevi K R Koushik sridevi.kous...@thoughtworks.com
 Date: Thursday, 13 February 2014 08:46
 To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
 
 Cc: russell.bry...@gmail.com russell.bry...@gmail.com, John Garbutt 
 john.garb...@rackspace.co.uk, tw-glance tw-gla...@thoughtworks.com
 Subject: Glance plugin in Nova should switch to using buffered http

   Hi,

  I would like to get some comments on
  
 https://blueprints.launchpad.net/nova/+spec/use-buffered-http-in-glance-plugin
 and https://blueprints.launchpad.net/nova/+spec/should-use-100-continue-header
 blueprints.
 These blueprints will address many of the auth related failures during the
 upload process.

  Thanks,
 Sridevi



    John Garbutt
  Software Developer IV - UK
  Tel: +442087344853




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Version Discovery Standardization

2014-02-16 Thread Jamie Lennox
On Thu, 2014-02-13 at 08:45 -0600, Dean Troyer wrote:
 FWIW, an early proposal to address this, as well as capability
 discovery, still lives
 at https://etherpad.openstack.org/p/api-version-discovery-proposal.
  I've lost track of where this went, and even which design summit this
 is from, but I've been using it as a sanity check for the discovery
 bits in OSC.

Yes, I've seen that one. It's more about the client side and how we should work
with it, though. If we can standardize the server-side response then the
client-side code becomes much more reusable.
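
For reference, the kind of server-side response being standardized looks roughly
like the following (purely illustrative; the exact field names and values are what
the wiki draft is trying to pin down):

  GET /
  {
      "versions": [
          {
              "id": "v3.0",
              "status": "stable",
              "updated": "2013-03-06T00:00:00Z",
              "links": [
                  {"rel": "self", "href": "/v3/"}
              ],
              "media-types": [
                  {"base": "application/json",
                   "type": "application/vnd.openstack.identity-v3+json"}
              ]
          }
      ]
  }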

 
 On Thu, Feb 13, 2014 at 6:50 AM, Jamie Lennox jamielen...@redhat.com
 wrote:
 6. GET '/' is unrestricted. GET '/vX' is often token
 restricted.
 
 Keystone allows access to /v2.0 and /v3 but most services give
 a HTTP Unauthorized. This is a real problem for discovery
 because we need to be able to evaluate the endpoints in the
 service catalog. I think we need to make these unauthorized.
 
 
 I agree, however from a client discovery process point-of-view, you do
 not necessarily have an endpoint until after you auth and get a
 service catalog anyway.  For example, in the specific case of
 OpenStackClient Help command output, the commands listed may depend on
 the desired API version.  To get the endpoints to query for version
 support still requires a service catalog so nothing really changes
 there.

Yes, I had thought of that afterward, and we can send the token to the
endpoint. From a standards-document point of view, though, it'd be better if we
didn't have to.

 And this doesn't even touch on the SC endpoints that include things
 like tenant/project id...

Yuk. Yes, a problem with this is that entries in the service catalog
that have a project id in them don't currently present version
information at that endpoint. I'd be interested in fixing this but as
it's going to need to be backwards compatible with the current state
it's not really a priority. 

 Please have a look over the wiki page and how it addresses the
 above and fits into the existing schemes and reply with any
 comments or problems that you see. Is this going to mess with
 any pre-existing clients?
 
 
 * id: Let's either make this a real semantic version so we can parse
 and use the major.minor.patch components (and dump the 'v') or make it
 an identifier that matches the URL path component.  Right now 

Do we need a patch component on API versions? I was aiming for standardization
rather than improvement here, but I guess this is the right time to fix something
like this.

 * updated: I think it would be a friendly gesture to update this for
 unstable changes as the id is likely to not be updated mid-stream.
  During debugging I would want to be able to verify exactly which
 implementation I was talking to anyway.

So I was thinking the way to do that would be to define two endpoints, e.g.
a stable v3.2 and an unstable v3.3, that reference the same endpoint.

 
 There are two transitional things to also consider:
 
 
 * We have to produce a discovery mechanism that takes in to account
 the historical authentication URLs published by deployments that
 include a version.  ayoung's Ml thread last week discussed this a bit,
 we should document the approaches that we're testing and why they do
 or do not work.

Right and we need another talk about this, but let's leave this about
defining a standard that new projects should adhere to and slowly moving
the existing ones.  

 * There are real-world deployments that do not configure
 admin_endpoint and/or public_endpoint in keystone.conf.  Discovery is
 really useless if the URL you are given is
 https://localhost:5000/v2.0.  Do we need to talk about another
 horrible horrible hack to deal with these or are these deployments
 going to be left out in the cold?

Right, this is more infuriating with keystone, which uses admin_url, which
is defined as neither public nor internal. Which URL should be present
on that discovery page?

On the server side, to counter this, I very specifically put a clause in
the wiki page saying that discovery links can be relative, and I'll change
that now to SHOULD be relative. This way a link is just defined as
/v2.0 and you can use whatever host got you to that point. The only
problem I see here is what the relative self link from GET /v2.0 should be:
should it just be { "rel": "self", "href": "" }, or is href excluded or
None?
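
In other words (illustrative only), the question is whether the relative self link
should be rendered as:

  {"rel": "self", "href": ""}    # href present but empty
  {"rel": "self"}                # href omitted entirely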

The client side is going to be ugly again and we should talk but it's
outside scope for this. 

 dt
 
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]Multiple backends

2014-02-16 Thread iKhan
Hi All,

I'm just curious how manager.py chooses a backend while creating a
volume. I know the volume type is set, but where is this being processed?

I am sorry if this is a basic question, but I didn't get any help on the
#openstack-dev IRC channel, so I was left with no option but to post here.

-- 
Thanks,
IK
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Version Discovery Standardization

2014-02-16 Thread Jamie Lennox
On Thu, 2014-02-13 at 08:37 -0500, Sean Dague wrote:
 On 02/13/2014 07:50 AM, Jamie Lennox wrote:
  Hi all,
  
  I am one of i think a number of efforts trying to make clients be 
  interoperable between different versions of an API.
  
  What i would like to talk about specifically here are the inconsistencies 
  in the version listing of the different servers when you query the root GET 
  '/' and GET '/vX' address for versions. This is a badly formatted sampling 
  of the policies out there: http://paste.openstack.org/show/64770/
  
  This is my draft of a common solution 
  https://wiki.openstack.org/wiki/VersionDiscovery which has some changes for 
  everyone, but I at least hope can be a guide for new services and a target 
  for the existing
  
  There are a number of major inconsistencies that i hope to address:
  
  1. The 'status' of an API. 
  
  Keystone uses the word 'stable' to indicate a stable API, there are a 
  number of services using 'CURRENT' and i'm not sure what 'SUPPORTED' is 
  supposed to mean here. In general I think 'stable' makes the most sense and 
  in many ways keystone has to be the leader here as it is the first contact. 
  Any ideas how to convert existing APIs to this scheme? 
 
 From that link, only Keystone is different. Glance, Cinder, Neutron,
 Nova all use CURRENT. So while not ideal, I'm not sure why we'd change
 the rest of the world to match keystone, vs. changing keystone to match
 the rest of the world.
 
 Also realize changing version discovery itself is an API change, so my
 feeling is this should be done in the smallest number of places possible.

So firstly I absolutely agree that version discovery is an API change.
More than that, it is the only API that we cannot version, so we are stuck
with whatever we choose indefinitely.

The reason I suggested 'stable' is because keystone is the point of
contact here. Theoretically every other service could add a new route
(e.g. /info) which defined whatever scheme we choose, and have the service
catalog point to that, and there should be no difference in the
experience. Keystone is different in that it is always called and is
probably the only service whose client is even attempting discovery at the moment.

Having said that we have the opportunity thanks to point 3 to add new
endpoints without the 'values' key that can follow the other examples. 

What i would then like to know is what are the definitions and priority
of the 'CURRENT' scheme? 'EXPERIMENTAL' is somewhat obvious but what is
the difference between 'CURRENT' and 'SUPPORTED'? We need to define a
strict definition of what is allowed here, what values are considered
stable vs unstable, and what is the priority ordering. 

  2. HTTP Status
  
  Some services are 200, some 300. It also doesn't matter how many responses 
  there are in this status. 
 
 Ideally - 300 should be returned if there are multiple versions, and 200
 otherwise.

Strictly speaking probably. I thought the 300 made some sense as
requests to this page are sort of like a directory where you choose
where to go next, even if there is only one choice. Probably we should
just return 200 all the time as it was a successful operation and we
aren't redirecting automatically. 

  3. Keystone uses ['versions']['values']
  
  Yep. Not sure why that is. Sorry, we should be able to have a copy under 
  'values' and one in the root 'versions' simultaneously for a while and then 
  drop the 'values' in some future release. 
 
 Again, keystone seems to be the odd man out here.
 
  4. Glance does a version entry for each minor version. 
  
  Separate entries for v2.2, v2.1, v2.0. They all point to the same place so 
  IMO this is unnecessary. 
 
 Probably agreed, curious if any Glance folks know of a reason for it.
 
  5. Differences between entry in GET '/' and GET '/vX'
  
  There is often a lot more information in GET '/vX', like media-type, that is 
  not present in the root. I'm not sure if this was on purpose but I think it is 
  easier (and fewer lookups) to have this information consistent.
 
 Agreed, I expect it's historical following of nova that media-type is
 not in the root. I think it's fixable.
 
  6. GET '/' is unrestricted. GET '/vX' is often token restricted. 
  
  Keystone allows access to /v2.0 and /v3 but most services give a HTTP 
  Unauthorized. This is a real problem for discovery because we need to be 
  able to evaluate the endpoints in the service catalog. I think we need to 
  make these unauthorized.
 
 Agreed, however due to the way the wsgi stacks work in these projects,
 this might not be trivial. I'd set that as a goal to address.

Yes, this is something that will need to be supported for a long time
anyway. This guide is as much as anything a reference for new projects
for what the best behaviour is, so whilst we may not be able to address
it for existing projects, new projects should try to follow best
practice.

  Please have a look over the wiki page and how it addresses the above and 
  fits 

Re: [openstack-dev] [cinder]Multiple backends

2014-02-16 Thread Subramanian
I think you should look at the cinder scheduler code that selects a host
based on the volume type.
https://github.com/openstack/cinder/tree/master/cinder/scheduler.
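
As a minimal illustration of how a volume type ends up selecting a backend (the
names below are examples): each backend section in cinder.conf advertises a
volume_backend_name capability, and a volume type carries a matching extra spec
that the scheduler's capabilities filter compares against:

  # cinder.conf
  [DEFAULT]
  enabled_backends = lvm-1,nfs-1

  [lvm-1]
  volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
  volume_backend_name = LVM_iSCSI

  # create a type and bind it to that backend name
  cinder type-create gold
  cinder type-key gold set volume_backend_name=LVM_iSCSI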

BTW, cinder devs tend to hang out on the #openstack-cinder channel.

Thanks,
Subbu


On Mon, Feb 17, 2014 at 11:35 AM, iKhan ik.ibadk...@gmail.com wrote:

 Hi All,

 I'm just curious on how the manager.py is choosing backend while creating
 volume, I know volume type is set but where is this being processed?

 I am sorry if this is a basic question, but didn't got any help from
 #openstack-dev IRC channel so was left without option to post here.

 --
 Thanks,
 IK

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] pci device hotplug

2014-02-16 Thread Gouzongmei
Hello,



In the current PCI passthrough implementation, a PCI device is only allowed to be 
assigned to an instance while the instance is being created; it is not allowed to 
be assigned to or removed from the instance while the instance is running or 
stopped.

Besides, I noticed that the basic ability to remove a PCI device from an 
instance (other than by deleting the flavor) has never been implemented or proposed by 
anyone.

The current implementation:

https://wiki.openstack.org/wiki/Pci_passthrough


I have tested NIC hotplug in my experimental environment; it is supported by 
the latest libvirt and qemu.
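
For example, this kind of hot-attach can be exercised with libvirt alone (the PCI 
address and domain name below are placeholders, not a recommended configuration): 

  # pci-dev.xml
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </source>
  </hostdev>

  virsh attach-device my-instance pci-dev.xml
  virsh detach-device my-instance pci-dev.xml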



My question is: why has PCI device hotplug not been proposed in OpenStack until 
now, and is anyone planning to work on PCI device hotplug?



Thanks,

gouzongmei

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-swiftclient] Python swiftclient should switch to using bufferedhttp.

2014-02-16 Thread Amala Basha Alungal
Hi,

I'm looking for a few reviews on the use-bufferedhttp-in-swiftclient
(https://blueprints.launchpad.net/python-swiftclient/+spec/use-bufferedhttp-in-swiftclient)
and use-expect-100-continue-header-while-uploading
(https://blueprints.launchpad.net/python-swiftclient/+spec/should-use-100-continue-header)
blueprints.

These blueprints will address many of the auth related failures during
upload.


-- 
Thanks And Regards
Amala Basha
+91-7760972008
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Policy questions

2014-02-16 Thread Sumit Naiksatam
Hi, Apologies for chiming in late on this. Yes, we have been
incubating the service insertion and chaining features [2] for some
time now. The plan was to have a FW-VPN chain working by Icehouse
release. Towards that end the first step was to introduce the notion
of a service insertion context which forms the foundation for the
service chain resource. The following patch aims to address the
service insertion context:
https://review.openstack.org/#/c/62599/

and we hope to get the above merged into Icehouse. However, it does
not seem like we will be able to land the service chaining in
Icehouse. That said we are hoping to introduce WIP patches soon that
will implement the ideas so far discussed.

We had regular IRC meetings earlier on the topic of advanced
services, but we suspended those temporarily so as not to distract
from the Neutron stabilization and parity work underway in Icehouse.
We hope to get back to those meetings once the critical bugs and
features slated for Icehouse are out of the way.

Per your question in the context of the Group Policy work, these two
features are indeed complementary. As pointed out in this thread, one
of the options for rendering the Groups Policies is on top of
elemental Neutron abstractions as service chains expressed in [2].
Also as pointed out in another email thread, we will specifically
touch on this topic in the upcoming Group Policy meetings.

Thanks,
~Sumit.



On Wed, Feb 12, 2014 at 10:21 AM, Stephen Wong s3w...@midokura.com wrote:
 Hi Carlos,


 On Wed, Feb 12, 2014 at 9:37 AM, Carlos Gonçalves m...@cgoncalves.pt
 wrote:

 Hi,

 I've a couple of questions regarding the ongoing work on Neutron Group
 Policy proposed in [1].

 1. One of the described actions is redirection to a service chain. How do
 you see BPs [2] and [3] addressing service chaining? Will this BP implement
 its own service chaining mechanism enforcing traffic steering or will it
 make use of, and thus depending on, those BPs?


 We plan to support both specifying Neutron native service chain
 (reference [2] from your email below) as the object to 'redirect' traffic to
 as well as actually setting an ordered chain of services specified directly
 via the 'redirect' list. In the latter case we would need the plugins to
 perform traffic steering across these services.
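
Purely as an illustration of the two options described above (none of these
attribute names exist in any API yet; they are hypothetical), a 'redirect'
action could look roughly like this:

    # Hypothetical payloads only -- the Group Policy API is still being defined.

    # Option 1: point the action at an existing Neutron service chain
    redirect_to_chain = {
        'type': 'redirect',
        'service_chain_id': 'CHAIN-UUID',  # placeholder
    }

    # Option 2: give an ordered list of service instances; the plugin would
    # have to steer traffic across them in this order
    redirect_to_services = {
        'type': 'redirect',
        'services': ['FIREWALL-UUID', 'ADC-UUID'],  # placeholders, applied in order
    }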


 2. In the second use case presented in the BP document, Tiered application
 with service insertion/chaining, do you consider that the two firewall
 entities can represent the same firewall instance or two running and
 independent instances? In case it's a shared instance, how would it support
 multiple chains? That is, HTTP(S) traffic from the Inet group would be
 redirected to the firewall and then pass through the ADC; traffic from the App
 group with destination DB group would also be redirected to the very same
 firewall instance, although to a different destination group as the chain
 differs.


 We certainly do not restrict users from setting the same firewall
 instance on two different 'redirect' lists - but at this point, since the
 group-policy project has no plan to perform actual configurations for the
 services, it is the users' responsibility to set the rules
 correctly on the firewall instance such that the correct firewall rules will
 be applied for traffic from group A -> B as well as from group C -> D.

 - Stephen



 Thanks.

 Cheers,
 Carlos Gonçalves

 [1]
 https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
 [2]
 https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
 [3]
 https://blueprints.launchpad.net/neutron/+spec/nfv-and-network-service-chain-implementation


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Using network services with network policies

2014-02-16 Thread Sumit Naiksatam
Thanks Mohammad for bringing this up. I responded in another thread:
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027306.html

~Sumit.

On Sun, Feb 16, 2014 at 7:27 AM, Mohammad Banikazemi m...@us.ibm.com wrote:
 During the last IRC call we started talking about network services and how
 they can be integrated into the Group Policy framework.

 In particular, with the redirect action we need to think about how we can
 specify the network services we want to redirect the traffic to/from. There
 has been substantial work in the area of service chaining and service
 insertion, and at the last summit advanced services in VMs were discussed.
 I think the first step for us is to find out the status of those efforts and
 then see how we can use them. Here are a few questions that come to mind.
 1- What is the status of service chaining, service insertion and advanced
 services work?
 2- How could we use a service chain? Would simply referring to it in the
 action be enough? Are there considerations wrt creating a service chain
 and/or a service VM for use with the Group Policy framework that need to be
 taken into account?

 Let's start the discussion on the ML before taking it to the next call.

 Thanks,

 Mohammad

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] L7 data types

2014-02-16 Thread Oleg Bondarev
Hi,

I would add another candidate for being a closed set:
L7VipPolicyAssociation.action (use_backend, block, etc.)

Thanks,
Oleg
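
If these do end up as closed sets, here is a sketch of what the constants
might look like (the names are only suggestions; the values are the ones
proposed in this thread):

    # Illustrative constants only -- not the final model definitions.

    L7_RULE_TYPES = ('URL', 'HEADER', 'BODY')

    L7_RULE_COMPARE_TYPES = ('REG_EXP', 'EQ', 'GT', 'LT', 'EQ_IGNORE_CASE')

    # Candidate closed set for L7VipPolicyAssociation.action
    L7_POLICY_ACTIONS = ('use_backend', 'block')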


On Sun, Feb 16, 2014 at 3:53 PM, Avishay Balderman avish...@radware.com wrote:

  (removing extra space from the subject - let email clients apply their
 filters)



 *From:* Avishay Balderman
 *Sent:* Sunday, February 16, 2014 9:56 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [Neutron][LBaaS] L7 data types



 Hi

 There are 2 fields in the L7 model that are candidates for being a closed
 set (Enum).

 I would like to hear your opinion.



 Entity:  L7Rule

 Field : type

 Description:  this field holds the part of the request where we should
 look for a value

 Possible values: URL,HEADER,BODY,(?)



 Entity:  L7Rule

 Field : compare_type

 Description: The way we compare the value against a given value

 Possible values: REG_EXP, EQ, GT, LT,EQ_IGNORE_CASE,(?)

 *Note*: With REG_EXP we can cover the rest of the values.



 In general In the L7rule one can express the following (Example):

 check if in the value of header named 'Jack'  starts with X - if this is
 true - this rule returns true





 Thanks



 Avishay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Trove-Gate timeouts

2014-02-16 Thread Denis Makogon
Hi, Craig.

Yes, I thought about the configuration test suites.
For now the core team should perhaps extend the gate running time.
But for the tempest tests I would suggest excluding some tests (the longest
ones) from the 'gate' group.
We need to deal with it ASAP, because the gate has been failing for four or five days.

Best regards
Denis Makogon.

Sent from an iPad


On Mon, Feb 17, 2014 at 6:33 AM, Craig Vyvial cp16...@gmail.com wrote:

 Trovesters,

 One reason for the longer running test was that for the configuration
 groups I added the creation of a new instance. This is to test that a new instance
 will be created with a configuration group applied. This might be causing
 the run to be a little longer, but I am surprised that it's still taking over an
 hour to run through everything.

 -Craig Vyvial


 On Sun, Feb 16, 2014 at 12:25 AM, Mirantis dmako...@mirantis.com wrote:

 Hello, Mathew.

 I'm seeing the same issues with the gate.
 I also tried to find out why the gate job is failing. First I ran into an issue
 related to a cinder installation failure in devstack. But then I found the same
 problem as you described. The best option is to increase the job time limit.
 Thanks for such research. I hope the gate will be fixed in the easiest way
 and for the shortest period of time.

 Best regards
 Denis Makogon.
 Sent from an iPad

 On 16 Feb 2014, at 00:46, Lowery, Mathew mlow...@ebay.com wrote:

  Hi all,

  *Issue #1: Jobs that need more than one hour*

  Of the last 30 Trove-Gate builds
 (https://rdjenkins.dyndns.org/job/Trove-Gate/builds), spanning three days,
 7 have failed due to a Jenkins job-level timeout (not a proboscis timeout).
 These jobs had no failed tests when the timeout occurred.

  Not having access to the job config to see what the job looks like, I
 used the console output to guess what was going on. It appears that a
 Jenkins plugin named boot-hpcloud-vm
 (https://github.com/mrhoades/boot-hpcloud-vm/blob/2272770b0ce54752eabb84229dc8939d79b2be50/models/boot_vm_concurrent.rb#L181)
 is booting a VM and running the commands given, including redstack int-tests.
 From the console output, it states that it was supplied with an
 ssh_shell_timeout=7200. This is passed down to another library called
 net-ssh-simple
 (https://github.com/busyloop/net-ssh-simple/blob/e3834f259a47606bfb06a487ca701fc20dbad8a5/lib/net/ssh/simple.rb#L632).
 net-ssh-simple has two timeouts: an idle timeout and an operation timeout.

  In the latest boot-hpcloud-vm
 (https://github.com/mrhoades/boot-hpcloud-vm/blob/2272770b0ce54752eabb84229dc8939d79b2be50/models/boot_vm_concurrent.rb#L182),
 ssh_shell_timeout is passed down to net-ssh-simple for both the idle
 timeout and the operation timeout. But in older versions of boot-hpcloud-vm
 (https://github.com/mrhoades/boot-hpcloud-vm/blob/9260e957d6c54142c33dd9e9632b86e17fd5c02f/models/boot_vm_concurrent.rb#L141),
 ssh_shell_timeout is passed down to net-ssh-simple for only the idle
 timeout, leaving a default operation timeout of 3600. This is why I believe
 these jobs are failing after exactly one hour.

  FYI: Here are the jobs that failed due to the Jenkins job-level timeout
 (and had no test failures when the timeout occurred) along with their
 associated patch sets:
 https://rdjenkins.dyndns.org/job/Trove-Gate/2532/console (
 http://review.openstack.org/73786)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2530/console (
 http://review.openstack.org/73736)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2517/console (
 http://review.openstack.org/63789)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2514/console (
 https://review.openstack.org/50944)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2513/console (
 https://review.openstack.org/50944)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2504/console (
 https://review.openstack.org/73147)
 https://rdjenkins.dyndns.org/job/Trove-Gate/2503/console (
 https://review.openstack.org/73147)

   *Suggested action items:*

- If it is acceptable to have jobs that run over one hour, then
install the latest boot-hpcloud-vm plugin for Jenkins, which will
make the operation timeout match the idle timeout.


  *Issue #2: The running time of all jobs is 1 hr 1 min*

  While the Jenkins job-level timeout will end the job after one hour, it
 also appears to keep every job running for a minimum of one hour.  To be
 more precise, the timeout (or minimum running time) occurs on the part of
 the Jenkins job that runs commands on the VM; the VM provision (which takes
 about one minute) is excluded from this timeout, which is why the running
 time of all jobs is around 1 hr 1 min
 (https://rdjenkins.dyndns.org/job/Trove-Gate/buildTimeTrend).
 A sampling of console logs showing the time the int-tests completed and
 when the timeout kicks in:

  https://rdjenkins.dyndns.org/job/Trove-Gate/2531/console (00:01:03
 wasted)

 *04:51:12* COMMAND_0: echo refs/changes/36/73736/2

 ...

 *05:50:10* 335.41 proboscis.case.MethodTest (test_instance_created)
 *05:50:10* 194.05 

Re: [openstack-dev] [Murano] Need a new DSL for Murano

2014-02-16 Thread Renat Akhmerov
Clint, 

We're collaborating with Murano. We may need to do it in a way that others 
could see it though. There are several things here:
- Murano doesn’t really have a “workflow engine” similar to Mistral’s. People get 
  confused by that, but it’s just legacy terminology; I think the Murano folks 
  were going to rename this component to be more precise about it.
- Mistral DSL doesn’t seem to be a good option for solving the tasks that Murano is 
  intended to solve. Specifically I mean things like complex object composition, 
  description of data types, contracts and so on. Like Alex and Stan mentioned, 
  Murano DSL tends to grow into a full programming language.
- Most likely Mistral will be used in Murano for the implementation, at least we see 
  where it would be valuable. But Mistral is not so mature yet; we need to keep 
  working hard and be patient :)

Anyway, we keep thinking about how to make both languages look similar, or at least 
about the possibility to use them seamlessly if needed (calling Mistral workflows from 
Murano DSL or vice versa).

Renat Akhmerov
@ Mirantis Inc.

On 16 Feb 2014, at 05:48, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Alexander Tivelkov's message of 2014-02-14 18:17:10 -0800:
 Hi folks,
 
 Murano matures, and we are getting more and more feedback from our early
 adopters. The overall reception is very positive, but at the same time
 there are some complaints as well. By now the most significant complaint is
 that it is hard to write workflows for application deployment and maintenance.
 
 The current version of the workflow definition markup really has some design
 drawbacks which limit its potential adoption. They are caused by the fact
 that it was never intended for Application Catalog use cases.
 
 
 Just curious, is there any reason you're not collaborating on Mistral
 for this rather than both having a workflow engine?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Version Discovery Standardization

2014-02-16 Thread Jamie Lennox
On Mon, 2014-02-17 at 16:09 +1000, Jamie Lennox wrote:
 On Thu, 2014-02-13 at 08:37 -0500, Sean Dague wrote:
  On 02/13/2014 07:50 AM, Jamie Lennox wrote:
   Hi all,
   
   I am part of, I think, a number of efforts trying to make clients 
   interoperable between different versions of an API.
   
   What I would like to talk about specifically here are the inconsistencies 
   in the version listing of the different servers when you query the root 
   GET '/' and GET '/vX' address for versions. This is a badly formatted 
   sampling of the policies out there: http://paste.openstack.org/show/64770/
   
   This is my draft of a common solution
   (https://wiki.openstack.org/wiki/VersionDiscovery) which has some changes 
   for everyone, but which I hope can at least be a guide for new services and a 
   target for the existing
   
   There are a number of major inconsistencies that i hope to address:
   
   1. The 'status' of an API. 
   
   Keystone uses the word 'stable' to indicate a stable API; there are a 
   number of services using 'CURRENT', and I'm not sure what 'SUPPORTED' is 
   supposed to mean here. In general I think 'stable' makes the most sense, 
   and in many ways keystone has to be the leader here as it is the first 
   contact. Any ideas how to convert existing APIs to this scheme? 
  
  From that link, only Keystone is different. Glance, Cinder, Neutron,
  Nova all use CURRENT. So while not ideal, I'm not sure why we'd change
  the rest of the world to match keystone, vs. changing keystone to match
  the rest of the world.
  
  Also realize changing version discovery itself is an API change, so my
  feeling is this should be done in the smallest number of places possible.
 
 So firstly I absolutely agree that version discovery is an API change.
 More than that, it is the only API that we cannot version, so we are stuck
 with whatever we choose indefinitely. 
 
 The reason I suggested 'stable' is because keystone is the point of
 contact here. Theoretically every other service could add a new route
 (e.g. /info) which defines whatever scheme we choose, and have the service
 catalog point to that and there should be no difference in the
 experience. Keystone is different in that it is always called and
 probably the only client even attempting to do discovery at the moment.
 
 Having said that we have the opportunity thanks to point 3 to add new
 endpoints without the 'values' key that can follow the other examples. 
 
 What i would then like to know is what are the definitions and priority
 of the 'CURRENT' scheme? 'EXPERIMENTAL' is somewhat obvious but what is
 the difference between 'CURRENT' and 'SUPPORTED'? We need to define a
 strict definition of what is allowed here, what values are considered
 stable vs unstable, and what is the priority ordering. 
 
   2. HTTP Status
   
   Some services are 200, some 300. It also doesn't matter how many 
   responses there are in this status. 
  
  Ideally - 300 should be returned if there are multiple versions, and 200
  otherwise.
 
 Strictly speaking probably. I thought the 300 made some sense as
 requests to this page are sort of like a directory where you choose
 where to go next, even if there is only one choice. Probably we should
 just return 200 all the time as it was a successful operation and we
 aren't redirecting automatically. 
 
   3. Keystone uses ['versions']['values']
   
   Yep. Not sure why that is. Sorry, we should be able to have a copy under 
   'values' and one in the root 'versions' simultaneously for a while and 
   then drop the 'values' in some future release. 
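
For illustration only (simplified responses, fields trimmed and hrefs are
placeholders), the structural difference being discussed is that keystone
wraps the version list in an extra 'values' key while the other services
return the list directly under 'versions':

    # Keystone today
    keystone_style = {
        'versions': {
            'values': [
                {'id': 'v3.0', 'status': 'stable',
                 'links': [{'rel': 'self', 'href': 'http://example.com/v3/'}]},
            ]
        }
    }

    # Nova, Glance, Cinder, Neutron
    common_style = {
        'versions': [
            {'id': 'v2.0', 'status': 'CURRENT',
             'links': [{'rel': 'self', 'href': 'http://example.com/v2/'}]},
        ]
    }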
  
  Again, keystone seems to be the odd man out here.
  
   4. Glance does a version entry for each minor version. 
   
   Separate entries for v2.2, v2.1, v2.0. They all point to the same place 
   so IMO this is unnecessary. 
  
  Probably agreed, curious if any Glance folks know of a reason for it.
  
   5. Differences between entry in GET '/' and GET '/vX'
   
   There is often a lot more information in GET '/vX', like media-type, that 
   is not present in the root. I'm not sure if this was on purpose, but I 
   think it is easier (and fewer lookups) to have this information consistent.
  
  Agreed, I expect it's historical following of nova that media-type is
  not in the root. I think it's fixable.
  
   6. GET '/' is unrestricted. GET '/vX' is often token restricted. 
   
   Keystone allows access to /v2.0 and /v3 but most services give an HTTP 
   Unauthorized. This is a real problem for discovery because we need to be 
   able to evaluate the endpoints in the service catalog. I think we need to 
   make these unrestricted.
  
  Agreed, however due to the way the wsgi stacks work in these projects,
  this might not be trivial. I'd set that as a goal to address.
 
 Yes, this is something that will need to be supported for a long time
 anyway. This guide is as much as anything a reference for new projects
 for what the best behaviour is so whilst we may not be able to 

[openstack-dev] [Mistral] Community meeting reminder - 02/17/2014

2014-02-16 Thread Renat Akhmerov
Hi,

This is a reminder that we’ll have an IRC community meeting today at 
#openstack-meeting at 16.00 UTC.

Here’s the agenda:
- Review action items
- Discuss current status
- Continue DSL discussion
- Open discussion (roadblocks, suggestions, etc.)

Please let us know if you have other topics to discuss. Thanks!

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev