[openstack-dev] [nova] Nova and LVM thin support

2014-04-20 Thread Cristian Tomoiaga
Hello everyone,

Before going any further with my implementation I would like to ask the
community about the LVM thin support in Nova (not Cinder).
The current implementation of the LVM backend does not support thin LVs.
Does anyone believe it is a good idea to add support for this in Nova? (I
plan on adding support in my implementation anyway.)
I would also like to know where Red Hat stands on this, since they are
primarily working on LVM.
I've seen that LVM thin will be supported in RHEL 7 (?), so we may consider
the thin target stable enough for production in Juno (Cinder has already had
support for this since last year).

I know there was ongoing work to bring a common storage library
implementation to oslo or nova directly (Cinder's Brick library) but I
heard nothing new for some time now. Maybe John Griffith has some thoughts
on this.

The reasons why support for LVM thin would be a nice addition should be
well known especially to people working with LVM.

Another question is related to how Nova treats snapshots when LVM is used
as a backend (I hope I didn't miss anything in the code):
Right now, if we can't do a live snapshot, the instance state (memory) is
saved (libvirt virDomainManagedSave) and qemu-img is used to back up
the instance disk(s). After that we resume the instance.
Can we insert code to snapshot the instance disk, so that we keep the
instance offline only for the memory dump and copy the disk content from the
snapshot created?
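A rough sketch of that reduced-downtime flow (this is not Nova code; the helper names, snapshot size, and paths are all illustrative assumptions):

```python
# A rough sketch (not Nova code) of the reduced-downtime flow suggested above;
# the helper names, snapshot size, and paths are illustrative assumptions.

def lvm_snapshot_cmd(vg, lv, snap, size="1g"):
    """Build the lvcreate command for a copy-on-write snapshot of an LV."""
    return ["lvcreate", "-s", "-L", size, "-n", snap, "/dev/%s/%s" % (vg, lv)]

def snapshot_instance_disk(dom, vg, lv, run):
    """Pause only for the memory dump; copy the disk from an LVM snapshot.

    dom: object with managed_save()/resume() (stand-in for a libvirt domain)
    run: callable that executes a command list (e.g. subprocess.check_call)
    """
    dom.managed_save()                       # instance offline: memory dump
    run(lvm_snapshot_cmd(vg, lv, "snap0"))   # near-instant COW snapshot
    dom.resume()                             # instance is back online already
    # The slow copy now reads the frozen snapshot, not the live LV.
    run(["qemu-img", "convert", "-O", "qcow2",
         "/dev/%s/snap0" % vg, "/tmp/backup.qcow2"])
    run(["lvremove", "-f", "/dev/%s/snap0" % vg])
```

The point is just the ordering: the instance is resumed right after the cheap snapshot, before the long-running disk copy starts.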

-- 
Regards,
Cristian Tomoiaga
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][Cinder][Heat]Should we support tags for os resources?

2014-04-20 Thread Jay Pipes
On Sun, 2014-04-20 at 08:35 +, Huangtianhua wrote:
 Hi all: 
 
 Currently, the EC2 API of OpenStack only has tags support (metadata)
 for instances, and there is already a blueprint about adding tag support
 for volumes and volume snapshots using metadata.
 
 AWS supports adding tags to many other resources, such as
 image/subnet/securityGroup/networkInterface(port).
 
 I think we should support tags for these resources too. These resources
 may have no metadata property, so we should add metadata to support
 resource tags, and change the related APIs.

Hi Tianhua,

In OpenStack, generally, the choice was made to use maps of key/value
pairs instead of lists of strings (tags) to annotate objects exposed in
the REST APIs. OpenStack REST APIs inconsistently call these maps of
key/value pairs:

 * properties (Glance and Cinder -- for images and volumes, respectively)
 * extra_specs (Nova InstanceType)
 * metadata (Nova Instance, Aggregate and InstanceGroup, Neutron)
 * metadetails (Nova Aggregate and InstanceGroup)
 * system_metadata (Nova Instance -- differs from normal metadata in
that the key/value pairs are 'owned' by Nova, not a user...) 

Personally, I think tags are a cleaner way of annotating objects when
the annotation is coming from a normal user. Tags represent by far the
most common way for REST APIs to enable user-facing annotation of
objects in a way that is easy to search on. I'd love to see support for
tags added to any searchable/queryable object in all of the OpenStack
APIs.

I'd also like to see cleanup of the aforementioned inconsistencies in
how maps of key/value pairs are both implemented and named throughout
the OpenStack APIs. Specifically, I'd like to see this implemented in
the next major version of the Compute API:

 * Removal of the metadetails term
 * All key/value pairs can only be changed by users with elevated
privileges, i.e. they are system-controlled (normal users should use tags)
 * Call all these key/value pair combinations properties --
technically, metadata is data about data, like the size of an
integer. These key/value pairs are just data, not data about data.
 * Identify key/value pairs that all of Nova relies on to be a
specific key and value combination, and make these actual, real
attributes on some object model -- since that is a much stronger guard
for the schema of an object, and it enables greater performance by allowing
type safety of the underlying data and removing the need to search
by both a key and a value.
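As a toy illustration of the tags-vs-properties split (none of these field names come from an actual OpenStack API), tags are a flat, searchable set while properties remain system-owned key/value pairs:

```python
# Illustrative only: a toy model of tags vs. key/value properties; none of
# these field names come from an actual OpenStack API.

servers = [
    {"name": "web1", "tags": {"prod", "frontend"},
     "properties": {"owner": "nova"}},   # system-controlled key/values
    {"name": "db1", "tags": {"prod", "database"},
     "properties": {"owner": "nova"}},
]

def by_tag(objs, tag):
    """Tag search is a plain membership test, easy to index and query."""
    return [o["name"] for o in objs if tag in o["tags"]]
```

A query like `by_tag(servers, "prod")` needs no knowledge of key names, which is what makes tags the natural user-facing annotation.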

Best,
-jay





Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-20 Thread Carlos Garza

On Apr 18, 2014, at 11:06 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:

Hi y'all!

Carlos: When I say 'client cert' I'm talking about the certificate / key 
combination the load balancer will be using to initiate the SSL connection to 
the back-end server. The implication here is that if the back-end server 
doesn't like the client cert, it will reject the connection (as being not from 
a trusted source). By 'CA cert' I'm talking about the certificate (sans key) 
that the load balancer will be using to authenticate the back-end server. If 
the back-end server's server certificate isn't signed by the CA, then the 
load balancer should reject the connection.

I see no problem with server auth as well as client auth making its way 
into the API.



Of course, the use of a client cert or CA cert on the load balancer should be 
optional: As Clint pointed out, for some users, just using SSL without doing 
any particular authentication (on either the part of the load balancer or 
back-end) is going to be good enough.

It should be optional for API implementers to support it or not. This is 
an advanced feature which would lock out many vendors if they can't support it.


Anyway, the case for supporting re-encryption on the load-balancers has been 
solidly made, and the API proposal we're making will reflect this capability. 
Next question:

When specific client certs / CAs are used for re-encryption, should these be 
associated with the pool or member?

I could see an argument for either case:

Pool (ie. one client cert / CA cert will be used for all members in a pool):
* Consistency of back-end nodes within a pool is probably both extremely 
common, and a best practice. It's likely all will be accessed the same way.
* Less flexible than certs associated with members, but also less complicated 
config.
* For CA certs, assumes user knows how to manage their own PKI using a CA.

Member (ie. load balancer will potentially use a different client cert / CA 
cert for each member individually):
* Customers will sometimes run with inconsistent back-end nodes (eg. local 
nodes in a pool treated differently than remote nodes in a pool).
* More flexible than certs associated with the pool, but more complicated 
configuration.
* If back-end certs are all individually self-signed (ie. no single CA used for 
all nodes), then certs must be associated with members.
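For illustration only, the two options need not be exclusive: a member-level cert could override a pool-wide default. The field names here are made up, not part of any proposed LBaaS API:

```python
# For illustration only: hypothetical config shapes for the two options;
# these field names are made up, not part of any proposed LBaaS API.

pool = {
    "client_cert": "pool-client.pem",   # option 1: one cert for every member
    "ca_cert": "pool-ca.pem",
    "members": [
        {"address": "10.0.0.5"},                                # inherits pool certs
        {"address": "203.0.113.7", "ca_cert": "remote-ca.pem"}, # per-member override
    ],
}

def certs_for(pool, member):
    """Member-level certs, if present, override the pool-wide defaults."""
    return {
        "client_cert": member.get("client_cert", pool.get("client_cert")),
        "ca_cert": member.get("ca_cert", pool.get("ca_cert")),
    }
```

A middle ground like this keeps the common case (consistent pools) simple while still allowing inconsistently-signed back ends.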


I'm not invested in an argument this far in.


What are people seeing in the wild? Are your users using 
inconsistently-signed or per-node self-signed certs in a single pool?
Thanks,
Stephen





On Fri, Apr 18, 2014 at 5:56 PM, Carlos Garza 
carlos.ga...@rackspace.commailto:carlos.ga...@rackspace.com wrote:

On Apr 18, 2014, at 12:36 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:

Dang.  I was hoping this wasn't the case.  (I personally think it's a little 
silly not to trust your service provider to secure a network when they have 
root access to all the machines powering your cloud... but I digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it 
consumes a lot more CPU on the load balancers, but because now we potentially 
have to manage client certificates and CA certificates (for authenticating from 
the proxy to back-end app servers). And we also have to decide whether we allow 
the proxy to use a different client cert / CA per pool, or per member.

   If you choose to support re-encryption on your service then you are free to 
charge for the extra CPU cycles. I'm not convinced re-encryption and SSL 
termination in general need to be mandatory, but I think the API should allow 
them to be specified.

Yes, I realize one could potentially use no client cert or CA (ie. encryption 
but no auth)...  but that actually provides almost no extra security over the 
unencrypted case:  If you can sniff the traffic between proxy and back-end 
server, it's not much more of a stretch to assume you can figure out how to be 
a man-in-the-middle.

Yes, but considering you have no problem advocating pure SSL termination for 
your customers (decryption on the front end and plain text on the back end), I'm 
actually surprised this disturbs you. I would recommend users use straight SSL 
passthrough or re-encryption, but I wouldn't force this on them should they 
choose naked encryption with no checking.


Do any of you have a use case where some back-end members require SSL 
authentication from the proxy and some don't? (Again, deciding whether client 
cert / CA usage should attach to a pool or to a member.)

When you say client cert, are you referring to the end user's X509 certificate 
(to be rejected by the back-end server), or are you referring to the back-end 
server's X509 certificate, which the load balancer would reject if it discovered 
the back-end server had a bad signature or mismatched key? I am speaking of the 
case where the user wants re-encryption but wants to be able 

[openstack-dev] Unable to get anything to pass Jenkins on stable/havana due to Neutron bugs

2014-04-20 Thread Matt Riedemann
There seems to be a perfect storm of Neutron-related bugs that are 
blocking anything from passing the check queue in stable/havana.  Is 
anyone from the Neutron team looking at these?


Here's what I seem to be hitting consistently:

https://bugs.launchpad.net/tempest/+bug/1251448

^ marked as won't fix :(

https://bugs.launchpad.net/tempest/+bug/1253896

^ marked as won't fix (for havana) :(

https://bugs.launchpad.net/swift/+bug/1224001

^ marked as a duplicate of bug 1253896 (above) which is marked as won't 
fix for havana :(


https://bugs.launchpad.net/neutron/+bug/1283522

^ There are 5 patches on master related to that bug:

https://review.openstack.org/#/c/78880/
https://review.openstack.org/#/c/80413/
https://review.openstack.org/#/c/80688/
https://review.openstack.org/#/c/81196/
https://review.openstack.org/#/c/81877/

There is only one backport to stable/havana for that bug:

https://review.openstack.org/#/c/84586/

Do the other 4 patches need to be backported as well?

As for the won't fix bugs 1251448 and 1253896 should we exclude those 
tests in tempest stable/havana?  If we're not going to fix them why are 
we living with the failures in CI?


--

Thanks,

Matt Riedemann




[openstack-dev] [Tripleo] Reviews wanted for new TripleO elements

2014-04-20 Thread Macdonald-Wallace, Matthew
Hi all,

Can I please ask for some reviews on the following:

https://review.openstack.org/#/c/87226/ - Install checkmk_agent
https://review.openstack.org/#/c/87223/ - Install icinga cgi interface

I already have a couple of +1s and Jenkins is happy; all I need is +2 and +A! :)

Thanks,

Matt



Re: [openstack-dev] Unable to get anything to pass Jenkins on stable/havana due to Neutron bugs

2014-04-20 Thread Matt Riedemann



On 4/20/2014 2:43 PM, Matt Riedemann wrote:

There seems to be a perfect storm of Neutron-related bugs that are
blocking anything from passing the check queue in stable/havana.  Is
anyone from the Neutron team looking at these?

Here's what I seem to be hitting consistently:

https://bugs.launchpad.net/tempest/+bug/1251448

^ marked as won't fix :(

https://bugs.launchpad.net/tempest/+bug/1253896

^ marked as won't fix (for havana) :(

https://bugs.launchpad.net/swift/+bug/1224001

^ marked as a duplicate of bug 1253896 (above) which is marked as won't
fix for havana :(

https://bugs.launchpad.net/neutron/+bug/1283522

^ There are 5 patches on master related to that bug:

https://review.openstack.org/#/c/78880/
https://review.openstack.org/#/c/80413/
https://review.openstack.org/#/c/80688/
https://review.openstack.org/#/c/81196/
https://review.openstack.org/#/c/81877/

There is only one backport to stable/havana for that bug:

https://review.openstack.org/#/c/84586/

Do the other 4 patches need to be backported as well?

As for the won't fix bugs 1251448 and 1253896 should we exclude those
tests in tempest stable/havana?  If we're not going to fix them why are
we living with the failures in CI?



So the sky isn't falling: it's only if you are trying to backport anything to 
stable/havana for Tempest that you hit this bug:


https://bugs.launchpad.net/tempest/+bug/1310368

The neutron failures I was looking at are in non-voting jobs, probably 
because of the issues already pointed out.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

2014-04-20 Thread Adam Young

On 04/18/2014 11:21 AM, Stephen Balukoff wrote:

Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes 
sense to re-encrypt traffic traffic destined for members of a back-end 
pool?  SSL termination on the load balancer makes sense to me, but I'm 
having trouble understanding why one would be concerned about then 
re-encrypting the traffic headed toward a back-end app server. (Why 
not just use straight TCP load balancing in this case, and save the 
CPU cycles on the load balancer?)


Look at it this way.  SSL to the Endpoint protects you on the public 
internet.  That means that at each of the hops from you to the 
Datacenter, no one can read your traffic.



So, if you are at the local coffee shop, working on your Neutron setup, 
no one can see more than the URLs that you are using.  From there, it 
goes to the shop's ISP, through a couple of hops, and then ends up at 
your datacenter.  From the ISP to the datacenter, while it is good to be 
secure, the likelihood of random attack is low: these are relatively 
secured links, run by companies that have an economic incentive not to 
hack your traffic.  Don't get me wrong, there is a real possibility for 
attack, but that is not your big risk.



So, now you are at your datacenter, and you want to talk to Neutron's API 
server.  You hit the SSL termination, and your traffic is decrypted, 
and sent, in the clear, with your userid and password, to Keystone to 
get a token.


Same as everyone else talking to that keystone server.

Same as everyone else talking to every public server in this data center.

So what, you think no one has the ability to run custom code?

Um, this is OpenStack.  Random VMs just teeming with all sorts of code -- 
malicious, friendly, intentional, whatever -- are being run all over the 
place.


So what is protecting your unsecured socket connection from all of this 
code?  Neutron.  Specifically, making sure that no one has messed up 
Neutron connectivity, and that the route from the SSL terminator to the 
Neutron API server stays locked up, so none of those nasty VMs can grab 
and sniff it.  Oh sure... it's never gonna happen, right?


Look at it like swimming in a public pool.  There, the number of 
swimmers would be limited by the size of the pool, fire regulations, and 
physical access.  This is the Virtual world.  There are hundreds if not 
thousands of people swimming in this pool.  I'll stop the biological 
analogy because some people reading this might be eating.


SSL.  Everywhere.



We terminate a lot of SSL connections on our load balancers, but have 
yet to have a customer use this kind of functionality.  (We've had a 
few ask about it, usually because they didn't understand what a load 
balancer is supposed to do-- and with a bit of explanation they went 
either with SSL termination on the load balancer + clear text on the 
back-end, or just straight TCP load balancing.)


Thanks,
Stephen


--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807




Re: [openstack-dev] [Openstack] About advice for Contribution

2014-04-20 Thread JunJie Nan
I suggest you start with the Heat project, the orchestration component. One
benefit is that you can gain an overall understanding of how the components
work together at the API level without struggling with implementation
details. Another reason is that Heat is a little younger compared with Nova;
there is much more work that needs to be done, and it's easier to get
started. Once you have an overview of the OpenStack projects, you can
pick your favorite components and dive in.
On Apr 16, 2014, at 3:29 PM, Mayur Patil ram.nath241...@gmail.com wrote:

 Howdy All,

 I need a small advice. I am working from last two years on Eucalyptus.

 Recently, switched to Openstack and trying to contribute to Code-Base.

 My skills are:



 - I have a good understanding of private Cloud

 - Total beginner in Python but somewhat good at Java

 - Except SWIFT & NEUTRON, I am clear with the other components' concepts.

so which project/component should I study to get started for Openstack

Development ?

Seeking for guidance,

Thanks !
 --

 Cheers, Mayur S. Patil,
 Pune.

 Contact :
 https://www.facebook.com/mayurram  https://twitter.com/RamMayur
 https://plus.google.com/u/0/+MayurPatil/about
 http://in.linkedin.com/pub/mayur-patil/35/154/b8b/
 https://stackoverflow.com/users/1528044/rammayur 
 https://mozillians.org/en-US/u/mayurp7/
   https://github.com/ramlaxman  https://www.ohloh.net/accounts/mayurp7






Re: [openstack-dev] Quotas: per-flavor-quotas

2014-04-20 Thread Chen Yu
1. In what ways does the current quota system not work for you? (Operations)

This is Chen from the Yahoo OpenStack engineering team; I can explain our use 
case for "per-flavor-quota".

We operate a large OpenStack baremetal cluster where flavors are defined based 
on standard hardware configs (a combination of CPU model, CPU cores, RAM, and 
disk). We need to assign quota to a tenant based on the flavor/hardware 
config; for example, for tenantA, allowing it to create/check out 10 instances 
of flavor C2B-24-500 (6-core Ivy Bridge + 24GB RAM + 500GB disk) and 20 
instances of flavor C2B-48-1200. I guess this is quite common in real 
operational environments where the resource and finance approval processes are 
hooked in. The current quota system is not able to support this use 
case: the total numbers of instances, cores, and RAM are not sufficient to 
differentiate the above flavor/hardware configs, and thus there is no way to 
enforce the quota allocation.
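A minimal sketch of what flavor-level enforcement means (the quota table and function here are hypothetical, not Nova code):

```python
# A minimal sketch of flavor-level quota enforcement; the quota table and
# function are hypothetical, not Nova code.

quotas = {("tenantA", "C2B-24-500"): 10, ("tenantA", "C2B-48-1200"): 20}
usage = {("tenantA", "C2B-24-500"): 10}   # tenantA is at this limit already

def can_create(tenant, flavor, count=1):
    """Check the limit per (tenant, flavor), not per summed cores/RAM."""
    limit = quotas.get((tenant, flavor))
    if limit is None:
        return True   # no flavor-level quota configured for this pair
    return usage.get((tenant, flavor), 0) + count <= limit
```

With a summed-resources quota, the two flavors above would be indistinguishable; keying the check on (tenant, flavor) is the whole point.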

Although our use case right now is from baremetal provisioning, we do 
expect to apply such a per-flavor-quota mechanism to VM provisioning. The idea 
is the same: the quota is allocated and, more importantly, enforced at the 
flavor level, instead of on the summed numbers of individual resources, which 
loses the flavor-level information.

Thanks,
Chen

From: Scott Devoid dev...@anl.gov
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Wednesday, April 16, 2014 at 3:40 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org, openstack-operat...@lists.openstack.org
Subject: [openstack-dev] Quotas: per-flavor-quotas

 Sergio J Cazzolato wrote:
I would like to see the operators' opinion on this blueprint; we need to 
understand whether it is useful or confusing for you.

https://review.openstack.org/#/c/84432/9

Sergio, I'm reposting this in a new thread since this isn't about quota 
templates. Also I'm posting it to both operators and the development list. I 
think we need feedback from both.

Hopefully we can get some discussion here on:
1. In what ways does the current quota system not work for you? (Operations)
2. Are there other ways to improve / change the quota system? And do these 
address #1?

My hope is that we can make some small improvements that have the possibility 
of landing in the Juno phase.

As clarification for anyone reading the above blueprint, this came out of the 
operators summit and a thread on the operators mailing list [1]. This blueprint 
defines quotas on the number of a particular flavor that a user or project may 
have, e.g. 3 m1.medium and 1 m1.large instances please. The operational need 
for such quotas is discussed in the mailing list.

There is another interpretation of per-flavor-quotas, which would track the 
existing resources (CPUs, RAM, etc) but do it on a per-flavor basis. As far as 
I know, there is no blueprint for this, but it was suggested in the review and 
on IRC. For clarity, we could call this proposal quota resources per flavor.

There's also a blueprint for extensible resource tracking (which I think is 
part of the quota system), which has some interesting ideas. It is more focused 
on closing the gap between flavor extra-specs and resource usage / quotas. [2]

Thank you,
~ Scott

[1] 
http://lists.openstack.org/pipermail/openstack-operators/2014-April/004274.html
[2] Extensible Resource Tracking https://review.openstack.org/#/c/86050/



Re: [openstack-dev] [nova][qa] Compatibility of extra values returned in json dicts and headers

2014-04-20 Thread Kenichi Oomichi
Hi David,

Thanks for bringing this up.

 -Original Message-
 From: David Kranz [mailto:dkr...@redhat.com]
 Sent: Saturday, April 19, 2014 4:16 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [nova][qa] Compatibility of extra values returned in 
 json dicts and headers
 
 Recently, as a result of the nova 2.1/3.0 discussion, tempest has been
 adding validation of the json dictionaries and headers returned by nova
 api calls. This is done by specifying json schema for these values.
 As proposed, these schema do not specify additionalProperties: False,
 which means that if a header is added or a new key is added to a returned
 dict, the tempest test will not fail. The current api change guidelines
 say this:
 
 Generally Considered OK
 * The change is the only way to fix a security bug
 * Fixing a bug so that a request which resulted in an error response
   before is now successful
 * Adding a new response header
 * Changing an error response code to be more accurate
 * OK when conditionally added as a new API extension
   * Adding a property to a resource representation
   * Adding an optional property to a resource representation which may
 be supplied by clients, assuming the API previously would ignore this 
 property
 
 This seems to say that you need an api extension to add a value to a
 returned dict but not to add a new header. So that would imply that
 checking the headers should allow additional properties but checking
 the body should not. Is that the desired behavior?

On the Generally Not Acceptable section of 
https://wiki.openstack.org/wiki/APIChangeGuidelines ,
the above case is not mentioned, so I'm not sure we have already reached
a consensus that we should not allow additional properties in response bodies.
I have gotten this point twice through patch reviews. Just IMO, it is OK to
extend response bodies with additional properties; that means additionalProperties
should be True in the JSON Schema definitions.
The removal/rename of properties causes backward incompatibility issues and
Tempest needs to block those changes, but additional properties don't cause
such issues, and clients should be assumed to ignore them if not needed.
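A tiny hand-rolled illustration (not the jsonschema library) of what the additionalProperties flag toggles, namely whether unknown keys fail validation:

```python
# A tiny hand-rolled illustration (not the jsonschema library) of what the
# additionalProperties flag toggles: whether unknown keys fail validation.

def validate(body, known_keys, additional_properties):
    """Return True when the body passes the (simplified) schema check."""
    extra = set(body) - set(known_keys)
    return additional_properties or not extra

# A response that grew a new key after the schema was written:
resp = {"id": "42", "status": "ACTIVE", "new_field": "x"}
```

With additionalProperties True the extended response still validates; with False the very same response starts failing, which is exactly the compatibility question here.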

This is just my opinion, and not a strong one, so I would also like to
hear some feedback about this.


Thanks
Ken'ichi Ohmichi



Re: [openstack-dev] [QA] qa-specs Repo and QA Program Juno Blueprint Review Process

2014-04-20 Thread Kenichi Oomichi

Hi Matthew,

Thanks for doing this,

 -Original Message-
 From: Matthew Treinish [mailto:mtrein...@kortar.org]
 Sent: Saturday, April 19, 2014 11:59 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [QA] qa-specs Repo and QA Program Juno Blueprint 
 Review Process
 
 Hi Everyone,
 
 Just like Nova [1] the QA program has adopted the proposal [2] to use gerrit 
 to
 review blueprint specifications.
 
 The openstack/qa-specs repo is now ready for submissions. Changes are 
 submitted
 to it like any other gerrit project. The README and a template for submitting
 new specifications can be found here:
 
 http://git.openstack.org/cgit/openstack/qa-specs/tree/README.rst
 
 http://git.openstack.org/cgit/openstack/qa-specs/tree/template.rst
 
 Please note that *all* Juno blueprints, including ones that were previously
 approved for a previous cycle, must go through this new process.  This will
 help ensure that blueprints previously approved still make sense, as well as
 ensure that all Juno specs follow a more complete and consistent format. All
 outstanding Tempest blueprints from Icehouse have already been moved back into
 the 'New' state on Launchpad in preparation for a specification proposal using
 the new process.
 
 Everyone, not just tempest and grenade cores, should feel welcome to provide
 reviews and feedback on these specification proposals. Just like for code
 reviews we really appreciate anyone who takes the time to provide an 
 insightful
 review.
 
 Since this is still a new process for all the projects I fully expect this
 process to evolve throughout the Juno cycle. But, I can honestly say that we
 have already seen positive effects from this new process even with only a
 handful of specifications going through the process.

I have experienced this process not only in QA but also in Nova; it is nice
to get feedback from developers in other areas. In addition, this process
requires each writer to consider and show the merits, demerits, and
alternatives. That helps to reach a consensus on each blueprint, so I prefer
this process.

Just one concern is that this process could block development if we cannot
get enough reviewers for qa-specs. Right now there are few blueprints on
qa-specs and it is easy to review all of them, but if there were many, it
would be difficult to review them all, I guess. Part of this is already
mentioned in http://git.openstack.org/cgit/openstack/qa-specs/tree/README.rst#n43
and I guess we need to track review progress in the IRC meetings.

Thanks
Ken'ichi Ohmichi




Re: [openstack-dev] How to add a property to the API extension?

2014-04-20 Thread Alex Xu

On 2014-04-17 05:25, Jiang, Yunhong wrote:

Hi Christopher,
I have some questions about the API changes related to 
https://review.openstack.org/#/c/80707/4/nova/api/openstack/compute/plugins/v3/hypervisors.py
 , which adds a property to the hypervisor information.


Hi Yunhong, Chris may not be available for a while, so let me answer your
questions.



a) I checked https://wiki.openstack.org/wiki/APIChangeGuidelines but am not sure
whether it is OK to "add a property to a resource representation" as I did in
the patch, or whether I need another extension to add this property. Does "OK
when conditionally added as a new API extension" mean I need another extension?
You can add a property to the v3 API directly for now. Because the v3 API
has not been released yet, we needn't worry about any backwards-compatibility
problems. If you add a property to the v2 API,

you need another extension.


b) If we can simply add a property as the patch does, would that require
bumping the version number? If yes, what should the version number be? Would
it be like 1/2/3 etc., or something like 1.1/1.2/2.1 etc.?


You needn't bump the version number, for the same reason: the v3 API has not
been released yet. After the v3 API is released, we should bump the version;
it would be like 1/2/3 etc.



Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Unsubscribe from the mailing list

2014-04-20 Thread l
Hello:
Due to some circumstances, I would like to unsubscribe from this mailing list. Thank you!


xueyan
2014.4.21
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-04-20 Thread Daisuke Morita


(2014/04/17 4:22), Jaume Devesa wrote:

I thought that OpenStack just support one release backwards, if we have
to support three versions, this is not useful.


Actually, I could not quite follow this point. OpenStack has two
security-supported series and one series under development.

https://wiki.openstack.org/wiki/Releases

Therefore, I think Sean's proposal is reasonable: Tempest should be able
to test the two supported releases for administrators and the one release
under development for developers.




There are already ways to enable/disable modules in tempest to adapt to
each deployment's needs. I just wanted to avoid more configuration options.




On 16 April 2014 21:14, David Kranz dkr...@redhat.com wrote:

On 04/16/2014 11:48 AM, Jaume Devesa wrote:

Hi Sean,

From what I understood, we will need a new feature flag for each
new feature, and a feature flag (defaulting to false) for each
deprecated one. My concern is: since the goal is to make tempest a
reliable tool for testing any installation, and 'tempest.conf'
will not be auto-generated by any tool the way devstack does it,
wouldn't it be too hard to prepare a tempest.conf file with so many
feature flags to enable and disable?

If we go down this route, and I think we should, we probably need to
accept that it will be hard for users to manually configure
tempest.conf. Tempest configuration would have to be done by
whatever installation technology was used, as devstack does, or by
auto-discovery. That implies that the presence of new features
should be discoverable through the api which is a good idea anyway.
Of course someone could configure it manually, but IMO that is not
desirable even with where we are now.
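To sketch what API-driven auto-discovery could look like: a small tool queries a service's extensions endpoint and emits one boolean flag per known feature for tempest.conf. The '/extensions' URL and JSON shape here are assumptions for illustration, not an existing tempest utility.

```python
# Hypothetical sketch: derive tempest feature flags by discovering
# extensions through the API instead of editing tempest.conf by hand.
import json
import urllib.request


def discover_extensions(endpoint, token):
    """Return the set of extension aliases advertised by a service.

    Assumes a GET /extensions endpoint returning
    {"extensions": [{"alias": ...}, ...]} -- an illustration only.
    """
    req = urllib.request.Request(endpoint + '/extensions',
                                 headers={'X-Auth-Token': token})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return {ext['alias'] for ext in data.get('extensions', [])}


def feature_flags(available, known_features):
    """Emit one boolean flag per known feature, ready for tempest.conf."""
    return {name: name in available for name in known_features}
```

An installer (or a standalone discovery script) would run this against each service endpoint and write the resulting flags into tempest.conf, so nobody has to maintain them by hand.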



Maybe I am simplifying too much, but wouldn't it be enough to have a
pair of function decorators like

@new
@deprecated

Then, in tempest.conf there could be a flag saying which OpenStack
installation you are testing:

installation = [icehouse|juno]

If you choose Juno, tests with the @new decorator will be executed and
tests with @deprecated will be skipped.
If you choose Icehouse, tests with the @new decorator will be skipped,
and tests with @deprecated will be executed.

Am I missing some obvious case that makes this approach nonsense?
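A rough sketch of this @new/@deprecated idea (hypothetical helpers, not existing tempest code) could skip tests based on the configured release:

```python
# Sketch of release-based skip decorators; TARGET_RELEASE would come
# from tempest.conf in a real implementation. Illustration only.
import functools
import unittest

TARGET_RELEASE = 'icehouse'  # assumption: read from tempest.conf


def new(func):
    """Run only against releases where the feature exists (juno onward)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if TARGET_RELEASE == 'icehouse':
            raise unittest.SkipTest('feature not present in icehouse')
        return func(*args, **kwargs)
    return wrapper


def deprecated(func):
    """Run only against releases that still ship the feature (icehouse)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if TARGET_RELEASE != 'icehouse':
            raise unittest.SkipTest('feature removed after icehouse')
        return func(*args, **kwargs)
    return wrapper
```

With `installation = icehouse`, @new tests are skipped and @deprecated tests run; with `installation = juno` it is the other way around.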

There are two problems with this. First, some folks are chasing
master for their deployments and some do not deploy all the features
that are set up by devstack. In both cases, it is not possible to
identify what can be tested with a simple name that corresponds to a
stable release. Second, what happens when we move on to K? The
meaning of "new" would have to change while retaining its old
meaning as well, which won't work. I think Sean spelled out the
important scenarios.

  -David



Regards,
jaume


On 14 April 2014 15:21, Sean Dague s...@dague.net wrote:

As we're coming up on the stable/icehouse release the QA team
is looking
pretty positive at no longer branching Tempest. The QA Spec
draft for
this is here -

http://docs-draft.openstack.org/77/86577/2/check/gate-qa-specs-docs/3f84796/doc/build/html/specs/branchless-tempest.html
and hopefully address a lot of the questions we've seen so far.

Additional comments are welcome on the review -
https://review.openstack.org/#/c/86577/
or as responses on this ML thread.

-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [SWIFT] Delete operation problem

2014-04-20 Thread Sumit Gaur
Hi
I am using the jclouds library integrated with an OpenStack Swift + Keystone
combination. Things work fine except for the stability test: after 20-30 hours
of testing, jclouds/Swift starts degrading in TPS and keeps going down over
time.

1) I am running the (PUT-GET-DEL) cycle in 10 parallel threads.
2) I am getting a lot of 409 responses and DELETE failures from
Swift.


Can somebody help me figure out what is going wrong here?

Thanks
sumit
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev