Re: [openstack-dev] [Neutron] Challenges with highly available service VMs - port and security group options.

2013-07-24 Thread Samuel Bercovici
Hi,

This might be apparent, but not to me.
Can you point me to how broadcast can be turned on for a network/port?

As for the 
https://github.com/openstack/neutron/blob/master/neutron/extensions/portsecurity.py,
in NVP, does this totally disable port security on a port/network, or does it 
just disable the MAC/IP checks while still allowing the "user defined" port 
security to take effect?
This looks like an extension implemented only by NVP; do you know if there are 
similar implementations for other plugins?
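For reference, a minimal sketch (assuming python-neutronclient and a plugin,
such as NVP, that implements the portsecurity extension) of how the
'port_security_enabled' attribute defined in extensions/portsecurity.py can be
toggled; the credentials and UUID below are placeholders:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0/')

    # Disable the MAC/IP anti-spoofing checks on a single port.
    neutron.update_port('PORT-UUID',
                        {'port': {'port_security_enabled': False}})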

Regards,
-Sam.


From: Aaron Rosen [mailto:aro...@nicira.com]
Sent: Tuesday, July 23, 2013 10:52 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List; sorla...@nicira.com; Avishay Balderman; 
gary.kot...@gmail.com
Subject: Re: [openstack-dev] [Neutron] Challenges with highly available service 
VMs - port and security group options.

I agree too. I've posted a work in progress of this here if you want to start 
looking at it: https://review.openstack.org/#/c/38230/

Thanks,

Aaron

On Tue, Jul 23, 2013 at 4:21 AM, Samuel Bercovici 
<samu...@radware.com> wrote:
Hi,

I agree that the AuthZ should be separated and the service provider should be 
able to control this based on their model.

For service VMs, which might serve ~100-1000 IPs and might use multiple MACs 
per port, it would be better to turn this off altogether than to have iptables 
rules with thousands of entries.
This is why I prefer to be able to turn off IP spoofing and MAC spoofing checks 
altogether.

Still, for logical model / declarative reasons, an IP that can migrate between 
different ports should be declared as such, and maybe the same from a MAC 
perspective.

Regards,
-Sam.








From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Sunday, July 21, 2013 9:56 PM

To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Challenges with highly available service 
VMs - port and security group options.



On 19 July 2013 13:14, Aaron Rosen <aro...@nicira.com> wrote:


On Fri, Jul 19, 2013 at 1:55 AM, Samuel Bercovici 
<samu...@radware.com> wrote:

Hi,



I have completely missed this discussion, as it did not have quantum/Neutron in 
the subject (modified it now).

I think that the security group is the right place to control this.

I think that this might need to be allowed only to admins.


I think this shouldn't be admin only; since tenants have control of their own 
networks, they should be allowed to do this.

I reiterate my point that the authZ model for a feature should always be 
completely separated from the business logic of the feature itself.
In my opinion there are grounds both for scoping it as admin only and for 
allowing tenants to use it; it might be better if we just let the policy engine 
deal with this.


Let me explain what we need which is more than just disable spoofing.

1. Be able to allow MACs which are not defined on the port level to 
transmit packets (for example, VRRP MACs) == turn off MAC spoofing checks

For this it seems you would need to implement the port security extension which 
allows one to enable/disable port spoofing on a port.

This would be one way of doing it. The other would probably be adding a list of 
allowed VRRP MACs, which should be possible with the blueprint Aaron pointed to.

2. Be able to allow IPs which are not defined on the port level to 
transmit packets (for example, an IP used for an HA service that moves between 
an HA pair) == turn off IP spoofing checks

It seems like this would fit your use case perfectly:   
https://blueprints.launchpad.net/neutron/+spec/allowed-address-pairs
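For illustration, a minimal sketch of how the proposed allowed_address_pairs
attribute might be set for a VRRP pair, assuming the blueprint's API shape and
python-neutronclient (the addresses, credentials, and UUID are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0/')

    neutron.update_port('PORT-UUID', {
        'port': {
            'allowed_address_pairs': [
                # The shared VIP and virtual MAC that can move between the
                # two members of the HA pair.
                {'ip_address': '192.168.1.200',
                 'mac_address': '00:00:5e:00:01:01'},
            ]
        }
    })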

3. Be able to allow broadcast messages on the port (for example, for VRRP 
broadcast) == allow broadcast.


Quantum doesn't have an abstraction for disabling this, so we already allow 
this by default.



Regards,

-Sam.





From: Aaron Rosen [mailto:aro...@nicira.com]
Sent: Friday, July 19, 2013 3:26 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Challenges with highly available service VMs



Yup:

I'm definitely happy to review and give hints.

Blueprint:  
https://docs.google.com/document/d/18trYtq3wb0eJK2CapktN415FRIVasr7UkTpWn9mLq5M/edit

https://review.openstack.org/#/c/19279/ <- patch that merged the feature

Aaron



On Thu, Jul 18, 2013 at 5:15 PM, Ian Wells 
<ijw.ubu...@cack.org.uk> wrote:

On 18 July 2013 19:48, Aaron Rosen <aro...@nicira.com> wrote:
> Is there something this is missing that could be added to cover your use
> case? I'd be curious to hear where this doesn't work for your case.  One
> would need to implement the port_security extension if they want to
> completely allow all ips/macs to pass and they could state which ones are
> explicitly allowed with the allowed-address-pair extension (at least that is
> my current thought).

Yes - have you got docs on the port security extension?  All I've
found so far are
http://docs

[openstack-dev] Validating Flavor IDs

2013-07-24 Thread Karajgi, Rohit
Hi,

Referring to https://bugs.launchpad.net/nova/+bug/1202136, it seems that 
novaclient validates that a flavor ID is either an integer or a UUID string. 
This check does not exist in Nova itself, so arbitrary strings are currently 
accepted as flavor IDs when direct RESTful API calls are made.

What should the data type of a flavor's ID be?
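For illustration, a minimal sketch of the integer-or-UUID check described
above (this is illustrative, not the actual novaclient code):

    import uuid

    def is_valid_flavor_id(value):
        """Accept integers or UUID strings, as novaclient does."""
        try:
            int(value)
            return True
        except (TypeError, ValueError):
            pass
        try:
            uuid.UUID(str(value))
            return True
        except ValueError:
            return False

    print(is_valid_flavor_id(42))                 # True
    print(is_valid_flavor_id('m1.tiny'))          # False - plain string
    print(is_valid_flavor_id(str(uuid.uuid4())))  # True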

-Rohit


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] use openstack-dev mailing list

2013-07-24 Thread Sergey Lukjanov
Hi folks,

We decided to use openstack-dev@lists.openstack.org mailing list instead of 
savanna-...@lists.launchpad.net for all savanna-related communication. The old 
one will be closed and we’ll monitor it and forward all emails from it to the 
correct mailing list. Additionally it means that there is no need to add 
savanna-all to CC.

The main reason for this decision is to use the same approach as all other 
OpenStack projects.

Thanks.


Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-24 Thread Julien Danjou
On this same topic, is the https://launchpad.net/~openstack-py3-team
description still accurate? I see no trace of recent meetings; that
could/should be updated.

I've been working on the Python 3 effort recently, and I've also started a wiki
page at https://wiki.openstack.org/wiki/Python3 to track progress.

I know Chuck's working on this too, so feel free to add more projects
and information. :)

-- 
Julien Danjou
# Free Software hacker # freelance consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Day, Phil
Hi Alex,

I'm inclined to agree with others that I'm not sure you need the complexity 
that this BP brings to the system. If you want to provide a user with a choice 
about how much overcommit they will be exposed to, then doing that in flavours 
and the aggregate_instance_extra_spec filter seems the more natural way to do 
it, since presumably you'd want to charge differently for those, and the 
flavour list is normally what is linked to the pricing model.

I also like the approach taken by the recent changes to the ram filter where 
the scheduling characteristics are defined as properties of the aggregate 
rather than separate stanzas in the configuration file.

An alternative, and the use case I'm most interested in at the moment, is where 
we want the user to be able to define the scheduling policies on a specific set 
of hosts allocated to them (in this case they pay for the host, so if they want 
to oversubscribe on memory/cpu/disk then they should be able to).  The basic 
framework for this is described in this BP 
https://blueprints.launchpad.net/nova/+spec/whole-host-allocation and the 
corresponding wiki page (https://wiki.openstack.org/wiki/WholeHostAllocation). 
I've also recently posted code for the basic framework built as a wrapper 
around aggregates (https://review.openstack.org/#/c/38156/, 
https://review.openstack.org/#/c/38158/), which you might want to take a look 
at.
 
It's not clear to me whether what you're proposing addresses an additional gap 
between this and the combination of the aggregate_extra_spec filter plus 
revised filters that get their configurations from aggregates?
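For concreteness, an illustrative sketch (not from the thread) of the
flavour-plus-aggregate approach described above, using python-novaclient; the
names and credentials are placeholders, and the exact extra-spec key the
aggregate filter matches varies by release:

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://127.0.0.1:5000/v2.0/')

    # Tag an aggregate with a scheduling property...
    agg = nova.aggregates.create('gold-agg', 'az1')
    nova.aggregates.set_metadata(agg.id, {'sla': 'gold'})

    # ...and tie a flavour to it via extra specs, so placement and pricing
    # stay linked through the flavour list.
    flavor = nova.flavors.create('m1.gold', ram=4096, vcpus=2, disk=40)
    flavor.set_keys({'sla': 'gold'})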

Cheers,
Phil

> -Original Message-
> From: Russell Bryant [mailto:rbry...@redhat.com]
> Sent: 23 July 2013 22:32
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler
> policies/drivers
> 
> On 07/23/2013 04:24 PM, Alex Glikson wrote:
> > Russell Bryant  wrote on 23/07/2013 07:19:48 PM:
> >
> >> I understand the use case, but can't it just be achieved with 2
> >> flavors and without this new aggreagte-policy mapping?
> >>
> >> flavor 1 with extra specs to say aggregate A and policy Y
> >> flavor 2 with extra specs to say aggregate B and policy Z
> >
> > I agree that this approach is simpler to implement. One of the
> > differences is the level of enforcement that instances within an
> > aggregate are managed under the same policy. For example, nothing
> > would prevent the admin to define 2 flavors with conflicting policies
> > that can be applied to the same aggregate. Another aspect of the same
> > problem is the case when admin wants to apply 2 different policies in
> > 2 aggregates with same capabilities/properties. A natural way to
> > distinguish between the two would be to add an artificial property
> > that would be different between the two -- but then just specifying
> > the policy would make most sense.
> 
> I'm not sure I understand this.  I don't see anything here that couldn't be
> accomplished with flavor extra specs.  Is that what you're saying?
> Or are you saying there are cases that can not be set up using that approach?
> 
> >> > Well, I can think of few use-cases when the selection approach
> >> > might be different. For example, it could be based on tenant
> >> > properties (derived from some kind of SLA associated with the
> >> > tenant, determining the over-commit levels), or image properties
> >> > (e.g., I want to determine placement of Windows instances taking
> >> > into account Windows licensing considerations), etc
> >>
> >> Well, you can define tenant specific flavors that could have
> >> different policy configurations.
> >
> > Would it be possible to express something like 'I want CPU over-commit of
> > 2.0 for tenants with SLA=GOLD, and 4.0 for tenants with SLA=SILVER'?
> 
> Sure.  Define policies for sla=gold and sla=silver, and the flavors for each
> tenant would refer to those policies.
> 
> >> I think I'd rather hold off on the extra complexity until there is a
> >> concrete implementation of something that requires and justifies it.
> >
> > The extra complexity is actually not that huge; we reuse the existing
> > mechanism of generic filters.
> 
> I just want to see something that actually requires it before it goes in.  I 
> take
> exposing a pluggable interface very seriously.  I don't want to expose more
> random plug points than necessary.
> 
> > Regarding both suggestions -- I think the value of this blueprint will
> > be somewhat limited if we keep just the simplest version. But if
> > people think that it makes a lot of sense to do it in small increments
> > -- we can probably split the patch into smaller pieces.
> 
> I'm certainly not trying to diminish value, but I am looking for specific 
> cases
> that can not be accomplished with a simpler solution.
> 
> --
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> h

Re: [openstack-dev] [Swift] Swift Auth systems and Delay Denial

2013-07-24 Thread David Hadas
Clay, it sounds like a bad idea to remove delay_denial support (at least in
the near term).

Authorizing up front seems to have certain advantages:
1. Flexibility - it allows authorizing based on any attribute returned from
get_info() (without changes to Swift).
2. Security - a single point of control (unlike delay_denial, where the
interplay between the auth middleware(s) and Swift must be kept bug-free to
remain secure).
Performance and overhead are not the issue here.

The get_info() signature was designed to be somewhat extendable and
future-proof in the sense that it returns a dictionary - allowing us to add
attributes as needed. And once it is used by middleware, we can use it in
auth middleware as well.

We may document that additional option, plus the pros and cons of
using delay_denial vs. using get_info() upfront. Auth system developers
may then decide, per auth system, whether it makes sense at some point to make
changes.

DH


On Tue, Jul 23, 2013 at 9:16 PM, Clay Gerrard wrote:

> I think delay_denial will have to be maintained for awhile for backwards
> compatibility no matter what happens.
>
> I think existing auth middlewares can and often do reject requests
> outright without forwarding them to swift (no x-auth-token?).
>
> I think get_info and the env caching are relatively new; do we have
> confidence that its call signature and data structure will be robust to
> future requirements?  It seems reasonable to me at first glance that
> upstream middleware would piggy back on existing memcache data, middleware
> authors certainly already can and presumably do depend on get_info's
> interface; so i guess the boat already sailed?
>
> I think there's some simplicity gained from an auth middleware
> implementor's perspective if swift specific path parsing and and relevant
> acl extraction has a more procedural interface, but if there's efficiency
> gains it's probably worth jumping through some domain specific hoops.
>
> So it's certainly possible today, but if we document it as a supported
> interface we'll have to be more careful about how we maintain it. What's
> motivating you to change what's there?  Do you think keystone or swauth
> incur a measurable overhead from the callback based auth in the full
> context of the lifetime of the request?
>
> -Clay
>
>
>
> On Tue, Jul 23, 2013 at 1:49 AM, David Hadas wrote:
>
>> Hi,
>>
>> Starting from 1.9, Swift has get_info() support allowing middleware to
>> get container and/or account information maintained by Swift.
>> Middleware can use get_info() on a container to retrieve the container
>> metadata.
>> In a similar way, middleware can use get_info() on an account to retrieve
>> the account metadata.
>>
>> The ability to retrieve container and account metadata by middleware
>> opens up an option to write Swift Auth systems without the use of the Swift
>> Delay Denial mechanism. For example, when a request comes in ( during
>> '__call__()' ), the Auth middleware can perform get_info on the container
>> and/or account and decide whether to authorize or reject the client request
>> upfront and before the request ever reaching Swift. In such a case, if the
>> Auth middleware decides to allow the request to be processed by Swift, it
>> may avoid adding a swift.authorize callback and thus disabling the use of
>> the Swift delay_denial mechanism.
>>
>> Qs:
>> 1. Should we document this approach as another way to do auth in Swift
>> (currently this option is not well documented)
>>  See http://docs.openstack.org/developer/swift/development_auth.html:
>>   "Authorization is performed through callbacks by the Swift Proxy
>> server to the WSGI environment’s swift.authorize value, if one is set."
>> followed by an example of how that is done. Should we add a description of
>> this alternative option of using get_info() during __call__()?
>>
>> 2. What are the pros and cons of each of the two options?
>>  What benefit do we see in an AUTH system using delay_denial over
>> deciding on the authorization upfront?
>>  Should we continue to use delay_denial in keystone_auth and swauth?
>>
>> DH
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican]

2013-07-24 Thread Jarret Raim

> 
>It seems KMIP is getting lots of enterprise attention, so I think it may
>be a good candidate for a future Barbican feature (as you already mentioned
>in your email below); as per the link below, it seems our community
>also expects KMIP to be integrated with the OpenStack line of products.
> 
>https://wiki.openstack.org/wiki/KMIPclient

I have heard the interest in KMIP stated several times at the Summit and
on this list. However, I've talked with many large customers and they are
not asking me for KMIP support right now. I think we will support it at
some point in the future, but as there is no python lib, it is a
reasonably large undertaking to build one correctly. If anyone wants to
help with that, we'd love to have you.

As for that blueprint, it was only written a week ago. I already posted a
message on why I think implementing KMIP directly into products is a bad
idea, but any project can go whichever way works best for their users.

> 
>Would you mind sharing the Barbican product roadmap (if it is public) as
>I did not find one?

Most of our work is aimed at cleaning up the current features for Havana.
We are cleaning up the API's treatment of content-types and implementing
backend support for various key storage devices (HSMs, etc) and building
out our production environment at Rackspace. Additionally, we have a lot
of 'good citizen' stuff to finish up including usage events, auditing and
some other features.

Going forward, there are some large feature sets we are looking to build
including SSL support (automatic provisioning, lifecycle management) and
federation. I'm hoping we'll get some more good feedback from customers
once we launch, so I'm leaving some room in the roadmap for requests that
may come up.

> 
>Following are some of thoughts on your previous email about KMIP
>
>(*) That is true, but it is getting lots of recognition, which means in
>future we will see more HSM products with KMIP compatibility.

There are certainly some out there. As they become more popular, we'll
certainly become more interested in supporting them. The key here is that
HSMs can be supported just fine right now using existing libs and PKCS
#11. Until there is a compelling reason to switch to KMIP, I don't see a
lot of effort going there from us.

>(**) I think Barbican will act as a KMS proxy in this case, which does
>not fulfill the KMIP protocol philosophy, which is built around interaction
>between a KMIP client and server.

As I've said before, I don't think expecting products to talk directly
with the KMIP server is realistic. There are a mountain of tasks that a
good openstack service is expected to do including using keystone for
authentication, providing centrally managed RBAC and access control,
providing metrics events to ceilometer, speaking ReST / JSON, scaling from
small private cloud to large public clouds and many more. KMIP servers
will most likely never do those things. It will be much easier to have
openstack speak to a common, RESTful, open source abstraction. Underneath,
we can deal with the various key storage styles.

From what I've seen, KMIP doesn't really support the true multi-tenancy
use cases such as those needed by Rackspace. I would be happy to be proven
wrong, but right now this fact makes it impossible for an OpenStack cloud to use
the device directly as Barbican is needed to provide tenant isolation and
authentication. If Barbican wasn't there, every product will need to
understand the HSM model, be able to configure it as needed and submit to
whatever authentication mechanism is required. This means that the choice
of an HSM vendor will leak out into all the services in a deployment,
rather than just the one that needs to deal with it.

Finally, there must be a free and open source implementation for key
management. Not all providers are interested or capable of purchasing HSMs
to back their encryption and are okay with that tradeoff. PKCS and KMIP
have been around for a while now and we've seen almost no adoption outside
of the enterprise and usually just between large enterprise software
packages. I want strong encryption and key management to be easy for
developers to integrate into all software. It needs to be free,
open-source and dev friendly, none of which describes the current state of
the art. Barbican is going to provide key management to openstack and we
hope that other projects will integrate with us. We will also provide key
management directly to customers to ensure that everyone can build strong
data protection into their applications.

Now if we can just get HSM vendors to install Barbican on their devices,
maybe we'd have something :)


Thanks,
Jarret



> 
> 
>Regards,
>Arvind
> 
> 
> 
>From: Jarret Raim [mailto:jarret.r...@rackspace.com]
>
>Sent: Monday, July 22, 2013 2:38 PM
>To: OpenStack Development Mailing List
>Subject: Re: [openstack-dev] [barbican]
>
>
> 
>I'm the product owner for Barbican at Rackspace. I'll take a shot at
>answering your questions.
> 
>> 1. What is the st

Re: [openstack-dev] KMIP client for volume encryption key management

2013-07-24 Thread Jarret Raim

>· Agreed that there isn't an existing KMIP client in Python. We are
>offering to port the needed functionality from our current Java KMIP
>client to Python and contribute it to OpenStack.

Are you talking about porting to Python or having Python call a Java
wrapper?


>· Good points about the common features that Barbican provides. I will
>take a look at the Barbican architecture and join the discussions there.

Thanks, we'd love the help.


Jarret


> 
> 
>Thanks,
>Bill
> 
> 
>From: Jarret Raim [mailto:jarret.r...@rackspace.com]
>
>Sent: Friday, July 19, 2013 9:46 AM
>To: OpenStack Development Mailing List
>Subject: Re: [openstack-dev] KMIP client for volume encryption key
>management
>
>
> 
>I'm not sure that I agree with this direction. In our investigation, KMIP
>is a problematic protocol for several reasons:
>
>
>* We haven't found an implementation of KMIP for Python. (Let us know if
>there is one!)
>* Support for KMIP by HSM vendors is limited.
>* We haven't found software implementations of KMIP suitable for use as
>an HSM replacement. (e.g. Most deployers wanting to use KMIP would have
>to spend a rather large amount of money to purchase HSMs)
>* From our research, the KMIP spec and implementations seem to lack
>support for multi-tenancy. This makes managing keys for thousands of
>users difficult or impossible.
>
>The goal for the Barbican system is to provide key management for
>OpenStack. It uses the standard interaction mechanisms for OpenStack,
>namely ReST and JSON. We integrate with keystone and will
> provide common features like usage events, role-based access control,
>fine-grained control, policy support, client libs, Ceilometer support,
>Horizon support and other things expected of an OpenStack service. If
>every product is forced to implement KMIP, these
> features would most likely not be provided by whatever vendor is used
>for the Key Manager. Additionally, as mentioned in the blueprint, I have
>concerns that vendor specific data will be leaked into the rest of
>OpenStack for things like key identifiers, authentication
> and the like. 
>
>
>
> 
>
>I would propose that rather than each product implement KMIP support, we
>implement KMIP support into Barbican. This will allow the products to
>speak ReST / JSON using our client libraries just
> like any other OpenStack system and Barbican will take care of being a
>good OpenStack citizen. On the backend, Barbican will support the use of
>KMIP to talk to whatever device the provider wishes to deploy. We will
>also support other interaction mechanisms
> including PKCS through OpenSSH, a development implementation and a fully
>free and open source software implementation. This also allows some
>advanced uses cases including federation. Federation will allow customers
>of public clouds like Rackspace's to maintain
> custody of their keys while still being able to delegate their use to
>the Cloud for specific tasks.
>
> 
>
>I've been asked about KMIP support at the Summit and by several of
>Rackspace's partners. I was planning on getting to it at some point,
>probably after Icehouse. This is mostly due to the fact that
> we didn't find a suitable KMIP implementation for Python so it looks
>like we'd have to write one. If there is interest from people to create
>that implementation, we'd be happy to help do the work to integrate it
>into Barbican.
>
> 
>
>We just released our M2 milestone and we are on track for our 1.0 release
>for Havana. I would encourage anyone interested to check out what we are
>working on and come help us out. We use this list
> for most of our discussions and we hang out on #openstack-cloudkeep on
>Freenode. 
>
> 
>
> 
>
>Thanks,
>
>Jarret
>
> 
>
> 
>
> 
>
> 
>
>From: , Bill 
>Reply-To: OpenStack List 
>Date: Thursday, July 18, 2013 2:11 PM
>To: OpenStack List 
>Subject: [openstack-dev] KMIP client for volume encryption key management
>
> 
>
>A blueprint and spec to add a client that implements OASIS KMIP standard
>was recently added:
> 
>https://blueprints.launchpad.net/nova/+spec/kmip-client-for-volume-encryption
>https://wiki.openstack.org/wiki/KMIPclient
> 
> 
>We're looking for feedback on the set of questions in the spec. Any
>additional input is also appreciated.
> 
>Thanks,
>Bill B.


___
OpenStack-dev mailing list
OpenStack-

Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-24 Thread Henry Nash
I think we should transfer this discussion to the etherpad for this blueprint: 
https://etherpad.openstack.org/api_policy_on_target

I have summarised the views of this thread there already, so let's make any 
further comments there, rather than here.
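For concreteness, a minimal sketch (illustrative, not Keystone code) of the
NotFound-vs-Forbidden decision debated in the thread below; the
'policy_harden' flag and the helper are hypothetical:

    class NotFound(Exception):
        pass

    class Forbidden(Exception):
        pass

    def check_target(ref, user_is_authorized, policy_harden=False):
        if ref is None:
            # The object is missing: hardened mode hides its absence behind
            # the same error an unauthorized caller would see.
            if policy_harden:
                raise Forbidden()
            raise NotFound()
        if not user_is_authorized(ref):
            raise Forbidden()
        return ref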

Henry
On 24 Jul 2013, at 00:29, Simo Sorce wrote:

> On Tue, 2013-07-23 at 23:47 +0100, Henry Nash wrote:
>> ...the problem is that if the object does not exist we might not be able to
>> tell whether the user is authorized or not (since authorization might depend
>> on attributes of the object itself), so how do we know whether to lie or
>> not?
> 
> If the error you return is always 'Not Found', why do you care ?
> 
> Simo.
> 
>> Henry
>> On 23 Jul 2013, at 21:23, David Chadwick wrote:
>> 
>>> 
>>> 
>>> On 23/07/2013 19:02, Henry Nash wrote:
 One thing we could do is:
 
 - Return Forbidden or NotFound if we can determine the correct answer
 - When we can't (i.e. the object doesn't exist), then return NotFound
 unless a new config value 'policy_harden' (?) is set to true (default
 false) in which case we translate NotFound into Forbidden.
>>> 
>>> I am not sure that this achieves your objective of no data leakage through 
>>> error codes, does it?
>>> 
>>> It's not a question of determining the correct answer or not; it's a question 
>>> of whether the user is authorised to see the correct answer or not.
>>> 
>>> regards
>>> 
>>> David
 
 Henry
 On 23 Jul 2013, at 18:31, Adam Young wrote:
 
> On 07/23/2013 12:54 PM, David Chadwick wrote:
>> When writing a previous ISO standard the approach we took was as follows
>> 
>> Lie to people who are not authorised.
> 
> Is that your verbiage?  I am going to reuse that quote, and I would
> like to get the attribution correct.
> 
>> 
>> So applying this approach to your situation, you could reply Not
>> Found to people who are authorised to see the object if it had
>> existed but does not, and Not Found to those not authorised to see
>> it, regardless of whether it exists or not. In this case, only those
>> who are authorised to see the object will get it if it exists. Those
>> not authorised cannot tell the difference between objects that dont
>> exist and those that do exist
> 
> So, to try and apply this to a semi-real example:  There are two types
> of URLs.  Ones that are like this:
> 
> users/55FEEDBABECAFE
> 
> and ones like this:
> 
> domain/66DEADBEEF/users/55FEEDBABECAFE
> 
> 
> In the first case, you are selecting against a global collection, and
> in the second, against a scoped collection.
> 
> For unscoped, you have to treat all users as equal, and thus a 404
> probably makes sense.
> 
> For a scoped collection we could return a 404 or a 403 Forbidden
> based on the user's credentials: all resources under domain/66DEADBEEF
> would show up as 403s regardless of existence if the user had no roles in
> the domain 66DEADBEEF.  A user that would be allowed access to
> resources in 66DEADBEEF would get a 403 only for an object that
> existed but that they had no permission to read, and a 404 for a
> resource that doesn't exist.
> 
> 
> 
> 
>> 
>> regards
>> 
>> David
>> 
>> 
>> On 23/07/2013 16:40, Henry Nash wrote:
>>> Hi
>>> 
>>> As part of bp
>>> https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
>>> I have uploaded some example WIP code showing a proposed approach
>>> for just a few API calls (one easy, one more complex). I'd
>>> appreciate early feedback on this before I take it any further.
>>> 
>>> https://review.openstack.org/#/c/38308/
>>> 
>>> A couple of points:
>>> 
>>> - One question is on how to handle errors when you are going to get
>>> a target object before doing you policy check.  What do you do if
>>> the object does not exist?  If you return NotFound, then someone,
>>> who was not authorized  could troll for the existence of entities by
>>> seeing whether they got NotFound or Forbidden. If, however, you
>>> return Forbidden, then users who are authorized to, say, manage
>>> users in a domain would always get Forbidden for objects that didn't
>>> exist (since we can't know where the non-existent object was!).  So
>>> this would modify the expected return codes.
>>> 
>>> - I really think we need some good documentation on how to build
>>> keystone policy files.  I'm happy to take a first cut at such a
>>> thing - what do you think the right place is for such documentation?
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/opens

[openstack-dev] Quick README? (Re: [vmware] VMwareAPI sub-team status update 2013-07-22)

2013-07-24 Thread Davanum Srinivas
Shawn, or others involved in this effort,

Is there a quick README or equivalent on how to use the latest code
say with devstack and vCenter to get a simple deploy working?

thanks,
-- dims

On Mon, Jul 22, 2013 at 9:15 PM, Shawn Hartsock  wrote:
>
> ** No meeting this week **
>
> I have a conflict and can't run the meeting this week. We'll be back 
> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130731T1700
>
> Two of us ran into a problem with an odd pep8 failure:
>> E: nova.conf.sample is not up to date, please run 
>> tools/conf/generate_sample.sh
>
> Yaguang Tang gave the work around:
> "nova.conf.sample is not up to date, please run tools/conf/generate_sample.sh 
> ,then resubmit."
>
> I've put all these reviews under the "re-work" section. Hopefully this is 
> simple and we can fix them this week.
>
> Blueprints targeted for Havana-3:
> * https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy - 
> nova.conf.sample out of date
> * 
> https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
>  - needs review
>
> New Blueprint:
> * https://blueprints.launchpad.net/nova/+spec/vmware-configuration-section
>
> Needs one more +2 / Approve button:
> * https://review.openstack.org/#/c/33504/
> * https://review.openstack.org/#/c/36411/
>
> Ready for core-reviewer:
> * https://review.openstack.org/#/c/33100/
>
> Needs VMware API expert review (no human reviews):
> * https://review.openstack.org/#/c/30282/
> * https://review.openstack.org/#/c/30628/
> * https://review.openstack.org/#/c/32695/
> * https://review.openstack.org/#/c/37389/
> * https://review.openstack.org/#/c/37539/
>
> Work/re-work in progress:
> * https://review.openstack.org/#/c/30822/ - weird Jenkins issue, fault is not 
> in the patch
> * https://review.openstack.org/#/c/37819/ - weird Jenkins issue, fault is not 
> in the patch
> * https://review.openstack.org/#/c/34189/ - in danger of becoming "abandoned"
>
> Needs help/discussion (has a -1):
> * https://review.openstack.org/#/c/34685/
>
> Meeting info:
> * https://wiki.openstack.org/wiki/Meetings/VMwareAPI
>
> # Shawn Hartsock
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Julien Danjou and Ben Nemec now on oslo-core

2013-07-24 Thread Mark McLoughlin
Hey

I just wanted to welcome Ben and Julien to oslo-core. They have both
been doing a lot of high quality reviews lately and it's much
appreciated.

Welcome to the team!

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-24 Thread Stefano Maffulli
Hello

I have seen lots of discussions on blogs and twitter heating up around
Amazon API compatibility and OpenStack. This seems like a recurring
topic, often raised by pundits and recently joined by members of the
community. I think it's time to bring the discussions inside our
community to our established channels and processes. Our community has
established ways to discuss and take technical decisions, from the more
accessible General mailing list to the Development list to the Design
Summits, the weekly project meetings, the reviews on gerrit and the
governing bodies Technical Committee and Board of Directors.

While we have not seen a large push in the community recently via
contributions or deployments, Amazon APIs have been an option for
deployments from the early days of OpenStack.

I would like to have this discussion inside the established channels of
our community and get the opinions from those that maintain that
OpenStack should increase efforts for Amazon APIs compatibility, and
ultimately it would be good to see code contributions.

Do you think OpenStack should have an ongoing effort to imitate Amazon's
API? If you think it should, how would you lead the effort?


/stef
-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-24 Thread Monty Taylor


On 07/24/2013 08:51 AM, Stefano Maffulli wrote:
> Hello
> 
> I have seen lots of discussions on blogs and twitter heating up around
> Amazon API compatibility and OpenStack. This seems like a recurring
> topic, often raised by pundits and recently joined by members of the
> community. I think it's time to bring the discussions inside our
> community to our established channels and processes. Our community has
> established ways to discuss and take technical decisions, from the more
> accessible General mailing list to the Development list to the Design
> Summits, the weekly project meetings, the reviews on gerrit and the
> governing bodies Technical Committee and Board of Directors.
> 
> While we have not seen a large push in the community recently via
> contributions or deployments, Amazon APIs have been an option for
> deployments from the early days of OpenStack.
> 
> I would like to have this discussion inside the established channels of
> our community and get the opinions from those that maintain that
> OpenStack should increase efforts for Amazon APIs compatibility, and
> ultimately it would be good to see code contributions.
> 
> Do you think OpenStack should have an ongoing effort to imitate Amazon's
> API? If you think it should, how would you lead the effort?

I don't care about Amazon's APIs at all, except in as much as
compatibility shims might help people migrate off of a closed system and
on to an open one.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-24 Thread Monty Taylor


On 07/23/2013 03:02 PM, Brian Curtin wrote:
> 
> On Jul 23, 2013, at 3:51 PM, Eric Windisch wrote:
> 
>>
>>
>>
>> On Tue, Jul 23, 2013 at 4:41 PM, Logan McNaughton wrote:
>>
>> I'm sure this has been asked before, but what exactly is the plan
>> for Python 3 support?
>>
>> Is the plan to support 2 and 3 at the same time? I was looking
>> around for a blue print or something but I can't seem to find
>> anything.
>>
>>
>> I suppose a wiki page is due.  This was discussed at the last summit:
>> https://etherpad.openstack.org/havana-python3
>>
>> The plan is to support Python 2.6+ for the 2.x series and Python
>> 3.3+. This effort has begun for libraries (oslo) and clients. Work is
>> appreciated on the primary projects, but will ultimately become
>> stalled if the library work is not first completed.

I'd like to add that at some point in the future it is our desire to
drop support for 2.6, as supporting 2.7 and 3.3+ is way easier than also
supporting 2.6. At the moment, I believe our main factor on that is the
current version of RHEL. Fingers crossed for a new one soon... :)

We are also just finishing up getting 3.3 enabled build slaves in the CI
gate, so as projects get 3.3 compliant, we should be able to start
testing that.
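As an aside, a typical dual-version idiom from the six library - the sort of
shim the 2.6/2.7 + 3.3 plan above relies on:

    import six

    def to_text(value, encoding='utf-8'):
        """Return a text string on both Python 2 and Python 3."""
        if isinstance(value, six.binary_type):
            return value.decode(encoding)
        return six.text_type(value)

    print(to_text(b'compute-01'))  # u'compute-01' on py2, 'compute-01' on py3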

> FWIW, I came across https://wiki.openstack.org/wiki/Python3Deps and
> updated "routes", which currently works with 3.3. One small step, for free!
> 
> I'm a newcomer to this list, but I'm a CPython core contributor and am
> working in Developer Relations at Rackspace, so supporting Python 3 is
> right up my alley.

Excellent! Welcome, glad to have you.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Russell Bryant
On 07/24/2013 05:39 AM, Day, Phil wrote:
> Hi Alex,
> 
> I'm inclined to agree with others that I'm not sure you need the complexity 
> that this BP brings to the system.If you want to provide a user with a 
> choice about how much overcommit they will be exposed to then doing that in 
> flavours and the aggregate_instance_extra_spec filter seems the more natural 
> way to do this, since presumably you'd want to charge differently for those 
> and the flavour list is normally what is linked to the pricing model.  
> 
> I also like the approach taken by the recent changes to the ram filter where 
> the scheduling characteristics are defined as properties of the aggregate 
> rather than separate stanzas in the configuration file.
> 
> An alternative, and the use case I'm most interested in at the moment, is 
> where we want the user to be able to define the scheduling policies on a 
> specific set of hosts allocated to them (in this case they pay for the host, 
> so if they want to oversubscribe on memory/cpu/disk then they should be able 
> to).  The basic framework for this is described in this BP 
> https://blueprints.launchpad.net/nova/+spec/whole-host-allocation and the 
> corresponding wiki page (https://wiki.openstack.org/wiki/WholeHostAllocation). 
>I've also recently posted code for the basic framework built as a wrapper 
> around aggregates (https://review.openstack.org/#/c/38156/, 
> https://review.openstack.org/#/c/38158/ ) which you might want to take a look 
> at.
>  
> It's not clear to me whether what you're proposing addresses an additional 
> gap between this and the combination of the aggregate_extra_spec filter plus 
> revised filters that get their configurations from aggregates?

I really like your point about not needing to set things up via a config
file.  That's fairly limiting since you can't change it on the fly via
the API.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] use openstack-dev mailing list

2013-07-24 Thread Russell Bryant
On 07/24/2013 04:49 AM, Sergey Lukjanov wrote:
> Hi folks,
> 
> We decided to use openstack-dev@lists.openstack.org mailing list instead of 
> savanna-...@lists.launchpad.net for all savanna-related communication. The 
> old one will be closed and we’ll monitor it and forward all emails from it to 
> the correct mailing list. Additionally it means that there is no need to add 
> savanna-all to CC.
> 
> The main reason for this decision is to use the same approach as all other OpenStack 
> projects.

Thank you for doing this.  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-24 Thread Russell Bryant
On 07/24/2013 11:57 AM, Monty Taylor wrote:
> 
> 
> On 07/24/2013 08:51 AM, Stefano Maffulli wrote:
>> Hello
>>
>> I have seen lots of discussions on blogs and twitter heating up around
>> Amazon API compatibility and OpenStack. This seems like a recurring
>> topic, often raised by pundits and recently joined by members of the
>> community. I think it's time to bring the discussions inside our
>> community to our established channels and processes. Our community has
>> established ways to discuss and take technical decisions, from the more
>> accessible General mailing list to the Development list to the Design
>> Summits, the weekly project meetings, the reviews on gerrit and the
>> governing bodies Technical Committee and Board of Directors.
>>
>> While we have not seen a large push in the community recently via
>> contributions or deployments, Amazon APIs have been an option for
>> deployments from the early days of OpenStack.
>>
>> I would like to have this discussion inside the established channels of
>> our community and get the opinions from those that maintain that
>> OpenStack should increase efforts for Amazon APIs compatibility, and
>> ultimately it would be good to see code contributions.
>>
>> Do you think OpenStack should have an ongoing effort to imitate Amazon's
>> API? If you think it should, how would you lead the effort?
> 
> I don't care about Amazon's APIs at all, except in as much as
> compatibility shims might help people migrate off of a closed system and
> on to an open one.

I feel about the same way, but I think this reason is enough to have and
continue maintaining support for these APIs.

As Stefano said, at least in nova we haven't seen a whole *lot* of work
on the EC2 API support lately.  However, I don't see it going away in
the foreseeable future and would certainly welcome more contributions in
this area.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Meeting agenda for Wed July 24th at 2000 UTC

2013-07-24 Thread Steven Hardy
The Heat team holds a weekly meeting in #openstack-meeting, see

https://wiki.openstack.org/wiki/Meetings/HeatAgenda for more details

The next meeting is on Wed July 24th at 2000 UTC

Current topics for discussion:
 - Review last week's actions
 - Documentation
 - h3 blueprint milestone and priority
 - Removal/moving of heat-boto/heat-cfn/heat-watch client tools
 - Open discussion

If anyone has any other topic to discuss, please add to the wiki.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo] Blocker for API translation changes

2013-07-24 Thread Luis A. Garcia

Hi Oslos,

I have a refactoring for common code needed to implement REST API 
translations.


The change is a bit of a blocker for multiple other changes across 
various components, and would just like to see if it could get bumped up 
a bit in your review queues.


https://review.openstack.org/#/c/38201/

Thank you,

--
Luis A. García
Cloud Solutions & OpenStack Development
IBM Systems and Technology Group
Ph: (915) 307-6568 | T/L: 363-6276

"Everything should be made as simple as possible, but not simpler."
- Albert Einstein

"Simple can be harder than complex: You have to work hard to get
your thinking clean to make it simple."
– Steve Jobs


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-24 Thread Russell Bryant
On 07/23/2013 06:00 PM, Clint Byrum wrote:
> This is really interesting work, thanks for sharing it with us. The
> discussion that has followed has brought up some thoughts I've had for
> a while about this choke point in what is supposed to be an extremely
> scalable cloud platform (OpenStack).
> 
> I feel like the discussions have all been centered around making "the"
> scheduler(s) intelligent.  There seems to be a commonly held belief that
> scheduling is a single step, and should be done with as much knowledge
> of the system as possible by a well informed entity.
> 
> Can you name for me one large scale system that has a single entity,
> human or computer, that knows everything about the system and can make
> good decisions quickly?
> 
> This problem is screaming to be broken up, de-coupled, and distributed.
> 
> I keep asking myself these questions:
> 
> Why are all of the compute nodes informing all of the schedulers?
> 
> Why are all of the schedulers expecting to know about all of the compute 
> nodes?
> 
> Can we break this problem up into simpler problems and distribute the load to
> the entire system?
> 
> This has been bouncing around in my head for a while now, but as a
> shallow observer of nova dev, I feel like there are some well known
> scaling techniques which have not been brought up. Here is my idea,
> forgive me if I have glossed over something or missed a huge hole:
> 
> * Schedulers break up compute nodes by hash table, only caring about
>   those in their hash table.
> * Schedulers, upon claiming a compute node by hash table, poll compute
>   node directly for its information.
> * Requests to boot go into fanout.
> * Schedulers get request and try to satisfy using only their own compute
>   nodes.
> * Failure to boot results in re-insertion in the fanout.
> 
> This gives up the certainty that the scheduler will find a compute node
> for a boot request on the first try. It is also possible that a request
> gets unlucky and takes a long time to find the one scheduler that has
> the one last "X" resource that it is looking for. There are some further
> optimization strategies that can be employed (like queues based on hashes
> already tried.. etc).
> 
> Anyway, I don't see any point in trying to hot-rod the intelligent
> scheduler to go super fast, when we can just optimize for having many
> many schedulers doing the same body of work without blocking and without
> pounding a database.

These are some *very* good observations.  I'd like all of the nova folks
interested in this are to give some deep consideration of this type of
approach.
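For illustration, a toy sketch (not Nova code) of the hash-partitioning idea
above: each scheduler claims only the compute nodes whose hash falls into its
bucket, so no scheduler needs global state:

    import hashlib

    def bucket_for(node_name, num_buckets):
        digest = hashlib.md5(node_name.encode('utf-8')).hexdigest()
        return int(digest, 16) % num_buckets

    def my_nodes(all_nodes, my_bucket, num_buckets):
        """The subset of compute nodes this scheduler polls and uses."""
        return [n for n in all_nodes
                if bucket_for(n, num_buckets) == my_bucket]

    nodes = ['compute-%03d' % i for i in range(10)]
    for sched in range(4):
        print(sched, my_nodes(nodes, sched, 4))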

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-24 Thread Alex Gaynor
I believe Red Hat's new "Software Collections" address this issue, to the
point that Django (which has historically used RHEL as a barometer for when
we could drop Python versions) will drop 2.6 in our next release.

Alex


On Wed, Jul 24, 2013 at 9:00 AM, Monty Taylor  wrote:

>
>
> On 07/23/2013 03:02 PM, Brian Curtin wrote:
> >
> > On Jul 23, 2013, at 3:51 PM, Eric Windisch wrote:
> >
> >>
> >>
> >>
> >> On Tue, Jul 23, 2013 at 4:41 PM, Logan McNaughton wrote:
> >>
> >> I'm sure this has been asked before, but what exactly is the plan
> >> for Python 3 support?
> >>
> >> Is the plan to support 2 and 3 at the same time? I was looking
> >> around for a blue print or something but I can't seem to find
> >> anything.
> >>
> >>
> >> I suppose a wiki page is due.  This was discussed at the last summit:
> >> https://etherpad.openstack.org/havana-python3
> >>
> >> The plan is to support Python 2.6+ for the 2..x series and Python
> >> 3.3+. This effort has begun for libraries (oslo) and clients. Work is
> >> appreciated on the primary projects, but will ultimately become
> >> stalled if the library work is not first completed.
>
> I'd like to add that at some point in the future it is our desire to
> drop support for 2.6, as supporting 2.7 and 3.3+ is way easier than also
> supporting 2.6. At the moment, I believe our main factor on that is the
> current version of RHEL. Fingers crossed for a new one soon... :)
>
> We are also just finishing up getting 3.3 enabled build slaves in the CI
> gate, so as projects get 3.3 compliant, we should be able to start
> testing that.
>
> > FWIW, I came across https://wiki.openstack.org/wiki/Python3Deps and
> > updated "routes", which currently works with 3.3. One small step, for
> free!
> >
> > I'm a newcomer to this list, but I'm a CPython core contributor and am
> > working in Developer Relations at Rackspace, so supporting Python 3 is
> > right up my alley.
>
> Excellent! Welcome, glad to have you.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Julien Danjou and Ben Nemec now on oslo-core

2013-07-24 Thread Davanum Srinivas
Welcome Ben and Julien!

On Wed, Jul 24, 2013 at 11:42 AM, Mark McLoughlin  wrote:
> Hey
>
> I just wanted to welcome Ben and Julien to oslo-core. They have both
> been doing a lot of high quality reviews lately and it's much
> appreciated.
>
> Welcome to the team!
>
> Cheers,
> Mark.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quick README? (Re: [vmware] VMwareAPI sub-team status update 2013-07-22)

2013-07-24 Thread Dan Wendlandt
If you are a developer using devstack, see:
https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide

If you are a user deploying from packages, see:
http://docs.openstack.org/trunk/openstack-compute/admin/content/vmware.html

Dan



On Wed, Jul 24, 2013 at 8:22 AM, Davanum Srinivas  wrote:

> Shawn, or others involved in this effort,
>
> Is there a quick README or equivalent on how to use the latest code
> say with devstack and vCenter to get a simple deploy working?
>
> thanks,
> -- dims
>
> On Mon, Jul 22, 2013 at 9:15 PM, Shawn Hartsock 
> wrote:
> >
> > ** No meeting this week **
> >
> > I have a conflict and can't run the meeting this week. We'll be back
> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130731T1700
> >
> > Two of us ran into a problem with an odd pep8 failure:
> >> E: nova.conf.sample is not up to date, please run
> tools/conf/generate_sample.sh
> >
> > Yaguang Tang gave the work around:
> > "nova.conf.sample is not up to date, please run
> tools/conf/generate_sample.sh ,then resubmit."
> >
> > I've put all these reviews under the "re-work" section. Hopefully this
> is simple and we can fix them this week.
> >
> > Blueprints targeted for Havana-3:
> > *
> https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy -
> nova.conf.sample out of date
> > *
> https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
> - needs review
> >
> > New Blueprint:
> > *
> https://blueprints.launchpad.net/nova/+spec/vmware-configuration-section
> >
> > Needs one more +2 / Approve button:
> > * https://review.openstack.org/#/c/33504/
> > * https://review.openstack.org/#/c/36411/
> >
> > Ready for core-reviewer:
> > * https://review.openstack.org/#/c/33100/
> >
> > Needs VMware API expert review (no human reviews):
> > * https://review.openstack.org/#/c/30282/
> > * https://review.openstack.org/#/c/30628/
> > * https://review.openstack.org/#/c/32695/
> > * https://review.openstack.org/#/c/37389/
> > * https://review.openstack.org/#/c/37539/
> >
> > Work/re-work in progress:
> > * https://review.openstack.org/#/c/30822/ - weird Jenkins issue, fault
> is not in the patch
> > * https://review.openstack.org/#/c/37819/ - weird Jenkins issue, fault
> is not in the patch
> > * https://review.openstack.org/#/c/34189/ - in danger of becoming
> "abandoned"
> >
> > Needs help/discussion (has a -1):
> > * https://review.openstack.org/#/c/34685/
> >
> > Meeting info:
> > * https://wiki.openstack.org/wiki/Meetings/VMwareAPI
> >
> > # Shawn Hartsock
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: http://davanum.wordpress.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [baremetal] Mixed bare-metal + hypervisor cloud using grizzly

2013-07-24 Thread Clint Byrum
First off, welcome! :)

FYI, this is the development mailing list. Questions like the one below
may find more helpful answers on the general OpenStack users' mailing list.
I have CC'd that list so that this response can be seen there too; I suggest
you re-send your original message there as well.

See https://launchpad.net/~openstack for that list.

Some info in-line.

Excerpts from Zsolt Haraszti's message of 2013-07-23 10:00:59 -0700:
> Hi,
> 
> We are very interested to set up a small OpenStack cloud with a portion of
> the servers used as bare-metal servers and the rest used as "normal" KVM
> hypervisor compute nodes. We are using grizzly, and launch with devstack
> for simplicity.
> 

I hope you are just trying to develop and not running in production. Devstack
is not meant for production, and you will spend more time fighting with it
to get it production ready than you would manually deploying everything.

Also, baremetal is not meant to be multi-tenant. The nodes are not wiped when
a server is deleted, and thus the next tenant may very well be able to read
leftover disk contents. There are also other places to hide "evil" in a machine
than just the local disks.

> For a proof-of-concept, I set up an all-in-one node (also acting as KVM
> compute node). Now I am trying to attach a second compute node running in
> baremetal mode.
> 
> Is this known to work?
> 

It is known to not work. One hypervisor per cloud is all that is allowed.
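
The constraint is visible in the configuration: both the compute driver and
the scheduler host manager are deployment-wide settings. A rough sketch of
the grizzly-era nova.conf values involved (option values quoted from memory,
so double-check them against your tree):

    # KVM compute node
    compute_driver = libvirt.LibvirtDriver

    # baremetal compute node -- note the host manager below is set on the
    # scheduler, so it applies to the whole cloud at once:
    compute_driver = nova.virt.baremetal.driver.BareMetalDriver
    scheduler_host_manager = nova.scheduler.baremetal_host_manager.BaremetalHostManager
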

> As a side note, devstack did not seem to support very well our case, i.e.,
> when the control node is not the baremetal node. A number of the automated
> steps were skipped. We worked around this by manually creating the nova_bm
> database, db sync-ing it, creating and uploading the deploy and test
> images, and adding a bare-metal flavor. If there were interest, I would be
> willing to look into modifying devstack to support our case.
> 

Devstack is meant for rapid development of OpenStack. You haven't actually
stated your end use case, but if it is anything other than development of
OpenStack, you need a new tool.

> After this, I was able to enroll an IPMI-enabled 3rd server as a
> baremetal-node, but I am unable to create a BM instance on it. The instance
> gets created in the DB, but the scheduler errors out with NoValidHost. I
> started debugging the issue by investigating the logs and looking into the
> code. I see a few things that I suspect may not be right:
> 
> If I add the second compute node as a normal KVM node, I can see the
> scheduler on the all-in-one node to show both compute nodes refreshing
> every 60 seconds. If I re-add the 2nd compute node in BM mode, I can see no
> more updates coming from that node in the scheduler.
> 
> Also, I dug into the scheduler code a bit, and I can see that in the
> scheduler/host_manager.HostManager.get_all_host_states() the call
> to db.compute_node_get_all(context) returns only one node, the all-in-one.
> 
> Both of the above suggests that the scheduler may have no visibility of the
> BM compute node, hence my troubles.
> 

Right, that is because you can only have one hypervisor type.

> I can debug this further, but I thought I'd ask first. Any pointers would be
> much appreciated.
> 

You may be interested in the OpenStack Deployment program (aka TripleO):

https://github.com/openstack/tripleo-incubator

And the current steps to deploy a development centric OpenStack:

https://github.com/openstack/tripleo-incubator/blob/master/devtest.md

(These steps, unlike devstack, are meant to be a framework for deploying
production clouds.)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quick README? (Re: [vmware] VMwareAPI sub-team status update 2013-07-22)

2013-07-24 Thread Shawn Hartsock
I am trying to put everything here for now:

https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide 

Let me know if you need more.
# Shawn Hartsock 

Davanum Srinivas  wrote:

Shawn, or others involved in this effort,

Is there a quick README or equivalent on how to use the latest code
say with devstack and vCenter to get a simple deploy working?

thanks,
-- dims

On Mon, Jul 22, 2013 at 9:15 PM, Shawn Hartsock  wrote:
>
> ** No meeting this week **
>
> I have a conflict and can't run the meeting this week. We'll be back 
> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130731T1700
>
> Two of us ran into a problem with an odd pep8 failure:
>> E: nova.conf.sample is not up to date, please run 
>> tools/conf/generate_sample.sh
>
> Yaguang Tang gave the work around:
> "nova.conf.sample is not up to date, please run tools/conf/generate_sample.sh 
> ,then resubmit."
>
> I've put all these reviews under the "re-work" section. Hopefully this is 
> simple and we can fix them this week.
>
> Blueprints targeted for Havana-3:
> * https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy - 
> nova.conf.sample out of date
> * 
> https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
>  - needs review
>
> New Blueprint:
> * https://blueprints.launchpad.net/nova/+spec/vmware-configuration-section
>
> Needs one more +2 / Approve button:
> * https://review.openstack.org/#/c/33504/
> * https://review.openstack.org/#/c/36411/
>
> Ready for core-reviewer:
> * https://review.openstack.org/#/c/33100/
>
> Needs VMware API expert review (no human reviews):
> * https://review.openstack.org/#/c/30282/
> * https://review.openstack.org/#/c/30628/
> * https://review.openstack.org/#/c/32695/
> * https://review.openstack.org/#/c/37389/
> * https://review.openstack.org/#/c/37539/
>
> Work/re-work in progress:
> * https://review.openstack.org/#/c/30822/ - weird Jenkins issue, fault is not 
> in the patch
> * https://review.openstack.org/#/c/37819/ - weird Jenkins issue, fault is not 
> in the patch
> * https://review.openstack.org/#/c/34189/ - in danger of becoming "abandoned"
>
> Needs help/discussion (has a -1):
> * https://review.openstack.org/#/c/34685/
>
> Meeting info:
> * https://wiki.openstack.org/wiki/Meetings/VMwareAPI
>
> # Shawn Hartsock
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quick README? (Re: [vmware] VMwareAPI sub-team status update 2013-07-22)

2013-07-24 Thread Davanum Srinivas
Thanks Dan and Shawn. Those links are exactly what I needed.

-- dims

On Wed, Jul 24, 2013 at 1:10 PM, Shawn Hartsock  wrote:
> I am trying to put everything here for now:
>
> https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide
>
> Let me know if you need more.
> # Shawn Hartsock
>
> Davanum Srinivas  wrote:
>
> Shawn, or others involved in this effort,
>
> Is there a quick README or equivalent on how to use the latest code
> say with devstack and vCenter to get a simple deploy working?
>
> thanks,
> -- dims
>
> On Mon, Jul 22, 2013 at 9:15 PM, Shawn Hartsock  wrote:
>>
>> ** No meeting this week **
>>
>> I have a conflict and can't run the meeting this week. We'll be back 
>> http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130731T1700
>>
>> Two of us ran into a problem with an odd pep8 failure:
>>> E: nova.conf.sample is not up to date, please run 
>>> tools/conf/generate_sample.sh
>>
>> Yaguang Tang gave the work around:
>> "nova.conf.sample is not up to date, please run 
>> tools/conf/generate_sample.sh ,then resubmit."
>>
>> I've put all these reviews under the "re-work" section. Hopefully this is 
>> simple and we can fix them this week.
>>
>> Blueprints targeted for Havana-3:
>> * https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy - 
>> nova.conf.sample out of date
>> * 
>> https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
>>  - needs review
>>
>> New Blueprint:
>> * https://blueprints.launchpad.net/nova/+spec/vmware-configuration-section
>>
>> Needs one more +2 / Approve button:
>> * https://review.openstack.org/#/c/33504/
>> * https://review.openstack.org/#/c/36411/
>>
>> Ready for core-reviewer:
>> * https://review.openstack.org/#/c/33100/
>>
>> Needs VMware API expert review (no human reviews):
>> * https://review.openstack.org/#/c/30282/
>> * https://review.openstack.org/#/c/30628/
>> * https://review.openstack.org/#/c/32695/
>> * https://review.openstack.org/#/c/37389/
>> * https://review.openstack.org/#/c/37539/
>>
>> Work/re-work in progress:
>> * https://review.openstack.org/#/c/30822/ - weird Jenkins issue, fault is 
>> not in the patch
>> * https://review.openstack.org/#/c/37819/ - weird Jenkins issue, fault is 
>> not in the patch
>> * https://review.openstack.org/#/c/34189/ - in danger of becoming "abandoned"
>>
>> Needs help/discussion (has a -1):
>> * https://review.openstack.org/#/c/34685/
>>
>> Meeting info:
>> * https://wiki.openstack.org/wiki/Meetings/VMwareAPI
>>
>> # Shawn Hartsock
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: http://davanum.wordpress.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-24 Thread Mark McLoughlin
On Wed, 2013-07-24 at 09:31 -0700, Alex Gaynor wrote:
> I believe Red Hat's new "Software Collections" thing addresses this issue,
> to the point that Django (which has historically used RHEL as a barometer
> for when we could drop Python versions) will drop 2.6 in our next release.

Yep, that's a very good point.

We're as keen as anyone else to get off Python 2.6 and AIUI some folks
on our team are hoping to get RDO Havana onto the 2.7 SCL from here:

  https://fedorahosted.org/SoftwareCollections/

So, assuming nothing crazy crops up, we should be getting close to a
point where dropping 2.6 support upstream would not be a big issue for
RHEL users.

Cheers,
Mark.

> On Wed, Jul 24, 2013 at 9:00 AM, Monty Taylor  wrote:
> 
> >
> >
> > On 07/23/2013 03:02 PM, Brian Curtin wrote:
> > >
> > > On Jul 23, 2013, at 3:51 PM, Eric Windisch wrote:
> > >
> > >>
> > >>
> > >>
> > >> On Tue, Jul 23, 2013 at 4:41 PM, Logan McNaughton wrote:
> > >>
> > >> I'm sure this has been asked before, but what exactly is the plan
> > >> for Python 3 support?
> > >>
> > >> Is the plan to support 2 and 3 at the same time? I was looking
> > >> around for a blue print or something but I can't seem to find
> > >> anything.
> > >>
> > >>
> > >> I suppose a wiki page is due.  This was discussed at the last summit:
> > >> https://etherpad.openstack.org/havana-python3
> > >>
> > >> The plan is to support Python 2.6+ for the 2.x series and Python
> > >> 3.3+. This effort has begun for libraries (oslo) and clients. Work is
> > >> appreciated on the primary projects, but will ultimately become
> > >> stalled if the library work is not first completed.
> >
> > I'd like to add that at some point in the future it is our desire to
> > drop support for 2.6, as supporting 2.7 and 3.3+ is way easier than also
> > supporting 2.6. At the moment, I believe our main factor on that is the
> > current version of RHEL. Fingers crossed for a new one soon... :)
> >
> > We are also just finishing up getting 3.3 enabled build slaves in the CI
> > gate, so as projects get 3.3 compliant, we should be able to start
> > testing that.
> >
> > > FWIW, I came across https://wiki.openstack.org/wiki/Python3Deps and
> > > updated "routes", which currently works with 3.3. One small step, for
> > free!
> > >
> > > I'm a newcomer to this list, but I'm a CPython core contributor and am
> > > working in Developer Relations at Rackspace, so supporting Python 3 is
> > > right up my alley.
> >
> > Excellent! Welcome, glad to have you.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-24 Thread Mark McLoughlin
On Wed, 2013-07-24 at 08:51 -0700, Stefano Maffulli wrote:
> Hello
> 
> I have seen lots of discussions on blogs and twitter heating up around
> Amazon API compatibility and OpenStack. This seems like a recurring
> topic, often raised by pundits and recently joined by members of the
> community. I think it's time to bring the discussions inside our
> community to our established channels and processes. Our community has
> established ways to discuss and take technical decisions, from the more
> accessible General mailing list to the Development list to the Design
> Summits, the weekly project meetings, the reviews on gerrit and the
> governing bodies Technical Committee and Board of Directors.
> 
> While we have not seen a large push in the community recently via
> contributions or deployments, Amazon APIs have been an option for
> deployments from the early days of OpenStack.
> 
> I would like to have this discussion inside the established channels of
> our community and get the opinions from those that maintain that
> OpenStack should increase efforts for Amazon APIs compatibility, and
> ultimately it would be good to see code contributions.
> 
> Do you think OpenStack should have an ongoing effort to imitate Amazon's
> API? If you think it should, how would you lead the effort?

I think AWS compatible APIs for any of our services is a great feature.
I'd love to tell people they can try out OpenStack by pointing their
existing AWS based deployment tools at an OpenStack cloud.

Just yesterday, I saw a comment on IRC along the lines of "wow, Nova has
an EC2 API ... I should totally try out using knife with that".

Two things seem straightforward and obvious to me - our primary API is
the OpenStack "native" APIs and, yet, any built-in AWS compatibility we
can get is mucho goodness.

That said, it's not "AWS compat == goodness" statements we need ... we
need people who are keen to contribute to the work.

However, the very least we should do is make it clear that if anyone
*does* step up and do that work, that we'll welcome the contributions
with open arms.

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-24 Thread Chmouel Boudjnah
On Wed, Jul 24, 2013 at 8:51 AM, Stefano Maffulli  wrote:
> Do you think OpenStack should have an ongoing effort to imitate Amazon's
> API? If you think it should, how would you lead the effort?

We (Swift) moved the S3 compatibility middleware out of core Swift quite
some time ago into its own GitHub repository[1], maintained by fujita, and
so far this has been working well for us, since most of the Swift core
don't know (or care, I guess) about the S3 API, and fujita keeps it
working.

I personally don't see an advantage in having to support S3 (it's
basically constant screen-scraping), but like Monty said I care much
more about facilitating migration for our users.

Chmouel.

[1] https://github.com/fujita/swift3

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-24 Thread Sean Dague

On 07/24/2013 01:43 PM, Mark McLoughlin wrote:

On Wed, 2013-07-24 at 08:51 -0700, Stefano Maffulli wrote:

Hello

I have seen lots of discussions on blogs and twitter heating up around
Amazon API compatibility and OpenStack. This seems like a recurring
topic, often raised by pundits and recently joined by members of the
community. I think it's time to bring the discussions inside our
community to our established channels and processes. Our community has
established ways to discuss and take technical decisions, from the more
accessible General mailing list to the Development list to the Design
Summits, the weekly project meetings, the reviews on gerrit and the
governing bodies Technical Committee and Board of Directors.

While we have not seen a large push in the community recently via
contributions or deployments, Amazon APIs have been an option for
deployments from the early days of OpenStack.

I would like to have this discussion inside the established channels of
our community and get the opinions from those that maintain that
OpenStack should increase efforts for Amazon APIs compatibility, and
ultimately it would be good to see code contributions.

Do you think OpenStack should have an ongoing effort to imitate Amazon's
API? If you think it should, how would you lead the effort?


I think AWS compatible APIs for any of our services is a great feature.
I'd love to tell people they can try out OpenStack by pointing their
existing AWS based deployment tools at an OpenStack cloud.

Just yesterday, I saw a comment on IRC along the lines of "wow, Nova has
an EC2 API ... I should totally try out using knife with that".

Two things seem straightforward and obvious to me - our primary API is
the OpenStack "native" APIs and, yet, any built-in AWS compatibility we
can get is mucho goodness.

That said, it's not "AWS compat == goodness" statements we need ... we
need people who are keen to contribute to the work.

However, the very least we should do is make it clear that if anyone
*does* step up and do that work, that we'll welcome the contributions
with open arms.


+1. Also validation of those interfaces would be appreciated. Today the 
tempest 3rdparty gate tests use the boto library, which is a good first 
step, but doesn't validate the AWS API strongly.


Those kinds of contributions are equally welcomed; we've even set aside 
a place dedicated to them in Tempest (tempest/thirdparty) where non 
"native" API testing can live.


But again, what is lacking here is mostly contributions. The more the 
merrier, and there are still many places where people can leave their 
mark on the project.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Usage of mox through out the Openstack project.

2013-07-24 Thread Chuck Short
Hi,


The use of mox (https://pypi.python.org/pypi/mox/0.5.3) across the test
suites in the OpenStack project is quite extensive. This is probably due to
the fact that it is the most familiar mocking object framework for most
python developers.

However, there is a big drawback to using mox across all of the OpenStack
projects: it is not Python 3 compatible. This makes Python 3 compliance
problematic because we want the test suites to be compatible as well.

Having thought about this problem for a while now while helping port
OpenStack over to Python 3, there are a couple of options that we as a
project can pursue:

1. Change mox usage to a more Python 3-friendly framework such as mock
(https://pypi.python.org/pypi/mock/1.0.1). However, this will cause a lot
of code churn in the projects as we move away from mox to mock (a
before/after sketch follows below).

2. Use the Python 3 fork called pymox (https://github.com/emonty/pymox).
This project has reasonable compatibility with mox and is Python 3
compatible. Using this option causes less code churn. IMHO this would be
the better option.
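
To make the churn in option 1 concrete, here is the same toy test written
both ways; FakeService and get_status are invented purely for illustration:

    import unittest

    import mock
    import mox

    class FakeService(object):
        """Stand-in class, invented for this example."""
        def get_status(self, name):
            raise NotImplementedError

    class TestWithMox(unittest.TestCase):
        def test_get_status(self):
            m = mox.Mox()
            service = m.CreateMock(FakeService)
            service.get_status('db').AndReturn('up')  # record phase
            m.ReplayAll()                             # switch to replay
            self.assertEqual('up', service.get_status('db'))
            m.VerifyAll()                             # verify expectations

    class TestWithMock(unittest.TestCase):
        def test_get_status(self):
            service = mock.Mock(spec=FakeService)
            service.get_status.return_value = 'up'    # no record/replay dance
            self.assertEqual('up', service.get_status('db'))
            service.get_status.assert_called_once_with('db')

    if __name__ == '__main__':
        unittest.main()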

I would like to hear people's opinions on this.

Thanks
chuck
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-24 Thread Brian Curtin

On Jul 23, 2013, at 4:32 PM, Doug Hellmann <doug.hellm...@dreamhost.com> wrote:




On Tue, Jul 23, 2013 at 4:41 PM, Logan McNaughton <lo...@bacoosta.com> wrote:

I'm sure this has been asked before, but what exactly is the plan for Python 3 
support?

Is the plan to support 2 and 3 at the same time? I was looking around for a 
blue print or something but I can't seem to find anything.

If Python 3 support is part of the plan, can I start running 2to3 and making 
edits to keep changes compatible with Python 2?

Eric replied with details, but I wanted to address the question of 2to3.

Using 2to3 is no longer the preferred way to port to Python 3. With changes 
that landed in 3.3, it is easier to create code that will run under python 2.7 
and 3.3, without resorting to the translation steps that were needed for 
3.0-3.2. Chuck Short has landed a series of patches modifying code by hand for 
some cases (mostly print and exceptions) and by using the six library in others 
(for iteration and module renaming).
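
For anyone who hasn't seen those patches, the changes are mostly mechanical;
a hedged sketch of the typical hand-edits and six usage (do_work is a
stand-in function invented here):

    from __future__ import print_function

    import six
    from six.moves import configparser  # renamed module, via six

    def do_work():
        raise ValueError('boom')  # stand-in so the example runs

    # print as a function works on 2.6+ and 3.x
    print('starting up')

    # new-style exception syntax instead of "except ValueError, e"
    try:
        do_work()
    except ValueError as e:
        print(e)

    # iteration helpers instead of dict.iteritems()
    for key, value in six.iteritems({'a': 1}):
        print(key, value)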

Speaking of preferred ways to port, has there been any discussion about which 
version takes precedence when we have to do different things? For example, with 
imports, should we be trying the 2.x name first and falling back to 3.x on 
ImportError, or vice versa?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-24 Thread Alex Gaynor
I think moving towards mock is a better long term strategy:

a) I don't think you're correct that it's the most familiar for most python
developers. By PyPI installs (A TERRIBLE METRIC, but it's all we have),
mock has 24k in the last week, mox has 3.5k.
b) mock is part of the standard library starting with Python 3.3, which
will lead to even more adoption.

Alex


On Wed, Jul 24, 2013 at 11:12 AM, Chuck Short wrote:

> Hi,
>
>
> The use of mox (https://pypi.python.org/pypi/mox/0.5.3) across the test
> suites in the OpenStack project is quite extensive. This is probably due to
> the fact that it is the most familiar mocking object framework for most
> python developers.
>
> However, there is a big drawback to using mox across all of the OpenStack
> projects: it is not Python 3 compatible. This makes Python 3 compliance
> problematic because we want the test suites to be compatible as well.
>
> Having thought about this problem for a while now while helping port
> OpenStack over to Python 3, there are a couple of options that we as a
> project can pursue:
>
> 1. Change mox usage to a more Python 3-friendly framework such as mock
> (https://pypi.python.org/pypi/mock/1.0.1). However, this will cause a lot
> of code churn in the projects as we move away from mox to mock.
>
> 2. Use the Python 3 fork called pymox (https://github.com/emonty/pymox).
> This project has reasonable compatibility with mox and is Python 3
> compatible. Using this option causes less code churn. IMHO this would be
> the better option.
>
> I would like to hear people's opinions on this.
>
> Thanks
> chuck
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-24 Thread Jay Pipes

On 07/24/2013 02:19 PM, Alex Gaynor wrote:

I think moving towards mock is a better long term strategy:

a) I don't think you're correct that it's the most familiar for most python
developers. By PyPI installs (A TERRIBLE METRIC, but it's all we have),
mock has 24k in the last week, mox has 3.5k.
b) mock is part of the standard library starting with Python 3.3, which
will lead to even more adoption.


++. I personally prefer mock over mox.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-24 Thread Eric Windisch
> Speaking of preferred ways to port, has there been any discussion about
> which version takes precedence when we have to do different things? For
> example, with imports, should we be trying the 2.x name first and falling
> back to 3.x on ImportError, or vice versa?
>

Are we having it now? My belief here is we should be following the
principle of "ask forgiveness, not permission": try Python 3 and then
fall back to Python 2 whenever possible.
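
For example (module pair chosen only for illustration):

    try:
        import configparser                  # Python 3 name first
    except ImportError:
        import ConfigParser as configparser  # fall back to Python 2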

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-24 Thread Kevin L. Mitchell
On Wed, 2013-07-24 at 14:12 -0400, Chuck Short wrote:
> 1. Change mox usage to a more Python 3-friendly framework such as mock
> (https://pypi.python.org/pypi/mock/1.0.1). However, this will cause
> a lot of code churn in the projects as we move away from mox to mock.
>
> 2. Use the Python 3 fork called pymox
> (https://github.com/emonty/pymox). This project has reasonable
> compatibility with mox and is Python 3 compatible. Using this option
> causes less code churn. IMHO this would be the better option.

My personal preference is that we move to mock; I think it is a better
methodology, and I like its features.
-- 
Kevin L. Mitchell 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-24 Thread Alex Meade
+1 I prefer mock over mox as well. It's more readable and intuitive. I've had a 
number of bad mox experiences lately so I'm a tad biased.

-Alex

-Original Message-
From: "Jay Pipes" 
Sent: Wednesday, July 24, 2013 2:24pm
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Usage of mox through out the Openstack project.

On 07/24/2013 02:19 PM, Alex Gaynor wrote:
> I think moving towards mock is a better long term strategy:
>
> a) I don't think you're correct that it's the most familiar for most python
> developers. By PyPI installs (A TERRIBLE METRIC, but it's all we have),
> mock has 24k in the last week, mox has 3.5k.
> b) mock is part of the standard library starting with Python 3.3, which
> will lead to even more adoption.

++. I personally prefer mock over mox.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-24 Thread Brian Curtin
On Jul 24, 2013, at 1:12 PM, Chuck Short <chuck.sh...@canonical.com> wrote:

Hi,


The use of mox (https://pypi.python.org/pypi/mox/0.5.3) across the test
suites in the OpenStack project is quite extensive. This is probably due to
the fact that it is the most familiar mocking object framework for most
python developers.

However, there is a big drawback to using mox across all of the OpenStack
projects: it is not Python 3 compatible. This makes Python 3 compliance
problematic because we want the test suites to be compatible as well.

Having thought about this problem for a while now while helping port
OpenStack over to Python 3, there are a couple of options that we as a
project can pursue:

1. Change mox usage to a more Python 3-friendly framework such as mock
(https://pypi.python.org/pypi/mock/1.0.1). However, this will cause a lot
of code churn in the projects as we move away from mox to mock.

2. Use the Python 3 fork called pymox (https://github.com/emonty/pymox).
This project has reasonable compatibility with mox and is Python 3
compatible. Using this option causes less code churn. IMHO this would be
the better option.

I would like to hear people's opinions on this.

Moving towards the standard library's unittest.mock for 3 and the external 
package for 2 is what I've done in the past, but I moved away from mocker and 
another one I forget, not mox.

Are there usages of mox that aren't served by mock? Code churn sucks, but if 
something has to change, I think there's value in moving toward the standard 
facilities if they'll do the job.
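
For the record, that arrangement is just a two-line import dance:

    try:
        from unittest import mock   # standard library on Python 3.3+
    except ImportError:
        import mock                 # external package on Python 2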
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-24 Thread Brian Curtin
On Jul 24, 2013, at 1:27 PM, Eric Windisch <e...@cloudscaling.com> wrote:


Speaking of preferred ways to port, has there been any discussion about which 
version takes precedence when we have to do different things? For example, with 
imports, should we be trying the 2.x name first and falling back to 3.x on 
ImportError, or vice versa?

Are we having it now? My belief here is we should be following the principle of
"ask forgiveness, not permission": try Python 3 and then fall back to Python 2
whenever possible.

That's my belief and preference as well.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-24 Thread Joe Gordon
On Wed, Jul 24, 2013 at 12:24 PM, Russell Bryant  wrote:

> On 07/23/2013 06:00 PM, Clint Byrum wrote:
> > This is really interesting work, thanks for sharing it with us. The
> > discussion that has followed has brought up some thoughts I've had for
> > a while about this choke point in what is supposed to be an extremely
> > scalable cloud platform (OpenStack).
> >
> > I feel like the discussions have all been centered around making "the"
> > scheduler(s) intelligent.  There seems to be a commonly held belief that
> > scheduling is a single step, and should be done with as much knowledge
> > of the system as possible by a well informed entity.
> >
> > Can you name for me one large scale system that has a single entity,
> > human or computer, that knows everything about the system and can make
> > good decisions quickly?
> >
> > This problem is screaming to be broken up, de-coupled, and distributed.
> >
> > I keep asking myself these questions:
> >
> > Why are all of the compute nodes informing all of the schedulers?
> >
> > Why are all of the schedulers expecting to know about all of the
> > compute nodes?
>

So the scheduler can try to find the globally optimum solution, see below.


> >
> > Can we break this problem up into simpler problems and distribute the
> load to
> > the entire system?
> >
> > This has been bouncing around in my head for a while now, but as a
> > shallow observer of nova dev, I feel like there are some well known
> > scaling techniques which have not been brought up. Here is my idea,
> > forgive me if I have glossed over something or missed a huge hole:
> >
> > * Schedulers break up compute nodes by hash table, only caring about
> >   those in their hash table.
> > * Schedulers, upon claiming a compute node by hash table, poll compute
> >   node directly for its information.
>

For people who want to schedule on information that is constantly changing
(such as CPU load, memory usage, etc.), how often would you poll?


> > * Requests to boot go into fanout.
> > * Schedulers get request and try to satisfy using only their own compute
> >   nodes.
> > * Failure to boot results in re-insertion in the fanout.
>

With this model we lose the ability to find the globally optimum host to
schedule on, and can only find a locally optimal solution, which sounds like
a reasonable trade-off at scale.  Going forward I can imagine nova having
several different schedulers for different requirements.  Someone who is
deploying at a massive scale will probably accept an optimal solution (and
a scheduler that scales better), but someone with a smaller cloud will want
the globally optimum solution.


> >
> > This gives up the certainty that the scheduler will find a compute node
> > for a boot request on the first try. It is also possible that a request
> > gets unlucky and takes a long time to find the one scheduler that has
> > the one last "X" resource that it is looking for. There are some further
> > optimization strategies that can be employed (like queues based on hashes
> > already tried.. etc).
> >
> > Anyway, I don't see any point in trying to hot-rod the intelligent
> > scheduler to go super fast, when we can just optimize for having many
> > many schedulers doing the same body of work without blocking and without
> > pounding a database.
>
> These are some *very* good observations.  I'd like all of the nova folks
> interested in this area to give some deep consideration to this type of
> approach.
>
>
I agree an approach like this is very interesting and is something worth
exploring, especially at the summit.  There are some clear pros and cons
to an approach like this.  For example, this will scale better, but cannot
find the optimum node to schedule on.  My question is, at what scale does
it make sense to adopt an approach like this?  And how can we improve our
current scheduler to scale better, not that it will ever scale better than
the idea proposed here.
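
To make the partitioning idea concrete, here is a toy sketch of the
hash-table claim (all names invented; a real version would use consistent
hashing and handle nodes joining and leaving):

    import hashlib

    NUM_SCHEDULERS = 4

    def owner(node_name):
        """Map a compute node to the scheduler index that claims it."""
        digest = hashlib.md5(node_name.encode('utf-8')).hexdigest()
        return int(digest, 16) % NUM_SCHEDULERS

    class PartitionedScheduler(object):
        def __init__(self, index, all_nodes):
            self.index = index
            # Only track (and poll) our own slice of the compute nodes.
            self.my_nodes = [n for n in all_nodes if owner(n) == self.index]

        def fits(self, node, request):
            return True  # stand-in for a real capacity check

        def try_schedule(self, request):
            for node in self.my_nodes:
                if self.fits(node, request):
                    return node
            return None  # caller re-inserts the request into the fanout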

While talking about scale there are some other big issues, such as RPC, that
need to be sorted out as well.


>  --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-24 Thread Russell Bryant
On 07/24/2013 02:32 PM, Kevin L. Mitchell wrote:
> On Wed, 2013-07-24 at 14:12 -0400, Chuck Short wrote:
>> 1. Change mox usage to a more Python 3-friendly framework such as mock
>> (https://pypi.python.org/pypi/mock/1.0.1). However, this will cause
>> a lot of code churn in the projects as we move away from mox to mock.
>>
>> 2. Use the Python 3 fork called pymox
>> (https://github.com/emonty/pymox). This project has reasonable
>> compatibility with mox and is Python 3 compatible. Using this option
>> causes less code churn. IMHO this would be the better option.
> 
> My personal preference is that we move to mock; I think it is a better
> methodology, and I like its features.
> 

That's fine with me if everyone feels that way.  I'm afraid it's not a
quick move because of how much we're using mox.  A practical approach
would probably be:

1) Prefer mock for new tests.

2) Use suggestion #2 above to mitigate the Python 3 concern.

3) Convert tests to mock over time, opportunistically, as tests are
being updated anyway.  (Or if someone *really* wants to take this on as
a project ...)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-24 Thread Alex Meade
+1 to everything Russell just said and of course Blueprints for this. One for 
#3 (changing from mox -> Mock) would be good so that anyone who is bored or 
finds this urgent can collaborate. Also, we need to make sure reviewers are 
aware (Hopefully they are reading this).

-Alex

-Original Message-
From: "Russell Bryant" 
Sent: Wednesday, July 24, 2013 2:45pm
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Usage of mox through out the Openstack project.

On 07/24/2013 02:32 PM, Kevin L. Mitchell wrote:
> On Wed, 2013-07-24 at 14:12 -0400, Chuck Short wrote:
>> 1. Change mox usage to a more Python 3-friendly framework such as mock
>> (https://pypi.python.org/pypi/mock/1.0.1). However, this will cause
>> a lot of code churn in the projects as we move away from mox to mock.
>>
>> 2. Use the Python 3 fork called pymox
>> (https://github.com/emonty/pymox). This project has reasonable
>> compatibility with mox and is Python 3 compatible. Using this option
>> causes less code churn. IMHO this would be the better option.
> 
> My personal preference is that we move to mock; I think it is a better
> methodology, and I like its features.
> 

That's fine with me if everyone feels that way.  I'm afraid it's not a
quick move because of how much we're using mox.  A practical approach
would probably be:

1) Prefer mock for new tests.

2) Use suggestion #2 above to mitigate the Python 3 concern.

3) Convert tests to mock over time, opportunistically, as tests are
being updated anyway.  (Or if someone *really* wants to take this on as
a project ...)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-24 Thread Adam Young

On 07/23/2013 03:56 PM, David Chadwick wrote:



On 23/07/2013 18:36, Adam Young wrote:

On 07/23/2013 01:17 PM, David Chadwick wrote:

Of course the tricky thing is knowing which object attributes to fetch
for which user API requests. In the general case you cannot assume
that Keystone knows the format or structure of the policy rules, or
which attributes each will need, so you would need a specific tailored
context handler to go with a specific policy engine. This implies that
the context handler and policy engine should be pluggable Keystone
components that it calls, and that can be switchable as people decide
use different policy engines.

We are using a model where Keystone plays the mediator, and decides what
attributes to include.  The only attributes we currently claim to
support are


what I am saying is that, in the long term, this model is too 
restrictive. It would be much better for Keystone to call a plugin 
module that determines which attributes are needed to match the policy 
engine that is implemented.


An interesting model:  attribute sets based on the service

for nova provide:  project role assignments
for swift provide: user name

and so forth.






userid
domainid
role_assignments: a collection of tuples  (project, role)


I thought in your blog post you said "While OpenStack calls this Role 
Based Access Control (RBAC) there is nothing in the mechanism that 
specifies that only roles can be used for these decisions. Any 
attribute in the token response could reasonably be used to 
provide/deny access. Thus, we speak of the token as containing 
authorization attributes."


That is true.  We just put a very limited set of attributes in the token 
at present.




Thus the plugin should be capable of adding any attribute to the 
request to the policy engine.
Yes it can, and I think we need a way to manage the set of attributes 
that are bound in a token.







Objects in openstack are either owned by users (in Swift) or by Projects
(Nova and elsewhere).  Thus, providing userid and role_assignments
should be sufficient to make access decisions.


this is too narrow a viewpoint and contradicts your blog posting.
No, this is what is required today.  If there were additional 
attributes, they could be used.




If there are other attributes that people want to consume for policy
enforcement, they can add them to custom token providers.


the token is not the only place that attributes can come from. The 
token contains subject attributes, but there are also resource 
attributes and environmental attributes that may be needed by the 
policy engine. Thus I am suggesting that we should design for this 
eventuality. I think that re-engineering the existing code base should 
allow the context handler to be pluggable, whilst the first 
implementation will simply use the attributes that are currently being 
used, so that you have backwards compatibility.
I think we can do that with the current implementation.  I am not 
certain if the policy engine as it is currently implemented has access 
to the entire HTTP request, but expanding it to have access should not 
be difficult.


The biggest drawback is the fact that the rules are on "method name" and 
thus you might have two "create" methods that conflict.
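
For readers following along, the rules in question are the policy.json
entries keyed by those method names; a sketch mixing today's style with a
target-attribute rule of the kind being discussed (the target.* syntax is
the proposal, not current behaviour):

    {
        "admin_required": "role:admin",
        "identity:create_user": "rule:admin_required",
        "identity:update_project":
            "rule:admin_required or project_id:%(target.project.id)s"
    }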




regards

David


The policy enforcement mechanism is
flexible enough that extending it to other attributes should be fairly
straightforward.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-24 Thread Boris Pavlovic
Hi Mike,


On Wed, Jul 24, 2013 at 1:01 AM, Mike Wilson  wrote:

> Again I can only speak for qpid, but it's not really a big load on the
> qpidd server itself. I think the issue is that the updates come in serially
> into each scheduler that you have running. We don't process those quickly
> enough for it to do any good, which is why the lookup from db. You can see
> this for yourself using the fake hypervisor, launch yourself a bunch of
> simulated nova-compute, launch a nova-scheduler on the same host and even
> with 1k or so you will notice the latency between the update being sent and
> the update actually meaning anything for the scheduler.
>
> I think a few points that have been brought up could mitigate this quite a
> bit. My personal view is the following:
>
> -Only update when you have to (ie. 10k nodes all sending update every
> periodic interval is heavy, only send when you have to)
> -Don't fanout to schedulers, update a single scheduler which in turn
> updates a shared store that is fast such as memcache
>
> I guess that effectively is what you are proposing with the added twist of
> the shared store.
>


Absolutely agree with this. Especially with using memcached (or redis) as
common storage for all schedulers.
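
A minimal sketch of that shared-store pattern, assuming the python-memcached
client (the key layout is invented):

    import json

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def publish_host_state(host, free_ram_mb, free_disk_gb):
        # One writer updates the store; every scheduler reads from it.
        state = {'free_ram_mb': free_ram_mb, 'free_disk_gb': free_disk_gb}
        mc.set('host_state/%s' % host, json.dumps(state), time=60)

    def get_host_state(host):
        raw = mc.get('host_state/%s' % host)
        return json.loads(raw) if raw else None  # None == stale/unknown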

Best regards,
Boris Pavlovic
---
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-24 Thread Anita Kuno
We had this discussion in Ironic a while back, and part of the case for 
keeping mox going forward was familiarity amongst most of the coders 
currently submitting patches, and thus speed of development, a situation 
that falls under your observations, Chuck.


Would it be possible to create a wiki page of some sort presented as a 
tutorial supporting the mox to mock transition for those who know mox 
and would like to learn mock? Perhaps including some links to patches 
where some of this transition is already merged?


If any mock users are able to contribute to this, I think it could help.

Thanks,
Anita.


On 13-07-24 02:24 PM, Jay Pipes wrote:

On 07/24/2013 02:19 PM, Alex Gaynor wrote:

I think moving towards mock is a better long term strategy:

a) I don't think you're correct that it's the most familiar for most python
developers. By PyPI installs (A TERRIBLE METRIC, but it's all we have),
mock has 24k in the last week, mox has 3.5k.
b) mock is part of the standard library starting with Python 3.3, which
will lead to even more adoption.


++. I personally prefer mock over mox.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] erasure codes, digging deeper

2013-07-24 Thread Pete Zaitcev
On Thu, 18 Jul 2013 12:31:02 -0500
Chuck Thier  wrote:

> I'm with Chmouel though.  It seems to me that EC policy should be chosen by
> the provider and not the client.  For public storage clouds, I don't think
> you can make the assumption that all users/clients will understand the
> storage/latency tradeoffs and benefits.

Would not tiered pricing make them figure it out quickly?
Make EC cheaper by the factor of the cost of storage used, and voila.

At first I also had a violent reaction to this kind of exposure
of internals. After all S3 went this far while being entirely
opaque. But we're not S3, that's the key.

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Meeting agenda for Thu Jul 25th at 1500 UTC

2013-07-24 Thread Julien Danjou
The Ceilometer project team holds a meeting in #openstack-meeting, see
https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.

Next meeting is on Thu Jul 25th at 1500 UTC 

Please add your name with the agenda item, so we know who to call on during
the meeting.
* Action from previous meeting
  * nealph to finish looking into tempest QA efforts 
  * eglynn release python-ceilometerclient as soon as
https://review.openstack.org/#/c/37410/ gets in and unblock
https://review.openstack.org/#/c/36905/
* Add logging to our IRC channel? - dhellmann
  * The infra team is going to make logging more widely available. Do we
want it activated in #openstack-ceilometer?
* Review Havana-3 milestone
  * https://launchpad.net/ceilometer/+milestone/havana-3
* Release python-ceilometerclient? 
* Splitting CADF support out of Ceilometer --
  https://review.openstack.org/#/c/31969 -- dhellmann/jd 
* Open discussion

If you are not able to attend or have additional topic(s) you would like
to add, please update the agenda on the wiki.

Cheers,
-- 
Julien Danjou
# Free Software hacker # freelance consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Program Description for OpenStack Documentation

2013-07-24 Thread Anne Gentle
Program Name: OpenStack Documentation
PTL: Anne Gentle
Mission Statement: Provide documentation for core OpenStack projects to
promote OpenStack. Develop and maintain tools and processes to ensure
quality, accurate documentation. Treat documentation like OpenStack code.

Details: Documentation is an essential effort to meet the goals of the
OpenStack mission. We collaborate tightly with development and maintain
open tools and processes like OpenStack development. We review each other's
patches, continuously publish, track doc bugs and monitor code patches for
doc impact. We provide documentation for installation and system
administration of OpenStack clouds at http://docs.openstack.org. We provide
API documentation for cloud consumers at http://api.openstack.org.

The docs-core team consists of about 20 members who review documentation
patches, log and triage doc bugs, moderate comments on the documentation,
write documentation, provide tooling for automation and publication, and
maintain search and navigation for the docs.openstack.org and
api.openstack.org sites. We also support community-centric book sprints,
intensive documentation efforts focused on a single deliverable. The number
of documentation contributors in any given six-month release are about 3-4
times the size of the core team.

We offer a framework and tools for creating and maintaining documentation
for OpenStack user roles as defined by the User Committee [1]:

- A consumer who is submitting work, storing data or interacting with an
OpenStack cloud
- An operator who is running a public or private openstack cloud
- An ecosystem partner who is developing solutions such as software and
services around OpenStack. This corresponds to the “built for openstack”
trademark requirements.
- A distribution provider or appliance vendor that is providing packaged
solutions and support of OpenStack

Expected deliverables and repositories
The OpenStack Documentation program maintains and governs these
repositories:
openstack/openstack-manuals
openstack/api-site
openstack/operations-guide

These repositories are co-governed with project core and docs-core having
approval permissions:
openstack/compute-api
openstack/object-api
openstack/netconn-api
openstack/image-api
openstack/identity-api
openstack/volume-api

Note: one integrated project repo is co-governed by project core and
docs-core:
database-api

As an example, here is a general mapping for a project's documentation,
such as the Images project, Glance:
glance/doc/source/ should contain information for contributors to the
Glance project itself.
openstack/openstack-manuals/ contains installation and administration
information.
openstack/image-api/ contains API specifications.
openstack/api-site/ contains API reference information only.

Since we cannot govern all documentation equally with the resources
available, our focus is core first and users first, with collaborative
efforts to provide coaching and processes to complete documentation for
OpenStack core and integrated projects and additional audiences.

Thanks for reading this far -- input and questions welcomed.
Anne

1.
https://docs.google.com/document/d/1yD8TfqUik2dt5xo_jMVHMl7tw9oIJnEndLK8YqEToyo/edit
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Challenges with highly available service VMs - port and security group options.

2013-07-24 Thread Aaron Rosen
On Wed, Jul 24, 2013 at 12:42 AM, Samuel Bercovici wrote:

>  Hi,
>
> This might be apparent but not to me.
>
> Can you point to how broadcast can be turned on a network/port?
>

There is currently no way to prevent it, so it's on by default.

>
> As for the
> https://github.com/openstack/neutron/blob/master/neutron/extensions/portsecurity.py,
> in NVP, does this totally disable port security on a port/network or it
> just disable the MAC/IP checks and still allows the “user defined” port
> security to take effect?
>

Port security is currently derived from the fixed_ips and mac_address
fields on the port. Disabling it removes the filtering done on the
fixed_ips and mac_address fields.


> 
>
> This looks like an extension only implemented by NVP, do you know if there
> are similar implementations for other plugins?
>

No, the other plugins do not currently have a way to disable spoofing
dynamically (it can only be disabled globally).
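
For reference, with the extension loaded the toggle is just an attribute on
the port (or network); a sketch using python-neutronclient (credentials and
the port UUID are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0/')

    # Disable the MAC/IP anti-spoofing checks on a single port.
    neutron.update_port('PORT_UUID',
                        {'port': {'port_security_enabled': False}})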


>
> Regards,
> -Sam.
>
> From: Salvatore Orlando [mailto:sorla...@nicira.com]
> Sent: Sunday, July 21, 2013 9:56 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Neutron] Challenges with highly available
> service VMs - port and security group options.
>
> On 19 July 2013 13:14, Aaron Rosen wrote:
>
> On Fri, Jul 19, 2013 at 1:55 AM, Samuel Bercovici wrote:
>
> Let me explain what we need, which is more than just disabling spoofing.
>
> 1. Be able to allow MACs which are not defined on the port level to
> transmit packets (for example VRRP MACs) == turn off MAC spoofing
>
> For this it seems you would need to implement the port security extension
> which allows one to enable/disable port spoofing on a port.
>
> This would be one way of doing it. The other would probably be adding a
> list of allowed VRRP MACs, which should be possible with the blueprint
> pointed by Aaron.
>
> 2. Be able to allow IPs which are not defined on the port level to
> transmit packets (for example, IP used for HA service that moves between
> an HA pair) == turn off IP spoofing
>
> It seems like this would fit your use case perfectly:
> https://blueprints.launchpad.net/neutron/+spec/allowed-address-pairs
>
> 3. Be able to allow broadcast messages on the port (for example for VRRP
> broadcast) == allow broadcast.
>
> Quantum does not have an abstraction for disabling this, so we already
> allow this by default.
>
> Regards,
> -Sam.

Re: [openstack-dev] [tripleo] removing sudoers.d rules from disk-image-builder

2013-07-24 Thread Derek Higgins

+1 to removing the sudoers rules we have; they're adding overhead and
contain enough wildcards that all they do is give people a false sense
of security.

On 23/07/13 17:39, Chris Jones wrote:
> Hi
> 
> On 23 July 2013 10:52, Robert Collins wrote:
> 
> So I'd like to change things to say:
>  - either run sudo disk-image-create or
> 
> 
> This is probably the simplest option, but it does increase the amount of
> code we're running with elevated privileges, which might be a concern,
> but probably isn't, given the ratio of stuff that currently runs without
> sudo, to the stuff that does.
> I think we also need to do a little work to make this option functional,
> a quick test just now suggests we are doing something wrong with
> ELEMENTS_PATH at least.
>  
> 
>  - setup passwordless sudo or
> 
> 
> Doesn't sound like a super awesome option to me, it places an ugly
> security problem on anyone wanting to set this up anywhere, imo.


This idea seems best to me: keeping passwordless sudo for a specific
user (not all users, as with the current method) and only running the
parts of di-b that need privileges as root makes it less likely that
accidents will happen with buggy code.

I don't think it's any worse than the security implications of running
di-b as root.
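
i.e. something like this in /etc/sudoers.d/, scoped to one user and one
command (path and username are illustrative):

    # /etc/sudoers.d/img-build
    stack ALL=(root) NOPASSWD: /usr/local/bin/disk-image-create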

>  
> 
>  - don't run unattended.
> 
> 
> I like being able to run a build while I read email or do some reviews,
> so I do not like this option ;)
> 
> I think if we make option 1 work, then option 2 is a viable option for
> people who want it, they have a single command to allow in sudoers.
> Option 3 essentially works in all scenarios :)
>  
> FWIW I do quite like the implicit auditing of sudo commands that is
> currently required to manually create the sudoers file, but I take your
> point that it's probably unnecessary work at this point.
> 
> Cheers,
> 
> Chris
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Alex Glikson
"Day, Phil"  wrote on 24/07/2013 12:39:16 PM:
> 
> If you want to provide a user with a choice about how much overcommit
> they will be exposed to then doing that in flavours and the 
> aggregate_instance_extra_spec filter seems the more natural way to 
> do this, since presumably you'd want to charge differently for those
> and the flavour list is normally what is linked to the pricing model. 

So, there are two aspects here. First, whether the policy should be part of 
the flavor definition or separate. I claim that in some cases it would make 
sense to specify it separately. For example, if we want to support 
multiple policies for the same virtual hardware configuration, making the 
policy part of the flavor extra spec would potentially multiply the 
number of virtual hardware configurations (which is what flavors 
essentially are) by the number of policies -- contributing to an explosion 
in the number of flavors in the system. Moreover, although in some cases you 
would want the user to be aware of and distinguish between policies, this is 
not always the case. For example, the admin may want to apply a 
consolidation/packing policy in one aggregate, and spreading in another. 
Showing two different flavors does not seem reasonable in such cases.

Secondly, even if the policy *is* defined in flavor extra spec, I can see 
value in having a separate filter to handle it. I personally see the main 
use-case for the extra spec filter in supporting matching of capabilities. 
Resource management policy is something which should be hidden, or at 
least abstracted, from the user. And enforcing it with a separate filter 
could be a 'cleaner' design, and also more convenient -- both from the 
developer and the admin perspective.

> I also like the approach taken by the recent changes to the ram 
> filter where the scheduling characteristics are defined as 
> properties of the aggregate rather than separate stanzas in the 
> configuration file.

Indeed, a subset of the scenarios we had in mind can be implemented by 
making each property of each filter/weight an explicit key-value of the 
aggregate, and making each of the filters/weights aware of those aggregate 
properties (a minimal sketch of that style follows the list below).
However, our design has several potential advantages, such as:
1) different policies can have different sets of filters/weights
2) different policies can even be enforced by different drivers
3) the configuration is more maintainable -- the admin defines policies in 
one place, and not in 10 places (if you have a large environment with 10 
aggregates). One of the side-effects is improved consistency -- if the 
admin needs to change a policy, he needs to do it in one place, and he can 
be sure that all the aggregates comply with one of the valid policies. 
4) the developer of filters/weights does not need to care whether the 
parameters are persisted in nova.conf or in aggregate properties
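
To make the contrast concrete, here is a minimal sketch of the
aggregate-property style, loosely modelled on the revised ram filter Phil
mentions. The metadata key and host_state attributes are assumptions for
illustration, not actual Nova code:

    # Minimal sketch of the per-aggregate style for contrast; the metadata
    # key and host_state attributes are assumptions, not actual Nova code.
    class AggregateRamFilterSketch(object):
        default_ratio = 1.5  # stand-in for a nova.conf default

        def host_passes(self, host_state, filter_properties):
            ratio = self.default_ratio
            for aggregate in getattr(host_state, 'aggregates', []):
                if 'ram_allocation_ratio' in aggregate.metadata:
                    ratio = float(aggregate.metadata['ram_allocation_ratio'])
            requested = filter_properties['instance_type']['memory_mb']
            limit = host_state.total_usable_ram_mb * ratio
            return host_state.used_ram_mb + requested <= limit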

> An alternative, and the use case I'm most interested in at the 
> moment, is where we want the user to be able to define the 
> scheduling policies on a specific set of hosts allocated to them (in
> this case they pay for the host, so if they want to oversubscribe on
> memory/cpu/disk then they should be able to). 
[...]
> It's not clear to me if what you're proposing addresses an additional 
> gap between this and the combination of the aggregate_extra_spec 
> filter + revised filters to get their configurations from aggregates?

IMO, this can be done with our proposed implementation. 
Going forward, I think that policies should be first-class citizens 
(rather than static sections in nova.conf, or just sets of key-value pairs 
associated with aggregates). Then we can provide APIs to manage them in a 
more flexible manner.

Regards,
Alex

> Cheers,
> Phil
> 
> > -Original Message-
> > From: Russell Bryant [mailto:rbry...@redhat.com]
> > Sent: 23 July 2013 22:32
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Nova] support for multiple active 
scheduler
> > policies/drivers
> > 
> > On 07/23/2013 04:24 PM, Alex Glikson wrote:
> > > Russell Bryant  wrote on 23/07/2013 07:19:48 PM:
> > >
> > >> I understand the use case, but can't it just be achieved with 2
> > >> flavors and without this new aggreagte-policy mapping?
> > >>
> > >> flavor 1 with extra specs to say aggregate A and policy Y flavor 2
> > >> with extra specs to say aggregate B and policy Z
> > >
> > > I agree that this approach is simpler to implement. One of the
> > > differences is the level of enforcement that instances within an
> > > aggregate are managed under the same policy. For example, nothing
> > > would prevent the admin from defining 2 flavors with conflicting
> > > policies that can be applied to the same aggregate. Another aspect of
> > > the same problem is the case when the admin wants to apply 2 different
> > > policies in 2 aggregates with the same capabilities/properties. A
> > > natural way to
> > > distinguish between the two would be to add an artificial property
> > > that would

Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-24 Thread Joshua Harlow
As for the 'send only when you have to' idea: that reminds me of a piece of 
work that could be resurrected, which slowed down the periodic updates when 
nothing was changing.

https://review.openstack.org/#/c/26291/

Could be brought back, the concept still feels useful imho. But maybe not to 
others :-P

From: Boris Pavlovic <bo...@pavlovic.me>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Wednesday, July 24, 2013 12:12 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] A simple way to improve nova scheduler

Hi Mike,


On Wed, Jul 24, 2013 at 1:01 AM, Mike Wilson <geekinu...@gmail.com> wrote:
Again, I can only speak for qpid, but it's not really a big load on the qpidd 
server itself. I think the issue is that the updates come in serially to each 
scheduler that you have running. We don't process them quickly enough for them 
to do any good, which is why we fall back to the lookup from the db. You can 
see this for yourself using the fake hypervisor: launch a bunch of simulated 
nova-compute services and a nova-scheduler on the same host, and even with 1k 
or so nodes you will notice the latency between an update being sent and that 
update actually meaning anything to the scheduler.

I think a few points that have been brought up could mitigate this quite a bit. 
My personal view is the following:

-Only update when you have to (i.e. 10k nodes all sending updates every 
periodic interval is heavy; only send when you have to)
-Don't fan out to schedulers; update a single scheduler which in turn updates 
a fast shared store such as memcache

I guess that effectively is what you are proposing with the added twist of the 
shared store.


Absolutely agree with this. Especially with using memcached (or redis) as 
common storage for all schedulers.
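
A rough sketch of what that could look like with python-memcached -- the key
layout and field names below are invented for illustration:

    # Rough sketch: one consumer of compute updates writes host state to
    # memcached; every scheduler reads from there instead of keeping its
    # own copy fed by a fanout.  Key layout and fields are invented.
    import json
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def on_compute_update(host, free_ram_mb, free_disk_gb):
        mc.set('host_state:%s' % host,
               json.dumps({'free_ram_mb': free_ram_mb,
                           'free_disk_gb': free_disk_gb}),
               time=60)  # stale entries age out if a host goes quiet

    def get_host_state(host):
        raw = mc.get('host_state:%s' % host)
        return json.loads(raw) if raw else None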

Best regards,
Boris Pavlovic
---
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-24 Thread Alex Glikson
Russell Bryant  wrote on 24/07/2013 07:14:27 PM:
> 
> I really like your point about not needing to set things up via a config
> file.  That's fairly limiting since you can't change it on the fly via
> the API.

True. As I pointed out in another response, the ultimate goal would be to 
have policies as 'first-class citizens' in Nova, including a DB table, 
API, etc. Maybe even a separate policy service? But in the meantime, it 
seems that the approach with a config file is a reasonable compromise in 
terms of usability, consistency and simplicity.
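
Purely as a thought experiment -- none of the following exists in Nova -- a
first-class policy resource might look something like this:

    # Hypothetical only: a sketch of what a first-class policy resource
    # could look like if policies moved out of nova.conf.  Names invented.
    policy = {
        'policy': {
            'name': 'consolidation',
            'filters': ['RamFilter', 'ComputeFilter'],
            'weights': {'RAMWeigher': -1.0},  # negative multiplier = pack
            'driver': 'filter_scheduler',
        }
    }
    # e.g. POST /v2/{tenant_id}/scheduler-policies with the body above,
    # then associate the policy with an aggregate by name.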

Regards,
Alex

> -- 
> Russell Bryant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-24 Thread Tiwari, Arvind
I have added my proposal @ https://etherpad.openstack.org/api_policy_on_target.

Thanks,
Arvind

-Original Message-
From: Henry Nash [mailto:hen...@linux.vnet.ibm.com] 
Sent: Wednesday, July 24, 2013 8:46 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [keystone] Extending policy checking to include 
target entities

I think we should transfer this discussion to the etherpad for this blueprint: 
https://etherpad.openstack.org/api_policy_on_target

I have summarised the views of this thread there already, so let's make any 
further comments there, rather than here.

Henry
On 24 Jul 2013, at 00:29, Simo Sorce wrote:

> On Tue, 2013-07-23 at 23:47 +0100, Henry Nash wrote:
>> ...the problem is that if the object does not exist we might not be able to 
>> tell whether the user is authorized or not (since authorization might depend 
>> on attributes of the object itself)... so how do we know whether to lie or 
>> not?
> 
> If the error you return is always 'Not Found', why do you care ?
> 
> Simo.
> 
>> Henry
>> On 23 Jul 2013, at 21:23, David Chadwick wrote:
>> 
>>> 
>>> 
>>> On 23/07/2013 19:02, Henry Nash wrote:
 One thing we could do is:
 
 - Return Forbidden or NotFound if we can determine the correct answer
 - When we can't (i.e. the object doesn't exist), then return NotFound
 unless a new config value 'policy_harden' (?) is set to true (default
 false) in which case we translate NotFound into Forbidden.
>>> 
>>> I am not sure that this achieves your objective of no data leakage through 
>>> error codes, does it?
>>> 
>>> It's not a question of determining the correct answer or not, it's a question 
>>> of whether the user is authorised to see the correct answer or not
>>> 
>>> regards
>>> 
>>> David
 
 Henry
 On 23 Jul 2013, at 18:31, Adam Young wrote:
 
> On 07/23/2013 12:54 PM, David Chadwick wrote:
>> When writing a previous ISO standard the approach we took was as follows
>> 
>> Lie to people who are not authorised.
> 
> Is that your verbage?  I am going to reuse that quote, and I would
> like to get the attribution correct.
> 
>> 
>> So applying this approach to your situation, you could reply Not
>> Found to people who are authorised to see the object if it had
>> existed but does not, and Not Found to those not authorised to see
>> it, regardless of whether it exists or not. In this case, only those
>> who are authorised to see the object will get it if it exists. Those
>> not authorised cannot tell the difference between objects that dont
>> exist and those that do exist
> 
> So, to try and apply this to a semi-real example:  There are two types
> of URLs.  Ones that are like this:
> 
> users/55FEEDBABECAFE
> 
> and ones like this:
> 
> domain/66DEADBEEF/users/55FEEDBABECAFE
> 
> 
> In the first case, you are selecting against a global collection, and
> in the second, against a scoped collection.
> 
> For unscoped, you have to treat all users as equal, and thus a 404
> probably makes sense.
> 
> For a scoped collection we could return a 404 or a 403 Forbidden
> based on the user's
> credentials:  all resources under domain/66DEADBEEF  would show up
> as 403s regardless of existence if the user had no roles in
> the domain 66DEADBEEF.  A user that would be allowed access to
> resources in 66DEADBEEF  would get a 403 only for an object that
> existed but that they had no permission to read, and a 404 for a
> resource that doesn't exist.
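
A compact sketch of the decision rule Adam describes -- the user/object
fields here are invented stand-ins for real authorization checks:

    # Sketch of the scoped-collection rule above; the dict fields are
    # invented stand-ins for real authorization checks.
    def scoped_get_status(user, domain_id, obj):
        if domain_id not in user.get('role_domains', []):
            return 403          # everything in the domain is Forbidden
        if obj is None:
            return 404          # authorized caller, object really absent
        if obj.get('owner') != user.get('id'):
            return 403          # exists, but this caller may not read it
        return 200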
> 
> 
> 
> 
>> 
>> regards
>> 
>> David
>> 
>> 
>> On 23/07/2013 16:40, Henry Nash wrote:
>>> Hi
>>> 
>>> As part of bp
>>> https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
>>> I have uploaded some example WIP code showing a proposed approach
>>> for just a few API calls (one easy, one more complex). I'd
>>> appreciate early feedback on this before I take it any further.
>>> 
>>> https://review.openstack.org/#/c/38308/
>>> 
>>> A couple of points:
>>> 
>>> - One question is on how to handle errors when you are going to get
>>> a target object before doing your policy check.  What do you do if
>>> the object does not exist?  If you return NotFound, then someone
>>> who was not authorized could troll for the existence of entities by
>>> seeing whether they got NotFound or Forbidden. If, however, you
>>> return Forbidden, then users who are authorized to, say, manage
>>> users in a domain would always get Forbidden for objects that didn't
>>> exist (since we can't know where the non-existent object was!).  So
>>> this would modify the expected return codes.
>>> 
>>> - I really think we need some good documentation on how to bud
>

[openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting by default

2013-07-24 Thread Joe Gordon
Hi all

I have proposed a patch to disable per-user rate limiting by default:
https://review.openstack.org/#/c/34821/. And at Russell's request: does
anyone care, or prefer this to be enabled by default?

Here is some more context:

Earlier rate limiting discussion:
http://www.gossamer-threads.com/lists/openstack/operators/28599
Related bug: https://bugs.launchpad.net/tripleo/+bug/1178529
rate limiting is per process, and doesn't act as expected in a
multi-process environment: https://review.openstack.org/#/c/36516/
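
A toy illustration of the per-process point (numbers are made up): each API
worker keeps its own counter, so N workers quietly allow N times the
configured limit.

    # Toy illustration: each API worker holds its own counter, so N
    # workers allow roughly N times the configured per-user limit.
    import time

    class PerProcessLimiter(object):
        def __init__(self, max_per_minute):
            self.max = max_per_minute
            self.window_start = time.time()
            self.count = 0

        def allow(self):
            now = time.time()
            if now - self.window_start >= 60:
                self.window_start, self.count = now, 0
            self.count += 1
            return self.count <= self.max

    # With 8 workers each holding PerProcessLimiter(10), a client that
    # spreads requests across workers gets ~80/minute, not 10.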

best,
Joe Gordon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking] quantum...oops neutron unit tests

2013-07-24 Thread Jian Wen
Hello,

I had trouble with run-tests.sh.
`tox -epy27` works.

On Fri, Jun 21, 2013 at 12:32 AM, Armando Migliaccio  wrote:

> Folks,
>
> Is anyone having troubles running the units tests locally on a clean venv
> with both run-tests.sh and tox?
>
> I found out that this is relevant to the issue I am seeing:
>
> https://answers.launchpad.net/quantum/+question/230219
>
> I cannot go past the ML2 unit tests, namely only 1900~ tests run, and then
> the runner just dies.
>
> Thanks,
> Armando
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Cheers,
Jian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Stackalytics] 0.1 release

2013-07-24 Thread Alex Freedland
Roman,

Thank you for your comment. I agree that it should not be the only way to
look at the statistics, and that is why Stackalytics also measures the
number of contributions and will soon add the number of reviews. I do,
however, think it is a useful statistic, because not all commits are
created equal.

To your argument that developers will write longer code just for the
sake of statistics: I think this will not happen en masse. First and
foremost, developers care about their reputations, and knowing that
their code is peer-reviewed, very few will intentionally write inefficient
code just to get their numbers up. Those few who do choose this route
will lose the respect of their peers and consequently will not be able to
contribute as much.

Also, in order to deal with the situations where people can manipulate the
numbers, Stackalytics allows anyone in the community to correct the line
count where it does not make sense.  (
https://wiki.openstack.org/wiki/Stackalytics#Commits_metrics_corrections_and_a_common_sense_approach
).

We welcome any other improvements and suggestions on how to make OpenStack
statistics more transparent, meaningful and reliable.

Alex Freedland




On Tue, Jul 23, 2013 at 7:25 AM, Roman Prykhodchenko <
rprikhodche...@mirantis.com> wrote:

> I still think counting lines of code is evil because it might encourage
> some developers to write longer code just for statistics.
>
> On Jul 23, 2013, at 16:58, Herman Narkaytis wrote:
>
> Hello everyone!
>
> Mirantis is pleased to announce the release of Stackalytics 0.1. You can
> find complete details on the Stackalytics wiki page, but here are the
> brief release notes:
>
>- Changed the internal architecture. Main features include advanced
>real time processing and horizontal scalability.
>- Got rid of all 3rd party non-Apache libraries and published the
>source on StackForge under the Apache2 license.
>- Improved release cycle tracking by using Git tags instead of
>approximate date periods.
>- Changed project classification to a two-level structure: OpenStack (core,
>incubator, documentation, other) and StackForge.
>- Implemented correction mechanism that allows users to tweak metrics
>for particular commits.
>- Added a number of new projects (Tempest, documentation, Puppet
>recipes).
>- Added company affiliated contribution breakdown to the user's
>profile page.
>
> We welcome you to read, look it over, and comment.
>
> Thank you!
>
> --
> Herman Narkaytis
> DoO Ru, PhD
> Tel.: +7 (8452) 674-555, +7 (8452) 431-555
> Tel.: +7 (495) 640-4904
> Tel.: +7 (812) 640-5904
> Tel.: +38(057)728-4215
> Tel.: +1 (408) 715-7897
> ext 2002
> http://www.mirantis.com
>
> This email (including any attachments) is confidential. If you are not the
> intended recipient you must not copy, use, disclose, distribute or rely on
> the information contained in it. If you have received this email in error,
> please notify the sender immediately by reply email and delete the email
> from your system. Confidentiality and legal privilege attached to this
> communication are not waived or lost by reason of mistaken delivery to you.
> Mirantis does not guarantee (that this email or the attachment's) are
> unaffected by computer virus, corruption or other defects. Mirantis may
> monitor incoming and outgoing emails for compliance with its Email Policy.
> Please note that our servers may not be located in your country.
>  ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-24 Thread Joshua Harlow
I think it's still useful to have both, although I have a feeling that
something like the 'AWSOME' conversion layer for EC2 might still be a
pretty useful project to have, to allow for a robust EC2 api which has a
dedicated group of people to support it. Never was quite sure what
happened to that project.

http://www.canonical.com/content/canonical%E2%80%99s-awsome-bridges-amazon-and-openstack-clouds

I might even take this further and propose that the nova-api binary itself
should/could be this 'conversion layer' as a separate project, allowing
nova (the rest of the binaries) to be everything under said API
(the MQ would then be more of nova's 'exposed' API). Then the exposed WS
api can be whatever is best at that layer, whether it be ec2 or the native
nova-api.
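
Hand-waving, but the shape of such a layer might be something like the
following -- the endpoint, action mapping and payloads are invented purely
for illustration:

    # Hand-wavy sketch of the 'conversion layer' idea: a thin front end
    # mapping EC2 actions onto native API calls.  Everything here is
    # invented for illustration.
    import requests

    NATIVE = 'http://nova-api:8774/v2/%(tenant)s'

    EC2_TO_NATIVE = {
        'DescribeInstances':  ('GET',    '/servers/detail'),
        'TerminateInstances': ('DELETE', '/servers/%(instance_id)s'),
    }

    def handle_ec2(action, params, token, tenant):
        method, path = EC2_TO_NATIVE[action]
        url = (NATIVE % {'tenant': tenant}) + (path % params)
        resp = requests.request(method, url,
                                headers={'X-Auth-Token': token})
        return resp.status_code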

But as for which one should be supported, I think this is an evolutionary
thing; whichever is most used and supported will be what openstack uses,
whether that's the ec2 api or the native nova-api. Perhaps this
is another good reason for nova-api as its own project; let that battle be
fought in said project instead of having the nova project deal with that
api.

-Josh 

On 7/24/13 11:06 AM, "Sean Dague"  wrote:

>On 07/24/2013 01:43 PM, Mark McLoughlin wrote:
>> On Wed, 2013-07-24 at 08:51 -0700, Stefano Maffulli wrote:
>>> Hello
>>>
>>> I have seen lots of discussions on blogs and twitter heating up around
>>> Amazon API compatibility and OpenStack. This seems like a recurring
>>> topic, often raised by pundits and recently joined by members of the
>>> community. I think it's time to bring the discussions inside our
>>> community to our established channels and processes. Our community has
>>> established ways to discuss and take technical decisions, from the more
>>> accessible General mailing list to the Development list to the Design
>>> Summits, the weekly project meetings, the reviews on gerrit and the
>>> governing bodies Technical Committee and Board of Directors.
>>>
>>> While we have not seen a large push in the community recently via
>>> contributions or deployments, Amazon APIs have been an option for
>>> deployments from the early days of OpenStack.
>>>
>>> I would like to have this discussion inside the established channels of
>>> our community and get the opinions from those that maintain that
>>> OpenStack should increase efforts for Amazon APIs compatibility, and
>>> ultimately it would be good to see code contributions.
>>>
>>> Do you think OpenStack should have an ongoing effort to imitate
>>>Amazon's
>>> API? If you think it should, how would you lead the effort?
>>
>> I think AWS compatible APIs for any of our services is a great feature.
>> I'd love to tell people they can try out OpenStack by pointing their
>> existing AWS based deployment tools at an OpenStack cloud.
>>
>> Just yesterday, I saw a comment on IRC along the lines of "wow, Nova has
>> an EC2 API ... I should totally try out using knife with that".
>>
>> Two things seem straightforward and obvious to me - our primary API is
>> the OpenStack "native" APIs and, yet, any built-in AWS compatibility we
>> can get is mucho goodness.
>>
>> That said, it's not "AWS compat == goodness" statements we need ... we
>> need people who are keen to contribute to the work.
>>
>> However, the very least we should do is make it clear that if anyone
>> *does* step up and do that work, that we'll welcome the contributions
>> with open arms.
>
>+1. Also validation of those interfaces would be appreciated. Today the
>tempest 3rdparty gate tests use the boto library, which is a good first
>step, but doesn't validate the AWS API strongly.
>
>Those kinds of contributions are equally welcomed, we've even set aside
>a place dedicated to them in Tempest (tempest/thirdparty) where non
>"native" API testing can live.
>
>But again, what is lacking here is mostly contributions. The more the
>merrier, and there are still many places where people can leave their
>mark on the project.
>
>   -Sean
>
>-- 
>Sean Dague
>http://dague.net
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking] quantum...oops neutron unit tests

2013-07-24 Thread Eugene Nikanorov
Hi Armando,

That happens from time to time.

Thanks,
Eugene.


On Thu, Jul 25, 2013 at 5:22 AM, Jian Wen  wrote:

> Hello,
>
> I had trouble with run-tests.sh.
> `tox -epy27` works.
>
> On Fri, Jun 21, 2013 at 12:32 AM, Armando Migliaccio <
> amigliac...@nicira.com> wrote:
>
>> Folks,
>>
>> Is anyone having troubles running the units tests locally on a clean venv
>> with both run-tests.sh and tox?
>>
>> I found out that this is relevant to the issue I am seeing:
>>
>> https://answers.launchpad.net/quantum/+question/230219
>>
>> I cannot go past the ML2 unit tests, namely only 1900~ tests run, and
>> then the runner just dies.
>>
>> Thanks,
>> Armando
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Cheers,
> Jian
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] scalable architecture

2013-07-24 Thread Matthew Farrellee

On 07/23/2013 12:32 PM, Sergey Lukjanov wrote:

Hi evereyone,

We’ve started working on upgrading Savanna architecture in version
0.3 to make it horizontally scalable.

Most of the information is in the wiki page -
https://wiki.openstack.org/wiki/Savanna/NextGenArchitecture.

Additionally there are several blueprints created for this activity -
https://blueprints.launchpad.net/savanna?searchtext=ng-

We are looking for comments / questions / suggestions.


Some comments on "Why not provision agents to Hadoop clusters to 
provision all other stuff?"


Re problems with scaling agents for launching large clusters - launching 
large clusters may be resource intensive, and those resources must be 
provided by someone. They're either going to be provided by (a) the 
hardware running the savanna infrastructure or (b) the instance hardware 
provided to the tenant. If they are provided by (a), then the cost of 
launching the cluster is incurred by all users of savanna. If (b), then 
the cost is incurred by the user trying to launch the large cluster. It 
is true that some instance recommendations may be necessary, e.g. if you 
want to run a 500-instance cluster then your head node should be large 
(vs medium or small). That sizing decision needs to happen for (a) or 
(b) because enough virtual resources must be present to maintain the 
large cluster after it is launched. There are accounting and isolation 
benefits to (b).


Re problems migrating agents while cluster is scaling - will you expand 
on this point?


Re unexpected resource consumers - during launch, maybe; during 
execution the agent should be a minimal consumer of resources. sshd may 
also be an unexpected resource consumer.


Re security vulnerability - the agents should only communicate within 
the instance network, primarily w/ the head node. The head node can 
relay information to the savanna infrastructure outside the instances in 
the same way savanna-api gets information now. So there should be no 
difference in vulnerability assessment.
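
A back-of-envelope sketch of that relay pattern -- the URLs, port and
payloads here are invented for illustration only:

    # Back-of-envelope sketch: agents talk only to the head node on the
    # instance network; the head node alone reports outward.  URLs and
    # payloads are invented.
    import requests

    HEAD_NODE = 'http://10.0.0.1:8021'   # instance-network address only

    def agent_report(node, state):
        # Runs on each cluster instance; traffic stays on the tenant net.
        requests.post('%s/status' % HEAD_NODE,
                      json={'node': node, 'state': state})

    def head_relay(savanna_url, cluster_id, statuses):
        # Runs on the head node -- the single point talking to savanna.
        requests.post('%s/clusters/%s/status' % (savanna_url, cluster_id),
                      json=statuses)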


Re support multiple distros - yes, but I'd argue this is at most a small 
incremental complexity on what already exists today w/ properly creating 
savanna plugin compatible instances.


-

Concretely, the architecture of using instance resources for 
provisioning is no different from spinning up an instance w/ ambari and 
then telling that instance to provision the rest of the cluster and 
report back status.


-

Re metrics - wherever you gather Hz (# req per sec, # queries per sec, 
etc), also gather standard summary statistics (mean, median, std dev, 
quartiles, range)
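
For instance (numpy used purely for illustration):

    # Sketch of the summary statistics suggested above; numpy is used
    # purely for illustration.
    import numpy as np

    def summarize(samples):
        a = np.asarray(samples, dtype=float)
        return {
            'mean': float(a.mean()),
            'median': float(np.median(a)),
            'std_dev': float(a.std(ddof=1)),  # sample std deviation
            'quartiles': np.percentile(a, [25, 50, 75]).tolist(),
            'range': (float(a.min()), float(a.max())),
        }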


Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting by default

2013-07-24 Thread Joshua Harlow
I would personally like it off, since it appears to me to offer a false sense 
of security for the reasons mentioned in that review (doesn't stop DoS, doesn't 
work across processes/API nodes).

Even so, before it's turned off I would recommend that there be a detailed 
document on what to replace it with; even though it provides only minimal 
rate-limiting capabilities, it does provide more than zero. So there should be 
some docs or thought put into a replacement and an explanation of how to use 
said replacement/s.

-josh

Sent from my really tiny device...

On Jul 24, 2013, at 3:42 PM, "Joe Gordon" <joe.gord...@gmail.com> wrote:

Hi all

I have proposed a patch to disable per-user rate limiting by default:
https://review.openstack.org/#/c/34821/. And at Russell's request: does anyone
care, or prefer this to be enabled by default?

Here is some more context:

Earlier rate limiting discussion: 
http://www.gossamer-threads.com/lists/openstack/operators/28599
Related bug: https://bugs.launchpad.net/tripleo/+bug/1178529
rate limiting is per process, and doesn't act as expected in a multi-process 
environment: https://review.openstack.org/#/c/36516/

best,
Joe Gordon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev