Re: [openstack-dev] There are no OpenStack talks submitted to XenCon, CFP ends this week

2013-07-23 Thread Bob Ball
The CFP is at 
http://events.linuxfoundation.org/events/linuxcon-north-america/program/xen-project-user-summit
 - this is for the Xen User Summit as part of LinuxCon.

While it might not have OpenStack in the title, I'm aware of at least one talk 
which has already been submitted with a strong OpenStack component.

I'm also aware of a second talk which is expected to be submitted before the 
deadline - so there will be some OpenStack representation there.

Of course, more talks are always welcome!

Bob

From: Michael Still [mi...@stillhq.com]
Sent: 23 July 2013 02:39
To: OpenStack Development Mailing List; mark.atw...@hp.com
Subject: Re: [openstack-dev] There are no OpenStack talks submitted to XenCon, 
CFP ends this week

On Tue, Jul 23, 2013 at 5:19 AM, Atwood, Mark  wrote:
> Hi!
>
> While I was at the Community Leadership Summit conference this weekend, I met 
> the community manager for the Xen hypervisor project.  He told me that there 
> are *no* OpenStack talks submitted to the upcoming XenCon conference.
>
> The CFP closes this Friday.
>
> Allow me to suggest that any of us who have something to say about Xen in 
> Nova in OpenStack, submit papers.

Mark -- I can't see an obvious URL for the CFP. Can you chase down the
community manager and ask what it is?

Michael


--
Rackspace Australia


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [horizon] Python client uncapping, currently blocking issues

2013-07-23 Thread Julien Danjou
On Tue, Jul 23 2013, Sean Dague wrote:

Hi Sean,

> A couple weeks ago after a really *fun* night we started down this road of
> uncapping all the python clients to ensure that we're actually testing the
> git clients in the gate. We're close, but we need the help of the horizon
> and ceilometerclient teams to get us there:
>
> 1) we need a rebase on this patch for Horizon - 
> https://review.openstack.org/#/c/36897/
>
> 2) we need a python-ceilometerclient release, as ceilometer uses
> python-ceilometerclient (for unit tests) which means we can't bump
> ceilometer client (https://review.openstack.org/#/c/36905/) until it's done.

Sorry for the delay. I think Eoghan wanted to do the release, but he
probably got swamped by something else, so I just released 1.0.2.

Hope that helps,

-- 
Julien Danjou
// Free Software hacker / freelance consultant
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Blueprint sharing-model-for-external-networks] Define a sharing model for external networks

2013-07-23 Thread Zang MingJie
Hi Salvatore:

I have submitted a blueprint, zone-based-router [1], intended to solve
the problem this bp focuses on. It takes a totally different approach,
in which shared networks and routers are abandoned. It may be a bit hard
to implement in Havana, but some preparatory work can still be done in
Havana.

First, we should determine whether my solution is acceptable and meets the
requirements. Then, we can complete the API/data model changes and figure out
how to migrate existing networks, so we can start coding in the next milestone.

[1] https://blueprints.launchpad.net/neutron/+spec/zone-based-router

Regards.

--
Zang MingJie

On Tue, Jul 23, 2013 at 5:42 AM, Salvatore Orlando  wrote:
> Blueprint changed by Salvatore Orlando:
>
> +
> + --
> + UPDATE 2013-07-22
> + As no particular interest has been detected in this bp, I am proposing to 
> untarget it from Havana.
>
> --
> Define a sharing model for external networks
> https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Add method to get iptables traffic counters

2013-07-23 Thread Sylvain Afchain
Hi Brian,

I just realized that my example in the comment of the review was not so good.
The current implementation of the driver already does what you propose.

The driver addresses this case:

Two label rules match a packet, and the CIDR of one rule overlaps the other's:

iptables -N test1
iptables -N test2
iptables -A test2
iptables -A OUTPUT -j test1

iptables -A test1 -d 8.8.8.0/27 -j test2
iptables -A test1 -d 8.8.8.0/24 -j test2

I could remove the mark and add a constraint on the plugin's side to avoid the 
overlap.

Thoughts?

Thanks,

Sylvain.


- Original Message -
From: "Brian Haley" 
To: "Sylvain Afchain" 
Cc: openstack-dev@lists.openstack.org
Sent: Monday, July 22, 2013 10:30:32 PM
Subject: Re: Change in openstack/neutron[master]: Add method to get iptables 
traffic counters

Sylvain,

Something like this would require no marking:

# iptables -N test2
# iptables -N test3
# iptables -A test3
# iptables -A test2 -d 9.9.9.9/32 -j RETURN
# iptables -A test2 -d 10.10.10.10/32 -j RETURN
# iptables -A test2 -j test3
# iptables -A OUTPUT -j test2

# ping -I eth0 -r 9.9.9.9
PING 9.9.9.9 (9.9.9.9) from 16.1.1.40 eth0: 56(84) bytes of data.
^C
--- 9.9.9.9 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1007ms

# iptables-save -c | grep test
:test2 - [0:0]
:test3 - [0:0]
[3198:403274] -A OUTPUT -j test2
[2:168] -A test2 -d 9.9.9.9/32 -j RETURN
[0:0] -A test2 -d 10.10.10.10/32 -j RETURN
[3196:403106] -A test2 -j test3
[3196:403106] -A test3

# iptables -L test2 -v -x -n
Chain test2 (1 references)
    pkts    bytes target prot opt in  out  source       destination
       2      168 RETURN all  --  *   *    0.0.0.0/0    9.9.9.9
       0        0 RETURN all  --  *   *    0.0.0.0/0    10.10.10.10
    3182   401554 test3  all  --  *   *    0.0.0.0/0    0.0.0.0/0

# iptables -L test3 -v -x -n
Chain test3 (1 references)
    pkts    bytes target prot opt in  out  source       destination
    3182   401554        all  --  *   *    0.0.0.0/0    0.0.0.0/0

And it seems similar to your cut/paste from below.
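
(For illustration only, a rough and untested sketch of how the per-rule
counters could be read back without any marking -- the function and the
parsing here are my own assumptions, not the actual driver code:)

import subprocess

def get_chain_counters(chain):
    """Return (packets, bytes) for each rule in an iptables chain."""
    # -L lists the rules; -v -x -n prints exact counters, numerically
    output = subprocess.check_output(
        ['iptables', '-L', chain, '-v', '-x', '-n'])
    counters = []
    # The first two lines are the chain header and the column titles
    for line in output.splitlines()[2:]:
        fields = line.split()
        if len(fields) >= 2:
            counters.append((int(fields[0]), int(fields[1])))
    return counters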

Thoughts?

-Brian

On 07/22/2013 03:55 AM, Sylvain Afchain wrote:
> Hi Brian,
> 
> Thanks for your reply.
> 
>> 1. This isn't something a tenant should be able to do, so should be 
>> admin-only,
>> correct?
> 
> Correct.
> 
>> 2. I think it would be useful for an admin to be able to add metering rules 
>> for
>> all tenants with a single command.  This gets back to wanting to pre-seed an 
>> ini
>> file with a set of subnets, then add/subtract from it later without 
>> restarting
>> the daemon.
> 
> I agree with you, could be a future enhancement.
> 
>> 3. I think it would be better if you didn't mark the packets, for performance
>> reasons.  If you were marking them on input to be matched by something on 
>> output
>> I'd feel different, but for just counting bytes we should be able to do it
>> another way.  I can get back to you next week on figuring this out.
> 
> Ok, I'll take a look too.
> 
> Thanks.
> 
> Sylvain.
> 
> - Original Message -
> From: "Brian Haley" 
> To: "Sylvain Afchain" 
> Cc: openstack-dev@lists.openstack.org
> Sent: Friday, July 19, 2013 11:47:41 PM
> Subject: Re: Change in openstack/neutron[master]: Add method to get iptables 
> traffic counters
> 
> Hi Sylvain,
> 
> Sorry for the slow reply, I'll have to look closer next week, but I did have
> some comments.
> 
> 1. This isn't something a tenant should be able to do, so should be 
> admin-only,
> correct?
> 
> 2. I think it would be useful for an admin to be able to add metering rules 
> for
> all tenants with a single command.  This gets back to wanting to pre-seed an 
> ini
> file with a set of subnets, then add/subtract from it later without restarting
> the daemon.
> 
> 3. I think it would be better if you didn't mark the packets, for performance
> reasons.  If you were marking them on input to be matched by something on 
> output
> I'd feel different, but for just counting bytes we should be able to do it
> another way.  I can get back to you next week on figuring this out.
> 
> Thanks,
> 
> -Brian
> 
> On 07/18/2013 04:29 AM, Sylvain Afchain wrote:
>> Hi Brian,
>>
>> For iptables rules, see below
>>
>> Yes the only way to setup metering labels/rules is the neutronclient. I 
>> agree with you about the future
>> enhancement.
>>
>> Regards,
>>
>> Sylvain
>>
>> - Original Message -
>> From: "Brian Haley" 
>> To: "Sylvain Afchain" 
>> Cc: openstack-dev@lists.openstack.org
>> Sent: Thursday, July 18, 2013 4:58:26 AM
>> Subject: Re: Change in openstack/neutron[master]: Add method to get iptables 
>> traffic counters
>>
>>> Hi Sylvain,
>>>
>>> I think I've caught-up with all your reviews, but I still did have some
>>> questions on the iptables rules, below.
>>>
>>> One other question, and maybe it's simply a future enhancement, but is the 
>>> only
>>> way to setup these meters using neutronclient?  I think being able to 

[openstack-dev] [Swift] Swift Auth systems and Delay Denial

2013-07-23 Thread David Hadas
Hi,

Starting from 1.9, Swift has get_info() support allowing middleware to get
container and/or account information maintained by Swift.
Middleware can use get_info() on a container to retrieve the container
metadata.
In a similar way, middleware can use get_info() on an account to retrieve
the account metadata.

The ability of middleware to retrieve container and account metadata opens
up an option to write Swift Auth systems without the use of the Swift Delay
Denial mechanism. For example, when a request comes in (during
'__call__()'), the Auth middleware can perform get_info() on the container
and/or account and decide whether to authorize or reject the client request
upfront, before the request ever reaches Swift. In such a case, if the
Auth middleware decides to allow the request to be processed by Swift, it
may avoid adding a swift.authorize callback, thus disabling the use of
the Swift delay_denial mechanism.
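
As a rough, untested sketch of the idea (the helper name and the metadata
key used here are illustrative assumptions, not Swift's exact API):

from swift.common.swob import HTTPForbidden
from swift.proxy.controllers.base import get_container_info

class UpfrontAuth(object):
    """Authorize in __call__ instead of via a swift.authorize callback."""

    def __init__(self, app, conf):
        self.app = app

    def __call__(self, env, start_response):
        container_info = get_container_info(env, self.app)
        if not self._authorized(env, container_info):
            # Reject up front; the request never reaches Swift.
            return HTTPForbidden()(env, start_response)
        # No env['swift.authorize'] is set, so delay_denial is unused.
        return self.app(env, start_response)

    def _authorized(self, env, container_info):
        # Hypothetical check: a real auth system would evaluate the
        # token and ACLs against the container/account metadata here.
        return container_info.get('meta', {}).get('public') == 'true'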

Qs:
1. Should we document this approach as another way to do auth in Swift?
(Currently this option is not well documented.)
 See http://docs.openstack.org/developer/swift/development_auth.html:
  "Authorization is performed through callbacks by the Swift Proxy
server to the WSGI environment's swift.authorize value, if one is set."
followed by an example of how that is done. Should we add a description of
this alternative option of using get_info() during __call__()?

2. What are the pros and cons of each of the two options?
 What benefit do we see in an Auth system using delay_denial over
deciding on the authorization upfront?
 Should we continue to use delay_denial in keystone_auth and swauth?

DH
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] removing sudoers.d rules from disk-image-builder

2013-07-23 Thread Robert Collins
We have a bunch of sudo rules in disk-image-builder. They are there
primarily so we could have passwordless sudo on jenkins boxes, but
working with the infra team now, it looks like we'd run on
devstack-gate nodes, not on jenkins directly, so they aren't needed
for that.

They don't add appreciable security for end users as they are
trivially bypassed with link attacks.

And for distributors they are not something you want to install from a package.

The only thing they *do* do is permit long-running builds to run
unattended by users without reprompting for sudo; but this isn't an
issue for most users, as we download the bulk of the data before hitting
the first sudo call.

So I'd like to change things to say:
 - either run sudo disk-image-create or
 - setup passwordless sudo or
 - don't run unattended.

and delete the sudoers.d rules as being a distraction, one we no longer need.

Opinions?

-Rob
-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Challenges with highly available service VMs - port and security group options.

2013-07-23 Thread Samuel Bercovici
Hi,

I agree that the AuthZ should be separated, and the service provider should be 
able to control this based on their model.

For service VMs, which might be serving ~100-1000 IPs and might use multiple MACs 
per port, it would be better to turn this off altogether than to have 
iptables rules with thousands of entries.
This is why I prefer to be able to turn off IP spoofing and MAC spoofing 
checks altogether.

Still, from a logical model / declarative standpoint, an IP that can migrate between 
different ports should be declared as such, and maybe the same applies to MACs.

Regards,
-Sam.

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Sunday, July 21, 2013 9:56 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Challenges with highly available service 
VMs - port and security group options.



On 19 July 2013 13:14, Aaron Rosen wrote:


On Fri, Jul 19, 2013 at 1:55 AM, Samuel Bercovici wrote:

Hi,



I have completely missed this discussion as it does not have quantum/Neutron in 
the subject (I have modified it now).

I think that the security group is the right place to control this.

I think that this might be only allowed to admins.


I think this shouldn't be admin only; since tenants have control of their own 
networks, they should be allowed to do this.

I reiterate my point that the authZ model for a feature should always be 
completely separated from the business logic of the feature itself.
In my opinion there are grounds both for scoping it as admin only and for 
allowing tenants to use it; it might be better if we just let the policy engine 
deal with this.


Let me explain what we need, which is more than just disabling spoofing.

1.   Be able to allow MACs which are not defined on the port level to 
transmit packets (for example, VRRP MACs) == turn off MAC spoofing checks

For this it seems you would need to implement the port security extension which 
allows one to enable/disable port spoofing on a port.

This would be one way of doing it. The other would probably be adding a list of 
allowed VRRP MACs, which should be possible with the blueprint pointed to by Aaron.

2.   Be able to allow IPs which are not defined on the port level to 
transmit packets (for example, an IP used for an HA service that moves between 
an HA pair) == turn off IP spoofing checks

It seems like this would fit your use case perfectly:   
https://blueprints.launchpad.net/neutron/+spec/allowed-address-pairs

3.   Be able to allow broadcast messages on the port (for example, for VRRP 
broadcast) == allow broadcast.


Quantum does have an abstraction for disabling this so we already allow this by 
default.



Regards,

-Sam.





From: Aaron Rosen [mailto:aro...@nicira.com]
Sent: Friday, July 19, 2013 3:26 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Chalenges with highly available service VMs



Yup:

I'm definitely happy to review and give hints.

Blueprint:  
https://docs.google.com/document/d/18trYtq3wb0eJK2CapktN415FRIVasr7UkTpWn9mLq5M/edit

https://review.openstack.org/#/c/19279/  <- the patch that merged the feature

Aaron



On Thu, Jul 18, 2013 at 5:15 PM, Ian Wells wrote:

On 18 July 2013 19:48, Aaron Rosen wrote:
> Is there something this is missing that could be added to cover your use
> case? I'd be curious to hear where this doesn't work for your case.  One
> would need to implement the port_security extension if they want to
> completely allow all ips/macs to pass and they could state which ones are
> explicitly allowed with the allowed-address-pair extension (at least that is
> my current thought).

Yes - have you got docs on the port security extension?  All I've
found so far are
http://docs.openstack.org/developer/quantum/api/quantum.extensions.portsecurity.html
and the fact that it's only the Nicira plugin that implements it.  I
could implement it for something else, but not without a few hints...
--
Ian.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Is that possible to implement new APIs for horizon to show the usage report and charts?

2013-07-23 Thread Julien Danjou
On Tue, Jul 23 2013, Brooklyn Chen wrote:

>
> It would be helpful if ceilometer-api provides following api:
>
> GET /v2/usages/disk/
>
> Parameters:  q(list(Query)) Filter rules for the resources to be returned.
> Return Type: list(Usage) A list of usage entries for different tenant, user,
> resource combinations
>
> GET /v2/usages/disk/

Did you try /v2/meters/<meter_name>/statistics?
I think /statistics is good enough *except* that it misses the ability
to group the statistics by resource.

> 2. need gauge data like "cpu_util" to render stat charts.
> We have cumulative meters like "disk.read.bytes" and
> "networking.incoming.bytes" but they are not able to be used for drawing
> charts since the value of them are always increasing.

The /statistics resource with the period= argument would allow you to do that,
as far as I can tell.
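
For example (illustrative request; the exact query syntax may differ):

GET /v2/meters/cpu_util/statistics?period=3600

should return one Statistics document per hour (min/max/avg/sum/count),
which is suitable for rendering charts.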

-- 
Julien Danjou
# Free Software hacker # freelance consultant
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] stable/grizzly 2013.1.3 approaching WAS Re: No Project & release status meeting tomorrow

2013-07-23 Thread Alan Pevec
Hi Thierry,

> we'll be skipping the release status
> meeting tomorrow at 21:00 UTC

I wanted to remind everyone at that meeting about the next stable/grizzly release,
2013.1.3; the meeting next week would be too late, so I'll piggyback here.
Proposed freeze is Aug 1st and release Aug 8th. Milestone 2013.1.3 has been
created in Launchpad and I'd like to ask PTLs to target, in their opinion,
important bugs to that milestone, even if backport is not proposed yet.
That will help us prioritize among bugs tagged for grizzly.

Cheers,
Alan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Stackalytics] 0.1 release

2013-07-23 Thread Herman Narkaytis
Hello everyone!

Mirantis is pleased to announce the release of Stackalytics 0.1. You can find
complete details on the Stackalytics wiki page, but here are the brief
release notes:

   - Changed the internal architecture. Main features include advanced real
   time processing and horizontal scalability.
   - Got rid of all 3rd party non-Apache libraries and published the source
   on StackForge under the Apache2 license.
   - Improved release cycle tracking by using Git tags instead of
   approximate date periods.
   - Changed project classification to a two-level structure: OpenStack (core,
   incubator, documentation, other) and StackForge.
   - Implemented correction mechanism that allows users to tweak metrics
   for particular commits.
   - Added a number of new projects (Tempest, documentation, Puppet
   recipes).
   - Added company affiliated contribution breakdown to the user's profile
   page.

We welcome you to read, look it over, and comment.

Thank you!

-- 
Herman Narkaytis
DoO Ru, PhD
Tel.: +7 (8452) 674-555, +7 (8452) 431-555
Tel.: +7 (495) 640-4904
Tel.: +7 (812) 640-5904
Tel.: +38(057)728-4215
Tel.: +1 (408) 715-7897
ext 2002
http://www.mirantis.com

This email (including any attachments) is confidential. If you are not the
intended recipient you must not copy, use, disclose, distribute or rely on
the information contained in it. If you have received this email in error,
please notify the sender immediately by reply email and delete the email
from your system. Confidentiality and legal privilege attached to this
communication are not waived or lost by reason of mistaken delivery to you.
Mirantis does not guarantee (that this email or the attachment's) are
unaffected by computer virus, corruption or other defects. Mirantis may
monitor incoming and outgoing emails for compliance with its Email Policy.
Please note that our servers may not be located in your country.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Split the Identity Backend blueprint

2013-07-23 Thread Adam Young
On 07/22/2013 09:49 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
wrote:


Adam,

Sorry for the questions, but even though I have been programming for 
nearly 30 years I am new to Python and I find the code base somewhat 
difficult to follow. I have noticed that the file 
keystone.identity.backends.ldap.Identity has a set of methods and the file 
keystone.assignment.backends.sql.Assignment has a set of methods. My 
question is this: is there a way to specify which methods to use with the 
ldap.Identity backend and which methods to use with the sql.Assignment 
backend, or does each backend only support all of the methods provided by 
its file? In working with an enterprise LDAP server, there is no way we 
will be able to create users or to write to it. If there is a way to pick 
and choose which methods access the LDAP server and which ones access the 
SQL keystone database, then I have what we need.




Here's the general gist:

We split off the Assignment functions from Identity in order to be able 
to vary the two backends independently. The expectation is that 
people will use the LDAP backend for Identity and the SQL backend for 
Assignments. LDAP will be read only, and Assignments will be 
read-write.  That being said, there are cases where people will have 
writable LDAP, or will use the SQL Identity backend, so there are 
functions which can change the state of the Identity backend, and those 
are not going to go away.


The general code set up is as follows:

Routers describe the mappings from URLs to Python code.
Controllers are stateless objects.  In theory they should be protocol 
agnostic, but in practice they are aware that they are being used with HTTP.
Managers and Drivers implement the data layer.  The managers start as 
simple accessors, but over time they get more and more logic. We don't 
have a clear place for business logic.  Since the backends are radically 
different, a lot of the logic has gotten duplicated between LDAP, SQL, 
Memcached, and others.  We are working to minimize this.  The general 
approach is that code that should not be duplicated gets "pulled up" to 
the manager.  This kind of refactoring is constant and ongoing.


When I split out the Assignment backend, I tried to do it in a way that 
did not modify the unit tests, so that other reviewers would have 
the assurance that the changes were just restructuring, not 
fundamentally changing functionality.  Thus, we had a shim layer in the 
Identity layer that called through to the assignment layer. This has the 
added benefit of maintaining API compatibility for anyone who has 
customized code.  However, I've found a lot of our tests were talking to 
the driver, not talking through the manager, and thus I had to clean up 
a bunch of the tests to go through the manager as well.


As an end user, you should specify that the Identity backend is LDAP and 
the Assignment backend is SQL.  Assuming your LDAP backend is not 
writable, any call to the Identity layer that attempts to morph the 
state of the directory store will fail.  However, what you should be 
doing is using the user groups from LDAP as a way to manage users, and 
place those groups into Role Assignments.  Roles, Role Assignments, and 
Projects all live in the Assignment (SQL) backend, and all of those should 
be writeable regardless of LDAP state.
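
Concretely, that corresponds to something like this in keystone.conf
(using the driver names as corrected further down this thread):

[identity]
driver = keystone.identity.backends.ldap.Identity

[assignment]
driver = keystone.assignment.backends.sql.Assignment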



Thanks,

Mark

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, July 22, 2013 4:52 PM
To: Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Cc: Dolph Mathews; OpenStack Development Mailing List
Subject: Re: [keystone] Split the Identity Backend blueprint

On 07/22/2013 07:43 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
wrote:


Adam,

You wrote:

[identity]

 driver = keystone.identity.backends.ldap.Identity

[assignment]

driver = keystone.assignment.backends.sql.Identity

Did you mean to write:

[assignment]

driver = keystone.assignment.backends.sql.Assignment

Yes, that was a mistake on my part.  Sorry

Mark

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, July 22, 2013 12:50 PM
To: Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Cc: Dolph Mathews; OpenStack Development Mailing List
Subject: Re: [keystone] Split the Identity Backend blueprint

On 07/22/2013 01:38 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
wrote:


Hello,

I have been reading source code in an attempt to figure out how to
use the new split backend feature, specifically how to split the
identity data between an ldap server and the standard Keystone sql
database. However, I haven't been able to figure it out quite yet.
Does someone have some examples of this new feature in action? Is
there another configuration file that is required?

[identity]

driver = keystone.identity.backends.sql.Identity

[assignment]

driver = ???

[ldap]

Quite a few options

Regards,

Mark Miller


Right now the o

Re: [openstack-dev] [Stackalytics] 0.1 release

2013-07-23 Thread Roman Prykhodchenko
I still think counting lines of code is evil because it might encourage some 
developers to write longer code just for statistics.

On Jul 23, 2013, at 16:58 , Herman Narkaytis  wrote:

> Hello everyone!
> 
> Mirantis is pleased to announce the release of Stackalytics 0.1. You can find 
> complete details on the Stackalytics wiki page, but here are the brief 
> release notes:
> Changed the internal architecture. Main features include advanced real time 
> processing and horizontal scalability.
> Got rid of all 3rd party non-Apache libraries and published the source on 
> StackForge under the Apache2 license.
> Improved release cycle tracking by using Git tags instead of approximate date 
> periods.
> Changed project classification to a two-level structure: OpenStack (core, 
> incubator, documentation, other) and StackForge.
> Implemented correction mechanism that allows users to tweak metrics for 
> particular commits.
> Added a number of new projects (Tempest, documentation, Puppet recipes).
> Added company affiliated contribution breakdown to the user's profile page.
> We welcome you to read, look it over, and comment.
> 
> Thank you!
> 
> -- 
> Herman Narkaytis
> DoO Ru, PhD
> Tel.: +7 (8452) 674-555, +7 (8452) 431-555
> Tel.: +7 (495) 640-4904 
> Tel.: +7 (812) 640-5904
> Tel.: +38(057)728-4215 
> Tel.: +1 (408) 715-7897
> ext 2002
> http://www.mirantis.com
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Russell Bryant
On 07/23/2013 12:24 AM, Alex Glikson wrote:
> Russell Bryant  wrote on 23/07/2013 01:04:24 AM:
>> > [1]
> https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
>> > [2] https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies
>> > [3] https://review.openstack.org/#/c/37407/
>>
>> Thanks for bringing this up.  I do have some comments.
> 
> Thanks for the comments. See below.
> 
>>
>> The current design shows 2 different use cases for how a scheduling
>> policy would be chosen.
>>
>> #1 - policy associated with a host aggregate
>>
>> This seems very odd to me.  Scheduling policy is what chooses hosts, so
>> having a subset of hosts specify which policy to use seems backwards.
> 
> This is not what we had in mind. Host aggregate is selected based on
> policy passed in the request (hint, extra spec, or whatever -- see
> below) and 'policy' attribute of the aggregate -- possibly in
> conjunction with 'regular' aggregate filtering. And not the other way
> around. Maybe the design document is not clear enough about this point.

Then I don't understand what this adds over the existing ability to
specify an aggregate using extra_specs.

>> #2 - via a scheduler hint
>>
>> It also seems odd to have the user specifying scheduling policy.  This
>> seems like something that should be completely hidden from the user.
>>
>> How about just making the scheduling policy choice as simple as an item
>> in the flavor extra specs?
> 
> This is certainly an option. It would be just another implementation of
> the policy selection interface (implemented using filters). In fact, we
> already have it implemented -- just thought that explicit hint could be
> more straightforward to start with. Will include the implementation
> based on flavor extra spec in the next commit.

Ok.  I'd actually prefer to remove the scheduler hint support
completely.  I'm not even sure it makes sense to make this pluggable.  I
can't think of why something other than flavor extra specs is necessary
and justifies the additional complexity.

>> The design also shows some example configuration.  It shows a global set
>> of enabled scheduler filters, and then policy specific tweaks of filter
>> config (CPU allocation ratio in the example).  I would expect to be able
>> to set a scheduling policy specific list of scheduler filters and
>> weights, as well.
> 
> This is certainly supported. Just didn't want to complicate the example
> too much. It could be even a different driver, assuming that the driver
> complies with the 'policy' attribute of the aggregates -- which is
> achieved by PolicyFilter in FilterScheduler. We plan to make other
> drivers 'policy-aware' in a future patch, leveraging the new db method
> that returns hosts belonging to aggregates with compatible policies.

I think some additional examples would help.  It's also important to
have this laid out for documentation purposes.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting Canceled for Today.

2013-07-23 Thread Peter Pouliot
Hi All,

I need to cancel the Hyper-V meeting for today.   We will resume next week.

Best

p



Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive, Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Henry Nash
Hi

As part of bp 
https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target I have 
uploaded some example WIP code showing a proposed approach for just a few API 
calls (one easy, one more complex).  I'd appreciate early feedback on this 
before I take it any further.

https://review.openstack.org/#/c/38308/

A couple of points:

- One question is how to handle errors when you are going to get a target 
object before doing your policy check.  What do you do if the object does not 
exist?  If you return NotFound, then someone who was not authorized could 
troll for the existence of entities by seeing whether they got NotFound or 
Forbidden.  If, however, you return Forbidden, then users who are authorized to, 
say, manage users in a domain would always get Forbidden for objects that didn't 
exist (since we can't know where the non-existent object would have been!).  So 
this would modify the expected return codes.

- I really think we need some good documentation on how to build keystone policy 
files.  I'm happy to take a first cut at such a thing - what do you think the 
right place is for such documentation?
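
As an illustration of what this enables, a policy rule could reference
attributes of the fetched target object, along the lines of (illustrative
syntax only, not the final form):

"identity:delete_user": "rule:admin_required and domain_id:%(target.user.domain_id)s"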

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Alex Glikson
Russell Bryant  wrote on 23/07/2013 05:35:18 PM:
> >> #1 - policy associated with a host aggregate
> >>
> >> This seems very odd to me.  Scheduling policy is what chooses hosts, 
so
> >> having a subset of hosts specify which policy to use seems backwards.
> > 
> > This is not what we had in mind. Host aggregate is selected based on
> > policy passed in the request (hint, extra spec, or whatever -- see
> > below) and 'policy' attribute of the aggregate -- possibly in
> > conjunction with 'regular' aggregate filtering. And not the other way
> > around. Maybe the design document is not clear enough about this 
point.
> 
> Then I don't understand what this adds over the existing ability to
> specify an aggregate using extra_specs.

The added value is in the ability to configure the scheduler accordingly 
-- potentially differently for different aggregates -- in addition to just 
restricting the target host to those belonging to an aggregate with 
certain properties. For example, let's say we want to support two classes 
of workloads - CPU-intensive, and memory-intensive. The administrator may 
decide to use 2 different hardware models, and configure one aggregate 
with lots of CPU, and another aggregate with lots of memory. In addition 
to just routing an incoming provisioning request to the correct aggregate 
(which can be done already), we may want different cpu_allocation_ratio 
and memory_allocation_ratio when managing resources in each of the 
aggregates. In order to support this, we would define 2 policies (with 
corresponding configuration of filters), and attach each one to the 
corresponding aggregate.
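
As a sketch, the per-policy configuration could look something like this
(hypothetical section syntax, not the final design; the option names are
the existing nova.conf allocation-ratio settings):

[policy:cpu-intensive]
cpu_allocation_ratio = 16.0

[policy:mem-intensive]
ram_allocation_ratio = 1.0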

> 
> >> #2 - via a scheduler hint
> >> How about just making the scheduling policy choice as simple as an 
item
> >> in the flavor extra specs?
> > 
> > This is certainly an option. It would be just another implementation 
of
> > the policy selection interface (implemented using filters). In fact, 
we
> > already have it implemented -- just thought that explicit hint could 
be
> > more straightforward to start with. Will include the implementation
> > based on flavor extra spec in the next commit.
> 
> Ok.  I'd actually prefer to remove the scheduler hint support
> completely. 

OK, removing the support for doing it via hint is easy :-)

> I'm not even sure it makes sense to make this pluggable.  I
> can't think of why something other than flavor extra specs is necessary
> and justifies the additional complexity.

Well, I can think of a few use-cases where the selection approach might be 
different. For example, it could be based on tenant properties (derived 
from some kind of SLA associated with the tenant, determining the 
over-commit levels), or image properties (e.g., I want to determine 
placement of Windows instances taking into account Windows licensing 
considerations), etc.

> I think some additional examples would help.  It's also important to
> have this laid out for documentation purposes.

OK, sure, will add more. Hopefully the few examples above are also helpful in 
clarifying the intention/design.

Regards,
Alex

> -- 
> Russell Bryant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi region support for Heat

2013-07-23 Thread Clint Byrum
Excerpts from Steve Baker's message of 2013-07-22 21:43:05 -0700:
> On 07/23/2013 10:46 AM, Angus Salkeld wrote:
> > On 22/07/13 16:52 +0200, Bartosz Górski wrote:
> >> Hi folks,
> >>
> >> I would like to start a discussion about the blueprint I raised about
> >> multi region support.
> >> I would like to get feedback from you. If something is not clear or
> >> you have questions do not hesitate to ask.
> >> Please let me know what you think.
> >>
> >> Blueprint:
> >> https://blueprints.launchpad.net/heat/+spec/multi-region-support
> >>
> >> Wikipage:
> >> https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat
> >>
> >
> > What immediately looks odd to me is you have a MultiCloud Heat talking
> > to other Heats in each region. This seems like unnecessary
> > complexity to me.
> > I would have expected one Heat to do this job.
> 
> It should be possible to achieve this with a single Heat installation -
> that would make the architecture much simpler.
> 

Agreed that it would be simpler and is definitely possible.

However, consider that having a Heat in each region means Heat is more
resilient to failure. So focusing on a way to make multiple Heats
collaborate, rather than on a way to make one Heat talk to two regions,
may be a more productive exercise.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Split the Identity Backend blueprint

2013-07-23 Thread Alexius Ludeman
hi Adam,

Can you explain why RoleApi() and ProjectApi() are duplicated
in assignment/backends/ldap.py and identity/backends/ldap.py?

It would seem that a class duplicated across two files should be refactored
into a new shared file.

thanks
lex



On Tue, Jul 23, 2013 at 7:21 AM, Adam Young  wrote:

>  On 07/22/2013 09:49 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
> wrote:
>
>  Adam,
>
>
>
> Sorry for the questions, but even though I have been programming for
> nearly 30 years I am new to Python and I find the code base somewhat
> difficult to follow. I have noticed that the file
> keystone.identity.backends.ldap.Identity
> has a set of methods and the file keystone.assignment.backends.sql.Assignment
> has a set of methods. My question is this: is there a way to specify which
> methods to use with the ldap.Identity backend and which methods to use with
> the sql.Assignment backend, or does each backend only support all of the
> methods provided by its file? In working with an enterprise LDAP server,
> there is no way we will be able to create users or to write to it. If there
> is a way to pick and choose which methods access the LDAP server and which
> ones access the SQL keystone database, then I have what we need.
>
>
> Here's the general gist:
>
> We split off the Assignment functions from Identity in order to be able to
> vary the two backends independently. The expectation is that people will
> use the LDAP backend for Identity and the SQL backend for Assignments.
> LDAP will be read only, and Assignments will be read-write.  That being
> said, there are cases where people will have writable LDAP, or will use the
> SQL Identity backend, so there are functions which can change the state of
> the Identity backend, and those are not going to go away.
>
> The general code set up is as follows:
>
> Routers describe the mappings from URLs to Python code.
> Controllers are stateless objects.  In theory they should be protocol
> agnostic, but in practice they are aware that they are being used with HTTP.
> Managers and Drivers implement the data layer.  The managers start as
> simple accessors, but over time they get more and more logic.   We don't
> have a clear place for business logic.  Since the backends are radically
> different, a lot of the logic has gotten duplicated between LDAP, SQL,
> Memcached, and others.  We are working to minimize this.  The general
> approach is that code that should not be duplicated gets "pulled up" to the
> manager.  This kind of refactoring is constant and ongoing.
>
> When I split out the Assignment backend, I tried to do it in a way that
> did not modify the unit tests, so that other reviewers would have
> the assurance that the changes were just restructuring, not fundamentally
> changing functionality.  Thus, we had a shim layer in the Identity Layer
> that called through to the assignment layer.  This has the added benefit of
> maintaining API compatibility for anyone who has customized code.  However,
> I've found a lot of our tests were talking to the driver, not talking
> through the manager, and thus I had to clean up a bunch of the tests to go
> through the manager as well.
>
> As an end user, you should specify that the Identity backend is LDAP and
> the Assignment backend is SQL.  Assuming your LDAP backend is not writable,
> any call to the Identity layer that attempts to morph the state of the
> directory store will fail.  However, what you should be doing is using the
> user groups from LDAP as a way to manage users, and place those groups into
> Role Assignments.  Roles, Role Assignments, and Projects all live in the
> Assignment (SQL) backend, and all of those should be writeable regardless of
> LDAP state.
>
> Thanks,
>
> Mark
>
> From: Adam Young [mailto:ayo...@redhat.com]
> Sent: Monday, July 22, 2013 4:52 PM
> To: Miller, Mark M (EB SW Cloud - R&D - Corvallis)
> Cc: Dolph Mathews; OpenStack Development Mailing List
> Subject: Re: [keystone] Split the Identity Backend blueprint
>
>
> On 07/22/2013 07:43 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
> wrote:
>
> Adam,
>
>  
>
> You wrote:
>
>  
>
> [identity] 
>
>  driver = keystone.identity.backends.ldap.Identity
>
>  
>
> [assignment] 
>
> driver = keystone.assignment.backends.sql.Identity
>
>  
>
> Did you mean to write: 
>
>  
>
> [assignment] 
>
> driver = keystone.assignment.backends.sql.Assignment
>
> Yes, that was a mistake on my part.  Sorry
>
> 
>
>  
>
> Mark
>
>  
>
> From: Adam Young [mailto:ayo...@redhat.com]
> Sent: Monday, July 22, 2013 12:50 PM
> To: Miller, Mark M (EB SW Cloud - R&D - Corvallis)
> Cc: Dolph Mathews; OpenStack Development Mailing List
> Subject: Re: [keystone] Split the Identity Backend blueprint
>
>  
>
> On 07/22/2013 01:38 PM, Miller, Mark M (EB SW Cloud - R&D - Corvalli

Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Dolph Mathews
On Tue, Jul 23, 2013 at 10:40 AM, Henry Nash wrote:

> Hi
>
> As part of bp
> https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target I
> have uploaded some example WIP code showing a proposed approach for just a
> few API calls (one easy, one more complex).  I'd appreciate early feedback
> on this before I take it any further.
>
> https://review.openstack.org/#/c/38308/
>
> A couple of points:
>
> - One question is how to handle errors when you are going to get a
> target object before doing your policy check.  What do you do if the object
> does not exist?  If you return NotFound, then someone who was not
> authorized could troll for the existence of entities by seeing whether
> they got NotFound or Forbidden.  If, however, you return Forbidden, then
> users who are authorized to, say, manage users in a domain would always get
> Forbidden for objects that didn't exist (since we can't know where the
> non-existent object would have been!).  So this would modify the expected return codes.
>

This could be based on whether debug mode is enabled or not... in debug
mode, raise a Forbidden for an object that exists but you don't have access
to. In normal mode, suppress that extra (potentially sensitive) information
by converting Forbidden errors into 404s. Either way, the IDs would be very
difficult to guess, so I'm not sure how much trouble it's worth?


>
> - I really think we need some good documentation on how to build keystone
> policy files.  I'm happy to take a first cut at such a thing - what do you
> think the right place is for such documentation?
>

That would be MUCH appreciated -- definitely belongs in openstack-manuals
but I'm not sure which book would be most appropriate?

  https://github.com/openstack/openstack-manuals/tree/master/doc/src/docbkx


>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Russell Bryant
On 07/23/2013 12:02 PM, Alex Glikson wrote:
> Russell Bryant  wrote on 23/07/2013 05:35:18 PM:
>> >> #1 - policy associated with a host aggregate
>> >>
>> >> This seems very odd to me.  Scheduling policy is what chooses hosts, so
>> >> having a subset of hosts specify which policy to use seems backwards.
>> >
>> > This is not what we had in mind. Host aggregate is selected based on
>> > policy passed in the request (hint, extra spec, or whatever -- see
>> > below) and 'policy' attribute of the aggregate -- possibly in
>> > conjunction with 'regular' aggregate filtering. And not the other way
>> > around. Maybe the design document is not clear enough about this point.
>>
>> Then I don't understand what this adds over the existing ability to
>> specify an aggregate using extra_specs.
> 
> The added value is in the ability to configure the scheduler accordingly
> -- potentially differently for different aggregates -- in addition to
> just restricting the target host to those belonging to an aggregate with
> certain properties. For example, let's say we want to support two
> classes of workloads - CPU-intensive, and memory-intensive. The
> administrator may decide to use 2 different hardware models, and
> configure one aggregate with lots of CPU, and another aggregate with
> lots of memory. In addition to just routing an incoming provisioning
> request to the correct aggregate (which can be done already), we may
> want different cpu_allocation_ratio and memory_allocation_ratio when
> managing resources in each of the aggregates. In order to support this,
> we would define 2 policies (with corresponding configuration of
> filters), and attach each one to the corresponding aggregate.

I understand the use case, but can't it just be achieved with 2 flavors
and without this new aggregate-policy mapping?

flavor 1 with extra specs to say aggregate A and policy Y
flavor 2 with extra specs to say aggregate B and policy Z
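
e.g., something like (illustrative commands with made-up names, assuming
the AggregateInstanceExtraSpecsFilter is enabled):

nova aggregate-set-metadata agg-A policy=Y
nova flavor-key flavor1 set aggregate_instance_extra_specs:policy=Y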

>>
>> >> #2 - via a scheduler hint
>> >> How about just making the scheduling policy choice as simple as an item
>> >> in the flavor extra specs?
>> >
>> > This is certainly an option. It would be just another implementation of
>> > the policy selection interface (implemented using filters). In fact, we
>> > already have it implemented -- just thought that explicit hint could be
>> > more straightforward to start with. Will include the implementation
>> > based on flavor extra spec in the next commit.
>>
>> Ok.  I'd actually prefer to remove the scheduler hint support
>> completely.
> 
> OK, removing the support for doing it via hint is easy :-)
> 
>> I'm not even sure it makes sense to make this pluggable.  I
>> can't think of why something other than flavor extra specs is necessary
>> and justifies the additional complexity.
> 
> Well, I can think of a few use-cases where the selection approach might be
> different. For example, it could be based on tenant properties (derived
> from some kind of SLA associated with the tenant, determining the
> over-commit levels), or image properties (e.g., I want to determine
> placement of Windows instances taking into account Windows licensing
> considerations), etc.

Well, you can define tenant specific flavors that could have different
policy configurations.

I think I'd rather hold off on the extra complexity until there is a
concrete implementation of something that requires and justifies it.

>> I think some additional examples would help.  It's also important to
>> have this laid out for documentation purposes.
> 
> OK, sure, will add more. Hopefully few examples above are also helpful
> to clarify the intention/design.


-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] scalable architecture

2013-07-23 Thread Sergey Lukjanov
Hi everyone,

We’ve started working on upgrading the Savanna architecture in version 0.3 to make 
it horizontally scalable.

The most part of information is in the wiki page - 
https://wiki.openstack.org/wiki/Savanna/NextGenArchitecture.

Additionally there are several blueprints created for this activity - 
https://blueprints.launchpad.net/savanna?searchtext=ng-

We are looking for comments / questions / suggestions.

P.S. Another thing that we’re working on in Savanna 0.3 is EDP (Elastic 
Data Processing).

Thank you!

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] removing sudoers.d rules from disk-image-builder

2013-07-23 Thread Chris Jones
Hi

On 23 July 2013 10:52, Robert Collins  wrote:

> So I'd like to change things to say:
>  - either run sudo disk-image-create or
>

This is probably the simplest option, but it does increase the amount of
code we're running with elevated privileges. That might be a concern, but
probably isn't, given the ratio of stuff that currently runs without sudo
to the stuff that does.
I think we also need to do a little work to make this option functional; a
quick test just now suggests we are doing something wrong with
ELEMENTS_PATH, at least.


>  - setup passwordless sudo or
>

Doesn't sound like a super awesome option to me, it places an ugly security
problem on anyone wanting to set this up anywhere, imo.


>  - don't run unattended.
>

I like being able to run a build while I read email or do some reviews, so
I do not like this option ;)

I think if we make option 1 work, then option 2 is a viable option for
people who want it: they have a single command to allow in sudoers. Option
3 essentially works in all scenarios :)

FWIW I do quite like the implicit auditing of sudo commands that is
currently required to manually create the sudoers file, but I take your
point that it's probably unnecessary work at this point.

Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] removing sudoers.d rules from disk-image-builder

2013-07-23 Thread Clint Byrum
Excerpts from Robert Collins's message of 2013-07-23 02:52:11 -0700:
> We have a bunch of sudo rules in disk-image-builder. They are there
> primarily so we could have passwordless sudo on jenkins boxes, but
> working with the infra team now, it looks like we'd run on
> devstack-gate nodes, not on jenkins directly, so they aren't needed
> for that.
> 
> They don't add appreciable security for end users as they are
> trivially bypassed with link attacks.
> 
> And for distributors they are not something you want to install from a 
> package.
> 
> The only thing the *do* do is permit long running builds to run
> unattended by users with out reprompting for sudo; but this isn't an
> issue for most users, as we download the bulk of data before hitting
> the first sudo call.
> 
> So I'd like to change things to say:
>  - either run sudo disk-image-create or
>  - setup passwordless sudo or
>  - don't run unattended.
> 
> and delete the sudoers.d rules as being a distraction, one we no longer need.
> 
> Opinions?

Keeping it simple seems more useful in keeping diskimage-builder users
secure than specifying everything. Perhaps a user who wants to chase
higher security will do so using SELinux or AppArmor. +1 for the plan.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] scalable architecture

2013-07-23 Thread Russell Bryant
On 07/23/2013 12:32 PM, Sergey Lukjanov wrote:
> Hi evereyone,
> 
> We’ve started working on upgrading Savanna architecture in version 0.3 to 
> make it horizontally scalable.
> 
> The most part of information is in the wiki page - 
> https://wiki.openstack.org/wiki/Savanna/NextGenArchitecture.
> 
> Additionally there are several blueprints created for this activity - 
> https://blueprints.launchpad.net/savanna?searchtext=ng-
> 
> We are looking for comments / questions / suggestions.
> 
> P.S. The another thing that we’re working on in Savanna 0.3 is EDP (Elastic 
> Data Processing).

Just did a quick look ... what's the justification for needing
savanna-conductor?

In nova, putting db access through nova-conductor was to remove direct
db access from compute nodes, since they are the least trusted part of
the system.  I don't see the same concern here.  Is there another reason
for this or should you just have api and engine hit the db directly?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Meeting Canceled for Today.

2013-07-23 Thread Russell Bryant
On 07/23/2013 11:38 AM, Peter Pouliot wrote:
> Hi All,
> 
>  
> 
> I need to cancel the Hyper-V meeting for today.   We will resume next week.

I'd like an update from you guys on all of the hyper-v blueprints
targeted for havana-3.  I see 6 of them.  A few are still marked not
started.  Do you still intend to deliver all of them by the deadline?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Meeting Canceled for Today.

2013-07-23 Thread Alessandro Pilotti
Hi Russell,

Yep, we are on track with the development, I have to update the BP status.

There's a fairly big one up for review now, 
https://review.openstack.org/#/c/38160/; the other Nova ones are quicker to get 
ready for review.


Thanks,

Alessandro


On Jul 23, 2013, at 19:47, Russell Bryant wrote:

On 07/23/2013 11:38 AM, Peter Pouliot wrote:
Hi All,



I need to cancel the Hyper-V meeting for today.   We will resume next week.

I'd like an update from you guys on all of the hyper-v blueprints
targeted for havana-3.  I see 6 of them.  A few are still marked not
started.  Do you still intend to deliver all of them by the deadline?

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread David Chadwick

When writing a previous ISO standard, the approach we took was as follows:

Lie to people who are not authorised.

So, applying this approach to your situation, you could reply Not Found 
to people who are authorised to see the object if it had existed but 
does not, and Not Found to those not authorised to see it, regardless of 
whether it exists or not. In this case, only those who are authorised to 
see the object will get it if it exists. Those not authorised cannot 
tell the difference between objects that don't exist and those that do exist.
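
In pseudo-Python, the dispatch would be something like (a sketch only; the
store and policy helpers are hypothetical):

class NotFound(Exception):
    pass

def get_entity(entity_id, user, store, policy_allows):
    entity = store.get(entity_id)   # returns None if absent
    if entity is None or not policy_allows(user, entity):
        # Identical answer whether the object is missing or merely
        # hidden: unauthorized callers cannot probe for existence.
        raise NotFound(entity_id)
    return entity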


regards

David


On 23/07/2013 16:40, Henry Nash wrote:

Hi

As part of bp 
https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target I have 
uploaded some example WIP code showing a proposed approach for just a few API 
calls (one easy, one more complex).  I'd appreciate early feedback on this 
before I take it any further.

https://review.openstack.org/#/c/38308/

A couple of points:

- One question is on how to handle errors when you are going to get a target 
object before doing your policy check.  What do you do if the object does not 
exist?  If you return NotFound, then someone, who was not authorized, could 
troll for the existence of entities by seeing whether they got NotFound or 
Forbidden.  If however, you return Forbidden, then users who are authorized to, 
say, manage users in a domain would always get Forbidden for objects that didn't 
exist (since we can't know where the non-existent object would have been!).  So 
this would modify the expected return codes.

- I really think we need some good documentation on how to build keystone policy 
files.  I'm happy to take a first cut at such a thing - what do you think the 
right place is for such documentation?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican]

2013-07-23 Thread Tiwari, Arvind
Thanks Jarret for all the good information.

It seems KMIP is getting lots of enterprise attention, so I think it may be a 
good candidate for a future (as you already mentioned in your email below) 
Barbican feature.  As per the link below, it seems our community also expects 
KMIP to be integrated with the OpenStack line of products.

https://wiki.openstack.org/wiki/KMIPclient

Would you mind sharing the Barbican product roadmap (if it is public) as I did 
not find one?

Following are some of my thoughts on your previous email about KMIP:
(*) That is true, but it is getting lots of recognition, which means in the 
future we will see more HSM products with KMIP compatibility.
(**) I think Barbican will act as a KMS proxy in this case, which does not 
fulfill the KMIP protocol philosophy, which is built around direct interaction 
between a KMIP client and server.


Regards,
Arvind



From: Jarret Raim [mailto:jarret.r...@rackspace.com]
Sent: Monday, July 22, 2013 2:38 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [barbican]

I'm the product owner for Barbican at Rackspace. I'll take a shot an answering 
your questions.

> 1. What is the state of the project? Is it in a state where it can be 
> utilized in production deployments?

We are currently in active development and pushing for our 1.0 release for Havana. 
As to production deployments, the answer right now is none. We are currently 
working on enabling Barbican to use hardware security modules for key storage. 
Once this code is complete, we should be close to a place where the first 
production deployment is feasible. At Rack, we are building out the 
infrastructure to do so and I hope to have good news once we get towards the 
Summit.

> 2. Does Barbican implement the 
> https://wiki.openstack.org/wiki/KeyManager BP? If not, please point me to the 
> correct design/BP resource on which Barbican is based.

We are inspired by the blueprint you linked. That blueprint was a bit more 
limited than we were planning and we have changed quite a bit. For a more 
detailed version, you can find lots of documentation on our wiki here:

https://github.com/cloudkeep/barbican/wiki/Blueprint:-Technical-Approach
https://github.com/cloudkeep/barbican
https://github.com/cloudkeep/barbican/wiki

> 3. Is it KMIP (KMIP 1.1 spec 
> https://www.oasis-open.org/standards#kmipspecv1.1) compliant? If not, what 
> are the plans - any initiative so far?

Not right now. As I mentioned in a previous email (I'll copy the contents 
below), KMIP is not the greatest protocol for this use case. Our current plans 
are to expose the Barbican API to all consumers. This is a standard OpenStack 
API using ReST / JSON, authing through keystone, etc. If there is enough 
interest, I am planning on supporting KMIP inside Barbican to talk to various 
HSM type providers. This would most likely not be exposed to customers.

I haven't heard from anyone who needs KMIP support at this point. Mostly the 
questions have just been whether we are planning on supporting it. If you have 
a strong use case as to why you want / need it, I'd love to hear it. You can 
respond here or reach out to me at 
jarret.r...@rackspace.com

Thanks,
Jarret


Here is the previous email relating to KMIP for additional reading:

I'm not sure that I agree with this direction. In our investigation, KMIP is a 
problematic protocol for several reasons:

  *   We haven't found an implementation of KMIP for Python. (Let us know if 
there is one!)
  *   Support for KMIP by HSM vendors is limited. (*)
  *   We haven't found software implementations of KMIP suitable for use as an 
HSM replacement. (e.g. Most deployers wanting to use KMIP would have to spend a 
rather large amount of money to purchase HSMs)
  *   From our research, the KMIP spec and implementations seem to lack support 
for multi-tenancy. This makes managing keys for thousands of users difficult or 
impossible.
The goal for the Barbican system is to provide key management for OpenStack. It 
uses the standard interaction mechanisms for OpenStack, namely ReST and JSON. 
We integrate with keystone and will provide common features like usage events, 
role-based access control, fine grained control, policy support, client libs, 
Ceilometer support, Horizon support and other things expected of an OpenStack 
service. If every product is forced to implement KMIP, these features would 
most likely not be provided by whatever vendor is used for the Key Manager. 
Additionally, as mentioned in the blueprint, I have concerns that vendor 
specific data will be leaked into the rest of OpenStack for things like key 
identifiers, authentication and the like.

(**) I would propose that rather than each product implement KMIP support, we 
implement KMIP support into Barbican. This will allow the products to speak 
ReST / JSON using our client libraries just like any other OpenStack system and 
Barbican will take care of being a good OpenStack citizen. 

[openstack-dev] [nova] [baremetal] Mixed bare-metal + hypervisor cloud using grizzly

2013-07-23 Thread Zsolt Haraszti
Hi,

We are very interested in setting up a small OpenStack cloud with a portion of
the servers used as bare-metal servers and the rest used as "normal" KVM
hypervisor compute nodes. We are using grizzly, and launch with devstack
for simplicity.

For a proof-of-concept, I set up an all-in-one node (also acting as KVM
compute node). Now I am trying to attach a second compute node running in
baremetal mode.

Is this known to work?

As a side note, devstack did not seem to support our case very well, i.e.,
when the control node is not the baremetal node. A number of the automated
steps were skipped. We worked around this by manually creating the nova_bm
database, db sync-ing it, creating and uploading the deploy and test
images, and adding a bare-metal flavor. If there were interest, I would be
willing to look into modifying devstack to support our case.

After this, I was able to enroll an IPMI-enabled 3rd server as a
baremetal-node, but I am unable to create a BM instance on it. The instance
gets created in the DB, but the scheduler errors out with NoValidHost. I
started debugging the issue by investigating the logs and looking into the
code. I see a few things that I suspect may not be right:

If I add the second compute node as a normal KVM node, I can see the
scheduler on the all-in-one node show both compute nodes refreshing
every 60 seconds. If I re-add the 2nd compute node in BM mode, I can see no
more updates coming from that node in the scheduler.

Also, I dug into the scheduler code a bit, and I can see that in the
scheduler/host_manager.HostManager.get_all_host_states() the call
to db.compute_node_get_all(context) returns only one node, the all-in-one.

Both of the above suggests that the scheduler may have no visibility of the
BM compute node, hence my troubles.

I can debug this further, but I thought I'd ask first. Any pointers would be
much appreciated.

Zsolt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread David Chadwick

Hi Henry

using the XACML processing model, the functionality that you are
describing, which you say is currently partly missing from Keystone, is
that of the context handler. Its job is to marshall all the attributes
that are needed and put them into the request context for calling the
policy engine. So it should be perfectly possible for the API call to
simply name a target object and an operation (such as Delete userID),
then the keystone context handler can fetch the various attributes of
the target (using a function called the Policy Information Point in
XACML), add them to the request to the policy engine (in the delete 
userID case all you might need to fetch is the domain id), then get the
response from the policy engine, and if granted, hand back control to 
Keystone to continue processing the request.


Of course the tricky thing is knowing which object attributes to fetch 
for which user API requests. In the general case you cannot assume that 
Keystone knows the format or structure of the policy rules, or which 
attributes each will need, so you would need a specific tailored context 
handler to go with a specific policy engine. This implies that the 
context handler and policy engine should be pluggable Keystone 
components that it calls, and that can be switched as people decide to 
use different policy engines. 
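
To make that concrete, here is a minimal sketch of what such a pluggable pair 
could look like (the class and method names are purely hypothetical, not 
existing Keystone interfaces):

    class ContextHandler(object):
        """Marshals the attributes a given policy engine needs."""
        def build_request(self, creds, action, target_ref):
            # e.g. for a "delete userID" call, fetch the owning domain
            # of the target user and add it to the request context
            return {'user_id': creds.get('user_id'),
                    'action': action,
                    'target': {'domain_id': target_ref.get('domain_id')}}

    class PolicyEngine(object):
        """Anything that can answer an authorization request."""
        def authorize(self, request):
            raise NotImplementedError

    def check(creds, action, target_ref, handler, engine):
        request = handler.build_request(creds, action, target_ref)
        return engine.authorize(request)

Swapping engines would then just mean configuring a different 
ContextHandler/PolicyEngine pair, with the rest of Keystone unchanged.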


Hope this helps

regards

David

On 23/07/2013 16:40, Henry Nash wrote:

Hi

As part of bp
https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
I have uploaded some example WIP code showing a proposed approach for
just a few API calls (one easy, one more complex).  I'd appreciate
early feedback on this before I take it any further.

https://review.openstack.org/#/c/38308/

A couple of points:

- One question is on how to handle errors when you are going to get a
target object before doing your policy check.  What do you do if the
object does not exist?  If you return NotFound, then someone, who was
not authorized, could troll for the existence of entities by seeing
whether they got NotFound or Forbidden.  If however, you return
Forbidden, then users who are authorized to, say, manage users in a
domain would always get Forbidden for objects that didn't exist (since
we can't know where the non-existent object would have been!).  So this
would modify the expected return codes.

- I really think we need some good documentation on how to build
keystone policy files.  I'm happy to take a first cut at such a thing
- what do you think the right place is for such documentation?

___ OpenStack-dev mailing
list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Problem When Have Test In Master Branch

2013-07-23 Thread Wentian Jiang
stdout: {{{
GET: /v1/nodes/1be26c0b-03f2-4d2e-ae87-c02d7f33c123 {}
GOT:Response: 401 Unauthorized
Content-Type: text/plain; charset=UTF-8
Www-Authenticate: Keystone uri='https://127.0.0.1:35357'
401 Unauthorized
This server could not verify that you are authorized to access the document
you requested. Either you supplied the wrong credentials (e.g., bad
password), or your browser does not understand how to supply the
credentials required.
 Authentication required
}}}

Traceback (most recent call last):
  File "ironic/tests/api/test_acl.py", line 70, in test_non_admin
self.assertEqual(response.status_int, 403)
  File
"/home/jiangwt100/WorkingProject/ironic/.tox/venv/local/lib/python2.7/site-packages/testtools/testcase.py",
line 322, in assertEqual
self.assertThat(observed, matcher, message)
  File
"/home/jiangwt100/WorkingProject/ironic/.tox/venv/local/lib/python2.7/site-packages/testtools/testcase.py",
line 417, in assertThat
raise MismatchError(matchee, matcher, mismatch, verbose)
MismatchError: 401 != 403


-- 
Wentian Jiang
UnitedStack Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Adam Young

On 07/23/2013 12:54 PM, David Chadwick wrote:

When writing a previous ISO standard the approach we took was as follows

Lie to people who are not authorised.


Is that your verbiage?  I am going to reuse that quote, and I would like 
to get the attribution correct.




So applying this approach to your situation, you could reply Not Found 
to people who are authorised to see the object if it had existed but 
does not, and Not Found to those not authorised to see it, regardless 
of whether it exists or not. In this case, only those who are 
authorised to see the object will get it if it exists. Those not 
authorised cannot tell the difference between objects that don't exist 
and those that do exist.


So, to try and apply this to a semi-real example:  There are two types 
of URLs.  Ones that are like this:


users/55FEEDBABECAFE

and ones like this:

domain/66DEADBEEF/users/55FEEDBABECAFE


In the first case, you are selecting against a global collection, and in 
the second, against a scoped collection.


For unscoped, you have to treat all users as equal, and thus a 404 
probably makes sense.


For a scoped collection we could return a 404 or a 403 Forbidden based on the 
user's credentials:  all resources under domain/66DEADBEEF would show up 
as 403s, regardless of existence or not, if the user had no roles in the 
domain 66DEADBEEF.  A user that would be allowed access to resources 
in 66DEADBEEF would get a 403 only for an object that existed but 
that they had no permission to read, and 404 for a resource that doesn't 
exist.
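
As a rough sketch of that decision logic (has_role_in and friends are 
hypothetical helpers, just to illustrate the scheme):

    def denied_status(creds, domain_id, obj_exists):
        if domain_id is None:
            # unscoped collection: all users look the same, so 404
            return 404
        if not has_role_in(creds, domain_id):
            # no roles in the domain: 403 whether or not the object exists
            return 403
        # authorized for the domain: 404 for missing objects, 403 only
        # for objects that exist but that the user may not read
        return 404 if not obj_exists else 403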







regards

David


On 23/07/2013 16:40, Henry Nash wrote:

Hi

As part of bp 
https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target 
I have uploaded some example WIP code showing a proposed approach for 
just a few API calls (one easy, one more complex). I'd appreciate 
early feedback on this before I take it any further.


https://review.openstack.org/#/c/38308/

A couple of points:

- One question is on how to handle errors when you are going to get a 
target object before doing your policy check.  What do you do if the 
object does not exist?  If you return NotFound, then someone, who was 
not authorized, could troll for the existence of entities by seeing 
whether they got NotFound or Forbidden. If however, you return 
Forbidden, then users who are authorized to, say, manage users in a 
domain would always get Forbidden for objects that didn't exist (since 
we can't know where the non-existent object would have been!).  So this 
would modify the expected return codes.


- I really think we need some good documentation on how to build 
keystone policy files.  I'm happy to take a first cut at such a thing 
- what do you think the right place is for such documentation?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Meeting Canceled for Today.

2013-07-23 Thread Russell Bryant
On 07/23/2013 12:55 PM, Alessandro Pilotti wrote:
> Hy Russell,
> 
> Yep, we are on track with the development, I have to update the BP status.
> 
> There's a fairly big one up for review
> now https://review.openstack.org/#/c/38160/, the other Nova ones are
> faster to get ready for review.

Thanks for the quick update!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Adam Young

On 07/23/2013 01:17 PM, David Chadwick wrote:
Of course the tricky thing is knowing which object attributes to fetch 
for which user API requests. In the general case you cannot assume 
that Keystone knows the format or structure of the policy rules, or 
which attributes each will need, so you would need a specific tailored 
context handler to go with a specific policy engine. This implies that 
the context handler and policy engine should be pluggable Keystone 
components that it calls, and that can be switched as people decide to 
use different policy engines. 
We are using a model where Keystone plays the mediator, and decides what 
attributes to include.  The only attributes we currently claim to 
support are


userid
domainid
role_assignments: a collection of tuples  (project, role)

Objects in openstack are either owned by users (in Swift) or by Projects 
(Nova and elsewhere).  Thus, providing userid and role_assignments 
should be sufficient to make access decisions.  If there are other 
attributes that people want to consume for policy enforcement, they can 
add them to custom token providers.  The policy enforcement mechanism is 
flexible enough that extending it to other attributes should be fairly 
straightforward.
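
For example, a hypothetical policy.json rule consuming those attributes against 
a target might look like this (illustrative only - the exact target syntax is 
what the blueprint under review is settling):

    {
        "identity:delete_user": "role:admin and domain_id:%(target.user.domain_id)s"
    }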




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Split the Identity Backend blueprint

2013-07-23 Thread Adam Young

On 07/23/2013 12:15 PM, Alexius Ludeman wrote:

hi Adam,

Can you explain why RoleApi() and ProjectApi() are duplicated 
in assignment/backends/ldap.py and identity/backends/ldap.py?


It would seem duplicating the same class in two files should be 
refactored into new shared file.


That is the "backwards compatbility" I was referring to earlier. Roles 
and Projects are now owned by the assignment API, but have been accessed 
via the Identity backend up until now.  Thus, the Identity 
implementation should be nothing but a shim to call the assignment 
implementation.




thanks
lex



On Tue, Jul 23, 2013 at 7:21 AM, Adam Young wrote:


On 07/22/2013 09:49 PM, Miller, Mark M (EB SW Cloud - R&D -
Corvallis) wrote:


Adam,

Sorry for the questions, but even though I have been programming
for nearly 30 years I am new to Python and I find the code base
somewhat difficult to follow. I have noticed that the file
keystone.identity.backends.ldap.Identity has a set of methods and
file keystone.assignment.backends.sql.Assignment has a set of
methods. My question is this: is there a way to specify which
methods to use the ldap.Identity backend with and which methods
to use the sql.Assignment backend with, or does each backend only
support all of the methods provided by each file? In working with
an enterprise LDAP server, there is no way we will be able to
create users or to write to it. If there is a way to pick and
choose which methods access the LDAP server and which ones access
the SQL keystone database, then I have what we need.



Here's the general gist:

We split off the Assignment functions from Identity in order to be
able to vary the two backends independently.  The expectation is
that people will use the LDAP backend for Identity and the SQL
backend for Assignments. LDAP will be read only, and Assignments
will be read-write.  That being said, there are cases where people
will have writable LDAP, or will use the SQL Identity backend, so
there are functions which can change the state of the Identity
backend, and those are not going to go away.

The general code set up is as follows:

Routers describe the mappings from URLs to Python Code.
Controllers are stateless objects.  In theory they should be
protocol agnostic, but in practice they are aware that they are
being used with HTTP.
Managers and Drivers implement the Data layer.  The managers start
as simple accessors, but over time they get more and more logic.  
We don't have a clear place for Business logic.  Since the
Backends are radically different, a lot of the logic has gotten
duplicated between LDAP, SQL, Memcached, and others.  We are
working to minimize this.  The general approach is that code that
should not be duplicated gets "pulled up" to the manager. This
kind of refactoring is constant and ongoing.

When I split out the Assignment backend, I tried to do it in a way
that did not modify the unit tests, so that other reviewers would
have the assurance that the changes were just restructuring, not
fundamentally changing functionality.  Thus, we had a shim layer
in the Identity Layer that called through to the assignment
layer.  This has the added benefit of maintaining API
compatibility for anyone who has customized code.  However, I've
found a lot of our tests were talking to the driver, not talking
through the manager, and thus I had to clean up a bunch of the
tests to go through the manager as well.

As an end user, you should specify that the Identity backend is
LDAP and the Assignment backend is SQL. Assuming your LDAP backend
is not writable, any call to the Identity layer that attempts to
morph the state of the Directory store will fail.  However, what
you should be doing is using the user groups from LDAP as a way to
manage users, and place those groups into Role Assignments. 
Roles, Role Assignments, and Projects all live in the Assignment
(SQL) backend, and all of those should be writeable regardless of
LDAP state.


Thanks,

Mark

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, July 22, 2013 4:52 PM


To: Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Cc: Dolph Mathews; OpenStack Development Mailing List
Subject: Re: [keystone] Split the Identity Backend blueprint

On 07/22/2013 07:43 PM, Miller, Mark M (EB SW Cloud - R&D -
Corvallis) wrote:

Adam,

You wrote:

[identity]

 driver = keystone.identity.backends.ldap.Identity

[assignment]

driver = keystone.assignment.backends.sql.Identity

Did you mean to write:

[assignment]

driver = keystone.assignment.backends.sql.Assignment

Yes, that was a mistake on my part.  Sorry

Mark

From: Adam Yo

Re: [openstack-dev] [ironic] Problem When Have Test In Master Branch

2013-07-23 Thread Devananda van der Veen
The tests are working for me, and for Jenkins.

Several versions of packages in requirements.txt and test-requirements.txt
were updated yesterday -- try to update your dev environment with "source
.tox/venv/bin/activate && pip install --upgrade -r requirements.txt -r
test-requirements.txt" and then run testr again.


On Tue, Jul 23, 2013 at 10:22 AM, Wentian Jiang wrote:

> stdout: {{{
> GET: /v1/nodes/1be26c0b-03f2-4d2e-ae87-c02d7f33c123 {}
> GOT:Response: 401 Unauthorized
> Content-Type: text/plain; charset=UTF-8
> Www-Authenticate: Keystone uri='https://127.0.0.1:35357'
> 401 Unauthorized
> This server could not verify that you are authorized to access the
> document you requested. Either you supplied the wrong credentials (e.g.,
> bad password), or your browser does not understand how to supply the
> credentials required.
>  Authentication required
> }}}
>
> Traceback (most recent call last):
>   File "ironic/tests/api/test_acl.py", line 70, in test_non_admin
> self.assertEqual(response.status_int, 403)
>   File
> "/home/jiangwt100/WorkingProject/ironic/.tox/venv/local/lib/python2.7/site-packages/testtools/testcase.py",
> line 322, in assertEqual
> self.assertThat(observed, matcher, message)
>   File
> "/home/jiangwt100/WorkingProject/ironic/.tox/venv/local/lib/python2.7/site-packages/testtools/testcase.py",
> line 417, in assertThat
> raise MismatchError(matchee, matcher, mismatch, verbose)
> MismatchError: 401 != 403
>
>
> --
> Wentian Jiang
> UnitedStack Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Henry Nash
One thing we could do is:

- Return Forbidden or NotFound if we can determine the correct answer
- When we can't (i.e. the object doesn't exist), then return NotFound unless a 
new config value 'policy_harden' (?) is set to true (default false) in which 
case we translate NotFound into Forbidden.
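
Roughly (a sketch only - 'policy_harden' and the exception names are 
illustrative; none of this exists yet):

    def get_target_or_fail(getter, obj_id, conf):
        try:
            return getter(obj_id)
        except exception.NotFound:
            if conf.policy_harden:
                # hide non-existence from anyone trolling for entities
                raise exception.Forbidden()
            raise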

Henry
On 23 Jul 2013, at 18:31, Adam Young wrote:

> On 07/23/2013 12:54 PM, David Chadwick wrote:
>> When writing a previous ISO standard the approach we took was as follows 
>> 
>> Lie to people who are not authorised. 
> 
> Is that your verbage?  I am going to reuse that quote, and I would like to 
> get the attribution correct.
> 
>> 
>> So applying this approach to your situation, you could reply Not Found to 
>> people who are authorised to see the object if it had existed but does not, 
>> and Not Found to those not authorised to see it, regardless of whether it 
>> exists or not. In this case, only those who are authorised to see the object 
>> will get it if it exists. Those not authorised cannot tell the difference 
>> between objects that don't exist and those that do exist. 
> 
> So, to try and apply this to a semi-real example:  There are two types of 
> URLs.  Ones that are like this:
> 
> users/55FEEDBABECAFE
> 
> and ones like this:
> 
> domain/66DEADBEEF/users/55FEEDBABECAFE
> 
> 
> In the first case, you are selecting against a global collection, and in the 
> second, against a scoped collection.
> 
> For unscoped, you have to treat all users as equal, and thus a 404 probably 
> makes sense.
> 
> For a scoped collection we could return a 404 or a 403 Forbidden based on the 
> user's credentials:  all resources under domain/66DEADBEEF would show up 
> as 403s, regardless of existence or not, if the user had no roles in the 
> domain 66DEADBEEF.  A user that would be allowed access to resources in 
> 66DEADBEEF would get a 403 only for an object that existed but that they 
> had no permission to read, and 404 for a resource that doesn't exist.
> 
> 
> 
> 
>> 
>> regards 
>> 
>> David 
>> 
>> 
>> On 23/07/2013 16:40, Henry Nash wrote: 
>>> Hi 
>>> 
>>> As part of bp 
>>> https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target I have 
>>> uploaded some example WIP code showing a proposed approach for just a few 
>>> API calls (one easy, one more complex).  I'd appreciate early feedback on 
>>> this before I take it any further. 
>>> 
>>> https://review.openstack.org/#/c/38308/ 
>>> 
>>> A couple of points: 
>>> 
>>> - One question is on how to handle errors when you are going to get a 
>>> target object before doing your policy check.  What do you do if the object 
>>> does not exist?  If you return NotFound, then someone, who was not 
>>> authorized, could troll for the existence of entities by seeing whether 
>>> they got NotFound or Forbidden.  If however, you return Forbidden, then 
>>> users who are authorized to, say, manage users in a domain would always get 
>>> Forbidden for objects that didn't exist (since we can't know where the 
>>> non-existent object would have been!).  So this would modify the expected return codes. 
>>> 
>>> - I really think we need some good documentation on how to build keystone 
>>> policy files.  I'm happy to take a first cut at such a thing - what do you 
>>> think the right place is for such documentation? 
>>> 
>>> ___ 
>>> OpenStack-dev mailing list 
>>> OpenStack-dev@lists.openstack.org 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>>> 
>> 
>> ___ 
>> OpenStack-dev mailing list 
>> OpenStack-dev@lists.openstack.org 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Joshua Harlow
Or another idea:

Have each compute node write into redis (thus avoiding saturating the MQ & 
broker/DB with capabilities information) under 2 keys, one that is updated over 
longer periods and one that is updated frequently.

- Possibly like the following

compute-$hostname.slow
compute-$hostname.fast

Now schedulers can either pull from said slow key to get less frequent updates, 
or they can subscribe (yes redis has a subscribe model) to get updates about 
the 'fast' information which will be more accurate.

Since this information is pretty transient, it doesn't seem like we need to use 
a DB and since the MQ is used for control traffic it doesn't seem so good to 
use the MQ for this transient information either.

For the problem of when a new scheduler comes online they can basically query 
the database for the compute hostnames, then query redis (slow or fast keys) 
and set up their own internal state accordingly.

Since redis can be scaled/partitioned pretty easily it seems like it could be a 
useful way to store this type of information.
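
Something along these lines (a sketch assuming the redis-py client; key names 
as above):

    import json

    import redis

    r = redis.StrictRedis(host='localhost', port=6379)

    def report_capabilities(hostname, caps, fast=False):
        suffix = 'fast' if fast else 'slow'
        key = 'compute-%s.%s' % (hostname, suffix)
        r.set(key, json.dumps(caps))
        if fast:
            # schedulers wanting near-real-time data SUBSCRIBE to this channel
            r.publish(key, json.dumps(caps))

A scheduler bootstrapping itself would read the slow keys for all known 
hostnames and then subscribe to the fast channels for deltas.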

Thoughts?

From: Joshua Harlow <harlo...@yahoo-inc.com>
Reply-To: OpenStack Development Mailing List 
<openstack-dev@lists.openstack.org>
Date: Monday, July 22, 2013 4:12 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>, 
Joe Gordon <joe.gord...@gmail.com>
Subject: Re: [openstack-dev] A simple way to improve nova scheduler
Subject: Re: [openstack-dev] A simple way to improve nova scheduler

An interesting idea, I'm not sure how useful it is but it could be.

If you think of the compute node capability information as an 'event stream' 
then you could imagine using something like apache flume 
(http://flume.apache.org/) or storm (http://storm-project.net/) to be able to 
sit on this stream and perform real-time analytics of said stream to update how 
scheduling can be performed. Maybe the MQ or ceilometer can be the same 
'stream' source but it doesn't seem like it is needed to 'tie' the impl to 
those methods. If you consider compute nodes as producers of said data and then 
hook a real-time processing engine on top that can adjust some scheduling 
database used by a scheduler, then it seems like you could vary how often compute 
nodes produce said stream info, where and how said stream info is stored and 
analyzed which will allow you to then adjust how 'real-time' you want said 
compute scheduling capability information to be up to date.

Just seems that real-time processing is a similar model to what is needed here.

Maybe something like that is where this should end up?

-Josh

From: Joe Gordon <joe.gord...@gmail.com>
Reply-To: OpenStack Development Mailing List 
<openstack-dev@lists.openstack.org>
Date: Monday, July 22, 2013 3:47 PM
To: OpenStack Development Mailing List 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] A simple way to improve nova scheduler
Subject: Re: [openstack-dev] A simple way to improve nova scheduler




On Mon, Jul 22, 2013 at 5:16 AM, Boris Pavlovic 
<bo...@pavlovic.me> wrote:
Joe,

>> Speaking of Chris Beherns  "Relying on anything but the DB for current 
>> memory free, etc, is just too laggy… so we need to stick with it, IMO." 
>> http://lists.openstack.org/pipermail/openstack-dev/2013-June/010485.html

It doesn't scale, uses tons of resources, works slowly, and is hard to extend.
Also the mechanism of getting free and used memory is done by the virt layer.
And the only thing that could be laggy is RPC (but it is also used by compute node 
updates).

You say it doesn't scale and uses tons of resources; can you show how to reproduce 
your findings?  Also, just because the current implementation of the scheduler 
is non-optimal doesn't mean that no-DB is the only solution. I am interested in 
seeing other possible solutions before going down such a drastically different 
road (no-db), such as pushing more of the logic into the DB and not searching 
through all compute nodes in python space, or looking at removing the periodic 
updates altogether, or ???.



>> * How do you bring a new scheduler up in an existing deployment and make it 
>> get the full state of the system?

You should wait for one periodic task interval, and you will get full information 
about all compute nodes.

sure, that may work we need to add logic in to handle this.


>> *  Broadcasting RPC updates from compute nodes to the scheduler means every 
>> scheduler has to process  the same RPC message.  And if a deployment hits 
>> the point where the number of compute updates is consuming 99 percent of the 
>> scheduler's time just adding another scheduler won't fix anything as it will 
>> get bombarded too.


If we are speaking about numbers, you are able to see our doc, where they are 
counted.
If we have 10k nodes it will make only 150 RPC calls/sec (which means nothing 
for CPU). By the way, we will remove 150 calls/s from the conductor. One more 
thing: currently, in a 10k-node deployment, I think we will spend almost all the 
time waiting for the DB (compute_node_get_all()). And also when we are calling 
this method in this moment we should process all data for 60 sec.

Re: [openstack-dev] [Swift] Swift Auth systems and Delay Denial

2013-07-23 Thread Clay Gerrard
I think delay_denial will have to be maintained for awhile for backwards
compatibility no matter what happens.

I think existing auth middlewares can and often do reject requests outright
without forwarding them to swift (no x-auth-token?).

I think get_info and the env caching is relatively new, do we have
confidence that it's call signature and data structure will be robust to
future requirements?  It seems reasonable to me at first glance that
upstream middleware would piggy back on existing memcache data, middleware
authors certainly already can and presumably do depend on get_info's
interface; so i guess the boat already sailed?

I think there's some simplicity gained from an auth middleware
implementor's perspective if swift specific path parsing and and relevant
acl extraction has a more procedural interface, but if there's efficiency
gains it's probably worth jumping through some domain specific hoops.

So it's certainly possible today, but if we document it as a supported
interface we'll have to be more careful about how we maintain it.  What's
motivating you to change what's there?  Do you think keystone or swauth
incur a measurable overhead from the callback based auth in the full
context of the lifetime of the request?
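
For reference, the upfront pattern being discussed might look roughly like this 
(a sketch only; it assumes get_container_info() from 
swift.proxy.controllers.base is the supported way to reach the cached info, and 
_is_allowed() is a stand-in for whatever ACL logic an auth system applies):

    from swift.common.swob import HTTPForbidden
    from swift.proxy.controllers.base import get_container_info

    class UpfrontAuth(object):
        def __init__(self, app, conf):
            self.app = app

        def _is_allowed(self, env, container_info):
            # stand-in: inspect the token, container_info['read_acl'], etc.
            return 'HTTP_X_AUTH_TOKEN' in env

        def __call__(self, env, start_response):
            container_info = get_container_info(env, self.app)
            if not self._is_allowed(env, container_info):
                # reject before the request ever reaches Swift
                return HTTPForbidden()(env, start_response)
            # authorized upfront: note no env['swift.authorize'] is set,
            # so the delay_denial machinery never comes into play
            return self.app(env, start_response)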

-Clay



On Tue, Jul 23, 2013 at 1:49 AM, David Hadas  wrote:

> Hi,
>
> Starting from 1.9, Swift has get_info() support allowing middleware to get
> container and/or account information maintained by Swift.
> Middleware can use get_info() on a container to retrieve the container
> metadata.
> In a similar way, middleware can use get_info() on an account to retrieve
> the account metadata.
>
> The ability to retrieve container and account metadata by middleware opens
> up an option to write Swift Auth systems without the use of the Swift Delay
> Denial mechanism. For example, when a request comes in ( during
> '__call__()' ), the Auth middleware can perform get_info on the container
> and/or account and decide whether to authorize or reject the client request
> upfront and before the request ever reaching Swift. In such a case, if the
> Auth middleware decides to allow the request to be processed by Swift, it
> may avoid adding a swift.authorize callback and thus disabling the use of
> the Swift delay_denial mechanism.
>
> Qs:
> 1. Should we document this approach as another way to do auth in Swift
> (currently this option is not well documented)
>  See http://docs.openstack.org/developer/swift/development_auth.html:
>   "Authorization is performed through callbacks by the Swift Proxy
> server to the WSGI environment’s swift.authorize value, if one is set."
> followed by an example how that is done. Should we add description for this
> alternative option of using get_info() during __call__()?
>
> 2. What are the pros and cons of each of the two options?
>  What benefit do we see in an AUTH system using delay_denial over
> deciding on the authorization upfront?
>  Should we continue use delay_denial in keystone_auth, swauth?
>
> DH
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron Network Statistics

2013-07-23 Thread Brian Haley
Hi Peter,

There are some blueprints around getting metering information at the router
level, mainly so they can be fed into a billing system.  For example:

https://blueprints.launchpad.net/neutron/+spec/bandwidth-router-measurement

There are patches out for review as well.

They are targeted more at an admin than a tenant though.

Some of the items you list are there for each instance if you just look at the
tap device stats, others might be in iptables.  It almost seems like having a
way to get port stats is what you want?  And you'd probably want to rate-limit
such requests, speaking with my provider hat on.
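
For what it's worth, the per-port numbers are already sitting in sysfs once you 
know the tap device name (sketch below; mapping a Neutron port to its tap 
device is left to the agent, and the device name shown is made up):

    def tap_stats(dev):
        base = '/sys/class/net/%s/statistics/' % dev
        stats = {}
        for name in ('rx_bytes', 'tx_bytes', 'rx_packets', 'tx_packets'):
            with open(base + name) as f:
                stats[name] = int(f.read())
        return stats

    print(tap_stats('tap1234abcd-56'))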

Like Nachi said, please write a blueprint for it.

Thanks,

-Brian

On 07/22/2013 08:22 PM, Mellquist, Peter wrote:
> Hi!
> 
> I am interested to know if the topic of surfacing networking statistics 
> through
> the Neutron APIs has been discussed and if there are any existing blueprints
> working on this feature?  Specifically,  the current APIs,
> https://wiki.openstack.org/wiki/Neutron/APIv2-specification, do not support
> reading network counters typically available through SNMP. I think these
> ‘/stats’ would prove to be quite valuable for performance and fault 
> monitoring.
> If I am an OpenStack / Neutron tenant and I have created my own networks, how 
> can
> I see performance and faults?
> 
>  
> 
> Examples,
> 
> GET /networks/{network_id}/stats
> 
> GET   /subnets/{subnet-id}/stats
> 
> GET   /floatingips/{floatingip_id}/stats
> 
>  
> 
> Status  : [up,down,error]
> 
> Usage   : [sum of Tx and Rx packets]
> 
> ReceivedRate: [Rate of data received in kB/sec]
> 
> TransmittedRate : [Rate of data transmitted in kB/sec]
> 
> PacketTx: [total # of packets transmitted since reset]
> 
> PacketRx: [total # of packets received in since reset]
> 
> Etc …
> 
>  
> 
>  
> 
> Thanks,
> 
> Peter.
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] KMIP client for volume encryption key management

2013-07-23 Thread Becker, Bill
Thanks for your review and comments on the blueprint. A few comments / 
clarifications:


* There are about a half dozen key manager products on the market that 
support KMIP. They are offered in different form factors / price points: some 
are physical appliances (with and without embedded HSMs) and some are software 
implementations.

* Agreed that there isn't an existing KMIP client in python. We are 
offering to port the needed functionality from our current Java KMIP client to 
Python and contribute it to OpenStack.

* Good points about the common features that barbican provides. I will 
take a look at the barbican architecture and join discussions there.


Thanks,
Bill


From: Jarret Raim [mailto:jarret.r...@rackspace.com]
Sent: Friday, July 19, 2013 9:46 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] KMIP client for volume encryption key management

I'm not sure that I agree with this direction. In our investigation, KMIP is a 
problematic protocol for several reasons:

 *   We haven't found an implementation of KMIP for Python. (Let us know if 
there is one!)
 *   Support for KMIP by HSM vendors is limited.
 *   We haven't found software implementations of KMIP suitable for use as an 
HSM replacement. (e.g. Most deployers wanting to use KMIP would have to spend a 
rather large amount of money to purchase HSMs)
 *   From our research, the KMIP spec and implementations seem to lack support 
for multi-tenancy. This makes managing keys for thousands of users difficult or 
impossible.
The goal for the Barbican system is to provide key management for OpenStack. It 
uses the standard interaction mechanisms for OpenStack, namely ReST and JSON. 
We integrate with keystone and will provide common features like usage events, 
role-based access control, fine grained control, policy support, client libs, 
Ceilometer support, Horizon support and other things expected of an OpenStack 
service. If every product is forced to implement KMIP, these features would 
most likely not be provided by whatever vendor is used for the Key Manager. 
Additionally, as mentioned in the blueprint, I have concerns that vendor 
specific data will be leaked into the rest of OpenStack for things like key 
identifiers, authentication and the like.

I would propose that rather than each product implement KMIP support, we 
implement KMIP support into Barbican. This will allow the products to speak 
ReST / JSON using our client libraries just like any other OpenStack system and 
Barbican will take care of being a good OpenStack citizen. On the backend, 
Barbican will support the use of KMIP to talk to whatever device the provider 
wishes to deploy. We will also support other interaction mechanisms including 
PKCS through OpenSSH, a development implementation and a fully free and open 
source software implementation. This also allows some advanced uses cases 
including federation. Federation will allow customers of public clouds like 
Rackspace's to maintain custody of their keys while still being able to 
delegate their use to the Cloud for specific tasks.
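
To give a flavour of what ReST / JSON means for a consumer, storing a secret 
would be a single authenticated call along these lines (illustrative only - the 
host, port, tenant id and field names are assumptions, not the published API):

    import json

    import requests

    token = 'REPLACE-WITH-A-KEYSTONE-TOKEN'

    resp = requests.post(
        'http://barbican.example.com:9311/v1/12345/secrets',
        headers={'X-Auth-Token': token,
                 'Content-Type': 'application/json'},
        data=json.dumps({'name': 'volume-key',
                         'algorithm': 'aes',
                         'bit_length': 256}))
    print(resp.status_code)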

I've been asked about KMIP support at the Summit and by several of Rackspace's 
partners. I was planning on getting to it at some point, probably after 
Icehouse. This is mostly due to the fact that we didn't find a suitable KMIP 
implementation for Python so it looks like we'd have to write one. If there is 
interest from people to create that implementation, we'd be happy to help do 
the work to integrate it into Barbican.

We just released our M2 milestone and we are on track for our 1.0 release for 
Havana. I would encourage anyone interested to check out what we are working on 
and come help us out. We use this list for most of our discussions and we hang 
out on #openstack-cloudkeep on freenode.


Thanks,
Jarret




From: Becker, Bill 
<bill.bec...@safenet-inc.com>
Reply-To: OpenStack List 
<openstack-dev@lists.openstack.org>
Date: Thursday, July 18, 2013 2:11 PM
To: OpenStack List 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] KMIP client for volume encryption key management
Subject: [openstack-dev] KMIP client for volume encryption key management

A blueprint and spec to add a client that implements OASIS KMIP standard was 
recently added:

https://blueprints.launchpad.net/nova/+spec/kmip-client-for-volume-encryption
https://wiki.openstack.org/wiki/KMIPclient


We're looking for feedback to the set of questions in the spec. Any additional 
input is also appreciated.

Thanks,
Bill B.


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Joe Gordon
On Jul 22, 2013 7:13 PM, "Joshua Harlow"  wrote:
>
> An interesting idea, I'm not sure how useful it is but it could be.
>
> If you think of the compute node capability information as an 'event
stream' then you could imagine using something like apache flume (
http://flume.apache.org/) or storm (http://storm-project.net/) to be able
to sit on this stream and perform real-time analytics of said stream to
update how scheduling can be performed. Maybe the MQ or ceilometer can be
the same 'stream' source but it doesn't seem like it is needed to 'tie' the
impl to those methods. If you consider compute nodes as producers of said
data and then hook a real-time processing engine on top that can adjust
some scheduling database used by a scheduler, then it seems like you could
vary how often compute nodes produce said stream info, where and how said
stream info is stored and analyzed which will allow you to then adjust how
'real-time' you want said compute scheduling capability information to be
up to date.

Interesting idea, but not sure if it's the right solution.  There are two
known issues today:
* periodic updates can overwhelm things.  Solution: remove unneeded
updates; most scheduling data only changes when an instance does some state
change.
* according to Boris, doing a get-all-hosts from the db doesn't scale.
Solution: there are several possibilities.

Neither scale issue today is helped with flume.  But this concept may be
useful in the future

>
> Just seems that real-time processing  is a similar model as what is
needed here.
>
> Maybe something like that is where this should end up?
>
> -Josh
>
> From: Joe Gordon 
> Reply-To: OpenStack Development Mailing List <
openstack-dev@lists.openstack.org>
> Date: Monday, July 22, 2013 3:47 PM
> To: OpenStack Development Mailing List 
>
> Subject: Re: [openstack-dev] A simple way to improve nova scheduler
>
>
>
>
> On Mon, Jul 22, 2013 at 5:16 AM, Boris Pavlovic  wrote:
>>
>> Joe,
>>
>> >> Speaking of Chris Beherns  "Relying on anything but the DB for
current memory free, etc, is just too laggy… so we need to stick with it,
IMO."
http://lists.openstack.org/pipermail/openstack-dev/2013-June/010485.html
>>
>> It doesn't scale, uses tons of resources, works slowly, and is hard to
extend.
>> Also the mechanism of getting free and used memory is done by the virt
layer.
>> And the only thing that could be laggy is RPC (but it is also used by
compute node updates)
>
>
You say it doesn't scale and uses tons of resources; can you show how to
reproduce your findings?  Also, just because the current implementation of
the scheduler is non-optimal doesn't mean that no-DB is the only solution. I
am interested in seeing other possible solutions before going down such a
drastically different road (no-db), such as pushing more of the logic into
the DB and not searching through all compute nodes in python space, or
looking at removing the periodic updates altogether, or ???.
>
>>
>>
>>
>> >> * How do you bring a new scheduler up in an existing deployment and
make it get the full state of the system?
>>
>> You should wait for one periodic task interval, and you will get full
information about all compute nodes.
>
>
> sure, that may work we need to add logic in to handle this.
>
>>
>> >> *  Broadcasting RPC updates from compute nodes to the scheduler means
every scheduler has to process  the same RPC message.  And if a deployment
hits the point where the number of compute updates is consuming 99 percent
of the scheduler's time just adding another scheduler won't fix anything as
it will get bombarded too.
>>
>>
>> If we are speaking about numbers, you are able to see our doc, where
they are counted.
>> If we have 10k nodes it will make only 150 RPC calls/sec (which means
nothing for CPU). By the way, we will remove 150 calls/s from the
conductor. One more thing: currently, in a 10k-node deployment, I think we will
spend almost all the time waiting for the DB (compute_node_get_all()). And also
when we are calling this method in this moment we should process all data
for 60 sec. (So in this case, in numbers, we are doing on the scheduler side
60*requests_per_sec of our approach. Which means if we get more than 1
request per sec we will do more CPU load.)
>
>
> There are deployments in production (bluehost) that are already bigger
than 10k nodes; AFAIK the last numbers I heard were 16k nodes and they
didn't use our scheduler at all. So a better upper limit would be something
like 30k nodes.  At that scale we get 500 RPC broadcasts per second
(assuming 60 second periodic update) from periodic updates, plus updates
from state changes.  If we assume only 1% of compute nodes have instances
that are changing state that is an additional 300 RPC broadcasts to the
schedulers per second.  So now we have 800 per second.  How many RPC
updates (from compute node to scheduler) per second can a single python
thread handle without DB access? With DB Access?
>
> As for your second point, I don't follow; can you elaborate?
>
>
>
>
>>
>>
>>
>> >>

Re: [openstack-dev] [savanna] scalable architecture

2013-07-23 Thread Sergey Lukjanov
There is an inaccuracy about the savanna-conductor role in this document. Right 
now we have no real reason to make savanna-conductor a separate service. The 
main goal of declaring savanna-conductor in this doc is to illustrate that we 
want to move all db-related operations to a single module that could be used 
as a separate service (if it’s needed in the future) and make savanna able to 
work with the db only through this module, w/o direct db access. In fact we only 
want a “local” mode for savanna-conductor now.

There are several potential reasons to implement savanna-conductor as a 
separate service in the future. First of all, there are some ideas about 
provisioning savanna agents to the Hadoop clusters to monitor/manage cluster 
state, so we’ll have the same security problem as nova. The second potential 
reason is that we are planning to advance the task execution flow in savanna to 
support long-running complex operations that will require additional management, 
such as replaying/rolling back, and conductor could be the service that will 
implement it.

Thank you for the comment. I’ll update the diagram and description to explain that 
currently there is no need to implement savanna-conductor as a separate service.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Jul 23, 2013, at 20:45, Russell Bryant  wrote:

> On 07/23/2013 12:32 PM, Sergey Lukjanov wrote:
>> Hi everyone,
>> 
>> We’ve started working on upgrading the Savanna architecture in version 0.3 to 
>> make it horizontally scalable.
>> 
>> Most of the information is in the wiki page - 
>> https://wiki.openstack.org/wiki/Savanna/NextGenArchitecture.
>> 
>> Additionally there are several blueprints created for this activity - 
>> https://blueprints.launchpad.net/savanna?searchtext=ng-
>> 
>> We are looking for comments / questions / suggestions.
>> 
>> P.S. The another thing that we’re working on in Savanna 0.3 is EDP (Elastic 
>> Data Processing).
> 
> Just did a quick look ... what's the justification for needing
> savanna-conductor?
> 
> In nova, putting db access through nova-conductor was to remove direct
> db access from compute nodes, since they are the least trusted part of
> the system.  I don't see the same concern here.  Is there another reason
> for this or should you just have api and engine hit the db directly?
> 
> -- 
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About fixing old DB migrations

2013-07-23 Thread Vishvananda Ishaya

On Jul 18, 2013, at 4:19 AM, Sean Dague  wrote:

> On 07/18/2013 05:54 AM, Nikola Đipanov wrote:
>> 
>> Heya,
>> 
>> Rule is because (I believe at least) - in the spirit of continuous
>> integration - people should be able to deploy continuously anything on
>> master.
>> 
>> Due to the nature of schema versioning as done by sqlalchemy-migrate -
>> changing a migration would leave people doing continuous deployments,
>> with broken code (or db state - depends how you look at it) as they will
>> not be able to re-run the migration.
>> 
>> This has to stay like that as long as we are using sqla-migrate, I believe.
> 
> Yes, this is the crux of it. Many OpenStack deployers deploy from git, not 
> from a release, which means we should be able to go from any git commit in 
> the recent past to current git, and things be right.
> 
> But more importantly, a user that upgrades weekly during Havana.
> 
> A -> B -> C -> D -> E -> F  -> Z
> 
> needs to have the same schema as someone that decided to only upgrade from 
> Grizzly to Havana at the end of the release.
> 
> A => Z (hitting all the migrations along the way, but doing this all at once).
> 
> So if you go back and change migration C to C', you end up with the 
> possibility that the two ways of getting to Z are different, because some of 
> your users already applied C, and some did not.
> 
> For support reasons, if we end up with users at Havana with different 
> schemas, well, that's not very good (bordering on terrible).
> 
> While it's possible to get this right when you change old migrations, it's 
> much much easier to get this wrong. So as a safety measure we treat 
> migrations as write only, once they've landed the only way to fix them is to 
> apply a new migration later. The only exception is made when the migration 
> would cause data corruption that's not recoverable (like overly truncating a 
> column so we would lose data).
> 
> Anyone working on migrations, or reviewing migrations, needs to be extra 
> careful because of these issues.

As a side note, there is an exception to the rule. If one of the migrations has 
a bug that prevents it from working in some situations, then we fix this 
inline. Sometimes this means we have to fix a migration inline AND add a new 
migration to make the same fix. This has happened in the past for migrations 
that would break in postgres or if certain data was in the database.
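
The "fix forward" shape, for anyone unfamiliar, is a brand new migration rather 
than an edit to the old one - e.g. something like this hypothetical 
sqlalchemy-migrate script (table, column and widths purely illustrative; the 
.alter() extension comes from sqlalchemy-migrate itself):

    from sqlalchemy import MetaData, String, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        # re-widen a column that an earlier migration over-truncated
        instances.c.hostname.alter(type=String(255))

    def downgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        instances.c.hostname.alter(type=String(64))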

Vish

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Mark McClain
All-

I'd like to propose that Kyle Mestery and Armando Migliaccio be added to the 
Neutron core team.  Both have been very active with valuable reviews and 
contributions to the Neutron community.

Neutron core team members please respond with +1/0/-1.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] [horizon] Python client uncapping, currently blocking issues

2013-07-23 Thread Lyle, David (Cloud Services)
> Hi Sean,

>> A couple weeks ago after a really *fun* night we started down this 
>> road of uncapping all the python clients to ensure that we're actually 
>> testing the git clients in the gate. We're close, but we need the help 
>> of the horizon and ceilometerclient teams to get us there:
>>
>> 1) we need a rebase on this patch for Horizon - 
>> https://review.openstack.org/#/c/36897/
>>
>> 2) we need a python-ceilometerclient release, as ceilometer uses 
>> python-ceilometerclient (for unit tests) which means we can't bump 
>> ceilometer client (https://review.openstack.org/#/c/36905/) until it's done.

> Sorry for the delay. I think Eoghan wanted to do the release, but he probably 
> got swamped by something else, so I just released 1.0.2.

> Hope that helps,

> --
> Julien Danjou
> // Free Software hacker / freelance consultant // http://julien.danjou.info

Horizon change has now merged.  

-David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Gary Kotton
A big +2 for both!
They have been doing a great job as of late.

-Original Message-
From: Mark McClain [mailto:mark.mccl...@dreamhost.com] 
Sent: Tuesday, July 23, 2013 10:15 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] Proposal to add new neutron-core members

All-

I'd like to propose that Kyle Mestery and Armando Migliaccio be added to the 
Neutron core team.  Both have been very active with valuable reviews and 
contributions to the Neutron community.

Neutron core team members please respond with +1/0/-1.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Aaron Rosen
+1

I think both Kyle and Armando would be a great addition to the team.


On Tue, Jul 23, 2013 at 12:15 PM, Mark McClain
wrote:

> All-
>
> I'd like to propose that Kyle Mestery and Armando Migliaccio be added to
> the Neutron core team.  Both have been very active with valuable reviews
> and contributions to the Neutron community.
>
> Neutron core team members please respond with +1/0/-1.
>
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Revert Pass instance host-id to Quantum using port bindings extension.

2013-07-23 Thread Vishvananda Ishaya

On Jul 19, 2013, at 3:37 PM, Ian Wells  wrote:

>> [arosen] - sure, in this case though then we'll have to add even more
>> queries between nova-compute and quantum as nova-compute will need to query
>> quantum for ports matching the device_id to see if the port was already
>> created and if not try to create them.
> 
> The cleanup job doesn't look like a job for nova-compute regardless of the 
> rest.
> 
>> Moving the create may for other reasons be a good idea (because compute
>> would *always* deal with ports and *never* with networks - a simpler API) -
>> but it's nothing to do with solving this problem.
>> 
>> [arosen] - It does solve this issue because it moves the quantum port-create
>> calls outside of the retry schedule logic on that compute node. Therefore if
>> the port fails to create the instance goes to error state.  Moving networks
>> out of the nova-api will also solve this issue for us as the client then
>> won't rely on nova anymore to create the port. I'm wondering if creating an
>> additional network_api_class like nova.network.quantumv2.api.NoComputeAPI is
>> the way to prove this out. Most of the code in there would inherit from
>> nova.network.quantumv2.api.API .
> 
> OK, so if we were to say that:
> 
> - nova-api creates the port with an expiry timestamp to catch orphaned
> autocreated ports
> - nova-compute always uses port-update (or, better still, have a
> distinct call that for now works like port-update but clearly
> represents an attach or detach and not a user-initiated update,
> improving the plugin division of labour, but that can be a separate
> proposal) and *never* creates a port; attaching to an
> apparently-attached port attached to the same instance should ensure
> that a previous attachment is destroyed, which should cover the
> multiple-schedule lost-reply case
> - nova-compute is always talked to in terms of ports, and never in
> terms of networks (a big improvement imo)
> - nova-compute attempts to remove autocreated ports on detach
> - a cleanup job in nova-api (or nova-conductor?) cleans up expired
> autocreated ports with no attachment or a broken attachment (which
> would catch failed detachments as well as failed schedules)
> 
> how does that work for people?  It seems to improve the internal
> interface and the transactionality; it means that there's not the
> slightly nasty (and even faintly race-prone) create-update logic in
> nova-compute; it even simplifies the nova-compute interface - though
> we would need to consider how an upgrade path would work there: newer
> API with older compute should work fine, the reverse not so much.


I definitely prefer the model of creating resources on the api side vs the 
compute side. We are running into similar inconsistency bugs around volumes and 
block device mapping. The allocation of external resources should all happen in 
one place before sending the request to the compute node. Currently this is in 
nova-api although it may eventually be in nova-conductor as part of a workflow 
or even move up another layer into some new openstack-orchestration component.

Vish
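
(A rough sketch of the cleanup job discussed above, using python-neutronclient.
The 'nova:autocreated' name marker and the 'expires_at' stamp are invented;
nova-api would have to write them onto the ports it creates.)

    from datetime import datetime


    def reap_orphaned_ports(neutron):
        now = datetime.utcnow().isoformat()
        for port in neutron.list_ports(name='nova:autocreated')['ports']:
            expired = port.get('expires_at', '') < now   # hypothetical field
            attached = bool(port.get('device_id'))
            if expired and not attached:
                # leftover from a failed schedule or a failed detach
                neutron.delete_port(port['id'])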

> -- 
> Ian.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Nachi Ueno
Great News!

+1 for both guys


2013/7/23 Aaron Rosen :
> +1
>
> I think both Kyle and Armando would be a great addition to the team.
>
>
> On Tue, Jul 23, 2013 at 12:15 PM, Mark McClain 
> wrote:
>>
>> All-
>>
>> I'd like to propose that Kyle Mestery and Armando Migliaccio be added to
>> the Neutron core team.  Both have been very active with valuable reviews and
>> contributions to the Neutron community.
>>
>> Neutron core team members please respond with +1/0/-1.
>>
>> mark
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [horizon] Python client uncapping, currently blocking issues

2013-07-23 Thread Sean Dague

On 07/23/2013 03:19 PM, Lyle, David (Cloud Services) wrote:

Hi Sean,



A couple weeks ago after a really *fun* night we started down this
road of uncapping all the python clients to ensure that we're actually
testing the git clients in the gate. We're close, but we need the help
of the horizon and ceilometerclient teams to get us there:

1) we need a rebase on this patch for Horizon -
https://review.openstack.org/#/c/36897/

2) we need a python-ceilometerclient release, as ceilometer uses
python-ceilometerclient (for unit tests) which means we can't bump
ceilometer client (https://review.openstack.org/#/c/36905/) until it's done.



Sorry for the delay. I think Eoghan wanted to do the release, but he probably 
got swamped by something else, so I just released 1.0.2.



Hope that helps,



--
Julien Danjou
// Free Software hacker / freelance consultant // http://julien.danjou.info


Horizon change has now merged.


Great! Thanks.

-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Ian Wells
> * periodic updates can overwhelm things.  Solution: remove unneeded updates,
> most scheduling data only changes when an instance does some state change.

It's not clear that periodic updates do overwhelm things, though.
Boris ran the tests.  Apparently 10k nodes updating once a minute
extend the read query time by ~10% (the main problem being that the read
query is abysmal in the first place).  I don't know how much of the rest of
the infrastructure was involved in his test, though (RabbitMQ,
Conductor).

There are reasonably solid reasons why we would want an alternative to
the DB backend, but I'm not sure the update rate is one of them.   If
we were going for an alternative the obvious candidate to my mind
would be something like ZooKeeper (particularly since in some setups
it's already a channel between the compute hosts and the control
server).
-- 
Ian.
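
(For the curious, the ZooKeeper variant is easy to prototype with the kazoo
client: one ephemeral znode per compute host means stale state disappears with
the host. Paths and payload here are invented.)

    import json

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='zk1:2181,zk2:2181')
    zk.start()


    def publish_host_state(host, state):
        path = '/nova/compute/%s' % host
        data = json.dumps(state).encode('utf-8')
        if zk.exists(path):
            zk.set(path, data)  # update in place when state changes
        else:
            # ephemeral: ZooKeeper removes the node if this host dies
            zk.create(path, data, ephemeral=True, makepath=True)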

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Chalenges with highly available service VMs - port adn security group options.

2013-07-23 Thread Aaron Rosen
I agree too. I've posted a work in progress of this here if you want to
start looking at it: https://review.openstack.org/#/c/38230/

Thanks,

Aaron


On Tue, Jul 23, 2013 at 4:21 AM, Samuel Bercovici wrote:

>  Hi,
>
> I agree that the AuthZ should be separated and the service provider should
> be able to control this based on their model.
>
> For Service VMs which might be serving ~100-~1000 IPs and might use multiple
> MACs per port, it would be better to turn this off altogether than to have
> IPTABLES rules with thousands of entries.
>
> This is why I prefer to be able to turn off IP spoofing and turn off MAC
> spoofing altogether.
>
> Still, from a logical model / declarative standpoint, an IP that can migrate
> between different ports should be declared as such, and maybe also from a MAC
> perspective.
>
> Regards,
>
> -Sam.
>
> From: Salvatore Orlando [mailto:sorla...@nicira.com]
> Sent: Sunday, July 21, 2013 9:56 PM
>
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Neutron] Chalenges with highly available
> service VMs - port adn security group options.
>
> On 19 July 2013 13:14, Aaron Rosen  wrote:
>
> On Fri, Jul 19, 2013 at 1:55 AM, Samuel Bercovici 
> wrote:
>
> Hi,
>
> I have completely missed this discussion as it does not have
> quantum/Neutron in the subject (modifying it now).
>
> I think that the security group is the right place to control this.
>
> I think that this might be allowed only to admins.
>
> I think this shouldn't be admin only since tenants have control of their
> own networks; they should be allowed to do this.
>
> I reiterate my point that the authZ model for a feature should always be
> completely separated from the business logic of the feature itself.
>
> In my opinion there are grounds both for scoping it as admin only and for
> allowing tenants to use it; it might be better if we just let the policy
> engine deal with this.
>
> Let me explain what we need, which is more than just disabling spoofing.
>
> 1.   Be able to allow MACs which are not defined on the port level to
> transmit packets (for example VRRP MACs) == turn off MAC spoofing
>
> For this it seems you would need to implement the port security extension,
> which allows one to enable/disable port spoofing on a port.
>
> This would be one way of doing it. The other would probably be adding a
> list of allowed VRRP MACs, which should be possible with the blueprint
> pointed to by Aaron.
>
> 2.   Be able to allow IPs which are not defined on the port level
> to transmit packets (for example, an IP used for an HA service that moves
> between an HA pair) == turn off IP spoofing
>
> It seems like this would fit your use case perfectly:
> https://blueprints.launchpad.net/neutron/+spec/allowed-address-pairs
>
> 3.   Be able to allow broadcast messages on the port (for example for
> VRRP broadcast) == allow broadcast.
>
> Quantum does have an abstraction for disabling this, so we already allow
> this by default.
>
>   
>
> Regards,
>
> -Sam.
>
>  
>
>  
>
> From: Aaron Rosen [mailto:aro...@nicira.com]
> Sent: Friday, July 19, 2013 3:26 AM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] Chalenges with highly available service VMs
>
>
>  
>
> Yup: 
>
> I'm definitely happy to review and give hints. 
>
> Blueprint:
> https://docs.google.com/document/d/18trYtq3wb0eJK2CapktN415FRIVasr7UkTpWn9mLq5M/edit
>
> https://review.openstack.org/#/c/19279/  < patch that merged the feature;
> 
>
> Aaron
>
>  
>
> On Thu, Jul 18, 2013 at 5:15 PM, Ian Wells  wrote:
> 
>
> On 18 July 2013 19:48, Aaron Rosen  wrote:
> > Is there something this is missing that could be added to cover your use
> > case? I'd be curious to hear where this doesn't work for your case.  One
> > would need to implement the port_security extension if they want to
> > completely allow all ips/macs to pass and they could state which ones are
> > explicitly allowed with the allowed-address-pair extension (at least
> that is
> > my current thought).
>
> Yes - have you got docs on the port security extension?  All I've
> found so far are
>
> http://docs.openstack.org/developer/quantum/api/quantum.extensions.portsecurity.html
> and the fact that it's only the Nicira plugin that implements it.  I
> could implement it for something else, but not without a few hints...
> --
> Ian.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
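
(For reference, once the allowed-address-pairs extension lands, driving it from
python-neutronclient might look roughly like the sketch below. The attribute
shape follows the blueprint and Aaron's work-in-progress review, so treat it as
provisional.)

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone:5000/v2.0')

    PORT_ID = 'PORT-UUID'  # placeholder
    # let a VRRP pair claim its virtual IP (and MAC) on this port without
    # turning anti-spoofing off entirely
    neutron.update_port(PORT_ID, {'port': {'allowed_address_pairs': [
        {'ip_address': '10.0.0.201'},
        {'mac_address': '00:00:5e:00:01:01', 'ip_address': '10.0.0.201'},
    ]}})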

Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Joe Gordon
On Jul 23, 2013 3:44 PM, "Ian Wells"  wrote:
>
> > * periodic updates can overwhelm things.  Solution: remove unneeded
updates,
> > most scheduling data only changes when an instance does some state
change.
>
> It's not clear that periodic updates do overwhelm things, though.
> Boris ran the tests.  Apparently 10k nodes updating once a minute
> extend the read query by ~10% (the main problem being the read query
> is abysmal in the first place).  I don't know how much of the rest of
> the infrastructure was involved in his test, though (RabbitMQ,
> Conductor).

A great talk on OpenStack at scale that covers the scheduler:
http://www.bluehost.com/blog/bluehost/bluehost-presents-operational-case-study-at-openstack-summit-2111

>
> There are reasonably solid reasons why we would want an alternative to
> the DB backend, but I'm not sure the update rate is one of them.   If
> we were going for an alternative the obvious candidate to my mind
> would be something like ZooKeeper (particularly since in some setups
> it's already a channel between the compute hosts and the control
> server).
> --
> Ian.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread David Chadwick



On 23/07/2013 18:36, Adam Young wrote:

On 07/23/2013 01:17 PM, David Chadwick wrote:

Of course the tricky thing is knowing which object attributes to fetch
for which user API requests. In the general case you cannot assume
that Keystone knows the format or structure of the policy rules, or
which attributes each will need, so you would need a specifically tailored
context handler to go with a specific policy engine. This implies that
the context handler and policy engine should be pluggable Keystone
components that it calls, and that can be switched as people decide
to use different policy engines.

We are using a model where Keystone plays the mediator, and decides what
attributes to include.  The only attributes we currently claim to
support are


what I am saying is that, in the long term, this model is too 
restrictive. It would be much better for Keystone to call a plugin 
module that determines which attributes are needed to match the policy 
engine that is implemented.




userid
domainid
role_assignments: a collection of tuples  (project, role)


I thought in your blog post you said "While OpenStack calls this Role 
Based Access Control (RBAC) there is nothing in the mechanism that 
specifies that only roles can be used for these decisions. Any attribute 
in the token response could reasonably be used to provide/deny access. 
Thus, we speak of the token as containing authorization attributes."


Thus the plugin should be capable of adding any attribute to the request 
to the policy engine.





Objects in openstack are either owned by users (in Swift) or by Projects
(Nova and elsewhere).  Thus, providing userid and role_assignments
should be sufficient to make access decisions.


this is too narrow a viewpoint and contradicts your blog posting.

 If there are other

attributes that people want to consume for policy enforcement, they can
add them to custom token providers.


the token is not the only place that attributes can come from. The token 
contains subject attributes, but there are also resource attributes and 
environmental attributes that may be needed by the policy engine. Thus I 
am suggesting that we should design for this eventuality. I think that 
re-engineering the existing code base should allow the context handler 
to be pluggable, whilst the first implementation will simply use the 
attributes that are currently being used, so that you have backwards 
compatibility.


regards

David
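
(A minimal sketch of such a pluggable context handler; the class and key names
are invented. Subject attributes come from the token, resource attributes from
the backend driver, environmental attributes from the request itself.)

    class ContextHandler(object):
        """Gather whatever attributes the configured policy engine needs."""

        def attributes_for(self, token, target, request):
            creds = {
                'user_id': token['user_id'],
                'domain_id': token.get('domain_id'),
                'roles': token.get('roles', []),
            }
            # resource attributes, e.g. the owning domain of the target
            for key, value in target.items():
                creds['target.%s' % key] = value
            # environmental attributes
            creds['environment.remote_addr'] = request.remote_addr
            return creds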


 The policy enforcement mechanism is

flexible enough that extending it to other attributes should be fairly
straightforward.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread David Chadwick



On 23/07/2013 18:31, Adam Young wrote:

On 07/23/2013 12:54 PM, David Chadwick wrote:

When writing a previous ISO standard the approach we took was as follows

Lie to people who are not authorised.


Is that your verbiage?  I am going to reuse that quote, and I would like
to get the attribution correct.


Yes its my verbiage. But the concept is not. The concept was "tell an 
unauthorised user the same answer regardless of whether the object 
exists or not, so that he cannot gain information from leakage through 
error codes".






So applying this approach to your situation, you could reply Not Found
to people who are authorised to see the object if it had existed but
does not, and Not Found to those not authorised to see it, regardless
of whether it exists or not. In this case, only those who are
authorised to see the object will get it if it exists. Those not
authorised cannot tell the difference between objects that don't exist
and those that do exist.


So, to try and apply this to a semi-real example:  There are two types
of URLs.  Ones that are like this:

users/55FEEDBABECAFE

and ones like this:

domain/66DEADBEEF/users/55FEEDBABECAFE


In the first case, you are selecting against a global collection, and in
the second, against a scoped collection.

For unscoped, you have to treat all users as equal, and thus a 404
probably makes sense.

For a scoped collection we could return a 404 or a 403 Forbidden
based on the user's credentials: all resources under domain/66DEADBEEF
would show up as 403s, regardless of existence, if the user had no roles
in the domain 66DEADBEEF.


yes that conforms to the general principle.

 A user that would be allowed access to resources

in 66DEADBEEF  would get a 403 only for an object that existed but
that they had no permission to read, and 404 for a resource that doesn't
exist.


Yes, so that the authorised person gets information but the unauthorised 
one does not


regards

David
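
(The rule in miniature; the exception names are illustrative only.)

    class Forbidden(Exception):
        pass


    class NotFound(Exception):
        pass


    def get_scoped_resource(caller_authorised_for_scope, resource):
        if not caller_authorised_for_scope:
            # one uniform answer, whether the resource exists or not
            raise Forbidden()
        if resource is None:
            raise NotFound()  # only authorised callers learn true existence
        return resource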








regards

David


On 23/07/2013 16:40, Henry Nash wrote:

Hi

As part of bp
https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
I have uploaded some example WIP code showing a proposed approach for
just a few API calls (one easy, one more complex). I'd appreciate
early feedback on this before I take it any further.

https://review.openstack.org/#/c/38308/

A couple of points:

- One question is on how to handle errors when you are going to get a
target object before doing your policy check.  What do you do if the
object does not exist?  If you return NotFound, then someone who was
not authorized could troll for the existence of entities by seeing
whether they got NotFound or Forbidden. If, however, you return
Forbidden, then users who are authorized to, say, manage users in a
domain would always get Forbidden for objects that didn't exist (since
we can't know where the non-existent object was!).  So this would
modify the expected return codes.

- I really think we need some good documentation on how to build
keystone policy files.  I'm happy to take a first cut at such a thing
- where do you think the right place is for such documentation?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] KMIP client for volume encryption key management

2013-07-23 Thread Clark, Robert Graham
All of the main IaaS services are looking to bring in encryption of one form or 
another.

I believe that KMIP support for HSM integration at the Barbican backend is 
going to be very important to some deployers but the outstanding requirement 
has to be to solve the key management problem in a smart way that is easy to 
integrate with the rest of OpenStack and acceptable to the developers. To put 
it somewhat bluntly, the less you have to learn to integrate, the less likely 
you are to cock it up. I see large-scale support for KMIP as a maturity feature 
in OpenStack that will add value for lots of organisations but at this point in 
development I believe going down the path of least surprise (for developers) is 
what's going to drive adoption and JSON/ReST is the way to do that.

-Rob



From: , Bill <bill.bec...@safenet-inc.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, 23 July 2013 19:27
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] KMIP client for volume encryption key management

Thanks for your review and comments on the blueprint. A few comments / 
clarifications:


· There are about a half dozen key manager products on the market that 
support KMIP. They are offered in different form factors / price points: some 
are physical appliances (with and without embedded HSMs) and some are software 
implementations.

· Agreed that there isn’t an existing KMIP client in Python. We are 
offering to port the needed functionality from our current Java KMIP client to 
Python and contribute it to OpenStack.

· Good points about the common features that barbican provides. I will 
take a look at the barbican architecture and join discussions there.


Thanks,
Bill


From: Jarret Raim [mailto:jarret.r...@rackspace.com]
Sent: Friday, July 19, 2013 9:46 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] KMIP client for volume encryption key management

I'm not sure that I agree with this direction. In our investigation, KMIP is a 
problematic protocol for several reasons:

  *   We haven't found an implementation of KMIP for Python. (Let us know if 
there is one!)
  *   Support for KMIP by HSM vendors is limited.
  *   We haven't found software implementations of KMIP suitable for use as an 
HSM replacement. (e.g. Most deployers wanting to use KMIP would have to spend a 
rather large amount of money to purchase HSMs)
  *   From our research, the KMIP spec and implementations seem to lack support 
for multi-tenancy. This makes managing keys for thousands of users difficult or 
impossible.
The goal for the Barbican system is to provide key management for OpenStack. It 
uses the standard interaction mechanisms for OpenStack, namely ReST and JSON. 
We integrate with keystone and will provide common features like usage events, 
role-based access control, fine grained control, policy support, client libs, 
Ceilometer support, Horizon support and other things expected of an OpenStack 
service. If every product is forced to implement KMIP, these features would 
most likely not be provided by whatever vendor is used for the Key Manager. 
Additionally, as mentioned in the blueprint, I have concerns that vendor 
specific data will be leaked into the rest of OpenStack for things like key 
identifiers, authentication and the like.

I would propose that rather than each product implement KMIP support, we 
implement KMIP support into Barbican. This will allow the products to speak 
ReST / JSON using our client libraries just like any other OpenStack system and 
Barbican will take care of being a good OpenStack citizen. On the backend, 
Barbican will support the use of KMIP to talk to whatever device the provider 
wishes to deploy. We will also support other interaction mechanisms including 
PKCS through OpenSSH, a development implementation and a fully free and open 
source software implementation. This also allows some advanced use cases 
including federation. Federation will allow customers of public clouds like 
Rackspace's to maintain custody of their keys while still being able to 
delegate their use to the Cloud for specific tasks.

I've been asked about KMIP support at the Summit and by several of Rackspace's 
partners. I was planning on getting to it at some point, probably after 
Icehouse. This is mostly due to the fact that we didn't find a suitable KMIP 
implementation for Python so it looks like we'd have to write one. If there is 
interest from people to create that implementation, we'd be happy to help do 
the work to integrate it into Barbican.

We just released our M2 milestone and we are on track for our 1.0 release for 
Havana. I would encourage anyone interested to check out what we are working on 
and come help us out. We use this list for most of our discussions and we hang 
out on #openstack-cloudkeep on freenode.


Thanks,
Jarret




From: , Bill 
mailto:bill.bec...@saf
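
(To give a flavour of the ReST/JSON interaction Jarret describes, storing key
metadata in Barbican could look something like the sketch below. The endpoint
layout and payload fields are assumptions for illustration, not Barbican's
documented API.)

    import json

    import requests

    TOKEN = 'KEYSTONE-TOKEN'  # placeholder
    URL = 'http://barbican.example.com:9311/v1/TENANT_ID/secrets'  # assumed

    resp = requests.post(
        URL,
        headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
        data=json.dumps({'name': 'volume-key',
                         'algorithm': 'aes',
                         'bit_length': 256}))
    secret_ref = resp.json()['secret_ref']  # assumed response field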

Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Edgar Magana
+1 Absolutely for both!

BTW, I would also like to propose Eugene Nikanorov.

Thanks,

Edgar

On 7/23/13 12:15 PM, "Mark McClain"  wrote:

>All-
>
>I'd like to propose that Kyle Mestery and Armando Migliaccio be added to
>the Neutron core team.  Both have been very active with valuable reviews
>and contributions to the Neutron community.
>
>Neutron core team members please respond with +1/0/-1.
>
>mark
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Robert Kukura
On 07/23/2013 03:15 PM, Mark McClain wrote:
> All-
> 
> I'd like to propose that Kyle Mestery and Armando Migliaccio be added to the 
> Neutron core team.  Both have been very active with valuable reviews and 
> contributions to the Neutron community.
> 
> Neutron core team members please respond with +1/0/-1.

+1 for each!

-Bob

> 
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Boris Pavlovic
Ian,

There are serious scalability and performance problems with DB usage in
the current scheduler.
Rapid updates + joins make the current solution absolutely not scalable.

The Bluehost example just confirms, for me, a trivial thing. (It
just won't work.)

Tomorrow we will add another graphic:
avg user requests/sec in the current approach and in ours.

I hope it will help you to better understand the situation.


Joshua,

Our current discussion is about whether we could safely remove
information about compute nodes from Nova.
Both our approach and yours will remove this data from the Nova DB.

Also, your approach has much more:
1) network load
2) latency
3) one more service (memcached)

So I am not sure that it is better than just sending the information
directly to the scheduler.


Best regards,
Boris Pavlovic
---
Mirantis Inc.






On Tue, Jul 23, 2013 at 11:56 PM, Joe Gordon  wrote:

>
> On Jul 23, 2013 3:44 PM, "Ian Wells"  wrote:
> >
> > > * periodic updates can overwhelm things.  Solution: remove unneeded
> updates,
> > > most scheduling data only changes when an instance does some state
> change.
> >
> > It's not clear that periodic updates do overwhelm things, though.
> > Boris ran the tests.  Apparently 10k nodes updating once a minute
> > extend the read query by ~10% (the main problem being the read query
> > is abysmal in the first place).  I don't know how much of the rest of
> > the infrastructure was involved in his test, though (RabbitMQ,
> > Conductor).
>
> A great openstack at scale talk, that covers the scheduler
> http://www.bluehost.com/blog/bluehost/bluehost-presents-operational-case-study-at-openstack-summit-2111
>
> >
> > There are reasonably solid reasons why we would want an alternative to
> > the DB backend, but I'm not sure the update rate is one of them.   If
> > we were going for an alternative the obvious candidate to my mind
> > would be something like ZooKeeper (particularly since in some setups
> > it's already a channel between the compute hosts and the control
> > server).
> > --
> > Ian.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Mike Wilson
Just some added info for that talk: we are using qpid as our messaging
backend. I have no data for RabbitMQ, but our schedulers are _always_
behind on processing updates. It may be different with rabbit.

-Mike


On Tue, Jul 23, 2013 at 1:56 PM, Joe Gordon  wrote:

>
> On Jul 23, 2013 3:44 PM, "Ian Wells"  wrote:
> >
> > > * periodic updates can overwhelm things.  Solution: remove unneeded
> updates,
> > > most scheduling data only changes when an instance does some state
> change.
> >
> > It's not clear that periodic updates do overwhelm things, though.
> > Boris ran the tests.  Apparently 10k nodes updating once a minute
> > extend the read query by ~10% (the main problem being the read query
> > is abysmal in the first place).  I don't know how much of the rest of
> > the infrastructure was involved in his test, though (RabbitMQ,
> > Conductor).
>
> A great openstack at scale talk, that covers the scheduler
> http://www.bluehost.com/blog/bluehost/bluehost-presents-operational-case-study-at-openstack-summit-2111
>
> >
> > There are reasonably solid reasons why we would want an alternative to
> > the DB backend, but I'm not sure the update rate is one of them.   If
> > we were going for an alternative the obvious candidate to my mind
> > would be something like ZooKeeper (particularly since in some setups
> > it's already a channel between the compute hosts and the control
> > server).
> > --
> > Ian.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Joe Gordon
On Tue, Jul 23, 2013 at 1:09 PM, Boris Pavlovic  wrote:

> Ian,
>
> There are serious scalability and performance problems with DB usage in
> the current scheduler.
> Rapid updates + joins make the current solution absolutely not scalable.
>
> The Bluehost example just confirms, for me, a trivial thing. (It
> just won't work.)
>
> Tomorrow we will add another graphic:
> avg user requests/sec in the current approach and in ours.
>

Will you be releasing your code to generate the results? Without that the
graphic isn't very useful


> I hope it will help you to better understand the situation.
>
>
> Joshua,
>
> Our current discussion is about whether we could safely remove
> information about compute nodes from Nova.
> Both our approach and yours will remove this data from the Nova DB.
>
> Also, your approach has much more:
> 1) network load
> 2) latency
> 3) one more service (memcached)
>
> So I am not sure that it is better than just sending the information
> directly to the scheduler.
>
>
> Best regards,
> Boris Pavlovic
> ---
> Mirantis Inc.
>
>
>
>
>
>
> On Tue, Jul 23, 2013 at 11:56 PM, Joe Gordon wrote:
>
>>
>> On Jul 23, 2013 3:44 PM, "Ian Wells"  wrote:
>> >
>> > > * periodic updates can overwhelm things.  Solution: remove unneeded
>> updates,
>> > > most scheduling data only changes when an instance does some state
>> change.
>> >
>> > It's not clear that periodic updates do overwhelm things, though.
>> > Boris ran the tests.  Apparently 10k nodes updating once a minute
>> > extend the read query by ~10% (the main problem being the read query
>> > is abysmal in the first place).  I don't know how much of the rest of
>> > the infrastructure was involved in his test, though (RabbitMQ,
>> > Conductor).
>>
>> A great openstack at scale talk, that covers the scheduler
>> http://www.bluehost.com/blog/bluehost/bluehost-presents-operational-case-study-at-openstack-summit-2111
>>
>> >
>> > There are reasonably solid reasons why we would want an alternative to
>> > the DB backend, but I'm not sure the update rate is one of them.   If
>> > we were going for an alternative the obvious candidate to my mind
>> > would be something like ZooKeeper (particularly since in some setups
>> > it's already a channel between the compute hosts and the control
>> > server).
>> > --
>> > Ian.
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread David Chadwick



On 23/07/2013 19:02, Henry Nash wrote:

One thing we could do is:

- Return Forbidden or NotFound if we can determine the correct answer
- When we can't (i.e. the object doesn't exist), then return NotFound
unless a new config value 'policy_harden' (?) is set to true (default
false) in which case we translate NotFound into Forbidden.


I am not sure that this achieves your objective of no data leakage 
through error codes, does it?


It's not a question of determining the correct answer or not; it's a 
question of whether the user is authorised to see the correct answer or not.


regards

David
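
(Henry's suggestion in miniature, to make the point concrete; 'policy_harden'
is the proposed, not-yet-existing config flag, and the exception classes stand
in for keystone's.)

    class Forbidden(Exception):
        pass


    class NotFound(Exception):
        pass


    def harden(exc, policy_harden=False):
        # optionally report Forbidden where we would have said NotFound;
        # note the caveat above: applied selectively, error codes can
        # still leak existence information
        if policy_harden and isinstance(exc, NotFound):
            return Forbidden()
        return exc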


Henry
On 23 Jul 2013, at 18:31, Adam Young wrote:


On 07/23/2013 12:54 PM, David Chadwick wrote:

When writing a previous ISO standard the approach we took was as follows

Lie to people who are not authorised.


Is that your verbiage?  I am going to reuse that quote, and I would
like to get the attribution correct.



So applying this approach to your situation, you could reply Not
Found to people who are authorised to see the object if it had
existed but does not, and Not Found to those not authorised to see
it, regardless of whether it exists or not. In this case, only those
who are authorised to see the object will get it if it exists. Those
not authorised cannot tell the difference between objects that don't
exist and those that do exist.


So, to try and apply this to a semi-real example:  There are two types
of URLs.  Ones that are like this:

users/55FEEDBABECAFE

and ones like this:

domain/66DEADBEEF/users/55FEEDBABECAFE


In the first case, you are selecting against a global collection, and
in the second, against a scoped collection.

For unscoped, you have to treat all users as equal, and thus a 404
probably makes sense.

For a scoped collection we could return a 404 or a 403 Forbidden
based on the user's credentials: all resources under domain/66DEADBEEF
would show up as 403s, regardless of existence, if the user had no roles in
the domain 66DEADBEEF.  A user that would be allowed access to
resources in 66DEADBEEF  would get a 403 only for an object that
existed but that they had no permission to read, and 404 for a
resource that doesn't exist.






regards

David


On 23/07/2013 16:40, Henry Nash wrote:

Hi

As part of bp
https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
I have uploaded some example WIP code showing a proposed approach
for just a few API calls (one easy, one more complex). I'd
appreciate early feedback on this before I take it any further.

https://review.openstack.org/#/c/38308/

A couple of points:

- One question is on how to handle errors when you are going to get
a target object before doing your policy check.  What do you do if
the object does not exist?  If you return NotFound, then someone
who was not authorized could troll for the existence of entities by
seeing whether they got NotFound or Forbidden. If, however, you
return Forbidden, then users who are authorized to, say, manage
users in a domain would always get Forbidden for objects that didn't
exist (since we can't know where the non-existent object was!).  So
this would modify the expected return codes.

- I really think we need some good documentation on how to build
keystone policy files.  I'm happy to take a first cut at such a
thing - where do you think the right place is for such documentation?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Alex Glikson
Russell Bryant  wrote on 23/07/2013 07:19:48 PM:

> I understand the use case, but can't it just be achieved with 2 flavors
> and without this new aggregate-policy mapping?
> 
> flavor 1 with extra specs to say aggregate A and policy Y
> flavor 2 with extra specs to say aggregate B and policy Z

I agree that this approach is simpler to implement. One of the differences 
is the level of enforcement that instances within an aggregate are managed 
under the same policy. For example, nothing would prevent the admin from 
defining 2 flavors with conflicting policies that can be applied to the same 
aggregate. Another aspect of the same problem is the case when the admin wants 
to apply 2 different policies in 2 aggregates with the same 
capabilities/properties. A natural way to distinguish between the two 
would be to add an artificial property that would be different between the 
two -- but then just specifying the policy would make most sense.

> > Well, I can think of a few use-cases when the selection approach might be
> > different. For example, it could be based on tenant properties (derived
> > from some kind of SLA associated with the tenant, determining the
> > over-commit levels), or image properties (e.g., I want to determine
> > placement of Windows instances taking into account Windows licensing
> > considerations), etc.
> 
> Well, you can define tenant specific flavors that could have different
> policy configurations.

Would it be possible to express something like 'I want CPU over-commit of 2.0 
for tenants with SLA=GOLD, and 4.0 for tenants with SLA=SILVER'?

> I think I'd rather hold off on the extra complexity until there is a
> concrete implementation of something that requires and justifies it.

The extra complexity is actually not that huge; we reuse the existing 
mechanism of generic filters.

Regarding both suggestions -- I think the value of this blueprint will be 
somewhat limited if we keep just the simplest version. But if people think 
that it makes a lot of sense to do it in small increments -- we can 
probably split the patch into smaller pieces.

Regards,
Alex

> -- 
> Russell Bryant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Boris Pavlovic
Joe,
Sure we will.

Mike,
Thanks for sharing the information about scalability problems; the
presentation was great.
Also, could you say what you think: is 150 req/sec a big load for qpid
or rabbit? I think it is just nothing.


Best regards,
Boris Pavlovic
---
Mirantis Inc.



On Wed, Jul 24, 2013 at 12:17 AM, Joe Gordon  wrote:

>
>
>
> On Tue, Jul 23, 2013 at 1:09 PM, Boris Pavlovic  wrote:
>
>> Ian,
>>
>> There are serious scalability and performance problems with DB usage in
>> the current scheduler.
>> Rapid updates + joins make the current solution absolutely not scalable.
>>
>> The Bluehost example just confirms, for me, a trivial thing. (It
>> just won't work.)
>>
>> Tomorrow we will add another graphic:
>> avg user requests/sec in the current approach and in ours.
>>
>
> Will you be releasing your code to generate the results? Without that the
> graphic isn't very useful
>
>
>> I hope it will help you to better understand the situation.
>>
>>
>> Joshua,
>>
>> Our current discussion is about whether we could safely remove
>> information about compute nodes from Nova.
>> Both our approach and yours will remove this data from the Nova DB.
>>
>> Also, your approach has much more:
>> 1) network load
>> 2) latency
>> 3) one more service (memcached)
>>
>> So I am not sure that it is better than just sending the information
>> directly to the scheduler.
>>
>>
>> Best regards,
>> Boris Pavlovic
>> ---
>> Mirantis Inc.
>>
>>
>>
>>
>>
>>
>> On Tue, Jul 23, 2013 at 11:56 PM, Joe Gordon wrote:
>>
>>>
>>> On Jul 23, 2013 3:44 PM, "Ian Wells"  wrote:
>>> >
>>> > > * periodic updates can overwhelm things.  Solution: remove unneeded
>>> updates,
>>> > > most scheduling data only changes when an instance does some state
>>> change.
>>> >
>>> > It's not clear that periodic updates do overwhelm things, though.
>>> > Boris ran the tests.  Apparently 10k nodes updating once a minute
>>> > extend the read query by ~10% (the main problem being the read query
>>> > is abysmal in the first place).  I don't know how much of the rest of
>>> > the infrastructure was involved in his test, though (RabbitMQ,
>>> > Conductor).
>>>
>>> A great openstack at scale talk, that covers the scheduler
>>> http://www.bluehost.com/blog/bluehost/bluehost-presents-operational-case-study-at-openstack-summit-2111
>>>
>>> >
>>> > There are reasonably solid reasons why we would want an alternative to
>>> > the DB backend, but I'm not sure the update rate is one of them.   If
>>> > we were going for an alternative the obvious candidate to my mind
>>> > would be something like ZooKeeper (particularly since in some setups
>>> > it's already a channel between the compute hosts and the control
>>> > server).
>>> > --
>>> > Ian.
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Split the Identity Backend blueprint

2013-07-23 Thread Alexius Ludeman
Perhaps I'm confused but there does not appear to be a shim for
ProjectApi():

https://github.com/openstack/keystone/blob/master/keystone/assignment/backends/ldap.py#L270
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap.py#L277

I believe this holds true for RoleApi() as well.

thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Dan Wendlandt
+1's from this emeritus member of the core team :)


On Tue, Jul 23, 2013 at 1:04 PM, Robert Kukura  wrote:

> On 07/23/2013 03:15 PM, Mark McClain wrote:
> > All-
> >
> > I'd like to propose that Kyle Mestery and Armando Migliaccio be added to
> the Neutron core team.  Both have been very active with valuable reviews
> and contributions to the Neutron community.
> >
> > Neutron core team members please respond with +1/0/-1.
>
> +1 for each!
>
> -Bob
>
> >
> > mark
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Salvatore Orlando
totally +1 for mestery
+1 for Armax only if he stops serially -1'ing my patches.

Cheers,
Salvatore

PS: of course the official vote is +1 for both.


On 23 July 2013 13:37, Dan Wendlandt  wrote:

> +1's from this emeritus member of the core team :)
>
>
> On Tue, Jul 23, 2013 at 1:04 PM, Robert Kukura  wrote:
>
>> On 07/23/2013 03:15 PM, Mark McClain wrote:
>> > All-
>> >
>> > I'd like to propose that Kyle Mestery and Armando Migliaccio be added
>> to the Neutron core team.  Both have been very active with valuable reviews
>> and contributions to the Neutron community.
>> >
>> > Neutron core team members please respond with +1/0/-1.
>>
>> +1 for each!
>>
>> -Bob
>>
>> >
>> > mark
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> ~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Python 3

2013-07-23 Thread Logan McNaughton
I'm sure this has been asked before, but what exactly is the plan for
Python 3 support?

Is the plan to support 2 and 3 at the same time? I was looking around for a
blueprint or something but I can't seem to find anything.

If Python 3 support is part of the plan, can I start running 2to3 and
making edits to keep changes compatible with Python 2?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-23 Thread Eric Windisch
On Tue, Jul 23, 2013 at 4:41 PM, Logan McNaughton wrote:

> I'm sure this has been asked before, but what exactly is the plan for
> Python 3 support?
>
> Is the plan to support 2 and 3 at the same time? I was looking around for
> a blue print or something but I can't seem to find anything.
>
I suppose a wiki page is due.  This was discussed at the last summit:
https://etherpad.openstack.org/havana-python3

The plan is to support Python 2.6+ for the 2.x series and Python 3.3+. This
effort has begun for libraries (oslo) and clients. Work is appreciated on
the primary projects, but will ultimately become stalled if the library
work is not first completed.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Mike Wilson
Again, I can only speak for qpid, but it's not really a big load on the
qpidd server itself. I think the issue is that the updates come in serially
into each scheduler that you have running. We don't process those quickly
enough for them to do any good, which is why the scheduler falls back to the
lookup from the DB. You can see this for yourself using the fake hypervisor:
launch a bunch of simulated nova-compute services and a nova-scheduler on the
same host, and even with 1k or so you will notice the latency between an
update being sent and the update actually meaning anything for the scheduler.

I think a few points that have been brought up could mitigate this quite a
bit. My personal view is the following:

-Only update when you have to (i.e. 10k nodes all sending updates every
periodic interval is heavy; only send when you have to)
-Don't fan out to schedulers; update a single scheduler which in turn
updates a fast shared store such as memcache

I guess that effectively is what you are proposing, with the added twist
of the shared store (a rough sketch follows below).

-Mike
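
(Roughly the two points above in code form, using python-memcached; the key
layout and TTL are made up.)

    import json

    import memcache

    mc = memcache.Client(['scheduler-store:11211'])
    _last_sent = {}


    def maybe_publish(host, state):
        # point 1: only send when something actually changed
        if _last_sent.get(host) == state:
            return
        _last_sent[host] = state
        # point 2: one writer updates a fast shared store that every
        # scheduler reads; the TTL ages out hosts that stop reporting
        mc.set('compute/%s' % host, json.dumps(state), time=120)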


On Tue, Jul 23, 2013 at 2:25 PM, Boris Pavlovic  wrote:

> Joe,
> Sure we will.
>
> Mike,
> Thanks for sharing information about scalability problems, presentation
> was great.
> Also could you say what do you think is 150 req/sec is it big load for
> qpid or rabbit? I think it is just nothing..
>
>
> Best regards,
> Boris Pavlovic
> ---
> Mirantis Inc.
>
>
>
> On Wed, Jul 24, 2013 at 12:17 AM, Joe Gordon wrote:
>
>>
>>
>>
>> On Tue, Jul 23, 2013 at 1:09 PM, Boris Pavlovic wrote:
>>
>>> Ian,
>>>
>>> There are serious scalability and performance problems with DB usage in
>>> the current scheduler.
>>> Rapid updates + joins make the current solution absolutely not scalable.
>>>
>>> The Bluehost example just confirms, for me, a trivial thing. (It
>>> just won't work.)
>>>
>>> Tomorrow we will add another graphic:
>>> avg user requests/sec in the current approach and in ours.
>>>
>>
>> Will you be releasing your code to generate the results? Without that the
>> graphic isn't very useful
>>
>>
>>> I hope it will help you to better understand the situation.
>>>
>>>
>>> Joshua,
>>>
>>> Our current discussion is about whether we could safely remove
>>> information about compute nodes from Nova.
>>> Both our approach and yours will remove this data from the Nova DB.
>>>
>>> Also, your approach has much more:
>>> 1) network load
>>> 2) latency
>>> 3) one more service (memcached)
>>>
>>> So I am not sure that it is better than just sending the information
>>> directly to the scheduler.
>>>
>>>
>>> Best regards,
>>> Boris Pavlovic
>>> ---
>>> Mirantis Inc.
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Jul 23, 2013 at 11:56 PM, Joe Gordon wrote:
>>>

 On Jul 23, 2013 3:44 PM, "Ian Wells"  wrote:
 >
 > > * periodic updates can overwhelm things.  Solution: remove unneeded
 updates,
 > > most scheduling data only changes when an instance does some state
 change.
 >
 > It's not clear that periodic updates do overwhelm things, though.
 > Boris ran the tests.  Apparently 10k nodes updating once a minute
 > extend the read query by ~10% (the main problem being the read query
 > is abysmal in the first place).  I don't know how much of the rest of
 > the infrastructure was involved in his test, though (RabbitMQ,
 > Conductor).

 A great openstack at scale talk, that covers the scheduler
 http://www.bluehost.com/blog/bluehost/bluehost-presents-operational-case-study-at-openstack-summit-2111

 >
 > There are reasonably solid reasons why we would want an alternative to
 > the DB backend, but I'm not sure the update rate is one of them.   If
 > we were going for an alternative the obvious candidate to my mind
 > would be something like ZooKeeper (particularly since in some setups
 > it's already a channel between the compute hosts and the control
 > server).
 > --
 > Ian.
 >
 > ___
 > OpenStack-dev mailing list
 > OpenStack-dev@lists.openstack.org
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] Python 3

2013-07-23 Thread Doug Hellmann
On Tue, Jul 23, 2013 at 4:41 PM, Logan McNaughton wrote:

> I'm sure this has been asked before, but what exactly is the plan for
> Python 3 support?
>
> Is the plan to support 2 and 3 at the same time? I was looking around for
> a blue print or something but I can't seem to find anything.
>
> If Python 3 support is part of the plan, can I start running 2to3 and
> making edits to keep changes compatible with Python 2?
>

Eric replied with details, but I wanted to address the question of 2to3.

Using 2to3 is no longer the preferred way to port to Python 3. With changes
that landed in 3.3, it is easier to create code that will run under python
2.7 and 3.3, without resorting to the translation steps that were needed
for 3.0-3.2. Chuck Short has landed a series of patches modifying code by
hand for some cases (mostly print and exceptions) and by using the six
library in others (for iteration and module renaming).

Doug
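
(A couple of the idioms involved, for the curious; six is the real
compatibility library on PyPI, and this snippet is just illustrative.)

    from __future__ import print_function  # py2/py3-compatible print

    import six

    try:
        raise ValueError('boom')
    except ValueError as exc:  # 'as' form works on 2.6+ and 3.x
        print(exc)

    counts = {'a': 1, 'b': 2}
    for key, value in six.iteritems(counts):  # dict iteration on both
        print(key, value)

    for i in six.moves.range(3):  # xrange on py2, range on py3
        print(i)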


>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-23 Thread Russell Bryant
On 07/23/2013 04:24 PM, Alex Glikson wrote:
> Russell Bryant  wrote on 23/07/2013 07:19:48 PM:
> 
>> I understand the use case, but can't it just be achieved with 2 flavors
>> and without this new aggregate-policy mapping?
>>
>> flavor 1 with extra specs to say aggregate A and policy Y
>> flavor 2 with extra specs to say aggregate B and policy Z
> 
> I agree that this approach is simpler to implement. One of the
> differences is the level of enforcement that instances within an
> aggregate are managed under the same policy. For example, nothing would
> prevent the admin from defining 2 flavors with conflicting policies that can
> be applied to the same aggregate. Another aspect of the same problem is
> the case when admin wants to apply 2 different policies in 2 aggregates
> with same capabilities/properties. A natural way to distinguish between
> the two would be to add an artificial property that would be different
> between the two -- but then just specifying the policy would make most
> sense.

I'm not sure I understand this.  I don't see anything here that couldn't
be accomplished with flavor extra specs.  Is that what you're saying?
Or are you saying there are cases that can not be set up using that
approach?

>> > Well, I can think of few use-cases when the selection approach might be
>> > different. For example, it could be based on tenant properties (derived
>> > from some kind of SLA associated with the tenant, determining the
>> > over-commit levels), or image properties (e.g., I want to determine
>> > placement of Windows instances taking into account Windows licensing
>> > considerations), etc
>>
>> Well, you can define tenant specific flavors that could have different
>> policy configurations.
> 
> Would it be possible to express something like 'I want CPU over-commit of
> 2.0 for tenants with SLA=GOLD, and 4.0 for tenants with SLA=SILVER'?

Sure.  Define policies for sla=gold and sla=silver, and the flavors for
each tenant would refer to those policies.
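
As a rough illustration of that setup (hypothetical filter code, loosely
modeled on nova's aggregate extra-specs filtering; get_aggregate_metadata()
is an invented stand-in for the actual metadata lookup):

def host_passes(host_state, filter_properties):
    extra_specs = filter_properties['instance_type'].get('extra_specs', {})
    wanted_sla = extra_specs.get('sla')
    if wanted_sla is None:
        return True   # the flavor expresses no policy; any host will do
    # e.g. {'sla': set(['gold'])} for hosts in the sla=gold aggregate
    metadata = get_aggregate_metadata(host_state.host)
    return wanted_sla in metadata.get('sla', set())

An sla=gold policy could then carry its own over-commit settings (say, a
different cpu_allocation_ratio), which is how the GOLD/SILVER over-commit
example above would be expressed with nothing but flavors and aggregates.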

>> I think I'd rather hold off on the extra complexity until there is a
>> concrete implementation of something that requires and justifies it.
> 
> The extra complexity is actually not that huge... we reuse the existing
> mechanism of generic filters.

I just want to see something that actually requires it before it goes
in.  I take exposing a pluggable interface very seriously.  I don't want
to expose more random plug points than necessary.

> Regarding both suggestions -- I think the value of this blueprint will
> be somewhat limited if we keep just the simplest version. But if people
> think that it makes a lot of sense to do it in small increments -- we
> can probably split the patch into smaller pieces.

I'm certainly not trying to diminish value, but I am looking for
specific cases that can not be accomplished with a simpler solution.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Clint Byrum
Excerpts from Boris Pavlovic's message of 2013-07-19 07:52:55 -0700:
> Hi all,
> 
> 
> 
> At Mirantis, Alexey Ovtchinnikov and I are working on nova scheduler
> improvements.
> 
> As far as we can see, the scheduler currently has two major issues:
> 
> 
> 1) Scalability. Factors that contribute to bad scalability are these:
> 
> *) Every periodic task interval (60 sec by default), each compute node
> updates its resource state in the DB.
> 
> *) On every boot request, the scheduler has to fetch information about
> all compute nodes from the DB.
> 
> 2) Flexibility. Flexibility perishes due to problems with:
> 
> *) Adding new complex resources (such as big lists of complex objects, e.g.
> required by PCI Passthrough
> https://review.openstack.org/#/c/34644/5/nova/db/sqlalchemy/models.py)
> 
> *) Using different sources of data in the scheduler, for example from
> cinder or ceilometer.
> 
> (as required by Volume Affinity Filter
> https://review.openstack.org/#/c/29343/)
> 
> 
> We found a simple way to mitigate these issues by avoiding DB usage for
> host state storage.
> 
> 
> A more detailed discussion of the problem state and one of a possible
> solution can be found here:
> 
> https://docs.google.com/document/d/1_DRv7it_mwalEZzLy5WO92TJcummpmWL4NWsWf0UWiQ/edit#
> 

This is really interesting work, thanks for sharing it with us. The
discussion that has followed has brought up some thoughts I've had for
a while about this choke point in what is supposed to be an extremely
scalable cloud platform (OpenStack).

I feel like the discussions have all been centered around making "the"
scheduler(s) intelligent.  There seems to be a commonly held belief that
scheduling is a single step, and should be done with as much knowledge
of the system as possible by a well informed entity.

Can you name for me one large scale system that has a single entity,
human or computer, that knows everything about the system and can make
good decisions quickly?

This problem is screaming to be broken up, de-coupled, and distributed.

I keep asking myself these questions:

Why are all of the compute nodes informing all of the schedulers?

Why are all of the schedulers expecting to know about all of the compute nodes?

Can we break this problem up into simpler problems and distribute the load to
the entire system?

This has been bouncing around in my head for a while now, but as a
shallow observer of nova dev, I feel like there are some well known
scaling techniques which have not been brought up. Here is my idea,
forgive me if I have glossed over something or missed a huge hole:

* Schedulers break up compute nodes by hash table, only caring about
  those in their hash table.
* Schedulers, upon claiming a compute node by hash table, poll compute
  node directly for its information.
* Requests to boot go into fanout.
* Schedulers get request and try to satisfy using only their own compute
  nodes.
* Failure to boot results in re-insertion in the fanout.

This gives up the certainty that the scheduler will find a compute node
for a boot request on the first try. It is also possible that a request
gets unlucky and takes a long time to find the one scheduler that has
the one last "X" resource that it is looking for. There are some further
optimization strategies that can be employed (like queues based on hashes
already tried.. etc).

Anyway, I don't see any point in trying to hot-rod the intelligent
scheduler to go super fast, when we can just optimize for having many
many schedulers doing the same body of work without blocking and without
pounding a database.
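
As a toy sketch of the partitioning step (pure illustration, not nova
code), assuming each scheduler knows its own index and the total count:

import hashlib

def owner(node_name, num_schedulers):
    # stable compute node -> scheduler mapping; any uniform hash works
    digest = hashlib.md5(node_name.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_schedulers

NUM_SCHEDULERS = 4
nodes = ['compute-%03d' % i for i in range(12)]
for idx in range(NUM_SCHEDULERS):
    mine = [n for n in nodes if owner(n, NUM_SCHEDULERS) == idx]
    print('scheduler %d polls only %s' % (idx, mine))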

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi region support for Heat

2013-07-23 Thread Randall Burt

On Jul 23, 2013, at 11:03 AM, Clint Byrum 
 wrote:

> Excerpts from Steve Baker's message of 2013-07-22 21:43:05 -0700:
>> On 07/23/2013 10:46 AM, Angus Salkeld wrote:
>>> On 22/07/13 16:52 +0200, Bartosz Górski wrote:
 Hi folks,
 
 I would like to start a discussion about the blueprint I raised about
 multi region support.
 I would like to get feedback from you. If something is not clear or
 you have questions do not hesitate to ask.
 Please let me know what you think.
 
 Blueprint:
 https://blueprints.launchpad.net/heat/+spec/multi-region-support
 
 Wikipage:
 https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat
 
>>> 
>>> What immediately looks odd to me is you have a MultiCloud Heat talking
>>> to other Heats in each region. This seems like unnecessary
>>> complexity to me.
>>> I would have expected one Heat to do this job.
>> 
>> It should be possible to achieve this with a single Heat installation -
>> that would make the architecture much simpler.
>> 
> 
> Agreed that it would be simpler and is definitely possible.
> 
> However, consider that having a Heat in each region means Heat is more
> resilient to failure. So focusing on a way to make multiple Heat's
> collaborate, rather than on a way to make one Heat talk to two regions
> may be a more productive exercise.

Perhaps, but wouldn't having an engine that only requires the downstream 
services running (nova, cinder, etc) in a given region be equally if not more 
resilient? A heat engine in region 1 can still provision resources in region 2 
even if the heat service in region 2 is unavailable. Seems that one could 
handle global availability via anycast, a DR strategy, or some other routing 
magic rather than having the engine itself implement some support for it.

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3

2013-07-23 Thread Brian Curtin

On Jul 23, 2013, at 3:51 PM, Eric Windisch <e...@cloudscaling.com> wrote:




On Tue, Jul 23, 2013 at 4:41 PM, Logan McNaughton
<lo...@bacoosta.com> wrote:

I'm sure this has been asked before, but what exactly is the plan for Python 3 
support?

Is the plan to support 2 and 3 at the same time? I was looking around for a 
blue print or something but I can't seem to find anything.

I suppose a wiki page is due.  This was discussed at the last summit: 
https://etherpad.openstack.org/havana-python3

The plan is to support Python 2.6+ for the 2.x series and Python 3.3+. This 
effort has begun for libraries (oslo) and clients. Work is appreciated on the 
primary projects, but will ultimately become stalled if the library work is not 
first completed.

FWIW, I came across https://wiki.openstack.org/wiki/Python3Deps and updated 
"routes", which currently works with 3.3. One small step, for free!

I'm a newcomer to this list, but I'm a CPython core contributor and am working 
in Developer Relations at Rackspace, so supporting Python 3 is right up my 
alley.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-07-23 Thread Joshua Harlow
I like the idea, Clint.

It appears to me that the kind of scheduler 'buckets' being established
allow for different kinds of policies around how accurate and how 'global'
the deployer wants scheduling to be (which might differ depending on the
deployer). All of these concerns get even more problematic when you start
to do cross-resource scheduling (volumes near compute nodes), which is why
I think there were proposals for a kind of unified scheduling 'framework'
(its own project?) that focuses on this type of work. Said project still
seems appropriate in my mind (and is desperately needed to handle the
cross-resource scheduling concerns).

- https://etherpad.openstack.org/UnifiedResourcePlacement

I'm unsure what the nova folks (and other projects that have similar
scheduling concepts) think about such a thing existing, but at the last
summit there was talk about possibly figuring out how to do that. It is of
course a lot of refactoring (and cross-project refactoring) to get there,
but it seems like it would be very beneficial if all projects involved
with resource scheduling could use a single 'thing' to update resource
information and to ask for scheduling decisions (i.e., providing a list of
desired resources and getting back where those resources are, i.e. a
reservation on those resources, with a later commit of those resources, so
that the resources are freed if the process asking for them fails).

-Josh

On 7/23/13 3:00 PM, "Clint Byrum"  wrote:

>Excerpts from Boris Pavlovic's message of 2013-07-19 07:52:55 -0700:
>> Hi all,
>> 
>> 
>> 
>> At Mirantis, Alexey Ovtchinnikov and I are working on nova scheduler
>> improvements.
>> 
>> As far as we can see, the scheduler currently has two major issues:
>> 
>> 
>> 1) Scalability. Factors that contribute to bad scalability are these:
>> 
>> *) Every periodic task interval (60 sec by default), each compute node
>> updates its resource state in the DB.
>> 
>> *) On every boot request, the scheduler has to fetch information about
>> all compute nodes from the DB.
>> 
>> 2) Flexibility. Flexibility perishes due to problems with:
>> 
>> *) Adding new complex resources (such as big lists of complex objects,
>> e.g.
>> required by PCI Passthrough
>> https://review.openstack.org/#/c/34644/5/nova/db/sqlalchemy/models.py)
>> 
>> *) Using different sources of data in the scheduler, for example from
>> cinder or ceilometer.
>> 
>> (as required by Volume Affinity Filter
>> https://review.openstack.org/#/c/29343/)
>> 
>> 
>> We found a simple way to mitigate these issues by avoiding DB usage
>> for host state storage.
>> 
>> 
>> A more detailed discussion of the problem state and one of a possible
>> solution can be found here:
>> 
>> 
>> https://docs.google.com/document/d/1_DRv7it_mwalEZzLy5WO92TJcummpmWL4NWsWf0UWiQ/edit#
>> 
>
>This is really interesting work, thanks for sharing it with us. The
>discussion that has followed has brought up some thoughts I've had for
>a while about this choke point in what is supposed to be an extremely
>scalable cloud platform (OpenStack).
>
>I feel like the discussions have all been centered around making "the"
>scheduler(s) intelligent.  There seems to be a commonly held belief that
>scheduling is a single step, and should be done with as much knowledge
>of the system as possible by a well informed entity.
>
>Can you name for me one large scale system that has a single entity,
>human or computer, that knows everything about the system and can make
>good decisions quickly?
>
>This problem is screaming to be broken up, de-coupled, and distributed.
>
>I keep asking myself these questions:
>
>Why are all of the compute nodes informing all of the schedulers?
>
>Why are all of the schedulers expecting to know about all of the compute
>nodes?
>
>Can we break this problem up into simpler problems and distribute the
>load to
>the entire system?
>
>This has been bouncing around in my head for a while now, but as a
>shallow observer of nova dev, I feel like there are some well known
>scaling techniques which have not been brought up. Here is my idea,
>forgive me if I have glossed over something or missed a huge hole:
>
>* Schedulers break up compute nodes by hash table, only caring about
>  those in their hash table.
>* Schedulers, upon claiming a compute node by hash table, poll compute
>  node directly for its information.
>* Requests to boot go into fanout.
>* Schedulers get request and try to satisfy using only their own compute
>  nodes.
>* Failure to boot results in re-insertion in the fanout.
>
>This gives up the certainty that the scheduler will find a compute node
>for a boot request on the first try. It is also possible that a request
>gets unlucky and takes a long time to find the one scheduler that has
>the one last "X" resource that it is looking for. There are some further
>optimization strategies that can be employed (like queues based on hashes
>already tried.. etc).
>
>Anyway, I don't see any point in trying to hot-rod the intelligent
>scheduler to go super fast, when we can just optimize for having many
>many schedulers doing the same body of work without blocking and without
>pounding a database.

Re: [openstack-dev] [Horizon] Navigation UX Enhancements - Collecting Issues

2013-07-23 Thread Tim Schnell
Most of my navigation related issues can be summed up into 3 problems.

  1.  Not having a secondary level of navigation in the left nav really 
restricts the level of granularity that can be achieved through the navigation. 
Having an accordion-like nav structure would help this, as well as setting a 
corresponding URL convention like we have for the current nav (i.e., the URL 
should be "dashboard/primary_nav/secondary_nav").
  2.  Which leads to my second issue: a robust breadcrumb system, making it 
easy for the user to backtrack to previous pages, would really keep the user 
from getting lost in drill-downs (see the sketch after this list). A strong URL 
convention would make this fairly easy to implement.
  3.  The fixed width of the left nav makes it awkward to have more than 3 
dashboards. Instead of the current tab-like structure for adding dashboards, we 
could switch to a drop-down.
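
A toy sketch of the breadcrumb idea from point 2 (illustrative only, not
Horizon code), assuming the URL convention from point 1:

def breadcrumbs(path):
    # '/project/instances/detail/' -> cumulative (label, url) pairs
    parts = [p for p in path.strip('/').split('/') if p]
    return [(p.replace('_', ' ').title(),
             '/' + '/'.join(parts[:i + 1]) + '/')
            for i, p in enumerate(parts)]

print(breadcrumbs('/project/instances/detail/'))
# [('Project', '/project/'), ('Instances', '/project/instances/'),
#  ('Detail', '/project/instances/detail/')]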

Thanks for working on this Jarda!


From: Jaromir Coufal <jcou...@redhat.com>
Reply-To: OpenStack Development Mailing List
<openstack-dev@lists.openstack.org>
Date: Tuesday, July 9, 2013 7:37 AM
To: OpenStack Development Mailing List
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Horizon] Navigation UX Enhancements - Collecting 
Issues

Hi everybody,

in the UX community group on G+, a need popped up for enhancing the user 
experience of the main navigation, because various issues keep spreading.

There is already created a BP for this: 
https://blueprints.launchpad.net/horizon/+spec/navigation-enhancement

Toshi had a great idea to start a discussion about navigation issues on the 
mailing list.

So I'd like to ask all of you, if you have some issues with navigation, what 
are the issues you are dealing with? I'd like to gather as much feedback as 
possible, so we can design the best solution which covers most of the cases. 
Issues will be listed in BP and I will try to come out with design proposals 
which hopefully will help all of you.

Examples are following:
* Navigation is not scaling for more dashboards (Project, Admin, ...)
* Each dashboard might contain different hierarchy (number of levels)

What problems do you experience with navigation?

Thanks all for contributing
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Henry Nash
...the problem is that if the object does not exist we might not be able to 
tell whether the user is authorized or not (since authorization might depend on 
attributes of the object itself), so how do we know whether to lie or not?

Henry
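
A rough sketch of the behaviour being debated in this thread, with
stand-in exception classes and helpers (not keystone code) and the
'policy_harden' option proposed further down:

class NotFound(Exception): pass     # stand-in for the real 404
class Forbidden(Exception): pass    # stand-in for the real 403

POLICY_HARDEN = False  # proposed config flag

def fetch_and_check(user, object_id, store, is_authorized):
    obj = store.get(object_id)
    if obj is None:
        # the object is missing, so its attributes can't be evaluated:
        # either admit non-existence (404), or hide it (403) if hardened
        if POLICY_HARDEN:
            raise Forbidden()
        raise NotFound()
    if not is_authorized(user, obj):
        # David's 'lie' approach would raise NotFound() here instead, so
        # hidden objects are indistinguishable from missing ones
        raise Forbidden()
    return obj
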
On 23 Jul 2013, at 21:23, David Chadwick wrote:

> 
> 
> On 23/07/2013 19:02, Henry Nash wrote:
>> One thing we could do is:
>> 
>> - Return Forbidden or NotFound if we can determine the correct answer
>> - When we can't (i.e. the object doesn't exist), then return NotFound
>> unless a new config value 'policy_harden' (?) is set to true (default
>> false) in which case we translate NotFound into Forbidden.
> 
> I am not sure that this achieves your objective of no data leakage through 
> error codes, does it?
> 
> It's not a question of determining the correct answer or not, it's a question 
> of whether the user is authorised to see the correct answer or not
> 
> regards
> 
> David
>> 
>> Henry
>> On 23 Jul 2013, at 18:31, Adam Young wrote:
>> 
>>> On 07/23/2013 12:54 PM, David Chadwick wrote:
 When writing a previous ISO standard the approach we took was as follows
 
 Lie to people who are not authorised.
>>> 
>>> Is that your verbage?  I am going to reuse that quote, and I would
>>> like to get the attribution correct.
>>> 
 
 So applying this approach to your situation, you could reply Not
 Found to people who are authorised to see the object if it had
 existed but does not, and Not Found to those not authorised to see
 it, regardless of whether it exists or not. In this case, only those
 who are authorised to see the object will get it if it exists. Those
 not authorised cannot tell the difference between objects that don't
 exist and those that do exist
>>> 
>>> So, to try and apply this to a semi-real example:  There are two types
>>> of URLs.  Ones that are like this:
>>> 
>>> users/55FEEDBABECAFE
>>> 
>>> and ones like this:
>>> 
>>> domain/66DEADBEEF/users/55FEEDBABECAFE
>>> 
>>> 
>>> In the first case, you are selecting against a global collection, and
>>> in the second, against a scoped collection.
>>> 
>>> For unscoped, you have to treat all users as equal, and thus a 404
>>> probably makes sense.
>>> 
>>> For a scoped collection we could return a 404 or a 403 Forbidden
>>>  based on the user's
>>> credentials:  all resources under domain/66DEADBEEF  would show up
>>> as 403s regardless of existence if the user had no roles in
>>> the domain 66DEADBEEF.  A user that would be allowed access to
>>> resources in 66DEADBEEF  would get a 403 only for an object that
>>> existed but that they had no permission to read, and 404 for a
>>> resource that doesn't exist.
>>> 
>>> 
>>> 
>>> 
 
 regards
 
 David
 
 
 On 23/07/2013 16:40, Henry Nash wrote:
> Hi
> 
> As part of bp
> https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
> I have uploaded some example WIP code showing a proposed approach
> for just a few API calls (one easy, one more complex). I'd
> appreciate early feedback on this before I take it any further.
> 
> https://review.openstack.org/#/c/38308/
> 
> A couple of points:
> 
> - One question is on how to handle errors when you are going to get
> a target object before doing your policy check.  What do you do if
> the object does not exist?  If you return NotFound, then someone
> who was not authorized could troll for the existence of entities by
> seeing whether they got NotFound or Forbidden. If, however, you
> return Forbidden, then users who are authorized to, say, manage
> users in a domain would always get Forbidden for objects that didn't
> exist (since we can't know where the non-existent object was!).  So
> this would modify the expected return codes.
> 
> - I really think we need some good documentation on how to build
> keystone policy files.  I'm happy to take a first cut at such a
> thing - what do you think the right place is for such documentation?
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 


__

Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Tiwari, Arvind
Hi Henry,

Do you have an etherpad to capture this stuff?

Arvind

-Original Message-
From: Henry Nash [mailto:hen...@linux.vnet.ibm.com] 
Sent: Tuesday, July 23, 2013 4:48 PM
To: David Chadwick
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [keystone] Extending policy checking to include 
target entities

...the problem is that if the object does not exist we might not be able to 
tell whether the user is authorized or not (since authorization might depend 
on attributes of the object itself), so how do we know whether to lie or not?

Henry
On 23 Jul 2013, at 21:23, David Chadwick wrote:

> 
> 
> On 23/07/2013 19:02, Henry Nash wrote:
>> One thing we could do is:
>> 
>> - Return Forbidden or NotFound if we can determine the correct answer
>> - When we can't (i.e. the object doesn't exist), then return NotFound
>> unless a new config value 'policy_harden' (?) is set to true (default
>> false) in which case we translate NotFound into Forbidden.
> 
> I am not sure that this achieves your objective of no data leakage through 
> error codes, does it?
> 
> It's not a question of determining the correct answer or not, it's a question 
> of whether the user is authorised to see the correct answer or not
> 
> regards
> 
> David
>> 
>> Henry
>> On 23 Jul 2013, at 18:31, Adam Young wrote:
>> 
>>> On 07/23/2013 12:54 PM, David Chadwick wrote:
 When writing a previous ISO standard the approach we took was as follows
 
 Lie to people who are not authorised.
>>> 
>>> Is that your verbage?  I am going to reuse that quote, and I would
>>> like to get the attribution correct.
>>> 
 
 So applying this approach to your situation, you could reply Not
 Found to people who are authorised to see the object if it had
 existed but does not, and Not Found to those not authorised to see
 it, regardless of whether it exists or not. In this case, only those
 who are authorised to see the object will get it if it exists. Those
 not authorised cannot tell the difference between objects that don't
 exist and those that do exist
>>> 
>>> So, to try and apply this to a semi-real example:  There are two types
>>> of URLs.  Ones that are like this:
>>> 
>>> users/55FEEDBABECAFE
>>> 
>>> and ones like this:
>>> 
>>> domain/66DEADBEEF/users/55FEEDBABECAFE
>>> 
>>> 
>>> In the first case, you are selecting against a global collection, and
>>> in the second, against a scoped collection.
>>> 
>>> For unscoped, you have to treat all users as equal, and thus a 404
>>> probably makes sense.
>>> 
>>> For a scoped collection we could return a 404 or a 403 Forbidden
>>>  based on the user's
>>> credentials:  all resources under domain/66DEADBEEF  would show up
>>> as 403s regardless of existence if the user had no roles in
>>> the domain 66DEADBEEF.  A user that would be allowed access to
>>> resources in 66DEADBEEF  would get a 403 only for an object that
>>> existed but that they had no permission to read, and 404 for a
>>> resource that doesn't exist.
>>> 
>>> 
>>> 
>>> 
 
 regards
 
 David
 
 
 On 23/07/2013 16:40, Henry Nash wrote:
> Hi
> 
> As part of bp
> https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
> I have uploaded some example WIP code showing a proposed approach
> for just a few API calls (one easy, one more complex). I'd
> appreciate early feedback on this before I take it any further.
> 
> https://review.openstack.org/#/c/38308/
> 
> A couple of points:
> 
> - One question is on how to handle errors when you are going to get
> a target object before doing your policy check.  What do you do if
> the object does not exist?  If you return NotFound, then someone
> who was not authorized could troll for the existence of entities by
> seeing whether they got NotFound or Forbidden. If, however, you
> return Forbidden, then users who are authorized to, say, manage
> users in a domain would always get Forbidden for objects that didn't
> exist (since we can't know where the non-existent object was!).  So
> this would modify the expected return codes.
> 
> - I really think we need some good documentation on how to build
> keystone policy files.  I'm happy to take a first cut at such a
> thing - what do you think the right place is for such documentation?
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Sumit Naiksatam
+1, congrats and welcome Kyle and Armando.

~Sumit.

On Tue, Jul 23, 2013 at 12:15 PM, Mark McClain
 wrote:
> All-
>
> I'd like to propose that Kyle Mestery and Armando Migliaccio be added to the 
> Neutron core team.  Both have been very active with valuable reviews and 
> contributions to the Neutron community.
>
> Neutron core team members please respond with +1/0/-1.
>
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-23 Thread Simo Sorce
On Tue, 2013-07-23 at 23:47 +0100, Henry Nash wrote:
> ...the problem is that if the object does not exist we might not be able to 
> tell whether the user is authorized or not (since authorization might depend 
> on attributes of the object itself), so how do we know whether to lie or not?

If the error you return is always 'Not Found', why do you care?

Simo.

> Henry
> On 23 Jul 2013, at 21:23, David Chadwick wrote:
> 
> > 
> > 
> > On 23/07/2013 19:02, Henry Nash wrote:
> >> One thing we could do is:
> >> 
> >> - Return Forbidden or NotFound if we can determine the correct answer
> >> - When we can't (i.e. the object doesn't exist), then return NotFound
> >> unless a new config value 'policy_harden' (?) is set to true (default
> >> false) in which case we translate NotFound into Forbidden.
> > 
> > I am not sure that this achieves your objective of no data leakage through 
> > error codes, does it?
> > 
> > It's not a question of determining the correct answer or not, it's a question 
> > of whether the user is authorised to see the correct answer or not
> > 
> > regards
> > 
> > David
> >> 
> >> Henry
> >> On 23 Jul 2013, at 18:31, Adam Young wrote:
> >> 
> >>> On 07/23/2013 12:54 PM, David Chadwick wrote:
>  When writing a previous ISO standard the approach we took was as follows
>  
>  Lie to people who are not authorised.
> >>> 
> >>> Is that your verbage?  I am going to reuse that quote, and I would
> >>> like to get the attribution correct.
> >>> 
>  
>  So applying this approach to your situation, you could reply Not
>  Found to people who are authorised to see the object if it had
>  existed but does not, and Not Found to those not authorised to see
>  it, regardless of whether it exists or not. In this case, only those
>  who are authorised to see the object will get it if it exists. Those
>  not authorised cannot tell the difference between objects that don't
>  exist and those that do exist
> >>> 
> >>> So, to try and apply this to a semi-real example:  There are two types
> >>> of URLs.  Ones that are like this:
> >>> 
> >>> users/55FEEDBABECAFE
> >>> 
> >>> and ones like this:
> >>> 
> >>> domain/66DEADBEEF/users/55FEEDBABECAFE
> >>> 
> >>> 
> >>> In the first case, you are selecting against a global collection, and
> >>> in the second, against a scoped collection.
> >>> 
> >>> For unscoped, you have to treat all users as equal, and thus a 404
> >>> probably makes sense.
> >>> 
> >>> For a scoped collection we could return a 404 or a 403 Forbidden
> >>>  based on the user's
> >>> credentials:  all resources under domain/66DEADBEEF  would show up
> >>> as 403s regardless of existence if the user had no roles in
> >>> the domain 66DEADBEEF.  A user that would be allowed access to
> >>> resources in 66DEADBEEF  would get a 403 only for an object that
> >>> existed but that they had no permission to read, and 404 for a
> >>> resource that doesn't exist.
> >>> 
> >>> 
> >>> 
> >>> 
>  
>  regards
>  
>  David
>  
>  
>  On 23/07/2013 16:40, Henry Nash wrote:
> > Hi
> > 
> > As part of bp
> > https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
> > I have uploaded some example WIP code showing a proposed approach
> > for just a few API calls (one easy, one more complex). I'd
> > appreciate early feedback on this before I take it any further.
> > 
> > https://review.openstack.org/#/c/38308/
> > 
> > A couple of points:
> > 
> > - One question is on how to handle errors when you are going to get
> > a target object before doing your policy check.  What do you do if
> > the object does not exist?  If you return NotFound, then someone
> > who was not authorized could troll for the existence of entities by
> > seeing whether they got NotFound or Forbidden. If, however, you
> > return Forbidden, then users who are authorized to, say, manage
> > users in a domain would always get Forbidden for objects that didn't
> > exist (since we can't know where the non-existent object was!).  So
> > this would modify the expected return codes.
> > 
> > - I really think we need some good documentation on how to build
> > keystone policy files.  I'm happy to take a first cut at such a
> > thing - what do you think the right place is for such documentation?
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
>  
>  ___
>  OpenStack-dev mailing list
>  OpenStack-dev@lists.openstack.org
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> 
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread Akihiro MOTOKI
+1, Welcome to the team, Kyle and Armando!

Akihiro


2013/7/24 Sumit Naiksatam 

> +1, congrats and welcome Kyle and Armando.
>
> ~Sumit.
>
> On Tue, Jul 23, 2013 at 12:15 PM, Mark McClain
>  wrote:
> > All-
> >
> > I'd like to propose that Kyle Mestery and Armando Migliaccio be added to
> the Neutron core team.  Both have been very active with valuable reviews
> and contributions to the Neutron community.
> >
> > Neutron core team members please respond with +1/0/-1.
> >
> > mark
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Akihiro MOTOKI 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Problem When Have Test In Master Branch

2013-07-23 Thread Wentian Jiang
Thank you :), it works once the requirements are rebuilt.


On Tue, Jul 23, 2013 at 2:04 PM, Devananda van der Veen <
devananda@gmail.com> wrote:

> The tests are working for me, and for Jenkins.
>
> Several versions of packages in requirements.txt and test-requirements.txt
> were updated yesterday -- try to update your dev environment with "source
> .tox/venv/bin/activate & pip install --upgrade -r requirements.txt -r
> test-requirements.txt" and then run testr again.
>
>
> On Tue, Jul 23, 2013 at 10:22 AM, Wentian Jiang 
> wrote:
>
>> stdout: {{{
>> GET: /v1/nodes/1be26c0b-03f2-4d2e-ae87-c02d7f33c123 {}
>> GOT:Response: 401 Unauthorized
>> Content-Type: text/plain; charset=UTF-8
>> Www-Authenticate: Keystone uri='https://127.0.0.1:35357'
>> 401 Unauthorized
>> This server could not verify that you are authorized to access the
>> document you requested. Either you supplied the wrong credentials (e.g.,
>> bad password), or your browser does not understand how to supply the
>> credentials required.
>>  Authentication required
>> }}}
>>
>> Traceback (most recent call last):
>>   File "ironic/tests/api/test_acl.py", line 70, in test_non_admin
>> self.assertEqual(response.status_int, 403)
>>   File
>> "/home/jiangwt100/WorkingProject/ironic/.tox/venv/local/lib/python2.7/site-packages/testtools/testcase.py",
>> line 322, in assertEqual
>> self.assertThat(observed, matcher, message)
>>   File
>> "/home/jiangwt100/WorkingProject/ironic/.tox/venv/local/lib/python2.7/site-packages/testtools/testcase.py",
>> line 417, in assertThat
>> raise MismatchError(matchee, matcher, mismatch, verbose)
>> MismatchError: 401 != 403
>>
>>
>> --
>> Wentian Jiang
>> UnitedStack Inc.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Wentian Jiang
UnitedStack Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] question on Application class concurrency; paste.app_factory mechanism

2013-07-23 Thread Luse, Paul E

Trying to understand a bit more about the Application class defined in 
proxy/server.py.  It looks like it is instantiated by means of paste deploy 
application factories, which I know really nothing about.  Using the saio 
environment, it looks like a class is instantiated when the service is first 
started and again when I perform the first operation against the service, but 
not on subsequent operations.  I was thinking that each connection would get 
its own instance, and thus it would be safe to store connection-transient 
information there, but I was surprised by my quick test.

Basically, here's all I did:

a)  Add an element to the application class and init it to 0

b)  Add a log statement to the __init__ method of Application

c)  In the __call__ method of Application I increment the test element and 
print it

And what I see is (1) the log statement in __init__ runs when the service starts 
and when I do the very first GET (via curl cmd line) and (2) subsequent GET 
calls don't show the __init__ log statement and show the variable incrementing 
with every call.

Any insight would be appreciated...

Thx
Paul
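
For what it's worth, this matches how paste.deploy app factories behave:
the factory runs once per process when the app is loaded (which can show
up as once at start-up plus once at the first request in some setups), and
the single resulting instance's __call__ runs for every request, so
per-request state belongs in locals or the WSGI environ rather than on
self. A standalone sketch of the pattern (not Swift's actual proxy code):

from wsgiref.simple_server import make_server

class Application(object):
    def __init__(self, conf):
        print('__init__ runs once, at load time')
        self.conf = conf    # fine: process-wide, read-mostly state
        self.counter = 0    # shared across ALL requests, not per-client

    def __call__(self, environ, start_response):
        self.counter += 1   # increments on every request, as observed
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['request #%d\n' % self.counter]

def app_factory(global_conf, **local_conf):
    # what paste.app_factory points at: called once, returns one instance
    return Application(local_conf)

make_server('', 8080, app_factory({})).serve_forever()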

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi region support for Heat

2013-07-23 Thread Adrian Otto
Clint,

On Jul 23, 2013, at 10:03 AM, Clint Byrum 
 wrote:

> Excerpts from Steve Baker's message of 2013-07-22 21:43:05 -0700:
>> On 07/23/2013 10:46 AM, Angus Salkeld wrote:
>>> On 22/07/13 16:52 +0200, Bartosz Górski wrote:
 Hi folks,
 
 I would like to start a discussion about the blueprint I raised about
 multi region support.
 I would like to get feedback from you. If something is not clear or
 you have questions do not hesitate to ask.
 Please let me know what you think.
 
 Blueprint:
 https://blueprints.launchpad.net/heat/+spec/multi-region-support
 
 Wikipage:
 https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat
 
>>> 
>>> What immediately looks odd to me is you have a MultiCloud Heat talking
>>> to other Heats in each region. This seems like unnecessary
>>> complexity to me.
>>> I would have expected one Heat to do this job.
>> 
>> It should be possible to achieve this with a single Heat installation -
>> that would make the architecture much simpler.
>> 
> 
> Agreed that it would be simpler and is definitely possible.
> 
> However, consider that having a Heat in each region means Heat is more
> resilient to failure. So focusing on a way to make multiple Heats
> collaborate, rather than on a way to make one Heat talk to two regions
> may be a more productive exercise.

I agree with Angus, Steve Baker, and Randall on this one. We should aim for 
simplicity where practical. Having Heat services interacting with other Heat 
services seems like a whole category of complexity that's difficult to justify. 
If it were implemented as Steve Baker described, and the local Heat service 
were unavailable, the client may still have the option to use a Heat service in 
another region and still successfully orchestrate. That seems to me like a 
failure mode that's easier for users to anticipate and plan for.

Can you further explain your perspective? What sort of failures would you 
expect a network of coordinated Heat services to be more effective with? Is 
there any way this would be more simple or more elegant than other options?

Adrian


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposal to add new neutron-core members

2013-07-23 Thread P Balaji-B37839
+1 for Kyle and
+1 for Armando

Both are going to add muscle to the Neutron core team.

Congrats Guys!


On 07/23/2013 03:15 PM, Mark McClain wrote:
> All-
>
> I'd like to propose that Kyle Mestery and Armando Migliaccio be added to the 
> Neutron core team.  Both have been very active with valuable reviews and 
> contributions to the Neutron community.
>
> Neutron core team members please respond with +1/0/-1.
+1 for each!

-Bob

>
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Proposal for API version discovery

2013-07-23 Thread Jamie Lennox
On Thu, 2013-05-02 at 00:46 +, Gabriel Hurley wrote:
> Based on input from several of the PTLs (and others), I'd like to propose the 
> following outline for how version discovery should be handled across 
> heterogeneous clouds:
> 
> https://etherpad.openstack.org/api-version-discovery-proposal
> 
> Further discussion is absolutely welcome, and I would like to propose it as a 
> topic for review by the Technical Committee at the next meeting since it 
> involves a significant cross-project standardization effort.
> 
> All the best,
> 
> - Gabriel

So I started on this somewhat independently before I found the blueprint,
and given I have a semi-working implementation I've got a few questions,
or essentially an area I find inconsistent.

AFAIK there are no other serious attempts at this, so I've got nothing to
go off. Also, for the time being I don't care about HATEOAS or future
discovery protocols, just getting something that works for keystone now.

I think the way version is treated in the current blueprint is off.
Looking at 'Auth with a Specified Version', point 2 says that we should
not infer the version from the URL, and point 3 says that if I provide a
version number with a non-versioned endpoint, we should retrieve the
possible versions and instantiate a client if an endpoint is available
for that version.

I don't think that we should be checking the URL for what is and what
isn't a versioned endpoint, for the same reasons we shouldn't be
retrieving the version from the URL.

What I would like to do is treat the version parameter as the requested
version, rather than using it to prevent a lookup for versions. What
this means is that I can say:
client.Client(auth_url="http://example.com:5000/", user=foo,
version=2.0, ...)
and retrieve a version 2 client from a provider that may provide both
versions v2 & v3. This will still require a lookup of the auth_url even
though version is specified.

Keystone (not sure on others) provides version information at GET /v2.0
(and other versions) as well as GET /, so if I say:
client.Client(endpoint="http://example.com:5000/v2.0",
version=2.0, token=foo)
it should be validated that the endpoint is capable of using the v2 API,
and doing:
client.Client(endpoint="http://example.com:5000/v2.0",
version=3.0, token=foo)
should fail immediately.

To summarize: every call to client.Client should result in a query to
check available versions. The version parameter indicates which version
of the client should be retrieved from those advertised as available, and
it should fail if it can't deliver. If you _really_ want to use a
particular client without a lookup, you should use the original
keystoneclient.v2_0.Client, which is what is returned from a successful
client.Client (with version=2) call anyway and takes the same
parameters.
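
A condensed sketch of that flow (invented helper names; the real code is
in the review below):

def Client(auth_url=None, endpoint=None, version=None, **kwargs):
    url = endpoint or auth_url
    # always query the server, even when a version was requested;
    # discover_versions() is an invented stand-in returning e.g.
    # {2.0: 'http://example.com:5000/v2.0', 3.0: '...'}
    available = discover_versions(url)
    if version is not None and version not in available:
        raise Exception('no endpoint for requested version %s' % version)
    chosen = version if version is not None else max(available)
    # VERSIONED_CLIENTS maps 2.0 -> keystoneclient.v2_0.client.Client, etc.
    return VERSIONED_CLIENTS[chosen](endpoint=available[chosen], **kwargs)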

I've posted the work for review: https://review.openstack.org/#/c/38414/
and would appreciate comments/clarification. 


Thanks, 

Jamie
 



> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi region support for Heat

2013-07-23 Thread Clint Byrum
Excerpts from Adrian Otto's message of 2013-07-23 21:22:14 -0700:
> Clint,
> 
> On Jul 23, 2013, at 10:03 AM, Clint Byrum 
>  wrote:
> 
> > Excerpts from Steve Baker's message of 2013-07-22 21:43:05 -0700:
> >> On 07/23/2013 10:46 AM, Angus Salkeld wrote:
> >>> On 22/07/13 16:52 +0200, Bartosz Górski wrote:
>  Hi folks,
>  
>  I would like to start a discussion about the blueprint I raised about
>  multi region support.
>  I would like to get feedback from you. If something is not clear or
>  you have questions do not hesitate to ask.
>  Please let me know what you think.
>  
>  Blueprint:
>  https://blueprints.launchpad.net/heat/+spec/multi-region-support
>  
>  Wikipage:
>  https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat
>  
> >>> 
> >>> What immediately looks odd to me is you have a MultiCloud Heat talking
> >>> to other Heats in each region. This seems like unnecessary
> >>> complexity to me.
> >>> I would have expected one Heat to do this job.
> >> 
> >> It should be possible to achieve this with a single Heat installation -
> >> that would make the architecture much simpler.
> >> 
> > 
> > Agreed that it would be simpler and is definitely possible.
> > 
> > However, consider that having a Heat in each region means Heat is more
> > resilient to failure. So focusing on a way to make multiple Heats
> > collaborate, rather than on a way to make one Heat talk to two regions
> > may be a more productive exercise.
> 
> I agree with Angus, Steve Baker, and Randall on this one. We should aim for 
> simplicity where practical. Having Heat services interacting with other Heat 
> services seems like a whole category of complexity that's difficult to 
> justify. If it were implemented as Steve Baker described, and the local Heat 
> service were unavailable, the client may still have the option to use a Heat 
> service in another region and still successfully orchestrate. That seems to 
> me like a failure mode that's easier for users to anticipate and plan for.
> 

I'm all for keeping the solution simple. However, I am not for making
it simpler than it needs to be to actually achieve its stated goals.

> Can you further explain your perspective? What sort of failures would you 
> expect a network of coordinated Heat services to be more effective with? Is 
> there any way this would be more simple or more elegant than other options?

I expect partitions across regions to be common. Regions should be
expected to operate completely isolated from one another if need be. What
is the point of deploying a service to two regions, if one region's
failure means you cannot manage the resources in the standing region?

Active/Passive means you now have an untested passive heat engine in
the passive region. You also have a lot of pointers to update when the
active is taken offline or when there is a network partition. Also split
brain is basically guaranteed in that scenario.

Active/Active(/Active/...), where each region's Heat service collaborates
and owns its own respective pieces of the stack, means that on partition,
one is simply prevented from telling one region to scale/migrate/
etc. onto another one. It also affords a local Heat the ability to
replace resources in a failed region with local resources.

The way I see it working is actually pretty simple. One stack would
lead to resources in multiple regions. The collaboration I speak of
would simply be that if given a stack that requires crossing regions,
the other Heat is contacted and the same stack is deployed. Cross-region
attribute/ref sharing would need an efficient way to pass data about
resources as well.

Anyway, I'm not the one doing the work, so I'll step back from the
position, but if I were a user who wanted multi-region, I'd certainly
want _a plan_ from day 1 to handle partitions.
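
For what the collaboration might look like from the calling side, a rough
sketch using python-heatclient (the endpoints and the splitting logic are
invented for illustration; error handling elided):

from heatclient.client import Client

REGION_ENDPOINTS = {
    'region-one': 'http://heat.region-one.example.com:8004/v1/TENANT',
    'region-two': 'http://heat.region-two.example.com:8004/v1/TENANT',
}

def deploy_everywhere(name, template, parameters, token):
    # each region's own Heat deploys its slice of the stack, so a
    # partition only blocks cross-region operations, not local management
    for region, endpoint in REGION_ENDPOINTS.items():
        heat = Client('1', endpoint, token=token)
        heat.stacks.create(stack_name='%s-%s' % (name, region),
                           template=template, parameters=parameters)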

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev