[openstack-dev] [State-Management] Proposal to add Ivan Melnikov to taskflow-core

2013-09-06 Thread Joshua Harlow
Greetings all stackers,

I propose that we add Ivan Melnikov (https://launchpad.net/~imelnikov) to the
taskflow-core team [1].

Ivan has been actively contributing to taskflow for a while now, both in
code and reviews.  He provides superb quality reviews and is doing an awesome
job with the engine concept. So I think he would make a great addition to the
core review team.

Please respond with +1/-1.

Thanks much!

[1] https://wiki.openstack.org/wiki/TaskFlow/CoreTeam


Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-06 Thread Simon Pasquier

Gary (or others), did you have some time to look at my issue?
FYI, I opened a bug [1] on Launchpad. I'll update it with the outcome of 
this discussion.

Cheers,
Simon

[1] https://bugs.launchpad.net/nova/+bug/1218878

On 03/09/2013 15:54, Simon Pasquier wrote:

I made a copy-and-paste mistake; see the correction inline.

On 03/09/2013 12:34, Simon Pasquier wrote:

Hello,

Thanks for the reply.

First of all, do you agree that the current documentation for these
filters is inaccurate?

My test environment has 2 compute nodes: compute1 and compute3. First, I
launch 1 instance (not being tied to any group) on each node:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute1 vm-compute1-nogroup
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup

So far so good, everything's active:
$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+



Then I try to launch one instance in group 'foo' but it fails:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup


The command is:

$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --hint group=foo vm1-foo


$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
| 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+



I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
will see, the log message is there but it looks like group_hosts() [3]
is returning all my hosts instead of only the ones that run instances
from the group.
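
(To illustrate why that breaks anti-affinity, the host check in such a filter
is essentially the rough sketch below; this is illustrative only, not the
exact Nova code. If group_hosts() already lists every host, every candidate
host is rejected and the boot ends up in ERROR.)

# rough sketch of a group anti-affinity host check (illustrative only,
# not the exact Nova implementation)
def host_passes(host_state, filter_properties):
    # hosts that already run instances from the requested group
    group_hosts = filter_properties.get('group_hosts') or []
    # reject this host if the group already has an instance on it
    return host_state.host not in group_hosts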

Concerning GroupAffinityFilter, I understood that it couldn't work
simultaneously with GroupAntiAffinityFilter, but since I had missed the
point about multiple schedulers, I couldn't figure out how it would be
useful. I get it now.

Best regards,

Simon

[1] http://paste.openstack.org/show/45672/
[2] http://paste.openstack.org/show/45671/
[3]
https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L137


On 03/09/2013 10:49, Gary Kotton wrote:

Hi,
Hopefully I will be able to address your questions. First let's start with
group anti-affinity. This was added towards the end of the Grizzly release
cycle as a scheduling hint. At the last summit we sat and agreed on a more
formal approach to deal with this, and we proposed and developed
https://blueprints.launchpad.net/openstack/?searchtext=instance-group-api-extension
(https://wiki.openstack.org/wiki/GroupApiExtension).
At the moment the following are still in review and I hope that we will
make the feature freeze deadline:
Api support:
https://review.openstack.org/#/c/30028/

Scheduler support:
https://review.openstack.org/#/c/33956/

Client support:
https://review.openstack.org/#/c/32904/

In order to make use of the above you need to add GroupAntiAffinityFilter
to the list of active scheduler filters (it is not one of the default
filters). When you deploy the first instance of a group you need to specify
that it is part of the group. This information is then used for the
additional VMs that are being deployed.
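
For reference, a minimal illustration of that setup (the filter list below is
only an example; keep whichever filters your deployment already uses):

# nova.conf on the scheduler host
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,GroupAntiAffinityFilter

# boot every member of the group with the group scheduler hint
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --hint group=foo vm1-foo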

Can you please provide some extra details so that I can help you debug
the
issues that you have encountered (I did not encounter the problems that
you have described):
1. Please provide the commands that you used to deploy the instance
2. Please provide the nova configuration file
3. Can you please look at the debug traces and see if you see the log
message on line 97
(https://review.openstack.org/#/c/21070/8/nova/scheduler/filters/affinity_f



Re: [openstack-dev] [Nova] Api samples and the feature freeze

2013-09-06 Thread John Garbutt
To me this sounds like extra docs and bugs, which is exactly what we
need to tidy up before RC.

So I think this should be given an exception.

John

On 6 September 2013 01:31, Christopher Yeoh cbky...@gmail.com wrote:
 Hi,

 I'd just like to clarify whether adding api samples for the V3 API
 is considered a feature and whether they can be added during the freeze.
 Adding api samples just adds extra testcases and the output from those
 testcases in the doc subtree.

 The risk is very, very low, as neither addition can affect the normal
 operation of the Nova services. If anything, the extra testcases can
 help pick up bugs, either in existing code or in new changes, via the gate.
 It also makes it much easier to generate API documentation.

 Regards,

 Chris



Re: [openstack-dev] [Nova] Api samples and the feature freeze

2013-09-06 Thread Thierry Carrez
John Garbutt wrote:
 To me this sounds like extra docs and bugs, which is exactly what we
 need to tidy up before RC.
 
 So I think this should be given an exception.

Yes, extra tests and docs don't really count as feature code. No need
for an exception for that.

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-06 Thread Thierry Carrez
Mark McLoughlin wrote:
 I'd like to request a feature freeze exception for the final (and
 admittedly the largest) patch in the series of 40 patches to port Nova
 to oslo.messaging:
 
   https://review.openstack.org/39929

I'm generally averse to granting feature freeze exceptions to code
refactoring: the user benefit of having them in the release is
nonexistent, while they introduce some risk by changing deep code
relatively late in the cycle. That's why I prefer those to be targeted
to earlier development milestones; this avoids having to make hard calls
once the work is almost completed.

That said, if the risk is under control and the patch is ready to merge,
I'm fine with this as long as there is some other benefit in having it
*in* the release rather than landed first thing in Icehouse.

Would having it *in* the release facilitate stable/havana branch
maintenance, for example ?

 While this change doesn't provide any immediate user-visible benefit, it
 would be massively helpful in maintaining momentum behind the effort all
 through the Havana cycle to move the RPC code from oslo-incubator into a
 library.

Could you expand on why this would be a lot more helpful to have it in
the release rather than early in icehouse ?

And to have all cards on the table, how much sense would the alternative
make (i.e. not land this final patch while a lot of this feature code
has already been merged) ?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE Request: bp/instance-group-api-extension

2013-09-06 Thread Thierry Carrez
Debojyoti Dutta wrote:
 As per my IRC chats with dansmith, russellb, this feature needs the
 user auth checks (being taken care of in
 https://bugs.launchpad.net/nova/+bug/1221396).
 
 Dan has some more comments 
 
 Could we please do an FFE for this one? It has been waiting for a long
 time and we have done all that we were asked relatively quickly; in H2 it
 was gated by the API object refactor. Most of the current comments (pre
 the last one by Dan) are due to
 https://bugs.launchpad.net/nova/+bug/1221396

This sounds relatively self-contained and ready. If you can get it
merged (with the additional bugfix) before the release meeting Tuesday,
personally I'm fine with it.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE Request: hyper-v-rdp-console

2013-09-06 Thread Thierry Carrez
Alessandro Pilotti wrote:
 This is an FFE request for adding console support for Hyper-V. Unlike
 most other hypervisors, Hyper-V guest console access is based on RDP
 instead of VNC. This blueprint adds RDP support in Nova, implemented in
 a way consistent with the existing VNC and SPICE protocols.  
 
 It's an essential feature for Hyper-V, requiring a relatively small
 implementation in the Hyper-V driver and a Nova public API.

So I'm a bit reluctant on this one... It seems to still need some
significant review work (and I don't want to distract the Nova core team
and prevent them from focusing on bugs), it introduces new parameters and
therefore impacts documentation, and the feature will see limited
testing due to its late landing.

I'm happy to be overruled by Russell if he thinks this can make it in
the next few days, but personally I doubt that.

The exception game always seems quite unfair to the people who get
caught on the wrong side of this artificial fence... but the line has to
be drawn somewhere so that we can focus on bugfixing as soon as
possible. Nova's H3 timeframe has seen so many features added that I'm
quite concerned about our ability to close the most obvious bugs in all
of them in time for our final release next month.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE request: unix domain socket consoles for libvirt

2013-09-06 Thread Thierry Carrez
Michael Still wrote:
 Hi. This code has been in review since July 29, but a combination of
 my focus on code reviews for others and having a baby has resulted in
 it not landing. This feature is important to libvirt and closes a
 critical bug we've had open way too long. The reviews:
 
 https://review.openstack.org/#/c/39048/
 https://review.openstack.org/#/c/43099/
 
 I'd appreciate people's thoughts on an FFE for this feature.

IMHO this is more of a critical bugfix than a feature, so as long as it
merges soon I'm definitely fine with it. My only concern is that it
doesn't really look like the reviews are ready yet... and I'd really
like this to land ASAP so that it can see maximum testing mileage. How
far off do you think it is?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE Request: utilization aware scheduling

2013-09-06 Thread Thierry Carrez
Wang, Shane wrote:
 Hi core developers and everyone,
 
 Please allow me to make an FFE request for adding utilization aware 
 scheduling support in Havana.
 
 The blueprint: 
 https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
 [...]

This is a bit in the same bucket as the hyper-v-rdp-console above...
It's a significant feature but it's not critical to the success of the
Havana integrated release (doesn't affect other projects), and it looks
like it still needs a significant review effort before it can be merged.

I tend to prefer to reject it early rather than grant a time-limited
exception that will distract core developers from bugfixing and leave an
even harder call to make when this is 99.9% done next week...

So unless a quick look by Nova core devs indicates that this could
actually be merged with some limited effort before next week's release
meeting (Tuesday 2100 UTC), I would rather punt this to Icehouse.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-06 Thread Flavio Percoco

On 05/09/13 16:51 +0100, Mark McLoughlin wrote:

On Thu, 2013-09-05 at 11:00 -0400, Russell Bryant wrote:

On 09/05/2013 10:17 AM, Mark McLoughlin wrote:
 Hi

 I'd like to request a feature freeze exception for the final (and
 admittedly the largest) patch in the series of 40 patches to port Nova
 to oslo.messaging:

   https://review.openstack.org/39929

 While this change doesn't provide any immediate user-visible benefit, it
 would be massively helpful in maintaining momentum behind the effort all
 through the Havana cycle to move the RPC code from oslo-incubator into a
 library.

 In terms of risk of regression, there is certainly some risk but that
 risk is mitigated by the fact that the core code of each of the
 transport drivers has been modified minimally. The idea was to delay
 re-factoring these drivers until we were sure that we hadn't caused any
 regressions in Nova. The code has been happily passing the
 devstack/tempest based integration tests for 10 days now.

When do you expect major refactoring to happen in oslo.messaging?  I get
that the current code was minimally modified, but I just want to
understand how the timelines line up with the release and ongoing
maintenance of the Havana release.


Yep, good question.

AFAIR we discussed this at the last Oslo IRC meeting and decided that
re-factoring will wait until Icehouse so we can more easily sync fixes
from oslo-incubator to oslo.messaging.

Porting Quantum, Cinder, Ceilometer and Heat, removing the code from
oslo-incubator and re-factoring the drivers in oslo.messaging would be
goals for early on in the Icehouse cycle.


FWIW, yes, this is what we discussed in the last meeting.


--
@flaper87
Flavio Percoco



Re: [openstack-dev] [Nova] FFE Request: utilization aware scheduling

2013-09-06 Thread Nikola Đipanov
On 06/09/13 11:28, Thierry Carrez wrote:
 Wang, Shane wrote:
 Hi core developers and everyone,

 Please allow me to make an FFE request for adding utilization aware 
 scheduling support in Havana.

 The blueprint: 
 https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
 [...]
 
 This is a bit in the same bucket as the hyper-v-rdp-console above...
 It's a significant feature but it's not critical to the success of the
 Havana integrated release (doesn't affect other projects), and it looks
 like it still needs a significant review effort before it can be merged.
 

There seems to be a pretty fundamental disagreement on the approach of
the first patch in the series, and it was stated several times [1] that
the issue needs consensus, which we were hoping to reach at the summit or
on the ML, and that it was too late in the H cycle to be proposing such
changes.

With this in mind (unless I am missing something, of course) I don't see
how we could suddenly decide to merge this. I don't think this is
related to code quality in any way, as Shane seems to imply in the first
message, but rather to the lack of agreement on the approach.

Cheers,

Nikola

[1] https://review.openstack.org/#/c/35759/ and
https://review.openstack.org/#/c/38802/

 I tend to prefer to reject it early rather than grant a time-limited
 exception that will distract core developers from bugfixing and have an
 even harder call to make when this will be 99.9% done next week...
 
 So unless a quick look by Nova core devs indicates that this could
 actually be merged with some limited effort before next week's release
 meeting (Tuesday 2100 UTC), I would rather punt this to Icehouse.
 
 Regards,
 




[openstack-dev] [Nova] FFE Request: hyper-v-remotefx

2013-09-06 Thread Alessandro Pilotti
The RemoteFX feature allows Hyper-V compute nodes to provide GPU acceleration 
to instances by sharing the host's GPU resources.

Blueprint: https://blueprints.launchpad.net/nova/+spec/hyper-v-remotefx

This feature provides big improvements for VDI-related scenarios based on
OpenStack and Hyper-V compute nodes.
It basically impacts only the driver code, plus an additional optional
scheduler filter.

A full architectural description has been added in the blueprint.

The patch was published during H3 on Aug 18th and initially reviewed on
Sept 4th, with some very good ideas for improvements at a larger scale that
were implemented the same day, unfortunately too late in the cycle.


Thanks,

Alessandro


Re: [openstack-dev] [Nova] FFE request: unix domain socket consoles for libvirt

2013-09-06 Thread Day, Phil
Seems like a reasonable FFE to me.

(And congratulations on the baby ;-)

 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: 06 September 2013 02:20
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Nova] FFE request: unix domain socket consoles for
 libvirt
 
 Hi. This code has been in review since July 29, but a combination of my focus
 on code reviews for others and having a baby has resulted in it not landing.
 This feature is important to libvirt and closes a critical bug we've had open
 way too long. The reviews:
 
 https://review.openstack.org/#/c/39048/
 https://review.openstack.org/#/c/43099/
 
 I'd appreciate people's thoughts on an FFE for this feature.
 
 Thanks,
 Michael
 
 --
 Rackspace Australia
 


Re: [openstack-dev] [Nova] FFE Request: hyper-v-remotefx

2013-09-06 Thread Daniel P. Berrange
On Fri, Sep 06, 2013 at 10:56:10AM +, Alessandro Pilotti wrote:
 The RemoteFX feature allows Hyper-V compute nodes to provide GPU acceleration 
 to instances by sharing the host's GPU resources.
 
 Blueprint: https://blueprints.launchpad.net/nova/+spec/hyper-v-remotefx
 
 This feature provides big improvements for VDI related scenarios based on 
 OpenStack and Hyper-V compute nodes.
 It basically impacts on the driver code only plus an additional optional 
 scheduler filter.
 
 A full architectural description has been added in the blueprint.
 
 The patch has been published during H3 on Aug 18th and initially reviewed
 on Sept 4th with some very good ideas for improvements at a larger scale,
 subsequently implemented on the same day, unfortunately too late in the cycle.

Simply adding the blueprint description is not sufficient to remove
my objections. The patch as proposed is too incomplete to be merged IMHO.
I pointed out a number of design flaws in the review. The updated info in
the blueprint just reinforces my review points, to further demonstrate
why this patch should not be merged as it is coded today. It will require
non-negligible additional dev work to address this properly I believe,
so I'm afraid that I think this is not suitable material for Havana at
this point in time.

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [nova] rebased after approval

2013-09-06 Thread Gary Kotton
Hi,
The following patch was approved but failed as it required a rebase -
https://review.openstack.org/#/c/41058/. Would it be possible for a core
reviewer to take a look?
Thanks in advance
Gary


Re: [openstack-dev] [nova] rebased after approval

2013-09-06 Thread Nikola Đipanov
On 06/09/13 14:12, Gary Kotton wrote:
 Hi, 
 The following patch was approved but failed as it required a rebase
 - https://review.openstack.org/#/c/41058/. Would it be possible for a
 core reviewer to take a look?

Re-approved.

Cheers,

N.

 Thanks in advance
 Gary
 
 


Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-06 Thread Gary Kotton
Hi,
Sorry for the delayed response (it is the new year on my side of the world
and I have some family obligations).
Could you please provide the nova configuration file (I would like to see
if you have the group anti-affinity filter in your filter list) and, if it
is there, at least a trace showing that the filter has been invoked?
I have tested this with the patches that I mentioned below and it works. I
will invest some time on this on Sunday to make sure that it is all
working with the latest code.
Thanks
Gary

On 9/6/13 10:31 AM, Simon Pasquier simon.pasqu...@bull.net wrote:

Gary (or others), did you have some time to look at my issue?
FYI, I opened a bug [1] on Launchpad. I'll update it with the outcome of
this discussion.
Cheers,
Simon

[1] https://bugs.launchpad.net/nova/+bug/1218878

On 03/09/2013 15:54, Simon Pasquier wrote:
 I made a copy-and-paste mistake; see the correction inline.

 On 03/09/2013 12:34, Simon Pasquier wrote:
 Hello,

 Thanks for the reply.

 First of all, do you agree that the current documentation for these
 filters is inaccurate?

 My test environment has 2 compute nodes: compute1 and compute3. First,
I
 launch 1 instance (not being tied to any group) on each node:
 $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
 local --availability-zone nova:compute1 vm-compute1-nogroup
 $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
 local --availability-zone nova:compute3 vm-compute3-nogroup

 So far so good, everything's active:
 $ nova list
 
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+
 | ID                                   | Name                | Status | Task State | Power State | Networks         |
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+
 | 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
 | c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+



 Then I try to launch one instance in group 'foo' but it fails:
 $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
 local --availability-zone nova:compute3 vm-compute3-nogroup

 The command is:

 $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
 local --hint group=foo vm1-foo

 $ nova list
 
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+
 | ID                                   | Name                | Status | Task State | Power State | Networks         |
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+
 | 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
 | c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
 | 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+



 I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
 will see, the log message is there but it looks like group_hosts() [3]
 is returning all my hosts instead of only the ones that run instances
 from the group.

 Concerning GroupAffinityFilter, I understood that it couldn't work
 simultaneously with GroupAntiAffinityFilter but since I missed the
 multiple schedulers, I couldn't figure out how it would be useful. So I
 got it now.

 Best regards,

 Simon

 [1] http://paste.openstack.org/show/45672/
 [2] http://paste.openstack.org/show/45671/
 [3]
 
https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L137


 On 03/09/2013 10:49, Gary Kotton wrote:
 Hi,
 Hopefully I will be able to address your questions. First let's start
 with
 the group anti-affinity. This was added towards the end of the Grizzly
 release cycle as a scheduling hint. At the last summit we sat and
agreed
 on a more formal approach to deal with this and we proposed and
 developed
 
https://blueprints.launchpad.net/openstack/?searchtext=instance-group-api-extension
 (https://wiki.openstack.org/wiki/GroupApiExtension).
 At the moment the following are still in review and I hope that we
will
 make the feature freeze deadline:
 Api support:
 https://review.openstack.org/#/c/30028/

 Scheduler support:
 https://review.openstack.org/#/c/33956/

 Client support:
 https://review.openstack.org/#/c/32904/

 In order to make use of the above you need to add
 GroupAntiAffinityFilter
 to the filters that will be 

Re: [openstack-dev] [Nova] FFE Request: hyper-v-remotefx

2013-09-06 Thread Alessandro Pilotti

On Sep 6, 2013, at 14:26, Daniel P. Berrange berra...@redhat.com wrote:

On Fri, Sep 06, 2013 at 10:56:10AM +, Alessandro Pilotti wrote:
The RemoteFX feature allows Hyper-V compute nodes to provide GPU acceleration 
to instances by sharing the host's GPU resources.

Blueprint: https://blueprints.launchpad.net/nova/+spec/hyper-v-remotefx

This feature provides big improvements for VDI related scenarios based on 
OpenStack and Hyper-V compute nodes.
It basically impacts on the driver code only plus an additional optional 
scheduler filter.

A full architectural description has been added in the blueprint.

The patch has been published during H3 on Aug 18th and initially reviewed
on Sept 4th with some very good ideas for improvements at a larger scale,
subsequently implemented on the same day, unfortunately too late in the cycle.

Simply adding the blueprint description is not sufficient to remove
my objections. The patch as proposed is too incomplete to be merged IMHO.
I pointed out a number of design flaws in the review. The updated info in
the blueprint just reinforces my review points, to further demonstrate
why this patch should not be merged as it is coded today. It will require
non-negligible additional dev work to address this properly I believe,
so I'm afraid that I think this is not suitable material for Havana at
this point in time.

I already committed an updated patch after your review on the 4th addressing 
your observations, agreeing that the initial implementation was missing 
flexibility (I only didn't commit the scheduler filter yet, as IMO it requires 
a separate dependent patch, but this can be done anytime).

To allow others to add their opinion easily, I'm following up here on your
objections in the blueprint, which can basically be reduced (correct me if I'm
wrong) to the following positions, besides the areas on which we already agreed
regarding scheduling and which have already been implemented:

1) You suggest defining the amount of RemoteFX GPU resources required in the
flavour
2) I suggest providing those requirements in the image custom properties
(which is how it is currently implemented; see the blueprint description as well).

Besides the fact that the two solutions are IMO not mutually exclusive, as
scheduling filters could simply apply both, the reason why I don't see how
flavours should be considered for this patch is simple:

AFAIK Nova flavors at the moment don't support custom properties (please 
correct me if I'm wrong here, I looked at the flavors APIs and implementation, 
but I admit my ignorance in the flavor internals), so

1) Adding the RemoteFX requirements at the system metadata level
https://github.com/openstack/nova/blob/master/nova/compute/flavors.py#L60 would
be IMO wrong, as it would tightly couple a hypervisor-specific limit with a
generic compute API.
2) Adding custom properties to flavors goes way beyond the scope of this
blueprint.

From an administrative and ACL perspective, administrators can selectively
give users access to a given image, thus limiting access to RemoteFX GPU
resources (video memory first and foremost).
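
As an illustration of that workflow (the property names below are hypothetical;
see the blueprint for the actual ones): the administrator sets the RemoteFX
requirements on a non-public image and shares it only with the tenants that are
allowed to consume those GPU resources, e.g.:

$ glance image-update --property remotefx_monitor_count=2 \
    --property remotefx_max_resolution=1920x1200 <image-id>
$ glance member-create <image-id> <tenant-id>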

Being able to do it ALSO at the flavour level would add additional granularity,
e.g. assigning different screen resolutions using a single image. I personally
see this as a separate blueprint, as it impacts the design of a fundamental
Nova feature and will need quite some discussion.


Thanks,

Alessandro


Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-06 Thread Simon Pasquier

Thanks for the answer.
I already posted the links in my previous email but here they are again:
* nova.conf = http://paste.openstack.org/show/45671/
* scheduler logs = http://paste.openstack.org/show/45672/

Just to reiterate, my setup consists of 2 compute nodes which already
run instances that are not in any group.
group_hosts list passed to the filter contains the 2 nodes while *no* 
instance has been booted in that group yet.


Cheers,

Simon

On 06/09/2013 14:18, Gary Kotton wrote:

Hi,
Sorry for the delayed response (it is the new year on my side of the world
and I have some family obligations).
Could you please provide the nova configuration file (I would like to see
if you have the group anti-affinity filter in your filter list) and, if it
is there, at least a trace showing that the filter has been invoked?
I have tested this with the patches that I mentioned below and it works. I
will invest some time on this on Sunday to make sure that it is all
working with the latest code.
Thanks
Gary

On 9/6/13 10:31 AM, Simon Pasquier simon.pasqu...@bull.net wrote:


Gary (or others), did you have some time to look at my issue?
FYI, I opened a bug [1] on Launchpad. I'll update it with the outcome of
this discussion.
Cheers,
Simon

[1] https://bugs.launchpad.net/nova/+bug/1218878

On 03/09/2013 15:54, Simon Pasquier wrote:

I made a copy-and-paste mistake; see the correction inline.

On 03/09/2013 12:34, Simon Pasquier wrote:

Hello,

Thanks for the reply.

First of all, do you agree that the current documentation for these
filters is inaccurate?

My test environment has 2 compute nodes: compute1 and compute3. First,
I
launch 1 instance (not being tied to any group) on each node:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute1 vm-compute1-nogroup
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup

So far so good, everything's active:
$ nova list

+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+



Then I try to launch one instance in group 'foo' but it fails:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup


The command is:

$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --hint group=foo vm1-foo


$ nova list

+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
| 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+



I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
will see, the log message is there but it looks like group_hosts() [3]
is returning all my hosts instead of only the ones that run instances
from the group.

Concerning GroupAffinityFilter, I understood that it couldn't work
simultaneously with GroupAntiAffinityFilter but since I missed the
multiple schedulers, I couldn't figure out how it would be useful. So I
got it now.

Best regards,

Simon

[1] http://paste.openstack.org/show/45672/
[2] http://paste.openstack.org/show/45671/
[3]

https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L137


Le 03/09/2013 10:49, Gary Kotton a écrit :

Hi,
Hopefully I will be able to address your questions. First let's start
with
the group anti-affinity. This was added towards the end of the Grizzly
release cycle as a scheduling hint. At the last summit we sat and
agreed
on a more formal approach to deal with this and we proposed and
developed

https://blueprints.launchpad.net/openstack/?searchtext=instance-group-a

Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-06 Thread Mark McLoughlin
On Fri, 2013-09-06 at 10:59 +0200, Thierry Carrez wrote:
 Mark McLoughlin wrote:
  I'd like to request a feature freeze exception for the final (and
  admittedly the largest) patch in the series of 40 patches to port Nova
  to oslo.messaging:
  
https://review.openstack.org/39929
 
 I'm generally averse to granting feature freeze exceptions to code
 refactoring: the user benefit of having them in the release is
 nonexistent, while they introduce some risk by changing deep code
 relatively late in the cycle. That's why I prefer those to be targeted
 to earlier development milestones; this avoids having to make hard calls
 once the work is almost completed.

Yes, absolutely understood.

To be clear - while I think there's a strong case for an exception here,
I am very close to this, so I would be cool with a denial of this
request.

 That said, if the risk is under control and the patch is ready to merge,
 I'm fine with this as long as there is some other benefit in having it
 *in* the release rather than landed first thing in Icehouse.
 
 Would having it *in* the release facilitate stable/havana branch
 maintenance, for example ?
 
  While this change doesn't provide any immediate user-visible benefit, it
  would be massively helpful in maintaining momentum behind the effort all
  through the Havana cycle to move the RPC code from oslo-incubator into a
  library.
 
 Could you expand on why this would be a lot more helpful to have it in
 the release rather than early in icehouse ?
 
 And to have all cards on the table, how much sense would the alternative
 make (i.e. not land this final patch while a lot of this feature code
 has already been merged) ?

If the patch was merged now, while it's not a user-visible feature
per se, I think oslo.messaging is something we would celebrate in the
Havana release, e.g.:

  While OpenStack continues to add new projects and developers, it is, in
  parallel, aggressively taking steps to enable the project to scale.
  The oslo.messaging library added in the Havana release is an example
  of code which was previously copied and pasted between projects
  and has now been re-factored out into a shared library with a clean
  API. This library will provide a structured way for OpenStack
  projects to collaborate on adopting new messaging patterns and
  features without the disruption of incompatible API changes nor the
  pain of keeping copied and pasted code in sync.
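
(For a concrete sense of that clean API, here is a minimal sketch of the kind
of RPC server/client code the library exposes; it is illustrative only and not
taken from Nova.)

from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='demo', server='host1')

class DemoEndpoint(object):
    def ping(self, ctxt, msg):
        return 'pong: %s' % msg

# server side: dispatches incoming calls to the endpoint objects
server = messaging.get_rpc_server(transport, target, [DemoEndpoint()],
                                  executor='blocking')

# client side: the same Target addresses the server, with no copied RPC code
client = messaging.RPCClient(transport, target)
# client.call({}, 'ping', msg='hello')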

Obviously, as Oslo PTL, I think this is an important theme that we
should continue to emphasise and build momentum around. The messaging
library is by far the most significant example so far of how this
process of creating libraries can be effective. Nova using the library
in the Havana release would (IMHO) be the signal that this process is
working and hopefully inspire others to take a similar approach with
other chunks of common code.

Conversely, if we delayed merging this code until Icehouse, I think it
would leave somewhat of a question mark hanging over oslo.messaging and
Oslo going into the summit:

  Is this oslo.messaging thing for real? Or will it be abandoned and we
  need to continue with the oslo-incubator RPC code? Why is it taking so
  long to create these libraries? This sucks!

That's all very meta, I know. But I do really believe making progress
with Oslo libraries is important for OpenStack long-term. While it's not
a user-visible benefit per se, I do think this work benefits the project
broadly and is also something worth marketing.

We measure our progress in terms of what we achieved in each release
cycle. I think we've made great progress on oslo.messaging in Havana,
but unless a project uses it in the release it won't be something we
really celebrate until Icehouse.

If we agree the risk is manageable, I hope the above shows there is
ample benefit in comparison to the risk.

Thanks,
Mark.




Re: [openstack-dev] [Savanna] Guidance for adding a new plugin (CDH)

2013-09-06 Thread Matthew Farrellee

That's great.

Once done, what will the procedure be for me to verify it without
becoming a Cloudera customer? And what limitations on its use, if any,
will there be if I'm not a Cloudera customer?


Best,


matt

On 09/05/2013 08:13 AM, Andrei Savu wrote:

Thanks Matt!

I've added the following blueprint (check the full specification for
more details):
https://blueprints.launchpad.net/savanna/+spec/cdh-plugin

I'm now working on some code to get early feedback.

Regards,

-- Andrei Savu / axemblr.com http://axemblr.com/

On Wed, Sep 4, 2013 at 11:35 PM, Matthew Farrellee m...@redhat.com wrote:

On 09/04/2013 04:06 PM, Andrei Savu wrote:

Hi guys -

I have just started to play with Savanna a few days ago - I'm still
going through the code. Next week I want to start to work on a
plugin
that will deploy CDH using Cloudera Manager.

What process should I follow? I'm new to launchpad / Gerrit.
Should I
start by creating a blueprint and a bug / improvement request?


Savanna is following all OpenStack community practices, so you can
check out https://wiki.openstack.org/wiki/How_To_Contribute to get a good
idea of what to do.

In short, yes please use launchpad and gerrit and create a blueprint.


Is there any public OpenStack deployment that I can use for testing?
Should 0.2 work with Grizzly at trystack.org (http://trystack.org)?


0.2 will work with Grizzly. I've not tried trystack so let us know
if it works.


Best,


matt







Re: [openstack-dev] [Nova] FFE Request: hyper-v-remotefx

2013-09-06 Thread Alessandro Pilotti



On Sep 6, 2013, at 15:36, Daniel P. Berrange berra...@redhat.com wrote:

On Fri, Sep 06, 2013 at 12:22:27PM +, Alessandro Pilotti wrote:

On Sep 6, 2013, at 14:26, Daniel P. Berrange berra...@redhat.com wrote:

On Fri, Sep 06, 2013 at 10:56:10AM +, Alessandro Pilotti wrote:
The RemoteFX feature allows Hyper-V compute nodes to provide GPU acceleration 
to instances by sharing the host's GPU resources.

Blueprint: https://blueprints.launchpad.net/nova/+spec/hyper-v-remotefx

This feature provides big improvements for VDI related scenarios based on 
OpenStack and Hyper-V compute nodes.
It basically impacts on the driver code only plus an additional optional 
scheduler filter.

A full architectural description has been added in the blueprint.

The patch has been published during H3 on Aug 18th and initially reviewed
on Sept 4th with some very good ideas for improvements at a larger scale,
subsequently implemented on the same day, unfortunately too late in the cycle.

Simply adding the blueprint description is not sufficient to remove
my objections. The patch as proposed is too incomplete to be merged IMHO.
I pointed out a number of design flaws in the review. The updated info in
the blueprint just reinforces my review points, to further demonstrate
why this patch should not be merged as it is coded today. It will require
non-negligible additional dev work to address this properly I believe,
so I'm afraid that I think this is not suitable material for Havana at
this point in time.

I already committed an updated patch after your review on the 4th addressing
your observations, agreeing that the initial implementation was missing
flexibility (I only didn't commit the scheduler filter yet, as IMO it
requires a separate dependent patch, but this can be done anytime).

IMHO the lack of scheduler support is a blocker item. Running a VM with
this feature requires that the scheduler be able to place the VM on a
host which supports the feature. Without this, users are just relying on
lucky placement when trying to boot a VM to use this feature.

One detail that has most probably been misunderstood from my previous reply:
with the following sentence I meant 'now, as part of this blueprint', not
'tacked on later'.
So, scheduler support (as in a filter) is definitely going to be part of the
blueprint under discussion; sorry if this was not clear.

(I only didn't commit the scheduler filter yet, as IMO it
requires a separate dependent patch, but this can be done anytime).


To allow others to add their opinion easily, I'm following up here to
your objections in the blueprint which basically can be reduced (correct
me if I'm wrong) to the following positions, beside the areas on which
we already agreed regarding scheduling and which have already been
implemented:

1) You suggest to define the amount of RemoteFX GPU resources required in the 
flavour
2) I suggest to provide those requirements in the image custom properties 
(which is how it is currently implemented, see blueprint description as well).

Beside the fact that the two solutions are IMO not mutually exclusive
as scheduling filters could simply apply both, the reason why I don't
see how flavours should be considered for this patch is simple:

IIUC, this feature consumes finite host and network resources. Allowing
users to request it via the image properties is fine, but on its own
I consider that insecure. The administrator needs to be able to control
who can access these finite resources, in the same way that you don't
allow users to specify arbitrary amounts of memory, or as many vCPUs
as they want. It seems that flavours are the only place to do this.
Again, I consider this a blocker, not something to be tacked on later.

AFAIK Nova flavors at the moment don't support custom properties
(please correct me if I'm wrong here, I looked at the flavors APIs
and implementation, but I admit my ignorance in the flavor internals), so

I'm honestly not sure, since I don't know enough about the flavours
APIs.

1) Adding the remotefx requisites at the system metadata level
  https://github.com/openstack/nova/blob/master/nova/compute/flavors.py#L60
  would be IMO wrong, as it would tightly couple a hypervisor specific limit
  with a generic compute API.
2) Adding custom properties to flavors goes way beyond the scope of this 
blueprint.

From an administrative and ACL perspective, administrators can selectively 
provide access to users to a given image, thus limiting the access to RemoteFX 
GPU resources (video memory in primis).

Being able to do it ALSO at the flavour level would add additional granularity,
e.g. assigning different screen resolutions, using a single image. I personally
see this as a separate blueprint as it impacts on the design of a fundamental
Nova feature that will need quite some discussion.

That is quite likely/possibly correct.

That we're having this level of 

Re: [openstack-dev] [State-Management] Proposal to add Ivan Melnikov to taskflow-core

2013-09-06 Thread Changbin Liu
+1

Great work Ivan!


Thanks

Changbin


On Fri, Sep 6, 2013 at 1:55 AM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  Greetings all stackers,

 I propose that we add Ivan Melnikov https://launchpad.net/~imelnikov to
 the taskflow-core team [1].

 Ivan has been actively contributing to taskflow for a while now, both in
 code and reviews.  He provides superb quality reviews and is doing an
 awesome job
 with the engine concept. So I think he would make a great addition to the
 core
 review team.

 Please respond with +1/-1.

 Thanks much!

 [1] https://wiki.openstack.org/wiki/TaskFlow/CoreTeam



[openstack-dev] [Trove]

2013-09-06 Thread Giuseppe Galeota
Dear all,
I think that the documentation about Trove architecture and operation is
poor.

1) Can you link me to a guide to the Trove architecture, in order to better
understand how database instances are created by Trove's components?

Thank you very much,
Giuseppe


Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-06 Thread Russell Bryant
On 09/06/2013 08:30 AM, Mark McLoughlin wrote:
 On Fri, 2013-09-06 at 10:59 +0200, Thierry Carrez wrote:
 Mark McLoughlin wrote:
 I'd like to request a feature freeze exception for the final (and
 admittedly the largest) patch in the series of 40 patches to port Nova
 to oslo.messaging:

   https://review.openstack.org/39929

  I'm generally averse to granting feature freeze exceptions to code
  refactoring: the user benefit of having them in the release is
  nonexistent, while they introduce some risk by changing deep code
  relatively late in the cycle. That's why I prefer those to be targeted
  to earlier development milestones; this avoids having to make hard calls
  once the work is almost completed.
 
 Yes, absolutely understood.
 
 To be clear - while I think there's a strong case for an exception here,
 I am very close to this, so I would be cool with a denial of this
 request.
 
 That said, if the risk is under control and the patch is ready to merge,
  I'm fine with this as long as there is some other benefit in having it
  *in* the release rather than landed first thing in Icehouse.

 Would having it *in* the release facilitate stable/havana branch
 maintenance, for example ?

 While this change doesn't provide any immediate user-visible benefit, it
 would be massively helpful in maintaining momentum behind the effort all
 through the Havana cycle to move the RPC code from oslo-incubator into a
 library.

 Could you expand on why this would be a lot more helpful to have it in
 the release rather than early in icehouse ?

 And to have all cards on the table, how much sense would the alternative
 make (i.e. not land this final patch while a lot of this feature code
 has already been merged) ?
 
 If the patch was merged now, while it's not a user-visible feature
 per-se, I think oslo.messaging is something we would celebrate in the
 Havana release e.g.
 
    While OpenStack continues to add new projects and developers, it is, in
    parallel, aggressively taking steps to enable the project to scale.
    The oslo.messaging library added in the Havana release is an example
    of code which was previously copied and pasted between projects
    and has now been re-factored out into a shared library with a clean
    API. This library will provide a structured way for OpenStack
    projects to collaborate on adopting new messaging patterns and
    features without the disruption of incompatible API changes nor the
    pain of keeping copied and pasted code in sync.
 
 Obviously, as Oslo PTL, I think this is an important theme that we
 should continue to emphasise and build momentum around. The messaging
 library is by far the most significant example so far of how this
 process of creating libraries can be effective. Nova using the library
 in the Havana release would (IMHO) be the signal that this process is
 working and hopefully inspire others to take a similar approach with
 other chunks of common code.
 
 Conversely, if we delayed merging this code until Icehouse, I think it
 would leave somewhat of a question mark hanging over oslo.messaging and
 Oslo going into the summit:
 
   Is this oslo.messaging thing for real? Or will it be abandoned and we
   need to continue with the oslo-incubator RPC code? Why is it taking so
   long to create these libraries? This sucks!
 
 That's all very meta, I know. But I do really believe making progress
 with Oslo libraries is important for OpenStack long-term. While it's not
 a user-visible benefit per-se, I do think this work benefits the project
 broadly and is also something worth marketing.
 
 We measure our progress in terms of what we achieved in each release
 cycle. I think we've made great progress on oslo.messaging in Havana,
 but unless a project uses it in the release it won't be something we
 really celebrate until Icehouse.
 
 If we agree the risk is manageable, I hope the above shows there is
 ample benefit in comparison to the risk.

I'm actually quite impressed that we were able to merge as much of this
as we did, given how big it was and that it started mid-H3.  If fewer
patches had merged, waiting for Icehouse would look a lot more painful.

I'm not sure that Nova gains a whole lot by merging this now vs in a few
weeks.  The arguments for merging seem to be less technical, and more
around project momentum.  I totally get that and would like to support
it.  I wonder though, what if we merged this as soon as master opens up
for Icehouse development, which would be before the summit?  If we went
that route, the project momentum would still be there going into the
summit.  There should be less of a question of 'if' around
oslo.messaging, and more about 'how' and 'when' you can get the rest of the
projects converted to use it.

I propose a NACK on the FFE, and instead going with the above plan.

-- 
Russell Bryant


Re: [openstack-dev] [Nova] FFE Request: hyper-v-remotefx

2013-09-06 Thread Russell Bryant
On 09/06/2013 08:36 AM, Daniel P. Berrange wrote:
 That we're having this level of design debate, is exactly why I think this
 is not suitable for a feature freeze exception. Freeze exception is for
 things that are basically complete, baring small bug fixes / changes.

Agreed.  If there's more work to do, and more design discussion needed
to reach consensus around a feature, then it should wait for Icehouse.

As a result, NACK on this FFE.

-- 
Russell Bryant



Re: [openstack-dev] [Nova] FFE Request: utilization aware scheduling

2013-09-06 Thread Russell Bryant
On 09/06/2013 07:07 AM, Nikola Đipanov wrote:
 On 06/09/13 11:28, Thierry Carrez wrote:
 Wang, Shane wrote:
 Hi core developers and everyone,

 Please allow me to make an FFE request for adding utilization aware 
 scheduling support in Havana.

 The blueprint: 
 https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
 [...]

 This is a bit in the same bucket as the hyper-v-rdp-console above...
 It's a significant feature but it's not critical to the success of the
 Havana integrated release (doesn't affect other projects), and it looks
 like it still needs a significant review effort before it can be merged.

 
 There seems to be a pretty fundamental disagreement on the approach of
 the first patch in the series, and it was stated several times[1] that
 the issue needs consensus that we were hoping to reach on the summit or
 on the ML and that it was too late in the H cycle to be proposing such
 changes.
 
 With this in mind (unless I am missing something, of course) I don't see
 how we could suddenly decide to merge this. I don't think this is
 related to code quality in any way, as Shane seems to imply in the first
 message, but the lack of agreement on the approach.

Agreed with points from Thierry and Nikola.  We definitely need
consensus here.  I'm also quite concerned about the amount of review
time this will require to get it right.  I don't think we can afford to
do that and I definitely don't want it to be rushed.  I think this
should wait for Icehouse.

NACK on the FFE.

Thanks,

-- 
Russell Bryant



Re: [openstack-dev] [Nova] FFE request: unix domain socket consoles for libvirt

2013-09-06 Thread Russell Bryant
On 09/06/2013 05:19 AM, Thierry Carrez wrote:
 Michael Still wrote:
 Hi. This code has been in review since July 29, but a combination of
 my focus on code reviews for others and having a baby has resulted in
 it not landing. This feature is important to libvirt and closes a
 critical bug we've had open way too long. The reviews:

 https://review.openstack.org/#/c/39048/
 https://review.openstack.org/#/c/43099/

 I'd appreciate people's thoughts on an FFE for this feature.
 
 IMHO this is more of a critical bugfix than a feature, so as long as it
 merges soon I'm definitely fine with it. My only concern is that it
 doesn't really look like the reviews are ready yet... and I'd really
 like this to land ASAP so that it can see maximum testing mileage. How
 far do you think it is ?
 

Yes, let's make this one happen.

ACK on the FFE.

-- 
Russell Bryant



Re: [openstack-dev] [Nova] FFE Request: hyper-v-rdp-console

2013-09-06 Thread Russell Bryant
On 09/06/2013 05:16 AM, Thierry Carrez wrote:
 Alessandro Pilotti wrote:
 This is an FFE request for adding console support for Hyper-V. Unlike
 most other hypervisors, Hyper-V guest console access is based on RDP
 instead of VNC. This blueprint adds RDP support in Nova, implemented in
 a way consistent with the existing VNC and SPICE protocols.  

 It's an essential feature for Hyper-V, requiring a relatively small
 implementation in the Hyper-V driver and a Nova public API.
 
 So I'm a bit reluctant on this one... It seems to still need some
 significant review work (and I don't want to distract the Nova core team
 and prevent them from focusing on bugs), it introduces new parameters and
 therefore impacts documentation, and the feature will see limited
 testing due to its late landing.
 
 I'm happy to be overruled by Russell if he thinks this can make it in
 the next few days, but personally I doubt that.
 
 The exception game always seems quite unfair to the people who get
 caught on the wrong side of this artificial fence... but the line has to
 be drawn somewhere so that we can focus on bugfixing as soon as
 possible. Nova's H3 timeframe has seen so many features added that I'm
 quite concerned about our ability to close the most obvious bugs in all
 of them in time for our final release next month.

Yeah, this one will require a significant review time investment, so I
think it should wait.  I'm glad we were able to get a number of your
other features in, at least.

NACK on the FFE.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove]

2013-09-06 Thread Michael Basnight
On Sep 6, 2013, at 8:30 AM, Giuseppe Galeota giuseppegale...@gmail.com wrote:

 Dear all,
 I think that there is poor documentation about the Trove architecture and 
 operation.

Thanks for your interest in Trove. I agree. As PTL I will devote time (now that 
the H3 madness has slowed for us) to documenting better information for you. 

 
 1) Can you link me a guide to the Trove architecture, in order to better 
 understand how databases instances are created by Trove's components?

Give me a bit; I'm pretty sure we had some Reddwarf (pre-rename) docs somewhere 
on the wiki. I'll try to find/reorganize them today. 

 
 Thank you very much,
 Giuseppe
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: bp/instance-group-api-extension

2013-09-06 Thread Dan Smith
 Could we please do a FFE for this one  has been waiting for a long
 time and we have done all that we were asked relatively quickly 
 in H2 it was gated by API object refactor. Most of the current
 comments (pre last one by Dan) are due to
 https://bugs.launchpad.net/nova/+bug/1221396

It's probably obvious from my review, but I'm -1 on this being an FFE.
I think it needs some significant refactoring, not just in the
structure of the implementation, but also in the format of the data
exposed via the API.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: bp/instance-group-api-extension

2013-09-06 Thread Russell Bryant
On 09/06/2013 05:06 AM, Thierry Carrez wrote:
 Debojyoti Dutta wrote:
 As per my IRC chats with dansmith, russellb, this feature needs the
 user auth checks (being taken care of in
 https://bugs.launchpad.net/nova/+bug/1221396).

 Dan has some more comments 

 Could we please do a FFE for this one  has been waiting for a long
 time and we have done all that we were asked relatively quickly 
 in H2 it was gated by API object refactor. Most of the current
 comments (pre last one by Dan) are due to
 https://bugs.launchpad.net/nova/+bug/1221396
 
 This sounds relatively self-contained and ready. If you can get it
 merged (with the additional bugfix) before the release meeting Tuesday,
 personally I'm fine with it.
 

I originally was fine with a FFE for this.  However, after some
discussion yesterday, it seems there are quite a few comments on the
review.  There's more iterating to do on this, so I don't have a high
confidence that we can merge it in time.

I would hate to grant the FFE, have you work super hard to iterate on
this quickly, and it still not make it in time.  I think at this point
we should just defer to Icehouse to ensure that there is proper time to
do the updates and get good review on them.

NACK on the FFE.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-06 Thread Dan Smith
 I propose a NACK on the FFE, and instead going with the above plan.

I agree, especially with Thierry's concerns. This is a lot of deep
change for little user-visible benefit (user-visible release notes
notwithstanding!)

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-06 Thread Bob Ball
Puppet is failing to build the packages - which is probably understandable 
given it's a refactor.

I'm not sure if this is something that points to a problem with the refactor or 
the packaging itself - hopefully Dan can review the logs, or if others want to 
see more of the details look at the logs at https://review.openstack.org/39929.

I don't see any point in triggering another Smokestack run as it's likely to 
fail again until the packaging issue has been resolved by Dan's team.

Bob

From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: 05 September 2013 17:04
To: Mark McLoughlin; OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

Thanks Mark. Looks like we need to get someone to manually trigger Smokestack 
to run against this review at least once, since I don't see any +1's from 
Smokestack for some reason.

On Thu, Sep 5, 2013 at 11:49 AM, Mark McLoughlin 
mar...@redhat.com wrote:
Hi

On Thu, 2013-09-05 at 10:43 -0400, Davanum Srinivas wrote:
 Mark,

 Has this changeset get through a full tempest with QPid enabled?
No, I've only done local testing with the qpid transport to date.

I think Smokestack is the only CI tool actively testing the qpid driver.
I ran out of time adding oslo.messaging to Smokestack before heading off
on vacation, but I expect I'll get to it next week.

Cheers,
Mark.



 thanks,
 dims


 On Thu, Sep 5, 2013 at 10:17 AM, Mark McLoughlin 
  mar...@redhat.com wrote:

  Hi
 
  I'd like to request a feature freeze exception for the final (and
  admittedly the largest) patch in the series of 40 patches to port Nova to
  oslo.messaging:
 
https://review.openstack.org/39929
 
  While this change doesn't provide any immediate user-visible benefit, it
  would be massively helpful in maintaining momentum behind the effort all
  through the Havana cycle to move the RPC code from oslo-incubator into a
  library.
 
  In terms of risk of regression, there is certainly some risk but that risk
  is mitigated by the fact that the core code of each of the transport
  drivers has been modified minimally. The idea was to delay re-factoring
  these drivers until we were sure that we hadn't caused any regressions in
  Nova. The code has been happily passing the devstack/tempest based
  integration tests for 10 days now.
 
  Thanks,
  Mark.
 
  ___
  OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: http://davanum.wordpress.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: utilization aware scheduling

2013-09-06 Thread Daniel P. Berrange
On Fri, Sep 06, 2013 at 01:07:49PM +0200, Nikola Đipanov wrote:
 On 06/09/13 11:28, Thierry Carrez wrote:
  Wang, Shane wrote:
  Hi core developers and everyone,
 
  Please allow me to make an FFE request for adding utilization aware 
  scheduling support in Havana.
 
  The blueprint: 
  https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
  [...]
  
  This is a bit in the same bucket as the hyper-v-rdp-console above...
  It's a significant feature but it's not critical to the success of the
  Havana integrated release (doesn't affect other projects), and it looks
  like it still needs a significant review effort before it can be merged.
  
 
 There seems to be a pretty fundamental disagreement on the approach of
 the first patch in the series, and it was stated several times[1] that
 the issue needs consensus that we were hoping to reach at the summit or
 on the ML, and that it was too late in the H cycle to be proposing such
 changes.
 
 With this in mind (unless I am missing something, of course) I don't see
 how we could suddenly decide to merge this. I don't think this is
 related to code quality in any way, as Shane seems to imply in the first
 message, but rather to the lack of agreement on the approach.

Agreed, if there are unresolved design questions, this is a strong
sign that it should wait for the next cycle.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: hyper-v-rdp-console

2013-09-06 Thread Dan Smith
 This blueprint, implemented during the H3 timeframe

I remember this being a hot topic at the end of the previous cycle, and
the last summit. Thus, I was surprised to see it make such a late
entrance this cycle. Since it has seen little review thus far, I don't
think we should expect to squeeze it all in at the last minute. So, I'm
-1 on this.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FFE Request-ish: oslo.version split

2013-09-06 Thread Monty Taylor
Hey all,

One of the things we were working on this cycle was splitting the
version handling code out of pbr into its own library - oslo.version.
This work did not make Feature Freeze. I wanted to check in with folks
to take the temperature on whether it's worth attempting to get it done
by the Havana release. The code itself isn't changing - it's really
just splitting one library into two so that pbr doesn't need to be a
runtime requirement and so that separation of concerns can be handled
better.

I don't have strong feelings either way, as pbr-as-runtime-dep doesn't
really bother me. Thoughts?

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] devstack with ml2 setup problem

2013-09-06 Thread Luke Gorrie
Howdy!

I'm trying to get ml2 up and running with devstack. I'm falling at the
first hurdle - getting devstack working with Neutron. I would love a hint!

Here is my localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
# Optional, to enable tempest configuration as part of devstack
#enable_service tempest

#Q_PLUGIN=ml2
#ENABLE_TENANT_VLANS=True
#ML2_VLAN_RANGES=mynetwork:100:200
#Q_ML2_PLUGIN_MECHANISM_DRIVERS=log,ncs

DATABASE_PASSWORD=admin
RABBIT_PASSWORD=admin
SERVICE_TOKEN=admin
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

but after firing up stack and logging into the GUI the system seems not
entirely healthy and I see messages like:

Error: Unauthorized: Unable to retrieve usage information.
Error: Unauthorized: Unable to retrieve limit information.
Error: Unauthorized: Unable to retrieve project list.

It had looked okay before I tried enabling Neutron.

This is with Ubuntu raring in vagrant/virtualbox with 2GB RAM.

Any tips appreciated!
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Poor documentation about Trove architecture and operation.

2013-09-06 Thread Giuseppe Galeota
Hi Michael,
I am very grateful to you. I can't wait to see your documentation about
Trove/Reddwarf!


Giuseppe.

2013/9/6 Michael Basnight mbasni...@gmail.com

 On Sep 6, 2013, at 8:30 AM, Giuseppe Galeota giuseppegale...@gmail.com
 wrote:

 Dear all,
 I think that there is poor documentation about the Trove architecture and
 operation.


 Thanks for your interest in Trove. I agree. As PTL I will devote time (now
 that the H3 madness has slowed for us) to documenting better information for
 you.


 1) Can you link me a guide to the Trove architecture, in order to better
 understand how databases instances are created by Trove's components?


 Give me a bit; I'm pretty sure we had some Reddwarf (pre-rename) docs
 somewhere on the wiki. I'll try to find/reorganize them today.


 Thank you very much,
 Giuseppe

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] devstack with ml2 setup problem

2013-09-06 Thread Rich Curran (rcurran)
Hi Luke -

Make sure you have the latest devstack which contains the ml2 hooks.

Remove the comments on the ML2 info below. Note that the mech driver keyword 
for the logger is 'logger', not 'log'.
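
In other words, the ML2 section of your localrc would end up looking roughly
like this (everything else staying as you posted it):

    Q_PLUGIN=ml2
    ENABLE_TENANT_VLANS=True
    ML2_VLAN_RANGES=mynetwork:100:200
    Q_ML2_PLUGIN_MECHANISM_DRIVERS=logger,ncs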

Thanks,
Rich

From: Luke Gorrie [mailto:l...@tail-f.com]
Sent: Friday, September 06, 2013 11:33 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] devstack with ml2 setup problem

Howdy!

I'm trying to get ml2 up and running with devstack. I'm falling at the first 
hurdle - getting devstack working with Neutron. I would love a hint!

Here is my localrc:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
# Optional, to enable tempest configuration as part of devstack
#enable_service tempest

#Q_PLUGIN=ml2
#ENABLE_TENANT_VLANS=True
#ML2_VLAN_RANGES=mynetwork:100:200
#Q_ML2_PLUGIN_MECHANISM_DRIVERS=log,ncs

DATABASE_PASSWORD=admin
RABBIT_PASSWORD=admin
SERVICE_TOKEN=admin
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

but after firing up stack and logging into the GUI the system seems not 
entirely healthy and I see messages like:

Error: Unauthorized: Unable to retrieve usage information.
Error: Unauthorized: Unable to retrieve limit information.
Error: Unauthorized: Unable to retrieve project list.

It had looked okay before I tried enabling Neutron.

This is with Ubuntu raring in vagrant/virtualbox with 2GB RAM.

Any tips appreciated!
-Luke

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Revert Baremetal v3 API extension?

2013-09-06 Thread Devananda van der Veen
+1


On Thu, Sep 5, 2013 at 3:47 AM, Alex Xu x...@linux.vnet.ibm.com wrote:

 +1

 On 2013-09-05 17:51, John Garbutt wrote:

 +1 I meant to raise that myself when I saw some changes there the other
 day.

 On 4 September 2013 15:52, Thierry Carrez thie...@openstack.org wrote:

 Russell Bryant wrote:

 On 09/04/2013 10:26 AM, Dan Smith wrote:

 Hi all,

 As someone who has felt about as much pain as possible from the
 dual-maintenance of the v2 and v3 API extensions, I felt compelled to
 bring up one that I think we can drop. The baremetal extension was
 ported to v3 API before (I think) the decision was made to make v3
 experimental for Havana. There are a couple of patches up for review
 right now that make obligatory changes to one or both of the versions,
 which is what made me think about this.

 Since Ironic is on the horizon and was originally slated to deprecate
 the in-nova-tree baremetal support for Havana, and since v3 is only
 experimental in Havana, I think we can drop the baremetal extension for
 the v3 API for now. If Nova's baremetal support isn't ready for
 deprecation by the time we're ready to promote the v3 API, we can
 re-introduce it at that time. Until then, I propose we avoid carrying
 it for a soon-to-be-deprecated feature.

 Thoughts?

 Sounds reasonable to me.  Anyone else have a differing opinion about it?

 +1

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-06 Thread Davanum Srinivas
+1 to NACK on the FFE. Let's do this first thing in Icehouse :)


On Fri, Sep 6, 2013 at 10:06 AM, Daniel P. Berrange berra...@redhat.comwrote:

 On Fri, Sep 06, 2013 at 09:49:18AM -0400, Russell Bryant wrote:
  On 09/06/2013 08:30 AM, Mark McLoughlin wrote:
   On Fri, 2013-09-06 at 10:59 +0200, Thierry Carrez wrote:
   Mark McLoughlin wrote:
   I'd like to request a feature freeze exception for the final (and
   admittedly the largest) patch in the series of 40 patches to port Nova
   to oslo.messaging:
  
 https://review.openstack.org/39929
  
   I'm generally averse to granting feature freeze exceptions to code
   refactoring: the user benefit of having them in the release is
   nonexistent, while they introduce some risk by changing deep code
   relatively late in the cycle. That's why I prefer those to be targeted
   to earlier development milestones; this avoids having to make hard calls
   once all the work is done and almost completed.
  
   Yes, absolutely understood.
  
   To be clear - while I think there's a strong case for an exception here,
   I am very close to this, so I would be cool with a denial of this
   request.
  
   That said, if the risk is under control and the patch is ready to merge,
   I'm fine with this as long as there is some other benefit in having it
   *in* the release rather than landed first thing in Icehouse.
  
   Would having it *in* the release facilitate stable/havana branch
   maintenance, for example ?
  
   While this change doesn't provide any immediate user-visible benefit, it
   would be massively helpful in maintaining momentum behind the effort all
   through the Havana cycle to move the RPC code from oslo-incubator into a
   library.
  
   Could you expand on why it would be a lot more helpful to have it in
   the release rather than early in Icehouse?
  
   And to have all cards on the table, how much sense would the alternative
   make (i.e. not land this final patch while a lot of this feature code
   has already been merged)?
  
   If the patch was merged now, while it's not a user-visible feature
   per-se, I think oslo.messaging is something we would celebrate in the
   Havana release e.g.
  
  OpenStack continues to add new projects and developers while, in
  parallel, aggressively taking steps to enable the project to scale.
  The oslo.messaging library added in the Havana release is an example
  of code which was previously copied and pasted between projects and
  has now been re-factored out into a shared library with a clean
 API. This library will provide a structured way for OpenStack
 projects to collaborate on adopting new messaging patterns and
 features without the disruption of incompatible API changes nor the
 pain of keeping copied and pasted code in sync.
  
   Obviously, as Oslo PTL, I think this is an important theme that we
   should continue to emphasise and build momentum around. The messaging
   library is by far the most significant example so far of how this
   process of creating libraries can be effective. Nova using the library
   in the Havana release would (IMHO) be the signal that this process is
   working and hopefully inspire others to take a similar approach with
   other chunks of common code.
  
   Conversely, if we delayed merging this code until Icehouse, I think it
   would leave somewhat of a question mark hanging over oslo.messaging and
   Oslo going into the summit:
  
  Is this oslo.messaging thing for real? Or will it be abandoned and we
  need to continue with the oslo-incubator RPC code? Why is it taking so
  long to create these libraries? This sucks!

 I think you're under-selling yourself here. As an interested 3rd party to
 oslo development, that certainly isn't my impression of what has happened
 with oslo.messaging development until this point.

 I think you have a pretty credible story to tell about the work done to
 get oslo.messaging to where it is now for Havana, even without it being
 used by Nova, and I don't think anyone could credibly claim it is dead or
 moving too slowly. Reusable library design is hard to get right and takes
 time if you want to support a stable API long term.

 I don't know about your non-Nova plans, but doing the final conversion in
 the Icehouse timeframe may give you time in which to convert other OpenStack
 projects to oslo.messaging at the same time, so we would have everything
 aligned at once.

   That's all very meta, I know. But I do really believe making progress
   with Oslo libraries is important for OpenStack long-term. While it's not
   a user-visible benefit per-se, I do think this work benefits the project
   broadly and is also something worth marketing.
  
   We measure our progress in terms of what we achieved in each release
   cycle. I think we've made great progress on oslo.messaging in Havana,
   but unless a project uses it in the release it won't be something we
   really celebrate until 

[openstack-dev] [Trove] Modify source code for Postgres engine

2013-09-06 Thread Giuseppe Galeota
Dear all,
this is a technical question. I would like to modify the source code of
Trove in order to create database instances using the Postgres engine. I think
that it is necessary to modify the create method in the
InstanceController class. Is that right? What other things should I modify?

Thank you very much.
Giuseppe
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: bp/instance-group-api-extension

2013-09-06 Thread Gary Kotton


On 9/6/13 5:07 PM, Russell Bryant rbry...@redhat.com wrote:

On 09/06/2013 05:06 AM, Thierry Carrez wrote:
 Debojyoti Dutta wrote:
 As per my IRC chats with dansmith, russellb, this feature needs the
 user auth checks (being taken care of in
 https://bugs.launchpad.net/nova/+bug/1221396).

 Dan has some more comments 

 Could we please do a FFE for this one  has been waiting for a long
 time and we have done all that we were asked relatively quickly 
 in H2 it was gated by API object refactor. Most of the current
 comments (pre last one by Dan) are due to
 https://bugs.launchpad.net/nova/+bug/1221396
 
 This sounds relatively self-contained and ready. If you can get it
 merged (with the additional bugfix) before the release meeting Tuesday,
 personally I'm fine with it.
 

I originally was fine with a FFE for this.  However, after some
discussion yesterday, it seems there are quite a few comments on the
review.  There's more iterating to do on this, so I don't have a high
confidence that we can merge it in time.

I would hate to grant the FFE, have you work super hard to iterate on
this quickly, and it still not make it in time.  I think at this point
we should just defer to Icehouse to ensure that there is proper time to
do the updates and get good review on them.

I am a little sad that this has not been granted an FFE. We worked really hard
on this. To be honest, it was ready for review at the end of H2. I guess
that our goal will now be to get this valuable feature in at the beginning of
the Icehouse release.

Thanks
Gary


NACK on the FFE.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-09-06 Thread Baldwin, Carl (HPCS Neutron)
This is a great lead on 'pool_recycle'.  Thank you.  Last night I was
poking around in the sqlalchemy pool code but hadn't yet come to a
complete solution.  I will do some testing on this today and hopefully
have an updated patch out soon.

Carl

From:  Yingjun Li liyingjun1...@gmail.com
Reply-To:  OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Date:  Thursday, September 5, 2013 8:28 PM
To:  OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Neutron] The three API server multi-worker
process patches.


+1 for Carl's patch, and I have abandoned my patch.

About the `MySQL server gone away` problem, I fixed it by setting
'pool_recycle' to 1 in db/api.py.

On Friday, September 6, 2013, Nachi Ueno wrote:

Hi Folks

We chose https://review.openstack.org/#/c/37131/ as the patch to go on with.
We are also discussing it in that patch.

Best
Nachi



2013/9/5 Baldwin, Carl (HPCS Neutron) carl.bald...@hp.com:
 Brian,

 As far as I know, no consensus was reached.

 A problem was discovered that happens when spawning multiple processes.
 The mysql connection seems to go away after between 10-60 seconds in my
 testing causing a seemingly random API call to fail.  After that, it is
 okay.  This must be due to some interaction between forking the process
 and the mysql connection pool.  This needs to be solved but I haven't had
 the time to look in to it this week.

 I'm not sure if the other proposal suffers from this problem.

 Carl

 On 9/4/13 3:34 PM, Brian Cline bcl...@softlayer.com wrote:

Was any consensus on this ever reached? It appears both reviews are still
open. I'm partial to review 37131 as it attacks the problem more
concisely and, as mentioned, combines the efforts of the two more
effective patches. I would echo Carl's sentiments that it's an easy
review minus the few minor behaviors discussed on the review thread
today.

We feel very strongly about these making it into Havana -- being confined
to a single neutron-server instance per cluster or region is a huge
bottleneck--essentially the only controller process with massive CPU
churn in environments with constant instance churn, or sudden large
batches of new instance requests.

In Grizzly, this behavior caused addresses not to be issued to some
instances during boot, due to quantum-server thinking the DHCP agents
timed out and were no longer available, when in reality they were just
backlogged (waiting on quantum-server, it seemed).

Is it realistically looking like this patch will be cut for h3?

--
Brian Cline
Software Engineer III, Product Innovation

SoftLayer, an IBM Company
4849 Alpha Rd, Dallas, TX 75244
214.782.7876 direct  |  bcl...@softlayer.com


-Original Message-
From: Baldwin, Carl (HPCS Neutron) [mailto:carl.bald...@hp.com]
Sent: Wednesday, August 28, 2013 3:04 PM
To: Mark McClain
Cc: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] The three API server multi-worker
process patches.

All,

We've known for a while now that some duplication of work happened with
respect to adding multiple worker processes to the neutron-server.  There
were a few mistakes made which led to three patches being done
independently of each other.

Can we settle on one and accept it?

I have changed my patch at the suggestion of one of the other 2 authors,
Peter Feiner, in an attempt to find common ground.  It now uses openstack
common code and therefore it is more concise than any of the original
three and should be pretty easy to review.  I'll admit to some bias
toward
my own implementation but most importantly, I would like for one of these
implementations to land and start seeing broad usage in the community
earlier than later.

Carl Baldwin

PS Here are the two remaining patches.  The third has been abandoned.

https://review.openstack.org/#/c/37131/
https://review.openstack.org/#/c/36487/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FFE Request-ish: oslo.version split

2013-09-06 Thread Monty Taylor


On 09/06/2013 10:51 AM, Thierry Carrez wrote:
 Monty Taylor wrote:
 Hey all,

 One of the things we were working on this cycle was splitting the
 version handling code out of pbr into its own library - oslo.version.
 This work did not make Feature Freeze. I wanted to check in with folks
 to take temperature about whether it's worth attempting to get it done
 by the havanna release. The code itself isn't changing - it's really
 just splitting one library in to two so that pbr doesn't need to be a
 run time requirement and so that separation of concerns can be handled
 better.

 I don't have strong feelings either way, as pbr-as-runtime-dep doesn't
 really bother me. Thoughts?
 
 I would rather not touch versioning code post-H3 since the milestone
 drop serves to exercise the whole infrastructure and check that it
 should be ready to roll for RCs... we lived with it in for all havana,
 it can probably wait for icehouse ?

Sounds great to me!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Scaling of TripleO

2013-09-06 Thread James Slagle
The purpose of this email is to brainstorm some ideas about how TripleO could
be scaled out for large deployments.

Idea 0
--
According to what we've read (and watched), the TripleO idea is that you're
going to have a single undercloud composed of at least 2 machines running the
undercloud services (nova, glance, keystone, neutron, rabbitmq, mysql) in HA
mode. The way you would add horizontal scale to this model is by adding more
undercloud machines running the same stack of services in HA mode, so they
could share the workload.

Does this match others current thinking about TripleO at scale?

I attempted to diagram this idea at [1].  Sorry, if it's a bit crude :).  A
couple points to mention about the diagram:
 * It's showing scalability as opposed to full HA: there's a shared message
   bus, there would be shared DBs, a load balancer in front of API services, etc.
 * For full HA, you can add additional nodes that don't share single points of
   failure (like the bus).
 * The green lines are meant to show the management network domain, and can be
   thought of roughly as 'managed by'.
 * Logical Rack is just meant to imply a grouping of baremetal hardware.  It
   might be a physical rack, but it doesn't have to be.
 * Just to note, there's a box there representing where we feel Tuskar would
   get plugged in.

Pros/Cons (+/-):
+ Easy to install (You start with only one machine in the datacenter running
  the whole stack of services in HA mode, and from there you can just expand it
  to another machine, enroll the rest of the machines in the datacenter, and
  you're ready to go.)
+ Easy to upgrade (Since we have full HA, you could turn off one machine in the
  control plane, triggering an HA failover, update that machine, bring it back
  up, turn off another machine in the control plane, etc.)
- Every node in the overcloud has to be able to talk back to controller rack
  (e.g. heat/nova)
- Possible performance issues when bringing up a large number of machines.
  (think hyperscale).
- Large failure domain.  If the HA cluster fails, you've lost all visibility
  into and management of the infrastructure.
- What does the IPMI network look like in this model?  Can we assume full IPMI
  connectivity across racks, logical or physical?

In addition, here are a couple of other ideas to bring to the conversation.
Note that all the ideas assume 1 Overcloud.

Idea 1
--
The thought here is to have 1 Undercloud again, but be able to deploy N
Undercloud Leaf Nodes as needed for scale.  The Leaf Node is a smaller subset
of services than what is needed on the full Undercloud Node.  Essentially, it
is enough services to do baremetal provisioning, Heat orchestration, and
Neutron for networking.  Diagram of this idea is at [2].  In the diagram, there
is one Leaf Node per logical rack.

In this model, the Undercloud provisions and deploys Leaf Nodes as needed when
new hardware is added to the environment.  The Leaf Nodes then handle
deployment requests from the Undercloud for the Overcloud nodes.

As such, there is some scalability built into the architecture in a distributed
fashion.  Adding more scalability and HA would be accomplished in a similar
fashion to Idea 0, by adding additional HA Leaf Nodes, etc.

Pros/Cons (+/-):
+ As scale is added with more Leaf Nodes, it's a smaller set of services.
- Additional image management of the Leaf Node image
- Additional rack space wasted for the Leaf Node
+ Smaller failure domain as the logical rack is only dependent on the Leaf
  Node.
+ The ratio of HA Management Nodes would be smaller because of the offloaded
  services.
+ Better security due to IPMI/RMCP isolation within the rack.

Idea 2
--
In this idea, there are N Underclouds, each with the full set of Undercloud
services.  As new hardware is brought online, an Undercloud is deployed (if
desired) for scalability.  Diagram for this idea is at [3].

A single Control Undercloud handles deployment and provisioning of the other
Underclouds.  This is similar to the seed vm concept of TripleO for Undercloud
deployment.  However, in this model, the Control Undercloud is not meant to be
short lived or go away, so we didn't want to call this the seed directly.

Again, HA can be added in a similar fashion to the other ideas.

In a way, this idea is not all that different from Idea 0.  It could be thought
of as using an Idea 0 to deploy other Idea 0's.  However, it allows for some
additional constraints around network and security with the isolation of each
Undercloud in the logical rack.

Pros/Cons (+/-):
+ network/security isolation
- multiple Undercloud complexity
- Additional rack space wasted for the N Underclouds.
+ Smaller failure domain as the logical rack is only dependent on it's managing
  Undercloud.
+ Better security due to IPMI/RMCP isolation within the rack.
+ Doesn't necessarily preclude Idea 0

[1] http://fedorapeople.org/~slagle/drawing0.png
[2] http://fedorapeople.org/~slagle/drawing1.png
[3] 

[openstack-dev] [Murano] Alpha version of Murano User Guide just released!

2013-09-06 Thread Ekaterina Fedorova
Hi everyone!

We would like to introduce to you the initial version of the Murano User Guide!
It's available in the attachment and here:
http://murano-docs.github.io/0.2/user-guide/content/ch02.html

Check out Murano with this document!

Regards,
Ekaterina Fedorova
Junior Software Engineer,
Mirantis, Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] How to do nova v3 tests in tempest

2013-09-06 Thread David Kranz

On 09/04/2013 09:11 AM, Zhu Bo wrote:

Hi,
  I'm working on bp:nova-v3-tests in tempest.  Most of the nova tests in
tempest have been ported to v3 and sent off for review, but we got feedback
that there was mass code duplication and a suggestion to do this by
inheritance instead. So I have sent another patch that does this by
inheritance. The drawback of that approach is that it is not easy to later
drop the v2 client and tests.
I want to get more feedback on this blueprint to make sure we do this in
the right way: which of the two is better, or is there another, better way?
I'd appreciate every suggestion and comment.

The first way does this in separate files:
https://review.openstack.org/#/c/39609/ and 
https://review.openstack.org/#/c/39621/6


The second way does this by inheritance:
https://review.openstack.org/#/c/44876/

Thanks & Best Regards

Ivan

Ivan, I took a look at this. My first thought was that subclassing would 
be good because it could avoid code duplication. But when I looked at 
the patch I saw that although there are subclasses, most of the changes 
are version ifs inside the base class code. IMO that gives us the 
worst of both worlds and we would be better off just copying as we did 
with the new image API. It is not great, but I think that is the least 
of evils here. Anyone else have a different view?
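
To make the contrast concrete, here is a rough sketch of the two shapes being
compared (hypothetical names, not the actual tempest classes or clients):

    # Shape 1: version checks inside the shared base class (the pattern
    # being objected to above):
    class ServersTest(object):
        def _get_server(self, server_id):
            if self._api_version == 3:
                return self.v3_client.get_server(server_id)
            return self.client.get_server(server_id)

    # Shape 2: pure subclassing -- the base stays version-agnostic and the
    # v3 subclass overrides only what differs, so dropping v2 later just
    # means deleting the v2 class:
    class ServersTestBase(object):
        def _get_server(self, server_id):
            return self.client.get_server(server_id)

    class ServersV3Test(ServersTestBase):
        def _get_server(self, server_id):
            return self.v3_client.get_server(server_id)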


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-06 Thread Davanum Srinivas
Sounds like a plan, Dan. Thanks.


On Fri, Sep 6, 2013 at 3:12 PM, Dan Prince dpri...@redhat.com wrote:



 - Original Message -
  From: Bob Ball bob.b...@citrix.com
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org, Mark McLoughlin mar...@redhat.com
  Cc: Dan Prince (dpri...@redhat.com) dpri...@redhat.com
  Sent: Friday, September 6, 2013 10:21:45 AM
  Subject: RE: [openstack-dev] [Nova] FFE Request: oslo-messaging
 
  Puppet is failing to build the packages - which is probably
 understandable
  given it's a refactor.
 
  I'm not sure if this is something that points to a problem with the
 refactor
  or the packaging itself - hopefully Dan can review the logs, or if others
  want to see more of the details look at the logs at
  https://review.openstack.org/39929.
 
  I don't see any point in triggering another Smokestack run as it's
 likely to
  fail again until the packaging issue has been resolved by Dan's team.

 FTR, SmokeStack should have oslo-messaging packages available for use as
 of today. Once I kick the tires on it a bit I'll go and re-run the existing
 Nova oslo.messaging branches so we have a better idea of how this works w/
 Qpid, etc.

 So whether we wait till Icehouse or not, we'll be ready.

 Dan

 
  Bob
 
  From: Davanum Srinivas [mailto:dava...@gmail.com]
  Sent: 05 September 2013 17:04
  To: Mark McLoughlin; OpenStack Development Mailing List
  Subject: Re: [openstack-dev] [Nova] FFE Request: oslo-messaging
 
  Thanks Mark. Looks like we need to get someone to manually trigger
 Smokestack
  to run against this review at least once, since I don't see any +1's from
  Smokestack for some reason.
 
  On Thu, Sep 5, 2013 at 11:49 AM, Mark McLoughlin
  mar...@redhat.com wrote:
  Hi
 
  On Thu, 2013-09-05 at 10:43 -0400, Davanum Srinivas wrote:
   Mark,
  
   Has this changeset get through a full tempest with QPid enabled?
  No, I've only done local testing with the qpid transport to date.
 
  I think Smokestack is the only CI tool actively testing the qpid driver.
  I ran out of time adding oslo.messaging to Smokestack before heading off
  on vacation, but I expect I'll get to it next week.
 
  Cheers,
  Mark.
 
 
  
   thanks,
   dims
  
  
   On Thu, Sep 5, 2013 at 10:17 AM, Mark McLoughlin
    mar...@redhat.com wrote:
  
Hi
   
I'd like to request a feature freeze exception for the final (and
admittedly the largest) patch in the series of 40 patches to port
 Nova to
oslo.messaging:
   
  https://review.openstack.org/39929
   
While this change doesn't provide any immediate user-visible
 benefit, it
would be massively helpful in maintaining momentum behind the effort
 all
through the Havana cycle to move the RPC code from oslo-incubator
 into a
library.
   
In terms of risk of regression, there is certainly some risk but that
risk
is mitigated by the fact that the core code of each of the transport
drivers has been modified minimally. The idea was to delay
 re-factoring
these drivers until we were sure that we hadn't caused any
 regressions in
Nova. The code has been happily passing the devstack/tempest based
integration tests for 10 days now.
   
Thanks,
Mark.
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
   
  
  
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Davanum Srinivas :: http://davanum.wordpress.com
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-09-06 Thread Baldwin, Carl (HPCS Neutron)
This pool_recycle parameter is already configurable using the idle_timeout
configuration variable in neutron.conf.  I tested this with a value of 1
as suggested and it did get rid of the mysql server gone away messages.

This is a great clue but I think I would like a long-term solution that
allows the end-user to still configure this like they were before.

I'm currently thinking along the lines of calling something like
pool.dispose() in each child immediately after it is spawned.  I think
this should invalidate all of the existing connections so that when a
connection is checked out of the pool a new one will be created fresh.
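
For illustration, a rough sketch of what I mean (the engine URL and the
worker function below are made-up placeholders, not Neutron's actual
plumbing):

    import os
    from sqlalchemy import create_engine

    # Parent process: the engine (and its connection pool) already exists
    # before we fork, e.g. created when the DB layer was first imported.
    engine = create_engine('mysql://user:pass@localhost/neutron',
                           pool_recycle=3600)  # idle_timeout still honored

    def run_worker():
        pass  # placeholder for the real API worker loop

    def spawn_worker():
        pid = os.fork()
        if pid == 0:
            # Child: throw away the connections inherited from the parent so
            # the pool opens fresh ones on first checkout instead of sharing
            # sockets with the parent and sibling workers.
            engine.pool.dispose()
            run_worker()
        return pid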

Thoughts?  I'll be testing.  Hopefully, I'll have a fixed patch up soon.

Cheers,
Carl

From:  Yingjun Li liyingjun1...@gmail.com
Reply-To:  OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Date:  Thursday, September 5, 2013 8:28 PM
To:  OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Neutron] The three API server multi-worker
process patches.


+1 for Carl's patch, and I have abandoned my patch.

About the `MySQL server gone away` problem, I fixed it by setting
'pool_recycle' to 1 in db/api.py.

On Friday, September 6, 2013, Nachi Ueno wrote:

Hi Folks

We chose https://review.openstack.org/#/c/37131/ as the patch to go on with.
We are also discussing it in that patch.

Best
Nachi



2013/9/5 Baldwin, Carl (HPCS Neutron) carl.bald...@hp.com:
 Brian,

 As far as I know, no consensus was reached.

 A problem was discovered that happens when spawning multiple processes.
 The mysql connection seems to go away after between 10-60 seconds in my
 testing causing a seemingly random API call to fail.  After that, it is
 okay.  This must be due to some interaction between forking the process
 and the mysql connection pool.  This needs to be solved but I haven't had
 the time to look in to it this week.

 I'm not sure if the other proposal suffers from this problem.

 Carl

 On 9/4/13 3:34 PM, Brian Cline bcl...@softlayer.com wrote:

Was any consensus on this ever reached? It appears both reviews are still
open. I'm partial to review 37131 as it attacks the problem more
concisely and, as mentioned, combines the efforts of the two more
effective patches. I would echo Carl's sentiments that it's an easy
review minus the few minor behaviors discussed on the review thread
today.

We feel very strongly about these making it into Havana -- being confined
to a single neutron-server instance per cluster or region is a huge
bottleneck--essentially the only controller process with massive CPU
churn in environments with constant instance churn, or sudden large
batches of new instance requests.

In Grizzly, this behavior caused addresses not to be issued to some
instances during boot, due to quantum-server thinking the DHCP agents
timed out and were no longer available, when in reality they were just
backlogged (waiting on quantum-server, it seemed).

Is it realistically looking like this patch will be cut for h3?

--
Brian Cline
Software Engineer III, Product Innovation

SoftLayer, an IBM Company
4849 Alpha Rd, Dallas, TX 75244
214.782.7876 direct  |  bcl...@softlayer.com


-Original Message-
From: Baldwin, Carl (HPCS Neutron) [mailto:carl.bald...@hp.com]
Sent: Wednesday, August 28, 2013 3:04 PM
To: Mark McClain
Cc: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] The three API server multi-worker
process patches.

All,

We've known for a while now that some duplication of work happened with
respect to adding multiple worker processes to the neutron-server.  There
were a few mistakes made which led to three patches being done
independently of each other.

Can we settle on one and accept it?

I have changed my patch at the suggestion of one of the other 2 authors,
Peter Feiner, in an attempt to find common ground.  It now uses openstack
common code and therefore it is more concise than any of the original
three and should be pretty easy to review.  I'll admit to some bias
toward
my own implementation but most importantly, I would like for one of these
implementations to land and start seeing broad usage in the community
earlier than later.

Carl Baldwin

PS Here are the two remaining patches.  The third has been abandoned.

https://review.openstack.org/#/c/37131/
https://review.openstack.org/#/c/36487/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: bp/instance-group-api-extension

2013-09-06 Thread Debojyoti Dutta
+1 Gary

If this had been allowed in H2 and not stopped by the then-mandatory
requirement to use API objects, this feature would definitely have
seen the light of day (and we would have improved it to use
the objects/v3 API anyway). This API leads to a bunch of new features
that make OpenStack even better in terms of resource allocation,
scheduling, policies, etc. Hence a little sad.

Hopefully we will have better luck next time :) Thanks to all the
reviewers esp Chris Yeoh, Dan Smith, Alex Xu and Russell Bryant for
their time. Thanks Thierry for trying to help with the FFE!

debo

On Fri, Sep 6, 2013 at 9:09 AM, Gary Kotton gkot...@vmware.com wrote:


 On 9/6/13 5:07 PM, Russell Bryant rbry...@redhat.com wrote:

On 09/06/2013 05:06 AM, Thierry Carrez wrote:
 Debojyoti Dutta wrote:
 As per my IRC chats with dansmith, russellb, this feature needs the
 user auth checks (being taken care of in
 https://bugs.launchpad.net/nova/+bug/1221396).

 Dan has some more comments 

 Could we please do a FFE for this one  has been waiting for a long
 time and we have done all that we were asked relatively quickly 
 in H2 it was gated by API object refactor. Most of the current
 comments (pre last one by Dan) are due to
 https://bugs.launchpad.net/nova/+bug/1221396

 This sounds relatively self-contained and ready. If you can get it
 merged (with the additional bugfix) before the release meeting Tuesday,
 personally I'm fine with it.


I originally was fine with a FFE for this.  However, after some
discussion yesterday, it seems there are quite a few comments on the
review.  There's more iterating to do on this, so I don't have a high
confidence that we can merge it in time.

I would hate to grant the FFE, have you work super hard to iterate on
this quickly, and it still not make it in time.  I think at this point
we should just defer to Icehouse to ensure that there is proper time to
do the updates and get good review on them.

 I am a little sad that this has not been granted an FFE. We worked really hard
 on this. To be honest, it was ready for review at the end of H2. I guess
 that our goal will now be to get this valuable feature in at the beginning of
 the Icehouse release.

 Thanks
 Gary


NACK on the FFE.

Thanks,

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-Debo~

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

2013-09-06 Thread Benjamin, Bruce P.
We request that volume encryption [1] be granted an exception to the feature 
freeze for Havana-3.  Volume encryption [2] provides a usable layer of 
protection to user data as it is transmitted through a network and when it is 
stored on disk. The main patch [2] has been under review since the end of May 
and had received two +2s in mid-August.  Subsequently, support was requested 
for booting from encrypted volumes and integrating a working key manager [3][4] 
as a stipulation for acceptance, and both these requests have been satisfied 
within the past week. The risk of disruption to deployments from this exception 
is minimal because the volume encryption feature is unused by default.  Note 
that the corresponding Cinder support for this feature has already been 
approved, so acceptance into Nova will keep this code from becoming abandoned.  
 Thank you for your consideration.

The APL Development Team

[1] https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes
[2] https://review.openstack.org/#/c/30976/
[3] https://review.openstack.org/#/c/45103/
[4] https://review.openstack.org/#/c/45123/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Scaling of TripleO

2013-09-06 Thread Robert Collins
Hey James, thanks for starting this thread : it's clear we haven't
articulated what we've discussed well enough [it's been a slog
building up from the bottom...]

I think we need to set specific goals - latency, HA, diagnostics -
before designing scaling approaches makes any sense : we can't achieve
things we haven't set out to achieve.

For instance, if the entire set of goals was 'support 10K-node
overclouds', I think we can do that today with a 2-machine undercloud
control plane in full-HA mode.

So we need to be really clear : are we talking about scaling, or
latency @ scale, or ops @ scale - and identify failure modes we should
cater, vs ones we shouldn't cater to (or that are outside of our
domain e.g. 'you need a fully end to end multipath network if you want
network resiliency').

My vision for TripleO/undercloud and scale in the long term is:
- A fully redundant self-healing undercloud
  - (implies self hosting)
- And appropriate anti-affinity aggregates so that common failure
domains can be avoided
- With a scale-up Heat template that identifies the way to grow capacity
- Able to deploy to 1K overcloud in  an hour(*)
- And 10K [if we can get a suitable test environment] in  2 hours

So that's sublinear performance degradation as scale increases.

For TripleO/overcloud and scale, that's an area where we need to synthesize
best practices from existing deployers - e.g. cells and so on - to
deliver K+ scale configurations, but it's fundamentally decoupled from
the undercloud: Heat is growing cross-cloud deployment facilities, so
if we need multiple undercloud's as a failure mitigation strategy, we
can deploy one overcloud across multiple underclouds that way. I'm not
convinced we need that complexity though: large network fabrics are
completely capable of shipping overcloud images to machines in a
couple of seconds per machine...

(*): Number pulled out of hat. We'll need to drive it lower over time,
but given we need time to check new builds are stable, and live
migrate thousands of VMs concurrently across hundreds of hypervisors,
I think 1 hour for a 1K node cloud deployment is sufficiently
aggressive for now.

Now, how to achieve this?

The current all-in-one control plane is like that for three key reasons:
 - small clouds need low-overhead control planes; running 12 or 15
machines to deploy a 3-node overcloud doesn't make sense.
 - bootstrapping an environment has to start on one machine by definition
 - we haven't finished enough of the overall plumbing story to be
working on the scale-out story in much detail
(I'm very interested in where you got the idea that
all-nodes-identical was the scaling plan for TripleO - it isn't :))

Our d-i-b elements are already suitable for scaling different
components independently - that's why nova and nova-kvm are separate:
nova installs the nova software, nova-kvm installs the additional bits
for a kvm hypervisor and configures the service to talk to the bus :
this is how the overcloud scales.

Now that we have reliable all-the-way-to overcloud deployments working
in devtest we've started working on the image-based updates
(https://etherpad.openstack.org/tripleo-image-updates) which is a
necessary precondition to scaling the undercloud control plane -
because if you can't update a machine's role, it's really much harder
to evolve a cluster.

The exact design of a scaled cluster isn't pinned down yet : I think
we need much more data before we can sensibly do it: both on
requirements - what's valuable for deployers - and on the scaling
characteristics of nova baremetal/Ironic/keystone etc.

All that said, some specific thoughts on the broad approaches you sketched:
Running all services on all undercloud nodes would drive a lot of
complexity in scale-out : there's a lot of state to migrate to new
Galera nodes, for instance. I would hesitate to structure the
undercloud like that.

I don't really follow some of the discussion in Idea 1 : but scaling
out things that need scaling out seems pretty sensible. We have no
data suggesting how many thousands of machines we'll get per nova
baremetal machine at the moment, so it's very hard to say what
services will need scaling at what points in time yet : but clearly we
need to support it at some scale. OTOH once we scale to 'an entire
datacentre' the undercloud doesn't need to scale further : I think
having each datacentre be a separate deployment cloud makes a lot of
sense.

Perhaps we should just turn the discussion around and ask: what do we get
if we add node type X to an undercloud, what do we get when we add a
new undercloud, and what are the implications thereof?

Firstly, let's talk big picture: N-datacentre clouds. I think the
'build a fabric that clearly exposes performance and failure domains'
approach has been very successful for containing complexity in the fabric and
enabling [app] deployers to reason about performance and failure, so
we shouldn't try to hide that. If you have two datacentres, that
should be two regions, with no shared 

Re: [openstack-dev] OpenLdap for Keystone

2013-09-06 Thread Brad Topol
Hi Mark,

in localrc you can modify the number of services installed using the 
values below.  You can try uncommenting the last two lines shown below to 
dramatically reduce the amount of openstack software installed by 
devstack.

#ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
#disable_all_services
#enable_service key mysql swift rabbit
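
For example, for a minimal Keystone-plus-OpenLDAP setup the uncommented lines
might look roughly like this (the exact service list is just a guess for your
use case; keep swift/rabbit only if you actually need them):

    disable_all_services
    enable_service key mysql ldap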

Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Cindy Willman (919) 268-5296



From:   Miller, Mark M (EB SW Cloud - RD - Corvallis) 
mark.m.mil...@hp.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date:   09/05/2013 12:22 PM
Subject:Re: [openstack-dev] OpenLdap for Keystone



Thanks Brad for the pointer. Is there any way to just install the OpenLdap 
piece and not the entire OpenStack?
 
Mark
 
From: Brad Topol [mailto:bto...@us.ibm.com] 
Sent: Thursday, September 05, 2013 5:37 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] OpenLdap for Keystone
 
devstack has the ability to install keystone with openldap and configure 
them together.  Look at the online doc for stack.sh on how to configure 
devstack to install keystone with openldap. 

Thanks, 

Brad 

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Cindy Willman (919) 268-5296 



From:    Miller, Mark M (EB SW Cloud - RD - Corvallis) mark.m.mil...@hp.com
To:      OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date:    09/04/2013 06:32 PM
Subject: [openstack-dev] OpenLdap for Keystone




Hello, 
  
I have been struggling to configure OpenLdap to work with Keystone. 
I have found a gazillion snippets about this topic, but no step-by-step 
documents on how to install and configure OpenLdap so it will work with 
current Keystone releases. I am hoping that someone has a tested answer 
for me. 
  
Thanks in advance, 
  
Mark Miller
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Scaling of TripleO

2013-09-06 Thread Clint Byrum
Excerpts from James Slagle's message of 2013-09-06 10:27:32 -0700:
 The purpose of this email is to brainstorm some ideas about how TripleO could
 be scaled out for large deployments.
 

Thanks for thinking this through and taking the time to vet the ideas
TripleO has presented thus far.

 Idea 0
 --
 According to what we've read (and watched), the TripleO idea is that you're
 going to have a single undercloud composed of at least 2 machines running the
 undercloud services (nova, glance, keystone, neutron, rabbitmq, mysql) in HA
 mode. The way you would add horizontal scale to this model is by adding more
 undercloud machines running the same stack of services in HA mode, so they
 could share the workload.
 
 Does this match others current thinking about TripleO at scale?
 
 I attempted to diagram this idea at [1].  Sorry if it's a bit crude :).  A
 couple points to mention about the diagram:
  * It's showing scalability as opposed to full HA: there's a shared message
    bus, there would be shared DBs, a load balancer in front of the API
    services, etc.
  * For full HA, you can add additional nodes that don't share single points
    of failure (like the bus).

I'm not sure I agree that the bus is a SPOF. Both qpid and rabbit+kombu can
operate in HA active/active mode, so why would those be SPOFs? Certainly
not a _likely_ SPOF.

  * The green lines are meant to show the management network domain, and can be
    thought of roughly as "managed by".
  * Logical Rack is just meant to imply a grouping of baremetal hardware.  It
    might be a physical rack, but it doesn't have to be.
  * Just to note, there's a box there representing where we feel Tuskar would
    get plugged in.
 
 Pros/Cons (+/-):
 + Easy to install (You start with only one machine in the datacenter running
   the whole stack of services in HA mode, and from there you can just expand
   it to another machine, enroll the rest of the machines in the datacenter,
   and you're ready to go.)
 + Easy to upgrade (Since we have full HA, you could then turn off one machine
   in the control plane triggering an HA failover, update that machine, bring
   it up, turn off another machine in the control plane, etc...)
 - Every node in the overcloud has to be able to talk back to the controller
   rack (e.g. heat/nova)

Note that this is just OpenStack's architecture. Heat and Nova both
have separated their API from their smarts to make it very
straightforward to isolate tenant access from deeper resources. So each node
just needs access to nova and heat API endpoints, and both in very
limited, predictable capacities. I think that mitigates this to a very
minor concern.

 - Possible performance issues when bringing up a large number of machines.
   (think hyperscale).

This is perhaps the largest concern, but it is why we've always suggested
that eventually the scaled-out compute boxes will work better with some
hardware affinity.

 - Large failure domain.  If the HA cluster fails, you've lost all visibility
   into and management of the infrastructure.

The point of HA is to make the impact and frequency of these failures
very small. So this one is also mitigated by doing HA well.

 - What does the IPMI network look like in this model?  Can we assume full IPMI
   connectivity across racks, logical or physical?
 

Undercloud compute needs to be able to access IPMI. The current nova
baremetal requires assigning specific hardware to specific compute nodes,
so each rack can easily get its own compute node.
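
To make that concrete, enrollment with the current nova baremetal driver
ties each node to a specific nova-compute service host, roughly like the
sketch below (the values are made up and the client flags are from memory,
so double-check them):

# the first positional argument is the nova-compute service host that will
# manage this node, so per-rack affinity is just a matter of enrolling each
# rack's nodes against that rack's compute service
nova baremetal-node-create --pm_address=10.1.0.42 --pm_user=admin \
    --pm_password=secret undercloud-compute-rack1 8 32768 500 78:e7:d1:22:33:44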

 In addition, here are a couple of other ideas to bring to the conversation.
 Note that all the ideas assume 1 Overcloud.
 
 Idea 1
 --
 The thought here is to have 1 Undercloud again, but be able to deploy N
 Undercloud Leaf Nodes as needed for scale.  The Leaf Node is a smaller subset
 of services than what is needed on the full Undercloud Node.  Essentially, it
 is enough services to do baremetal provisioning, Heat orchestration, and
 Neutron for networking.  A diagram of this idea is at [2].  In the diagram,
 there is one Leaf Node per logical rack.
 

I think this is very close to the near-term evolution I've been thinking
about for TripleO. We want to get good at deploying a simple architecture
first, but then we know we don't need to be putting the heat engines,
nova schedulers, etc, on every scale-out box in the undercloud.

 In this model, the Undercloud provisions and deploys Leaf Nodes as needed when
 new hardware is added to the environment.  The Leaf Nodes then handle
 deployment requests from the Undercloud for the Overcloud nodes.
 
 As such, there is some scalability built into the architecture in a
 distributed fashion.  Adding more scalability and HA would be accomplished in
 a similar fashion to Idea 0, by adding additional HA Leaf Nodes, etc.
 
 Pros/Cons (+/-):
 + As scale is added with more Leaf Nodes, it's a smaller set of services.
 - Additional image management of the Leaf Node image

I think if you've accepted image management 

Re: [openstack-dev] OpenLdap for Keystone

2013-09-06 Thread Anne Gentle


On Thu, Sep 5, 2013 at 2:57 PM, Miller, Mark M (EB SW Cloud - RD -
Corvallis) mark.m.mil...@hp.com wrote:

  Thanks Dean. I was able to combine sections of each script to make one
 that installs OpenLdap for Keystone.



I would love to have a write up for the docs -- write it on the back of an
envelope, napkin, or your favorite web bits and we'll incorporate it.

Thanks,
Anne



 Mark


 From: Dean Troyer [mailto:dtro...@gmail.com]
 Sent: Thursday, September 05, 2013 9:45 AM

 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] OpenLdap for Keystone

 On Thu, Sep 5, 2013 at 11:18 AM, Miller, Mark M (EB SW Cloud - RD -
 Corvallis) mark.m.mil...@hp.com wrote:

  Thanks Brad for the pointer. Is there any way to just install the
 OpenLdap piece and not the entire OpenStack?


 You can install a Keystone-only DevStack, but I suspect you just want the
 OpenLDAP bits...if that is the case look in lib/keystone[1] and lib/ldap[2]
 for the steps DevStack takes to perform the installation.  The
 configure_keystone()[3] function has all of the bits to configure Keystone.
 


 dt


 [1] https://github.com/openstack-dev/devstack/blob/master/lib/keystone

 [2] https://github.com/openstack-dev/devstack/blob/master/lib/ldap

 [3] https://github.com/openstack-dev/devstack/blob/master/lib/keystone#L102


 --

 Dean Troyer
 dtro...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

2013-09-06 Thread Russell Bryant
On 09/06/2013 04:14 PM, Benjamin, Bruce P. wrote:
 We request that volume encryption [1] be granted an exception to the
 feature freeze for Havana-3.  Volume encryption [2] provides a usable
 layer of protection to user data as it is transmitted through a network
 and when it is stored on disk. The main patch [2] has been under review
 since the end of May and had received two +2s in mid-August. 
 Subsequently, support was requested for booting from encrypted volumes
 and integrating a working key manager [3][4] as a stipulation for
 acceptance, and both these requests have been satisfied within the past
 week. The risk of disruption to deployments from this exception is
 minimal because the volume encryption feature is unused by default. 
 Note that the corresponding Cinder support for this feature has already
 been approved, so acceptance into Nova will keep this code from becoming
 abandoned.   Thank you for your consideration.
 
  
 
 The APL Development Team
 
  
 
 [1] https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes
 
 [2] https://review.openstack.org/#/c/30976/
 
 [3] https://review.openstack.org/#/c/45103/
 
 [4] https://review.openstack.org/#/c/45123/ 

Thanks for all of your hard work on this!  It sounds to me like the code
was ready to go aside from the issues you mentioned above, which have
now been addressed.

I think the feature provides a lot of value and has fairly low risk if
we get it merged ASAP, since it's off by default.  The main risk is
around the possibility of security vulnerabilities.  Hopefully good
review (both from a code and security perspective) can mitigate that
risk.  This feature has been in the works for a while and has very good
documentation on the blueprint, so I take it that it has been vetted by
a number of people already.  It would be good to get ACKs on this point
in this thread.

I would be good with the exception for this, assuming that:

1) Those from nova-core that have reviewed the code are still happy with
it and would do a final review to get it merged.

2) There is general consensus that the simple config based key manager
(single key) does provide some amount of useful security.  I believe it
does, just want to make sure we're in agreement on it.  Obviously we
want to improve this in the future.
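
For reference, my understanding is that the single-key setup boils down to
something like the following in nova.conf -- a sketch only, with the option
name taken from the conf-based key manager patches above, so double-check it
against what finally merges:

[keymgr]
# single site-wide key used by the conf-based key manager; illustration only,
# generate your own value (e.g. 64 hex characters) and keep nova.conf off the
# disks that hold the volumes
fixed_key = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef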

Again, thank you very much for all of your work on this (both technical
and non-technical)!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenLdap for Keystone

2013-09-06 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Here are my rough notes with contributions from web pages 
https://github.com/openstack-dev/devstack/blob/master/lib/ldap and 
https://github.com/openstack-dev/devstack/blob/master/lib/keystone :


apt-get install slapd
apt-get install ldap-utils

LDAP_PASSWORD=password
SLAPPASS=`slappasswd -s $LDAP_PASSWORD`

TMP_MGR_DIFF_FILE=`mktemp -t manager_ldiff.$$.XX.ldif`
LDAP_OLCDB_NUMBER=1
LDAP_ROOTPW_COMMAND=replace

# sed -e "s|\${LDAP_OLCDB_NUMBER}|$LDAP_OLCDB_NUMBER|" \
#     -e "s|\${SLAPPASS}|$SLAPPASS|" \
#     -e "s|\${LDAP_ROOTPW_COMMAND}|$LDAP_ROOTPW_COMMAND|" \
#     $FILES/ldap/manager.ldif.in > $TMP_MGR_DIFF_FILE
sed -e "s|\${LDAP_OLCDB_NUMBER}|$LDAP_OLCDB_NUMBER|" \
    -e "s|\${SLAPPASS}|$SLAPPASS|" \
    -e "s|\${LDAP_ROOTPW_COMMAND}|$LDAP_ROOTPW_COMMAND|" \
    ./manager.ldif.in > $TMP_MGR_DIFF_FILE
ldapmodify -Y EXTERNAL -H ldapi:/// -f $TMP_MGR_DIFF_FILE

# ldapadd -c -x -H ldap://localhost -D dc=Manager,dc=openstack,dc=org \
#     -w $LDAP_PASSWORD -f $FILES/ldap/openstack.ldif
ldapadd -c -x -H ldap://localhost -D dc=Manager,dc=openstack,dc=org \
    -w $LDAP_PASSWORD -f ./openstack.ldif

# (no -W here: a simple-bind password prompt conflicts with -Y EXTERNAL)
ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b "dc=openstack,dc=org" "(objectclass=*)"
ldapadd -c -x -H ldap://localhost -D dc=Manager,dc=openstack,dc=org \
    -w $LDAP_PASSWORD -f ./addUser.ldif


Files:

manager.ldif.in:

dn: olcDatabase={${LDAP_OLCDB_NUMBER}}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=openstack,dc=org
-
replace: olcRootDN
olcRootDN: dc=Manager,dc=openstack,dc=org
-
${LDAP_ROOTPW_COMMAND}: olcRootPW
olcRootPW: ${SLAPPASS}


openstack.ldif:

dn: dc=openstack,dc=org
dc: openstack
objectClass: dcObject
objectClass: organizationalUnit
ou: openstack

dn: ou=UserGroups,dc=openstack,dc=org
objectClass: organizationalUnit
ou: UserGroups

dn: ou=Users,dc=openstack,dc=org
objectClass: organizationalUnit
ou: Users

dn: ou=Roles,dc=openstack,dc=org
objectClass: organizationalUnit
ou: Roles

dn: ou=Projects,dc=openstack,dc=org
objectClass: organizationalUnit
ou: Projects

dn: cn=9fe2ff9ee4384b1894a90878d3e92bab,ou=Roles,dc=openstack,dc=org
objectClass: organizationalRole
ou: _member_
cn: 9fe2ff9ee4384b1894a90878d3e92bab


addUser.ldif

# ldapadd needs a dn line first; the one below is my guess -- adjust the RDN
# to match your Keystone LDAP configuration
dn: cn=Donald Duck,ou=Users,dc=openstack,dc=org
cn: Donald Duck
givenName: Donald
sn: Duck
uid: donaldduck
mail: donald.d...@disney.com
objectClass: top
objectClass: Users
userPassword: secret
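
And the keystone.conf side of it, roughly -- a sketch based on the tree
above; the option names and DNs are what I believe Keystone expects, so
double-check them against your release:

[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://localhost
user = dc=Manager,dc=openstack,dc=org
password = password
suffix = dc=openstack,dc=org
user_tree_dn = ou=Users,dc=openstack,dc=org
role_tree_dn = ou=Roles,dc=openstack,dc=org
tenant_tree_dn = ou=Projects,dc=openstack,dc=org

After editing, restart keystone and sanity-check with something like
keystone user-list.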






From: Anne Gentle [mailto:annegen...@justwriteclick.com]
Sent: Friday, September 06, 2013 2:36 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] OpenLdap for Keystone


On Thu, Sep 5, 2013 at 2:57 PM, Miller, Mark M (EB SW Cloud - RD - Corvallis) 
mark.m.mil...@hp.com wrote:
Thanks Dean. I was able to combine sections of each script to make one that 
installs OpenLdap for Keystone.


I would love to have a write up for the docs -- write it on the back of an 
envelope, napkin, or your favorite web bits and we'll incorporate it.

Thanks,
Anne

Mark

From: Dean Troyer [mailto:dtro...@gmail.com]
Sent: Thursday, September 05, 2013 9:45 AM

To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] OpenLdap for Keystone

On Thu, Sep 5, 2013 at 11:18 AM, Miller, Mark M (EB SW Cloud - RD - Corvallis) 
mark.m.mil...@hp.com wrote:
Thanks Brad for the pointer. Is there any way to just install the OpenLdap 
piece and not the entire OpenStack?

You can install a Keystone-only DevStack, but I suspect you just want the 
OpenLDAP bits...if that is the case look in lib/keystone[1] and lib/ldap[2] for 
the steps DevStack takes to perform the installation.  The 
configure_keystone()[3] function has all of the bits to configure Keystone.

dt

[1] https://github.com/openstack-dev/devstack/blob/master/lib/keystone
[2] https://github.com/openstack-dev/devstack/blob/master/lib/ldap
[3] https://github.com/openstack-dev/devstack/blob/master/lib/keystone#L102

--

Dean Troyer
dtro...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

2013-09-06 Thread Joe Gordon
On Fri, Sep 6, 2013 at 4:17 PM, Bryan D. Payne bdpa...@acm.org wrote:


 2) There is general consensus that the simple config based key manager
 (single key) does provide some amount of useful security.  I believe it
 does, just want to make sure we're in agreement on it.  Obviously we
 want to improve this in the future.


 I believe that it does add value.  For example, if the config is on a
 different disk than the volumes, then this is very useful for ensuring that
 data remains secure on RMA'd disks.


I stand corrected.



 -bryan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Needs approval again after rebase

2013-09-06 Thread Yingjun Li
Hi, the patch https://review.openstack.org/43583 was approved but failed to
get merged. Could any core reviewer take another look now that it has been rebased?

Thanks

Yingjun
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

2013-09-06 Thread Bryan D. Payne
 2) There is general consensus that the simple config based key manager
 (single key) does provide some amount of useful security.  I believe it
 does, just want to make sure we're in agreement on it.  Obviously we
 want to improve this in the future.


I believe that it does add value.  For example, if the config is on a
different disk than the volumes, then this is very useful for ensuring that
data remains secure on RMA'd disks.

-bryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

2013-09-06 Thread Bhandaru, Malini K
Thank you Russell for the special consideration.
+1

The positive vote is for multiple reasons; the JHU team:
1) supported boot from encrypted volumes
2) laid the foundation for securing volumes with keys served from a strong
key manager
3) wrote the blueprint and diligently addressed concerns
4) left the feature off by default.

Regards
malini

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com] 
Sent: Friday, September 06, 2013 2:47 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

On 09/06/2013 04:14 PM, Benjamin, Bruce P. wrote:
 We request that volume encryption [1] be granted an exception to the 
 feature freeze for Havana-3.  Volume encryption [2] provides a usable 
 layer of protection to user data as it is transmitted through a 
 network and when it is stored on disk. The main patch [2] has been 
 under review since the end of May and had received two +2s in mid-August.
 Subsequently, support was requested for booting from encrypted volumes 
 and integrating a working key manager [3][4] as a stipulation for 
 acceptance, and both these requests have been satisfied within the 
 past week. The risk of disruption to deployments from this exception 
 is minimal because the volume encryption feature is unused by default.
 Note that the corresponding Cinder support for this feature has 
 already been approved, so acceptance into Nova will keep this code from 
 becoming
 abandoned.   Thank you for your consideration.
 
  
 
 The APL Development Team
 
  
 
 [1] https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes
 
 [2] https://review.openstack.org/#/c/30976/
 
 [3] https://review.openstack.org/#/c/45103/
 
 [4] https://review.openstack.org/#/c/45123/

Thanks for all of your hard work on this!  It sounds to me like the code was 
ready to go aside from the issues you mentioned above, which have now been 
addressed.

I think the feature provides a lot of value and has fairly low risk if we get 
it merged ASAP, since it's off by default.  The main risk is around the 
possibility of security vulnerabilities.  Hopefully good review (both from a 
code and security perspective) can mitigate that risk.  This feature has been 
in the works for a while and has very good documentation on the blueprint, so I 
take it that it has been vetted by a number of people already.  It would be 
good to get ACKs on this point in this thread.

I would be good with the exception for this, assuming that:

1) Those from nova-core that have reviewed the code are still happy with it and 
would do a final review to get it merged.

2) There is general consensus that the simple config based key manager (single 
key) does provide some amount of useful security.  I believe it does, just want 
to make sure we're in agreement on it.  Obviously we want to improve this in 
the future.

Again, thank you very much for all of your work on this (both technical and 
non-technical)!

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

2013-09-06 Thread Bhandaru, Malini K
Bruce - well-crafted message. Good work; it looks like it is eliciting the
desired result.

From: Benjamin, Bruce P. [mailto:bruce.benja...@jhuapl.edu]
Sent: Friday, September 06, 2013 1:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

We request that volume encryption [1] be granted an exception to the feature 
freeze for Havana-3.  Volume encryption [2] provides a usable layer of 
protection to user data as it is transmitted through a network and when it is 
stored on disk. The main patch [2] has been under review since the end of May 
and had received two +2s in mid-August.  Subsequently, support was requested 
for booting from encrypted volumes and integrating a working key manager [3][4] 
as a stipulation for acceptance, and both these requests have been satisfied 
within the past week. The risk of disruption to deployments from this exception 
is minimal because the volume encryption feature is unused by default.  Note 
that the corresponding Cinder support for this feature has already been 
approved, so acceptance into Nova will keep this code from becoming abandoned.  
 Thank you for your consideration.

The APL Development Team

[1] https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes
[2] https://review.openstack.org/#/c/30976/
[3] https://review.openstack.org/#/c/45103/
[4] https://review.openstack.org/#/c/45123/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TRIPLEO] Sessions for HK

2013-09-06 Thread Robert Collins
Hi there - in HK we have 5 slots for TripleO sessions.

Please put forward proposals now; we may have some competition for
slots and I'd rather not be doing that at the last minute.

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev