Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-02 Thread Michael Still
I've been working on testing database migrations against real data
sets for the last few weeks. I haven't had a chance to document it
very well yet though -- there's a blog post being drafted at the
moment.

If you go to http://openstack.stillhq.com/ci you can see the results
of tests run against three real mysql databases -- two manually
created trivial databases and a real user database from a mid-sized
deployment. I'd like to add other database backends and sample user
databases, but this is a work in progress.

Any patch against nova which changes or adds a database migration is
run through these tests.

So far this work has found two bugs in migrations.
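For anyone who wants to reproduce something similar locally, the core of such a
run is just walking a restored dump through the migration repository one version
at a time. A minimal sketch, assuming a MySQL dump has already been restored and
using sqlalchemy-migrate's versioning API (the URL and repository path are
placeholders, not the actual CI code):

from migrate.versioning import api as versioning_api
from migrate.versioning.repository import Repository

# Placeholders -- point these at a restored dump and at Nova's migrate repo.
DB_URL = 'mysql://nova:secret@localhost/nova_user_dump'
REPO = Repository('nova/db/sqlalchemy/migrate_repo')

def walk_migrations(db_url, repo):
    current = int(versioning_api.db_version(db_url, repo))
    latest = int(repo.latest)
    for version in range(current + 1, latest + 1):
        # Upgrade one step at a time so a failure pinpoints the broken migration.
        versioning_api.upgrade(db_url, repo, version)
        print('migration %d applied cleanly' % version)

if __name__ == '__main__':
    walk_migrations(DB_URL, REPO)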

Michael

On Wed, Jul 3, 2013 at 4:08 PM, Boris Pavlovic  wrote:
> Hi Ben,
>
> This work has been going on since the start of Grizzly.
> So there are tons of Blueprints and tons of code.
>
> Nova:
> https://blueprints.launchpad.net/nova/+spec/db-cleanup
> https://blueprints.launchpad.net/nova/+spec/db-enforce-unique-keys
> https://blueprints.launchpad.net/nova/+spec/db-api-tests
> https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends
> https://blueprints.launchpad.net/nova/+spec/db-sync-models-with-migrations
> https://blueprints.launchpad.net/nova/+spec/db-session-cleanup
> https://blueprints.launchpad.net/nova/+spec/db-archiving
> https://blueprints.launchpad.net/nova/+spec/db-improve-archiving
>
> Oslo:
> https://blueprints.launchpad.net/oslo/+spec/oslo-sqlalchemy-utils
> https://blueprints.launchpad.net/oslo/+spec/test-migrations
> https://blueprints.launchpad.net/oslo/+spec/common-unit-tests
>
> Cinder:
> https://blueprints.launchpad.net/cinder/+spec/db-cleanup
> others you can find via the blueprint dependencies
>
> Glance:
> https://blueprints.launchpad.net/glance/+spec/db-cleanup
> others you can find via the blueprint dependencies
>
>
>
>> One small addition I would suggest is a step to remove the unused
>> sqlalchemy-migrate code once this is all done.  That's my main concern
>> with moving it to Oslo right now.
>
>> Also, is this a formal blueprint(s)?  Seems like it should be.
>
>> -Ben
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-02 Thread Eugene Nikanorov
Boris, what do you think?

Thanks,
Eugene.


On Tue, Jul 2, 2013 at 9:55 PM, Ben Nemec  wrote:

> One small addition I would suggest is a step to remove the unused
> sqlalchemy-migrate code once this is all done.  That's my main concern with
> moving it to Oslo right now.
>
> Also, is this a formal blueprint(s)?  Seems like it should be.
>
> -Ben
>
> On 2013-07-02 12:50, Boris Pavlovic wrote:
>
>> ###########################################################################
>>  Goal
>>
>> ###########################################################################
>>
>> We should fix how we work with the DB, unify it across all projects, and
>> use Oslo code for all the common pieces.
>>
>> In more detail:
>>
>> DB API
>>
>>  *) Fully cover the DB API with tests.
>>
>>  *) Run tests against all backends (right now they are run only against
>> sqlite).
>>
>>  *) Unique constraints instead of select + insert (sketched below)
>>  a) Provide unique constraints.
>>  b) Add missing unique constraints.
>>
>>  *) DB Archiving
>>  a) create shadow tables
>>  b) add tests that check that the shadow and main tables are in sync.
>>  c) add code that works with the shadow tables.
>>
>>  *) DB API performance optimization
>>  a) Remove unused joins.
>>  b) 1 query instead of N (where possible).
>>  c) Add methods that could improve performance.
>>  d) Drop unused methods.
>>
>>  *) DB reconnect
>>  a) Don't break a long-running task if the connection drops for a moment;
>> just retry the DB query.
>>
>>  *) DB Session cleanup
>>  a) do not use a session parameter in public DB API methods.
>>  b) fix places where we are doing N queries in N transactions instead
>> of 1.
>>  c) get only data that is used (e.g. len(query.all()) =>
>> query.count()).
>>
>> 
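To make the select + insert and len(query.all()) points above concrete, here is
a minimal sketch; the model, table and session names are invented for
illustration and are not actual Nova code:

from sqlalchemy import Column, Integer, String, UniqueConstraint, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Instance(Base):
    __tablename__ = 'instances'
    __table_args__ = (UniqueConstraint('uuid', name='uniq_instances_uuid'),)
    id = Column(Integer, primary_key=True)
    uuid = Column(String(36), nullable=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def create_instance(uuid):
    session = Session()
    try:
        # No SELECT-then-INSERT race: the database enforces uniqueness.
        session.add(Instance(uuid=uuid))
        session.commit()
    except IntegrityError:
        session.rollback()
        raise ValueError('duplicate uuid %s' % uuid)

def count_instances():
    session = Session()
    # COUNT(*) in SQL instead of len(query.all()) in Python.
    return session.query(Instance).count()

The point is that uniqueness is enforced atomically by the database, so there
is no race between checking and inserting, and counting happens in SQL rather
than by materialising every row.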
>>
>> DB Migrations
>>
>>  *) Test DB Migrations against all backends and real data.
>>
>>  *) Fix: DB schemas after migrations should be the same across different
>> backends (sketched below).
>>
>>  *) Fix hidden bugs that are caused by wrong migrations:
>>  a) fix indexes, e.g. migration 152 in Nova drops all indexes that have a
>> deleted column
>>  b) fix wrong column types
>>  c) drop unused tables
>>
>>  *) Switch from sqlalchemy-migrate to something that is not dead
>> (e.g. alembic).
>>
>> 
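One way to picture the "same schema across backends" check is to reflect
whatever the migrations produced on each backend and diff the results. A rough
sketch, with placeholder connection URLs and without the per-backend type
normalisation a real test needs:

from sqlalchemy import create_engine, inspect

# Placeholder URLs: each database has already had the migrations applied.
BACKENDS = {
    'sqlite': 'sqlite:///migrated.db',
    'mysql': 'mysql://user:secret@localhost/migrated',
    'postgresql': 'postgresql://user:secret@localhost/migrated',
}

def describe(url):
    """Return {table: {column: type string}} for the given database."""
    inspector = inspect(create_engine(url))
    return {
        table: {col['name']: str(col['type'])
                for col in inspector.get_columns(table)}
        for table in inspector.get_table_names()
    }

def compare_backends():
    schemas = {name: describe(url) for name, url in BACKENDS.items()}
    baseline_name, baseline = sorted(schemas.items())[0]
    for name, schema in schemas.items():
        differing = set(baseline) ^ set(schema)
        if differing:
            print('%s vs %s: tables differ: %s' % (name, baseline_name, differing))
    # Type strings are spelled differently per backend (TINYINT vs BOOLEAN,
    # etc.), so a real check also needs per-backend normalisation.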
>>
>> DB Models
>>
>>  *) Fix: the schema created by the models should be the same as the schema
>> produced by the migrations (sketched below).
>>
>>  *) Fix: unit tests should be run on a DB that was created from the models,
>> not from migrations.
>>
>>  *) Add a test that checks that the models are in sync with the migrations.
>>
>> 
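The "models synced with migrations" test can be sketched with alembic's schema
comparison, even in a project that still runs sqlalchemy-migrate for the
migrations themselves; the metadata import and URL below are placeholders:

from alembic.autogenerate import compare_metadata
from alembic.migration import MigrationContext
from sqlalchemy import create_engine

from myproject.db.models import BASE  # hypothetical models module

def test_models_match_migrations():
    # Placeholder URL: this database was built by running the migrations.
    engine = create_engine('mysql://user:secret@localhost/migrated')
    context = MigrationContext.configure(engine.connect())
    diff = compare_metadata(context, BASE.metadata)
    # Every entry is a drift between models and migrations (missing table,
    # column, index, changed type, ...).
    assert diff == [], diff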
>>
>> Oslo Code
>>
>>  *) Base SQLAlchemy models.
>>
>>  *) Common code around engine and session handling.
>>
>>  *) SQLAlchemy utils that help us with migrations and tests.
>>
>>  *) Test migrations Base.
>>
>>  *) Use common test wrapper that allows us to run tests on different
>> backends.
>>
>> ###########################################################################
>>  Implementation
>>
>> ###########################################################################
>>
>>  This is a really, really huge task, and we are almost done with Nova =).
>>
>>  In OpenStack there is only one workable approach for this kind of work:
>> "baby steps" development. So we are making tons of small patches that can
>> be reviewed easily. But this approach also has downsides: it is pretty
>> hard to track the work at a high level, and sometimes there are
>> misunderstandings.
>>
>>  For example, with the Oslo code: in short, at this moment we would like
>> to add (temporarily) monkey patching for sqlalchemy-migrate to Oslo, and I
>> got a reasonable question from Doug Hellmann: why? My answer is: because
>> of our "baby steps". But if you don't have the list of baby steps, it is
>> pretty hard to understand why this particular step is needed, and why we
>> don't switch to alembic first. So I would like to describe our road map
>> and write down the list of "baby steps".
>>
>> ---
>>
>> OSLO
>>
>>  *) (Merged) Base code for Models and sqlalchemy engine (session)
>>
>>  *) (On review) SQLAlchemy utils that are used to:
>>  1. fix bugs in sqlalchemy-migrate;
>>  2. provide base code for migrations that add unique constraints;
>>  3. provide DB archiving utils that help us create and check shadow tables.
>>
>>  *) (On review) Testtools wrapper
>>  We should have only one testtools wrapper across all projects; this is
>> one of the base steps toward running tests against all backends.
>>
>>  *) (On review) Test migrations base
>>  Base classes that let us test our migrations against all
>> backends on real data.
>>
>>  *) (On review, not finished yet) DB reconnect (sketched below).
>>
>>  *) (Not finished) Test that checks that schemas and models are synced
>>
>> ---
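The DB reconnect item boils down to retrying a failed call instead of letting a
long-running task die. A minimal sketch of the idea; the exception filter and
back-off are illustrative, not the patch that is on review:

import functools
import time

from sqlalchemy import exc as sqla_exc

def wrap_db_retry(retries=3, delay=1):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            attempt = 0
            while True:
                try:
                    return func(*args, **kwargs)
                except sqla_exc.OperationalError:
                    # Connection lost for a moment: retry the whole DB call
                    # instead of failing the long-running task.
                    attempt += 1
                    if attempt > retries:
                        raise
                    time.sleep(delay * attempt)
        return wrapper
    return decorator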
>>
>> ${PROJECT_NAME}
>>
>> Work on the different projects can proceed completely in parallel, and the
>> first candidates are Glance and Cinder. Within a project we can also work
>> in parallel. Here is the workflow:
>>
>>  1) (SYNC) Use base code for Models and sqlalchemy engines (from oslo)
>>
>>  2) (SYNC) Use test migrations base (from oslo)
>>
>>  3) (SYNC) Use

Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-02 Thread Boris Pavlovic
Hi Ben,

This work has been going on since the start of Grizzly.
So there are tons of Blueprints and tons of code.

Nova:
https://blueprints.launchpad.net/nova/+spec/db-cleanup
https://blueprints.launchpad.net/nova/+spec/db-enforce-unique-keys
https://blueprints.launchpad.net/nova/+spec/db-api-tests
https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends
https://blueprints.launchpad.net/nova/+spec/db-sync-models-with-migrations
https://blueprints.launchpad.net/nova/+spec/db-session-cleanup
https://blueprints.launchpad.net/nova/+spec/db-archiving
https://blueprints.launchpad.net/nova/+spec/db-improve-archiving

Oslo:
https://blueprints.launchpad.net/oslo/+spec/oslo-sqlalchemy-utils
https://blueprints.launchpad.net/oslo/+spec/test-migrations
https://blueprints.launchpad.net/oslo/+spec/common-unit-tests

Cinder:
https://blueprints.launchpad.net/cinder/+spec/db-cleanup
others you can find via the blueprint dependencies

Glance:
https://blueprints.launchpad.net/glance/+spec/db-cleanup
others you can find via the blueprint dependencies



> One small addition I would suggest is a step to remove the unused
> sqlalchemy-migrate code once this is all done.  That's my main concern
> with moving it to Oslo right now.

> Also, is this a formal blueprint(s)?  Seems like it should be.

> -Ben
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-02 Thread Robert Collins
Radix points out I missed the nuance that you're targeting the users
of python-novaclient, for instance, rather than python-novaclient's
own tests.


On 3 July 2013 16:29, Robert Collins  wrote:

>> What I'd like is for each client library, in addition to the actual
>> implementation, to ship a fake, in-memory version of the API. The
>> fake implementations should take the same arguments, have the same return
>> values, raise the same exceptions, and otherwise be identical, besides the
>> fact that they are entirely in memory and never make network requests.
>
> So, +1 on shipping a fake reference copy of the API.
>
> -1 on shipping it in the client.
>
> The server that defines the API should have two implementations - the
> production one, and a testing fake. The server tests should exercise
> *both* code paths [e.g. using testscenarios] to ensure there is no
> skew between them.
>
> Then the client tests can be fast and efficient but not subject to
> implementation skew between fake and prod implementations.
>
> Back on Launchpad I designed a similar thing, but with language
> neutrality as a goal:
> https://dev.launchpad.net/ArchitectureGuide/ServicesRequirements#Test_fake
>
> And in fact, I think that that design would work well here, because we
> have multiple language bindings - Python, Ruby, PHP, Java, Go etc, and
> all of them will benefit from a low(ms or less)-latency test fake.

So taking the aspect I missed into account I'm much happier with the
idea of shipping a fake in the client, but... AFAICT many of our
client behaviours are only well defined in the presence of a server
anyhow.

So it seems to me that a fast server fake can be used in tests of
python-novaclient, *and* in tests of code using python-novaclient
(including for instance, heat itself), and we get to write it just
once per server, rather than once per server per language binding.
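For readers who have not seen the proposal being discussed, the shape of such a
fake is roughly the following; the class and method names are invented for
illustration and are not python-novaclient's real surface:

class NotFound(Exception):
    """Stand-in for whatever exception the real client raises."""

class FakeServersClient(object):
    """Same signatures and exceptions as the real client, but all in memory."""

    def __init__(self):
        self._servers = {}
        self._next_id = 0

    def create(self, name, flavor):
        self._next_id += 1
        server = {'id': self._next_id, 'name': name, 'flavor': flavor}
        self._servers[self._next_id] = server
        return server

    def get(self, server_id):
        try:
            return self._servers[server_id]
        except KeyError:
            raise NotFound(server_id)

    def delete(self, server_id):
        if self._servers.pop(server_id, None) is None:
            raise NotFound(server_id)

The argument above is about where this class should live: if the server project
ships and tests it, the fake and the production implementation are exercised
against the same contract and cannot silently drift apart.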

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-02 Thread Robert Collins
On 2 July 2013 09:08, Alex Gaynor  wrote:
> Hi all,
>
> I suspect many of you don't know me, as I've only started to get involved in
> OpenStack recently. I work at Rackspace and I'm pretty involved in other
> Python open source stuff, notably Django and PyPy; I also serve on the
> board of the PSF. So hi!
>
> I'd like to propose an addition to all of the python-client libraries going
> forwards (and perhaps a requirement for future ones).
>
> What I'd like is for each client library, in addition to the actual
> implementation, to ship a fake, in-memory version of the API. The
> fake implementations should take the same arguments, have the same return
> values, raise the same exceptions, and otherwise be identical, besides the
> fact that they are entirely in memory and never make network requests.

So, +1 on shipping a fake reference copy of the API.

-1 on shipping it in the client.

The server that defines the API should have two implementations - the
production one, and a testing fake. The server tests should exercise
*both* code paths [e.g. using testscenarios] to ensure there is no
skew between them.

Then the client tests can be fast and efficient but not subject to
implementation skew between fake and prod implementations.

Back on Launchpad I designed a similar thing, but with language
neutrality as a goal:
https://dev.launchpad.net/ArchitectureGuide/ServicesRequirements#Test_fake

And in fact, I think that that design would work well here, because we
have multiple language bindings - Python, Ruby, PHP, Java, Go etc, and
all of them will benefit from a low(ms or less)-latency test fake.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-02 Thread Michael Still
+1

On Wed, Jul 3, 2013 at 12:02 PM, Ken'ichi Ohmichi
 wrote:
>
> +1
>
> On Tue, 02 Jul 2013 18:40:31 -0400
> Russell Bryant  wrote:
>>
>> Greetings,
>>
>> I would like to propose Christopher Yeoh to be added to the nova-core team.
>>
>> Christopher has been prolific in his contributions to nova lately, both
>> in code and his general leadership of the v3 API effort.  He has also
>> been regularly contributing to code reviews.  It would be great to have
>> him on board to help review API changes, as well as fixes elsewhere in nova.
>>
>> References:
>>
>> https://review.openstack.org/#/q/owner:5292,n,z
>>
>> https://review.openstack.org/#/q/reviewer:5292,n,z
>>
>> https://review.openstack.org/#/dashboard/5292
>>
>> Please respond with +1s or any concerns.
>>
>> Thanks,
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] failure node muting not working

2013-07-02 Thread Zhou, Yuan
Hi lists,

We're trying to evaluate the node failure performance in Swift.
According to the docs, Swift should be able to mute failed nodes:
'if a storage node does not respond in a reasonable amount of time, the proxy 
considers it to be unavailable and will not attempt to communicate with it for 
a while.'

We did a simple test on a 5-node cluster:

1.   Use COSBench to keep downloading files from the cluster.

2.   Stop the networking on SN1; lots of 'connection timeout 0.5s' errors
show up in the proxy's log.

3.   Keep the workload running and wait for about 1 hour.

4.   The same errors still occur in the proxy, which means the node is not
muted; we expected SN1 to be muted on the proxy side so that no further
'connection timeout' errors would appear in the proxy.

So is there any special work that needs to be done to use this feature?
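For reference, the "muting" described in the docs is the proxy's error limiting,
and its thresholds are configurable in proxy-server.conf. The option names below
are quoted from memory and should be double-checked against the deployed Swift
version; the values are only illustrative:

[app:proxy-server]
use = egg:swift#proxy
# how many errors a storage node may accumulate before the proxy stops
# sending it requests ...
error_suppression_limit = 10
# ... and for how many seconds the node is then skipped
error_suppression_interval = 60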

Regards, -yuanz

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] trove and heat integration status

2013-07-02 Thread Michael Basnight
On Jul 2, 2013, at 8:17 PM, Clint Byrum  wrote:

> Excerpts from Michael Basnight's message of 2013-07-02 19:04:01 -0700:
>> On Jul 2, 2013, at 3:52 PM, Clint Byrum wrote:
>> 
>>> Excerpts from Michael Basnight's message of 2013-07-02 15:17:09 -0700:
 Howdy,
 
 one of the TC requests for integration of trove was to integrate heat. 
 While this is a small task for single instance installations, when we get 
 into clustering it seems a bit more painful. I'd like to submit the 
 following as a place to start the discussion for why we would/wouldn't 
 integrate heat (now). This is, in NO WAY, to say we will not integrate 
 heat. It's just a matter of timing and requirements for our 'soon to be' 
 cluster api. I am, however, targeting getting trove to work in an rpm 
 environment, as it is tied to apt currently.
>>> 
>>> Hi Michael. I do think that it is very cool that Trove will be making
>>> use of Heat for cluster configuration.
>> 
>> I know it really fits the bill!
>> 
>>> 
 
 1) Companies who are looking at trove are not yet looking at heat, and a 
 hard dependency might stifle growth of the product initially
   • CERN
>>> 
>>> I'm sure these users don't explicitly want "MySQL" (or whatever DB
>>> you use) and "RabbitMQ" (or whatever RPC you use) either, but they
>>> are plumbing, and thus things that need to be deployed in the larger
>>> architecture.
>> 
>> Well sure, but I also don't want to stop trove from adoption because a company 
>> has not investigated heat. Rabbit and the DB are shared resources between 
>> all OpenStack services. Heat and Trove are not.
> 
> I do understand that. Heat has some growing up to do before it is in the
> same category as those other pieces. Please keep us in the loop where
> you need features and/or bug fixes for Heat.
> 
>>> 
 2) homogeneous LaunchConfiguration
   • a database cluster is heterogeneous
   • Our cluster configuration will need to specify different-sized slaves, 
 and allow a customer to upgrade a single slave's configuration
   • heat said if this is something that has a good use case, they could 
 potentially make it happen (not sure of timeframe)
>>> 
>>> There's no requirement that you use AWS::EC2::AutoScalingGroup or
>>> OS::Heat::InstanceGroup. In fact I find them rather cumbersome and
>>> limited. Since all Heat templates are just data structures (expressed
>>> as yaml or json) you can just maintain an array of instances of the size
>>> that you want.
>> 
>> Oh good!
>> 
>>> 
 3) have to modify template to scale out
   • This is doable but will require hacking a template in code and pushing 
 that template
   • I assume removing a slave will require the same finagling of the 
 template
   • I understand that a better version of this is coming (not sure of 
 timeframe)
>>> 
>>> The word template makes it sound like it is a text only thing. It is
>>> a data structure, and as such, it is quite easy to modify and maintain
>>> in code.
>>> ...
>>> I hope all of that makes some sense. Eventually yes, resizable arrays
>>> of servers will be in the new format, HOT, but for now, the CFN method
>>> is still useful as you get signals and dependency graph management.
>> 
>> It does, with one caveat. Can I say slave1 has a flavor of 512m and 
>> slave2 has a flavor of 2048m? I didn't see that in the example. It's really 
>> useful for a reporting slave to be smaller than a master, and for a 
>> particular slave to be larger due to any sort of requirement that I can't 
>> necessarily dictate!
> 
> Of course flavor can differ per-server. That is kind of my point, the cfn
> template format is fairly low level, making Heat into sort of a really
> smart client library for all of OpenStack. So you can really maintain
> the list of slaves however you want. You could have ReportingSlave0001
> and QuerySlave0002 or just use UUID's for them and give them names
> in Metadata.
> 

Great! <3. Thanks again for shedding some light!!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] trove and heat integration status

2013-07-02 Thread Clint Byrum
Excerpts from Michael Basnight's message of 2013-07-02 19:04:01 -0700:
> On Jul 2, 2013, at 3:52 PM, Clint Byrum wrote:
> 
> > Excerpts from Michael Basnight's message of 2013-07-02 15:17:09 -0700:
> >> Howdy,
> >> 
> >> one of the TC requests for integration of trove was to integrate heat. 
> >> While this is a small task for single instance installations, when we get 
> >> into clustering it seems a bit more painful. I'd like to submit the 
> >> following as a place to start the discussion for why we would/wouldn't 
> >> integrate heat (now). This is, in NO WAY, to say we will not integrate 
> >> heat. It's just a matter of timing and requirements for our 'soon to be' 
> >> cluster api. I am, however, targeting getting trove to work in an rpm 
> >> environment, as it is tied to apt currently.
> > 
> > Hi Michael. I do think that it is very cool that Trove will be making
> > use of Heat for cluster configuration.
> 
> I know it really fits the bill!
> 
> > 
> >> 
> >> 1) Companies who are looking at trove are not yet looking at heat, and a 
> >> hard dependency might stifle growth of the product initially
> >>• CERN
> > 
> > I'm sure these users don't explicitly want "MySQL" (or whatever DB
> > you use) and "RabbitMQ" (or whatever RPC you use) either, but they
> > are plumbing, and thus things that need to be deployed in the larger
> > architecture.
> 
> Well sure, but I also don't want to stop trove from adoption because a company 
> has not investigated heat. Rabbit and the DB are shared resources between all 
> OpenStack services. Heat and Trove are not.
> 

I do understand that. Heat has some growing up to do before it is in the
same category as those other pieces. Please keep us in the loop where
you need features and/or bug fixes for Heat.

> > 
> >> 2) homogeneous LaunchConfiguration
> >>• a database cluster is heterogeneous
> >>• Our cluster configuration will need to specify different-sized 
> >> slaves, and allow a customer to upgrade a single slave's configuration
> >>• heat said if this is something that has a good use case, they could 
> >> potentially make it happen (not sure of timeframe)
> > 
> > There's no requirement that you use AWS::EC2::AutoScalingGroup or
> > OS::Heat::InstanceGroup. In fact I find them rather cumbersome and
> > limited. Since all Heat templates are just data structures (expressed
> > as yaml or json) you can just maintain an array of instances of the size
> > that you want.
> 
> Oh good!
> 
> > 
> >> 3) have to modify template to scale out
> >>• This is doable but will require hacking a template in code and pushing 
> >> that template
> >>• I assume removing a slave will require the same finagling of the 
> >> template
> >>• I understand that a better version of this is coming (not sure of 
> >> timeframe)
> >> 
> > 
> > The word template makes it sound like it is a text only thing. It is
> > a data structure, and as such, it is quite easy to modify and maintain
> > in code.
> > ...
> > I hope all of that makes some sense. Eventually yes, resizable arrays
> > of servers will be in the new format, HOT, but for now, the CFN method
> > is still useful as you get signals and dependency graph management.
> 
> It does, with one caveat. Can I say slave1 has a flavor of 512m and 
> slave2 has a flavor of 2048m? I didn't see that in the example. It's really 
> useful for a reporting slave to be smaller than a master, and for a 
> particular slave to be larger due to any sort of requirement that I can't 
> necessarily dictate! 

Of course flavor can differ per-server. That is kind of my point, the cfn
template format is fairly low level, making Heat into sort of a really
smart client library for all of OpenStack. So you can really maintain
the list of slaves however you want. You could have ReportingSlave0001
and QuerySlave0002 or just use UUID's for them and give them names
in Metadata.
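Since the thread keeps coming back to "the template is just a data structure",
here is a small sketch of maintaining the slave list in code with per-slave
flavors. The resource layout only loosely follows the earlier example, and the
names and flavors are invented:

import json

def base_template():
    # Start from a minimal CFN-style structure with just the master.
    return {
        'Resources': {
            'Master': {
                'Type': 'AWS::EC2::Instance',
                'Properties': {'InstanceType': 'reallyfastandlotsofram',
                               'ImageId': 'trove-mysql'},
            }
        }
    }

def add_slave(template, name, flavor):
    # Each slave gets its own flavor; nothing forces homogeneity.
    template['Resources'][name] = {
        'Type': 'AWS::EC2::Instance',
        'DependsOn': 'Master',
        'Properties': {'InstanceType': flavor, 'ImageId': 'trove-mysql'},
    }
    return template

template = add_slave(base_template(), 'ReportingSlave0001', 'small-flavor')
template = add_slave(template, 'QuerySlave0002', 'large-flavor')
print(json.dumps(template, indent=2))  # hand the result to Heat as the stack body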

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-02 Thread Ken'ichi Ohmichi

+1

On Tue, 02 Jul 2013 18:40:31 -0400
Russell Bryant  wrote:
>
> Greetings,
> 
> I would like to propose Christopher Yeoh to be added to the nova-core team.
> 
> Christopher has been prolific in his contributions to nova lately, both
> in code and his general leadership of the v3 API effort.  He has also
> been regularly contributing to code reviews.  It would be great to have
> him on board to help review API changes, as well as fixes elsewhere in nova.
> 
> References:
> 
> https://review.openstack.org/#/q/owner:5292,n,z
> 
> https://review.openstack.org/#/q/reviewer:5292,n,z
> 
> https://review.openstack.org/#/dashboard/5292
> 
> Please respond with +1s or any concerns.
> 
> Thanks,
> 
> -- 
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-02 Thread Kyle Mestery (kmestery)
On Jul 2, 2013, at 6:23 PM, Jeremy Stanley  wrote:
> On 2013-07-02 21:05:21 + (+), Kyle Mestery (kmestery) wrote:
> [...]
>> remote: New Changes:
>> remote:   https://review.openstack.org/35384
>> remote: 
>> To ssh://mest...@review.openstack.org:29418/openstack/quantum.git
>> ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes 
>> made)
>> error: failed to push some refs to 
>> 'ssh://mest...@review.openstack.org:29418/openstack/quantum.git'
> [...]
>> In addition, my commit seen at the review URL above does not show
>> the dependency. Any ideas now?
> [...]
> 
> Based on your description of what transpired, it sounds like you
> also rebased 33297 (the change on which your 35384 change was
> supposed to depend), but it was rejected with the above error while
> not being the same actual commit which Gerrit had. 35384 claims its
> parent commit is 91e0850, which is not a gitsha Gerrit knows for
> 33297 (latest patchset for it is 4421cc1).
> 
> I believe, but would need to test to confirm, that Aaron has mentioned
> the other piece of this puzzle: the "no rebase" -R flag is
> incompatible with attempting to push multiple changes which have
> been updated. It used to be the case that git-review always tried to
> rebase unless you used -R, but with more recent releases it will
> avoid rebasing except when absolutely necessary. I pretty much never
> use -R at this point.
> 
> Of course now attempting to re-submit that top patch probably isn't
> going to work anyway without making some minor change to it, since
> Gerrit will see that it hasn't changed. I tried downloading 35384
> and then rewinding with 'git checkout 91e0850' and resubmitting that
> via git review, but it seems Gerrit still sees that as the same as
> 4421cc1 and refuses it. Instead I retrieved and stacked them as they
> stand now with...
> 
>git review -d 33297
>git review -x 35384
>git review
> 
> ...and that seems to have worked...
> 
>fungi@hastur:~/work/openstack/quantum$ git review
>You are about to submit multiple commits. This is expected if you are 
> submitting a commit that is dependant on one or more in-review commits. 
> Otherwise you should consider squashing your changes into one commit before 
> submitting.
>The outstanding commits are:
> 
>a453866 (HEAD, review/mathieu_rohon/bp/ml2-vxlan) Add VXLAN tunneling 
> support for the ML2 plugin
>4421cc1 Add gre tunneling support for the ML2 plugin
> 
>Do you really want to submit the above commits?
>Type 'yes' to confirm, other to cancel: yes
>remote: Resolving deltas: 100% (12/12)
>remote: Processing changes: updated: 1, done
>remote: (W) a453866: no files changed, was rebased
>To ssh://fu...@review.openstack.org:29418/openstack/quantum.git
> * [new branch]  HEAD -> refs/publish/master/bp/ml2-vxlan
> 
> Hope that helps?

That really helps a lot, thanks for the detailed analysis Jeremy! I see where
I went wrong. I'll try to condense this down a bit and update the wiki page I
referenced earlier in this thread, as it still recommends using "git review -R"
for this type of operation.

Appreciate the help!

Kyle
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A patch review

2013-07-02 Thread Wenhao Xu
Got it. Thanks.

Best,
Wenhao


On Tue, Jul 2, 2013 at 10:46 PM, Dolph Mathews wrote:

>
> On Monday, July 1, 2013, Wenhao Xu wrote:
>
>> Hi guys,
>>
>> The review (https://review.openstack.org/#/c/34652/) has been idle
>> for a while. I am wondering if anyone has a free time slot to review it?
>> Thanks.
>>
>
> Please tag the name of the relevant project in the subject line,
> especially when emailing the list with requests like this. Thanks!
>
>
>> Regards,
>> Wenhao
>>
>
>
> --
>
> -Dolph
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Help with database migration error

2013-07-02 Thread Henry Gessau
I have not worked with databases much and this is my first attempt
at a database migration. I am trying to follow this Howto:
https://wiki.openstack.org/wiki/Neutron/DatabaseMigration

I get the following error at step 3:

/opt/stack/quantum[master] $ quantum-db-manage --config-file 
/etc/quantum/quantum.conf --config-file 
/etc/quantum/plugins/cisco/cisco_plugins.ini stamp head
Traceback (most recent call last):
  File "/usr/local/bin/quantum-db-manage", line 9, in 
load_entry_point('quantum==2013.2.a882.g0fc6605', 'console_scripts', 
'quantum-db-manage')()
  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 136, in main
CONF.command.func(config, CONF.command.name)
  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 81, in do_stamp
sql=CONF.command.sql)
  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 54, in 
do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 221, 
in stamp
script.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 193, in 
run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 177, in 
load_python_file
module = imp.load_source(module_id, path, open(path, 'rb'))
  File "/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py", 
line 100, in <module>
run_migrations_online()
  File "/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py", 
line 73, in run_migrations_online
poolclass=pool.NullPool)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/__init__.py", line 
338, in create_engine
return strategy.create(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 
48, in create
u = url.make_url(name_or_url)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 178, 
in make_url
return _parse_rfc1738_args(name_or_url)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 219, 
in _parse_rfc1738_args
"Could not parse rfc1738 URL from string '%s'" % name)
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''
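The failing call is SQLAlchemy's URL parser being handed an empty string, which
usually means the migration tooling found no database connection URL in the
config files passed on the command line. A quick way to see what the parser
expects (the URL below is just a placeholder):

from sqlalchemy.engine.url import make_url

# A well-formed rfc1738 URL parses fine...
print(make_url('mysql://quantum:secret@localhost/quantum_db'))
# ...an empty string reproduces the error above:
# ArgumentError: Could not parse rfc1738 URL from string ''
make_url('')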


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] trove and heat integration status

2013-07-02 Thread Michael Basnight
On Jul 2, 2013, at 3:52 PM, Clint Byrum wrote:

> Excerpts from Michael Basnight's message of 2013-07-02 15:17:09 -0700:
>> Howdy,
>> 
>> one of the TC requests for integration of trove was to integrate heat. While 
>> this is a small task for single instance installations, when we get into 
>> clustering it seems a bit more painful. I'd like to submit the following as a 
>> place to start the discussion for why we would/wouldn't integrate heat (now). 
>> This is, in NO WAY, to say we will not integrate heat. It's just a matter of 
>> timing and requirements for our 'soon to be' cluster api. I am, however, 
>> targeting getting trove to work in an rpm environment, as it is tied to apt 
>> currently.
> 
> Hi Michael. I do think that it is very cool that Trove will be making
> use of Heat for cluster configuration.

I know it really fits the bill!

> 
>> 
>> 1) Companies who are looking at trove are not yet looking at heat, and a 
>> hard dependency might stifle growth of the product initially
>>• CERN
> 
> I'm sure these users don't explicitly want "MySQL" (or whatever DB
> you use) and "RabbitMQ" (or whatever RPC you use) either, but they
> are plumbing, and thus things that need to be deployed in the larger
> architecture.

Well sure, but I also don't want to stop trove from adoption because a company 
has not investigated heat. Rabbit and the DB are shared resources between all 
OpenStack services. Heat and Trove are not.

> 
>> 2) homogeneous LaunchConfiguration
>>• a database cluster is heterogeneous
>>• Our cluster configuration will need to specify different-sized slaves, 
>> and allow a customer to upgrade a single slave's configuration
>>• heat said if this is something that has a good use case, they could 
>> potentially make it happen (not sure of timeframe)
> 
> There's no requirement that you use AWS::EC2::AutoScalingGroup or
> OS::Heat::InstanceGroup. In fact I find them rather cumbersome and
> limited. Since all Heat templates are just data structures (expressed
> as yaml or json) you can just maintain an array of instances of the size
> that you want.

Oh good!

> 
>> 3) have to modify template to scale out
>>• This is doable but will require hacking a template in code and pushing 
>> that template
>>• I assume removing a slave will require the same finagling of the 
>> template
>>• I understand that a better version of this is coming (not sure of 
>> timeframe)
>> 
> 
> The word template makes it sound like it is a text only thing. It is
> a data structure, and as such, it is quite easy to modify and maintain
> in code.
> ...
> I hope all of that makes some sense. Eventually yes, resizable arrays
> of servers will be in the new format, HOT, but for now, the CFN method
> is still useful as you get signals and dependency graph management.

It does, with one caveat. Can I say slave1 has a flavor of 512m and 
slave2 has a flavor of 2048m? I didn't see that in the example. It's really 
useful for a reporting slave to be smaller than a master, and for a particular 
slave to be larger due to any sort of requirement that I can't necessarily 
dictate! 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-02 Thread Dan Smith
> Please respond with +1s or any concerns.

My only concern is that now he and Mikal can collude on patches while
the rest of us are asleep. Actually, maybe that's okay.

+1 from me :)

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-02 Thread Jeremy Stanley
On 2013-07-02 21:05:21 + (+), Kyle Mestery (kmestery) wrote:
[...]
> remote: New Changes:
> remote:   https://review.openstack.org/35384
> remote: 
> To ssh://mest...@review.openstack.org:29418/openstack/quantum.git
>  ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes 
> made)
> error: failed to push some refs to 
> 'ssh://mest...@review.openstack.org:29418/openstack/quantum.git'
[...]
> In addition, my commit seen at the review URL above does not show
> the dependency. Any ideas now?
[...]

Based on your description of what transpired, it sounds like you
also rebased 33297 (the change on which your 35384 change was
supposed to depend), but it was rejected with the above error while
not being the same actual commit which Gerrit had. 35384 claims its
parent commit is 91e0850, which is not a gitsha Gerrit knows for
33297 (latest patchset for it is 4421cc1).

I believe, but would need to test to confirm, that Aaron has mentioned
the other piece of this puzzle: the "no rebase" -R flag is
incompatible with attempting to push multiple changes which have
been updated. It used to be the case that git-review always tried to
rebase unless you used -R, but with more recent releases it will
avoid rebasing except when absolutely necessary. I pretty much never
use -R at this point.

Of course now attempting to re-submit that top patch probably isn't
going to work anyway without making some minor change to it, since
Gerrit will see that it hasn't changed. I tried downloading 35384
and then rewinding with 'git checkout 91e0850' and resubmitting that
via git review, but it seems Gerrit still sees that as the same as
4421cc1 and refuses it. Instead I retrieved and stacked them as they
stand now with...

git review -d 33297
git review -x 35384
git review

...and that seems to have worked...

fungi@hastur:~/work/openstack/quantum$ git review
You are about to submit multiple commits. This is expected if you are 
submitting a commit that is dependant on one or more in-review commits. 
Otherwise you should consider squashing your changes into one commit before 
submitting.
The outstanding commits are:

a453866 (HEAD, review/mathieu_rohon/bp/ml2-vxlan) Add VXLAN tunneling 
support for the ML2 plugin
4421cc1 Add gre tunneling support for the ML2 plugin

Do you really want to submit the above commits?
Type 'yes' to confirm, other to cancel: yes
remote: Resolving deltas: 100% (12/12)
remote: Processing changes: updated: 1, done
remote: (W) a453866: no files changed, was rebased
To ssh://fu...@review.openstack.org:29418/openstack/quantum.git
 * [new branch]  HEAD -> refs/publish/master/bp/ml2-vxlan

Hope that helps?
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Savanna version 0.3 - on demand Hadoop task execution

2013-07-02 Thread Alexander Kuznetsov
We want to initiate a discussion about the Elastic Data Processing (EDP)
Savanna component. This functionality is planned to be implemented in the next
development phase starting on July 15. The main questions to address:

   - what kind of functionality should be implemented for EDP?
   - what are the main components and their responsibilities?
   - which existing tools, like Hue or Oozie, should be used?

To have something to start from, we have prepared an overview of our thoughts
in the following document: https://wiki.openstack.org/wiki/Savanna/EDP. For
your convenience, you can find the text below. Your comments and suggestions
are welcome.

Key Features

Starting the job:

   - Simple REST API and UI
   - TODO: mockups
   - Job can be entered through the UI/API or pulled from a VCS
   - Configurable data source

Job execution modes:

   - Run the job on one of the existing clusters
      - Expose information on cluster load
      - Provide hints for optimizing data locality (TODO: more details)
   - Create a new transient cluster for the job

Job structure:

   - Individual job via a jar file, Pig or Hive script
   - Oozie workflow
      - In the future, support importing EMR job flows

Job execution tracking and monitoring:

   - Any existing components that can help to visualize? (Twitter Ambrose)
   - Terminate job
   - Auto-scaling functionality


Main EDP Components

Data Discovery Component

EDP can have several sources of data for processing. Data can be pulled
from Swift, GlusterFS or a NoSQL database like Cassandra or HBase. To provide
unified access to this data we'll introduce a component responsible for
discovering the data location and providing the right configuration for the
Hadoop cluster. It should have a pluggable system.
Job Source

Users would like to execute different types of jobs: jar file, Pig and Hive
scripts, Oozie job flows, etc. Job descriptions and source code can be
supplied in different ways. Some users just want to paste in a Hive script and
run it. Other users want to save the script in Savanna's internal database
for later use. We also need to provide the ability to run a job from source
code stored in a VCS.

Savanna Dispatcher Component

This component is responsible for provisioning a new cluster, scheduling a
job on a new or existing cluster, resizing a cluster, and gathering information
from clusters about current jobs and utilization. It should also provide
information that helps to make the right decision about where to schedule a
job: create a new cluster or use an existing one. For example, the current load
on each cluster, its proximity to the data location, etc.

UI Component

Integration into the OpenStack Dashboard (Horizon). It should provide
tools for job creation, monitoring, etc.

Cloudera Hue already provides part of this functionality: submit jobs (jar
file, Hive, Pig, Impala), view job status and output.
Cluster Level Coordination Component

Expose information about jobs on a specific cluster. Possibly this
component could be represented by the existing Hadoop projects Hue and Oozie.

User Workflow

- User selects or creates a job to run

- User chooses a data source of the appropriate type for this job

- Dispatcher provides hints to the user about the best way to schedule this
job (on an existing cluster, or by creating a new one)

- User makes a decision based on the hint from the dispatcher

- Dispatcher (if needed) creates or resizes an existing cluster and schedules
the job onto it

- Dispatcher periodically pulls the job status and shows it in the UI

Thanks,

Alexander Kuznetsov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] trove and heat integration status

2013-07-02 Thread Clint Byrum
Excerpts from Michael Basnight's message of 2013-07-02 15:17:09 -0700:
> Howdy,
> 
> one of the TC requests for integration of trove was to integrate heat. While 
> this is a small task for single instance installations, when we get into 
> clustering it seems a bit more painful. I'd like to submit the following as a 
> place to start the discussion for why we would/wouldn't integrate heat (now). 
> This is, in NO WAY, to say we will not integrate heat. It's just a matter of 
> timing and requirements for our 'soon to be' cluster api. I am, however, 
> targeting getting trove to work in an rpm environment, as it is tied to apt 
> currently.

Hi Michael. I do think that it is very cool that Trove will be making
use of Heat for cluster configuration.

> 
> 1) Companies who are looking at trove are not yet looking at heat, and a hard 
> dependency might stifle growth of the product initially
> • CERN

I'm sure these users don't explicitly want "MySQL" (or whatever DB
you use) and "RabbitMQ" (or whatever RPC you use) either, but they
are plumbing, and thus things that need to be deployed in the larger
architecture.

> 2) homogeneous LaunchConfiguration
> • a database cluster is heterogeneous
> • Our cluster configuration will need to specify different-sized slaves, 
> and allow a customer to upgrade a single slave's configuration
> • heat said if this is something that has a good use case, they could 
> potentially make it happen (not sure of timeframe)

There's no requirement that you use AWS::EC2::AutoScalingGroup or
OS::Heat::InstanceGroup. In fact I find them rather cumbersome and
limited. Since all Heat templates are just data structures (expressed
as yaml or json) you can just maintain an array of instances of the size
that you want.

> 3) have to modify template to scale out
> • This is doable but will require hacking a template in code and pushing 
> that template
> • I assume removing a slave will require the same finagling of the 
> template
> • I understand that a better version of this is coming (not sure of 
> timeframe)
> 

The word template makes it sound like it is a text only thing. It is
a data structure, and as such, it is quite easy to modify and maintain
in code.

As an example, let's say you have this as your template for a database cluster:

# in yaml
Resources:
  MasterReadyWaitCond:
    Type: AWS::CloudFormation::WaitCondition
    Properties:
      Handle: {Ref: MasterReadyWaitCondHandle}
      Timeout: 120
  MasterReadyWaitCondHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  Master:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: reallyfastandlotsofram
      ImageId: trove-mysql
    Metadata:
      ReadyWaitCondition: {Ref: MasterReadyWaitCondHandle}


This will boot up, with the reference image, and if you have appropriate
software (such as heat-cfntools or  os-apply-config and os-refresh-config)
in your image it will read the Metadata section and signal back to Heat
when ReadyWaitCond is satisfied. With that, Trove can poll the status
of that resource to find out if the master is ready.

Now, to add a slave:

# in yaml
Resources:
  MasterReadyWaitCond:
    Type: AWS::CloudFormation::WaitCondition
    Properties:
      Handle: {Ref: MasterReadyWaitCondHandle}
      Timeout: 120
  MasterReadyWaitCondHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  Master:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: reallyfastandlotsofram
      ImageId: trove-mysql
    Metadata:
      ReadyWaitCondition: {Ref: MasterReadyWaitCondHandle}
      Users:
        - name: slave
          password: some_random_string
  SlaveReadyWaitCond:
    Type: AWS::CloudFormation::WaitCondition
    Properties:
      Handle: {Ref: SlaveReadyWaitCondHandle}
      Timeout: 120
  SlaveReadyWaitCondHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  Slave:
    Type: AWS::EC2::Instance
    DependsOn: MasterReadyWaitCond
    Properties:
      InstanceType: kindofawesome
      ImageId: trove-mysql
    Metadata:
      Master:
        Address: {Fn::GetAtt: [Master, PrivateIp]}
        User:
          name: slave
          password: some_random_string
      ReadyWaitCondition: {Ref: SlaveReadyWaitCondHandle}


I hope all of that makes some sense. Eventually yes, resizable arrays
of servers will be in the new format, HOT, but for now, the CFN method
is still useful as you get signals and dependency graph management.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] does delete_network call delete_subnet automatically?

2013-07-02 Thread Edgar Magana
Got it!
I will file a bug for it and I will submit the fix this week.

Thanks,

Edgar

From:  Aaron Rosen 
Reply-To:  OpenStack List 
Date:  Tuesday, July 2, 2013 3:33 PM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Neutron] does delete_network call
delete_subnet automatically?

The call should be in the db_base class. If you call self.delete_subnet()
from the db_base class then it will call the delete_subnet() method from the
plugin, if implemented. Inheritance.
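A stripped-down sketch of the dispatch being described, with invented class
names rather than the actual Neutron/Quantum classes:

class DbBasePlugin(object):
    def get_subnets_for_network(self, network_id):
        # Pretend the DB query returned one subnet for this network.
        return ['subnet-1']

    def delete_subnet(self, subnet_id):
        print('base: remove subnet %s from the DB' % subnet_id)

    def delete_network(self, network_id):
        for subnet_id in self.get_subnets_for_network(network_id):
            # Late binding: if a plugin overrides delete_subnet, that override
            # (including its backend cleanup) is what runs here.
            self.delete_subnet(subnet_id)
        print('base: remove network %s from the DB' % network_id)

class BackendAwarePlugin(DbBasePlugin):
    def delete_subnet(self, subnet_id):
        print('plugin: tell the backend to drop subnet %s' % subnet_id)
        super(BackendAwarePlugin, self).delete_subnet(subnet_id)

BackendAwarePlugin().delete_network('net-1')
# -> plugin: tell the backend to drop subnet subnet-1
# -> base: remove subnet subnet-1 from the DB
# -> base: remove network net-1 from the DB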



On Tue, Jul 2, 2013 at 3:15 PM, Edgar Magana  wrote:
> It totally makes sense. Then, instead of making the db_base class call
> delete_subnet at the plugin level, wouldn't it be better to call delete_subnet
> at the plugin level when delete_network is called?
> Basically:
> 
> def delete_network():
>     subnets = get_subnets()
>     for subnet in subnets:
>         delete_subnet(subnet)
>     ...
> 
> Edgar
> 
> From:  Aaron Rosen 
> Reply-To:  OpenStack List 
> Date:  Tuesday, July 2, 2013 2:28 PM
> 
> To:  OpenStack List 
> Subject:  Re: [openstack-dev] [Neutron] does delete_network call delete_subnet
> automatically?
> 
> Yes, I think this is the desired behavior. If someone deletes a network we
> check to see if there are any ports on the network. If there are ports on the
> network we raise. If there are no ports on the network we allow it to be
> deleted. Since you cannot have a subnet without a network we should delete
> that then too. I don't see any reason to complicate things by forcing the user
> to delete the subnets first.
> 
> Aaron
> 
> 
> On Tue, Jul 2, 2013 at 2:04 PM, Edgar Magana  wrote:
>> Before filing a bug, do we really want this kind of functionality?
>> Is it "correct" to delete a network without really checking if the owner
>> really wants to delete all subnets associated with it?
>> 
>> Edgar
>> 
>> From:  Aaron Rosen 
>> Reply-To:  OpenStack List 
>> Date:  Tuesday, July 2, 2013 1:55 PM
>> 
>> To:  OpenStack List 
>> Subject:  Re: [openstack-dev] [Neutron] does delete_network call
>> delete_subnet automatically?
>> 
>> Good point. We should be calling delete_subnet() from delete_network() in the
>> db_base class rather than deleting it directly from the database.
>> 
>> Aaron
>> 
>> 
>> On Tue, Jul 2, 2013 at 1:39 PM, Edgar Magana  wrote:
>>> If the plugin performs operations when the subnet is created, how is it
>>> possible to roll back those operations if the plugin implementation of
>>> delete_subnet() is never called?
>>> I don't think we should let delete_network in db_base_plugin_v2.py delete all
>>> subnets; we could just ask the tenant user to delete all subnets first. Is
>>> there any specific reason why we automatically delete all subnets?
>>> 
>>> Edgar
>>> 
>>> From:  Aaron Rosen 
>>> Reply-To:  OpenStack List 
>>> Date:  Tuesday, July 2, 2013 12:40 PM
>>> To:  OpenStack List 
>>> Subject:  Re: [openstack-dev] [Neutron] does delete_network call
>>> delete_subnet automatically?
>>> 
>>> delete_network() in the db_base class handles deleting the subnets
>>> associated with the networks.
>>> 
>>> https://github.com/openstack/quantum/blob/master/quantum/db/db_base_plugin_v
>>> 2.py#L1033
>>> 
>>> 
>>> On Tue, Jul 2, 2013 at 12:31 PM, Edgar Magana  wrote:
 Folks,
 
 When I create a network and a subnet associated to that network, I am able
 to delete the network without deleting the subnet first from both CLI and
 Horizon.
 The difference is that in Horizon, both APIs are called: delete_subnet()
 and delete_network()
 When I tried by CLI, only delete_network is called as you can see in these
 logs:
 
 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
 /etc/quantum/policy.json
 2013-07-02 12:26:57DEBUG
 [quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin]
 QuantumPluginPLUMgrid Status: delete_network() called
 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
 /etc/quantum/policy.json
 2013-07-02 12:26:57DEBUG
 [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
 PLUMgrid_NOS_Server: 10.1.2.43 8080 DELETE
 2013-07-02 12:26:57DEBUG
 [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
 PLUMgrid_NOS_Server Sending Data: {'Content-type': 'application/json',
 'Accept': 'application/json'}
 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Sending
 network.delete.end on notifications.info 
 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID
 is ad8c5a233bd6403ea850cde73afb720a.
 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Making
 asynchronous fanout cast...
 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID
 is dcba5e2b55bb4cb89aab17f5797882e6.
 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
 Authenticating user token
 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]

[openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-02 Thread Russell Bryant
Greetings,

I would like to propose Christopher Yeoh to be added to the nova-core team.

Christopher has been prolific in his contributions to nova lately, both
in code and his general leadership of the v3 API effort.  He has also
been regularly contributing to code reviews.  It would be great to have
him on board to help review API changes, as well as fixes elsewhere in nova.

References:

https://review.openstack.org/#/q/owner:5292,n,z

https://review.openstack.org/#/q/reviewer:5292,n,z

https://review.openstack.org/#/dashboard/5292

Please respond with +1s or any concerns.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] does delete_network call delete_subnet automatically?

2013-07-02 Thread Aaron Rosen
The call should be in the db_base class. If you call self.delete_subnet()
from the db_base class then it will call the delete_subnet() method from
the plugin, if implemented. Inheritance.



On Tue, Jul 2, 2013 at 3:15 PM, Edgar Magana  wrote:

> It totally makes sense. Then, instead of making the db_base class call
> delete_subnet at the plugin level, wouldn't it be better to call delete_subnet
> at the plugin level when delete_network is called?
> Basically:
>
> def delete_network():
>     subnets = get_subnets()
>     for subnet in subnets:
>         delete_subnet(subnet)
>     ...
>
> Edgar
>
> From: Aaron Rosen 
> Reply-To: OpenStack List 
> Date: Tuesday, July 2, 2013 2:28 PM
>
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron] does delete_network call
> delete_subnet automatically?
>
> Yes, I think this is the desired behavior. If someone deletes a network we
> check to see if there are any ports on the network. If there are ports on
> the network we raise. If there are no ports on the network we allow it to
> be deleted. Since you cannot have a subnet without a network we should
> delete that then too. I don't see any reason to complicate things by
> forcing the user to delete the subnets first.
>
> Aaron
>
>
> On Tue, Jul 2, 2013 at 2:04 PM, Edgar Magana  wrote:
>
>> Before filing a bug, do we really want this kind of functionality?
>> Is it "correct" to delete a network without really checking if the owner
>> really wants to delete all subnets associated with it?
>>
>> Edgar
>>
>> From: Aaron Rosen 
>> Reply-To: OpenStack List 
>> Date: Tuesday, July 2, 2013 1:55 PM
>>
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [Neutron] does delete_network call
>> delete_subnet automatically?
>>
>> Good point. We should be calling delete_subnet() from delete_network() in
>> the db_base class rather than deleting it directly from the database.
>>
>> Aaron
>>
>>
>> On Tue, Jul 2, 2013 at 1:39 PM, Edgar Magana wrote:
>>
>>> If the plugin performs operations when the subnet is created, how is it
>>> possible to roll back those operations if the plugin implementation of
>>> delete_subnet() is never called?
>>> I don't think we should let delete_network in db_base_plugin_v2.py delete
>>> all subnets; we could just ask the tenant user to delete all subnets first.
>>> Is there any specific reason why we automatically delete all subnets?
>>>
>>> Edgar
>>>
>>> From: Aaron Rosen 
>>> Reply-To: OpenStack List 
>>> Date: Tuesday, July 2, 2013 12:40 PM
>>> To: OpenStack List 
>>> Subject: Re: [openstack-dev] [Neutron] does delete_network call
>>> delete_subnet automatically?
>>>
>>> delete_network() in the db_base class handles deleting the subnets
>>> associated with the networks.
>>>
>>>
>>> https://github.com/openstack/quantum/blob/master/quantum/db/db_base_plugin_v2.py#L1033
>>>
>>>
>>> On Tue, Jul 2, 2013 at 12:31 PM, Edgar Magana wrote:
>>>
 Folks,

 When I create a network and a subnet associated to that network, I am
 able to delete the network without deleting the subnet first from both CLI
 and Horizon.
 The difference is that in Horizon, both APIs are called:
 delete_subnet() and delete_network()
 When I tried by CLI, only delete_network is called as you can see in
 these logs:

 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
 /etc/quantum/policy.json
 2013-07-02 12:26:57DEBUG
 [quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin]
 QuantumPluginPLUMgrid Status: delete_network() called
 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
 /etc/quantum/policy.json
 2013-07-02 12:26:57DEBUG
 [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
 PLUMgrid_NOS_Server: 10.1.2.43 8080 DELETE
 2013-07-02 12:26:57DEBUG
 [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
 PLUMgrid_NOS_Server Sending Data: {'Content-type': 'application/json',
 'Accept': 'application/json'}
 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp]
 Sending network.delete.end on notifications.info
 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp]
 UNIQUE_ID is ad8c5a233bd6403ea850cde73afb720a.
 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Making
 asynchronous fanout cast...
 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp]
 UNIQUE_ID is dcba5e2b55bb4cb89aab17f5797882e6.
 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
 Authenticating user token
 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
 Removing headers from request environment:
 X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
 2013-07-02 12:27:23DEBUG [keys

Re: [openstack-dev] [Neutron] does delete_network call delete_subnet automatically?

2013-07-02 Thread Edgar Magana
It makes sense, totally. Then, instead of making the db_base class call
delete_subnet at the plugin level, wouldn't it be better to call
delete_subnet at the plugin level when delete_network is called?
Basically:

def delete_network(self, context, id):
    # look up the subnets on this network and remove them through the
    # plugin API, so any plugin-level clean-up is not skipped
    subnets = self.get_subnets(context, filters={'network_id': [id]})
    for subnet in subnets:
        self.delete_subnet(context, subnet['id'])
    ...

Edgar

From:  Aaron Rosen 
Reply-To:  OpenStack List 
Date:  Tuesday, July 2, 2013 2:28 PM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Neutron] does delete_network call
delete_subnet automatically?

Yes, I think this is the desired behavior. If someone deletes a network we
check to see if there are any ports on the network. If there are ports on
the network we raise. If there are no ports on the network we allow it to be
deleted. Since you cannot have a subnet without a network we should delete
that then too. I don't see any reason to complicate things by forcing the
user to delete the subnets first.

Aaron


On Tue, Jul 2, 2013 at 2:04 PM, Edgar Magana  wrote:
> Before filing a bug, do we really want this kind of functionality?
> Is it "correct" to delete a network without really checking if the owner
> really wants to delete all subnets associated with it?
> 
> Edgar
> 
> From:  Aaron Rosen 
> Reply-To:  OpenStack List 
> Date:  Tuesday, July 2, 2013 1:55 PM
> 
> To:  OpenStack List 
> Subject:  Re: [openstack-dev] [Neutron] does delete_network call delete_subnet
> automatically?
> 
> Good point. We should be calling delete_subnet() from delete_network() in the
> db_base class rather than deleting it directly from the database.
> 
> Aaron
> 
> 
> On Tue, Jul 2, 2013 at 1:39 PM, Edgar Magana  wrote:
>> If the plugin performs operations when the subnet is created, how is it
>> possible to roll back those operations if the plugin implementation of
>> delete_subnet() is never called?
>> I don't think we should let delete_network in db_base_plugin_v2.py delete
>> all subnets; we could just ask the tenant user to delete all subnets first.
>> Is there any specific reason why we automatically delete all subnets?
>> 
>> Edgar
>> 
>> From:  Aaron Rosen 
>> Reply-To:  OpenStack List 
>> Date:  Tuesday, July 2, 2013 12:40 PM
>> To:  OpenStack List 
>> Subject:  Re: [openstack-dev] [Neutron] does delete_network call
>> delete_subnet automatically?
>> 
>> delete_network() in the db_base class handles deleting the subnets associated
>> with the networks.
>> 
>> https://github.com/openstack/quantum/blob/master/quantum/db/db_base_plugin_v2
>> .py#L1033
>> 
>> 
>> On Tue, Jul 2, 2013 at 12:31 PM, Edgar Magana  wrote:
>>> Folks,
>>> 
>>> When I create a network and a subnet associated to that network, I am able
>>> to delete the network without deleting the subnet first from both CLI and
>>> Horizon.
>>> The difference is that in Horizon, both APIs are called: delete_subnet() and
>>> delete_network()
>>> When I tried by CLI, only delete_network is called as you can see in these
>>> logs:
>>> 
>>> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
>>> /etc/quantum/policy.json
>>> 2013-07-02 12:26:57DEBUG
>>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin]
>>> QuantumPluginPLUMgrid Status: delete_network() called
>>> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
>>> /etc/quantum/policy.json
>>> 2013-07-02 12:26:57DEBUG
>>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
>>> PLUMgrid_NOS_Server: 10.1.2.43 8080 DELETE
>>> 2013-07-02 12:26:57DEBUG
>>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
>>> PLUMgrid_NOS_Server Sending Data: {'Content-type': 'application/json',
>>> 'Accept': 'application/json'}
>>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Sending
>>> network.delete.end on notifications.info 
>>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID
>>> is ad8c5a233bd6403ea850cde73afb720a.
>>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Making
>>> asynchronous fanout cast...
>>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID
>>> is dcba5e2b55bb4cb89aab17f5797882e6.
>>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
>>> Authenticating user token
>>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token] Removing
>>> headers from request environment:
>>> X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Pr
>>> oject-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id
>>> ,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Na
>>> me,X-Tenant,X-Role
>>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token] Storing
>>> dd0bd438481c2d010d1abc5903fa11da token in memcache
>>> 2013-07-02 12:27:23DEBUG [routes.middleware] No route matched for GET
>>> /subnets.json
>>> 2013-07-02 12:27:23DEBUG [routes.middleware] Matched GET /subnets.json
>>> 2013-07-02 12:27:23DEBUG [routes.middlewar

[openstack-dev] trove and heat integration status

2013-07-02 Thread Michael Basnight
Howdy,

one of the TC requests for integration of trove was to integrate heat. While 
this is a small task for single-instance installations, it seems a bit more 
painful once we get into clustering. I'd like to submit the following as a 
place to start the discussion on why we would/wouldn't integrate heat (now). 
This is, in NO WAY, to say we will not integrate heat. It's just a matter of 
timing and requirements for our 'soon to be' cluster API. I am, however, 
targeting getting trove to work in an RPM environment, as it is tied to apt 
currently.

1) Companies who are looking at trove are not yet looking at heat, and a hard 
dependency might stifle growth of the product initially
• CERN
2) homogeneous LaunchConfiguration
• a database cluster is heterogeneous
• Our cluster configuration will need to specify different-sized 
slaves, and allow a customer to upgrade a single slave's configuration
• heat said if this is something that has a good use case, they could 
potentially make it happen (not sure of timeframe)
3) have to modify template to scale out
• This is doable but will require hacking a template in code and pushing 
that template (see the sketch after this list)
• I assume removing a slave will require the same finagling of the 
template
• I understand that a better version of this is coming (not sure of 
timeframe)
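
To make "hacking a template in code" concrete, here is a rough sketch of what
adding a differently sized slave would look like (resource names, properties
and the file path are made up for illustration; the real template layout and
the push step would differ):

import copy
import json

# load the current cluster template (path is illustrative)
with open('trove_cluster.template') as f:
    template = json.load(f)

# clone an existing slave resource and give the copy its own flavor,
# since each slave may need a different size
slave = copy.deepcopy(template['Resources']['DBSlave0'])
slave['Properties']['InstanceType'] = 'm1.large'
template['Resources']['DBSlave1'] = slave

# the modified template then has to be pushed back to heat as a stack update
print(json.dumps(template, indent=2))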

I will be tracking the following at [1]

[1] https://wiki.openstack.org/wiki/Trove/HeatIntegration
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPNaaS

2013-07-02 Thread Nachi Ueno
Hi Tatiana

Cool.  Thanks!!

2013/7/2 Tatiana Mazur :
> Hello Nachi,
>
> VPNaaS UI code is here:
> https://review.openstack.org/#/c/34882/
>
> I edited the "HowToInstall" page: an instruction and a link to the user
> scenarios (https://wiki.openstack.org/wiki/Neutron/VPNaaS/UI) have been added.
>
> --
> Kind regards,
> Tatiana
>
>
> On Mon, Jul 1, 2013 at 9:13 PM, Nachi Ueno  wrote:
>>
>> Hi Tatiana
>>
>> Cool!
>> Could you share the code on gerrit?
>>
>> The first driver can be testable now
>> https://wiki.openstack.org/wiki/Quantum/VPNaaS/HowToInstall
>> Could you add instructions for using your UI on this page?
>>
>> Nice work!
>> Best
>> Nachi
>>
>>
>>
>>
>> 2013/6/30 Tatiana Mazur :
>> > Hello,
>> >
>> > I have finished the prototype of VPNaaS UI. The corresponding blueprint
>> > is
>> > here: https://blueprints.launchpad.net/horizon/+spec/vpnaas-ui. Since
>> > that's
>> > a prototype, the code will be polished and some features will be added
>> > later
>> > when dependencies are merged. Unit tests are also to be added. For now
>> > 'Create' and 'Delete' options are implemented for VPNService, IKEPolicy,
>> > IPSecPolicy and VPNConnection. 'Update' actions are to be added (I think
>> > I'll add them in a separate patch set in order not to overcomplicate
>> > this
>> > one).
>> >
>> > --
>> > Kind regards,
>> > Tatiana
>> >
>> >
>> > On Wed, May 15, 2013 at 10:14 PM, Nachi Ueno  wrote:
>> >>
>> >> Hi Ilya
>> >>
>> >> Wow. Sounds Great!
>> >> Thank you for your contribution.
>> >>
>> >> Best
>> >> Nachi
>> >>
>> >>
>> >>
>> >> 2013/5/15 Ilya Shakhat :
>> >> > Hi Nachi,
>> >> >
>> >> > Tatyana and me volunteer for work on UI for VPNaaS. The corresponding
>> >> > bp
>> >> > is
>> >> > https://blueprints.launchpad.net/horizon/+spec/vpnaas-ui. We will
>> >> > start
>> >> > filling the specification soon.
>> >> >
>> >> > Thanks,
>> >> > Ilya
>> >> >
>> >> >
>> >> > 2013/5/15 Nachi Ueno 
>> >> >>
>> >> >> Hi Folks
>> >> >>
>> >> >> We had VPN meetings yesterday.
>> >> >>
>> >> >> Agenda :
>> >> >> 1.  local_subnet vs local_cidr  --> Keep discussion
>> >> >> 2.  Use cidr value or subnet_id?  --> Keep discussion
>> >> >> 3.  Task assignment
>> >> >>   -  move doc to wiki (Swami) Done
>> >> >> https://wiki.openstack.org/wiki/Quantum/VPNaaS
>> >> >>   -  Register BP and get approval by Mark (Swami) Done -> H2
>> >> >>   -  check default value for lifetime value (Swami) Done
>> >> >>   -  Implement Data Model (Swami will push code to the gerrit) by
>> >> >> 5/20
>> >> >>   -  CLI (python-quantum client) work (Swami will push code to the
>> >> >> gerrit) by 5/20
>> >> >>   -  Implement Driver (Nachi & PCM ) by 5/31
>> >> >>  - Investigate strongswan
>> >> >>  -  rpc (spec needed)
>> >> >>  - Design driver architecture (spec needed)
>> >> >>  - Write driver code
>> >> >>   - Installation instructions on Wiki 5/31
>> >> >>   -  Devstack support (nati) late June?
>> >> >>   -  Write openstack network api document wiki (Sachin)
>> >> >>   -  Horizon work (needs contributor)
>> >> >>   -  Tempest (needs contributor)
>> >> >>
>> >> >> Next meeting is 5/16 Thursday at 3pm (PST) . On IRC
>> >> >> #openstack-meetings
>> >> >>
>> >> >> Meeting ended Tue May 14 01:00:58 2013 UTC.  Information about
>> >> >> MeetBot
>> >> >> at http://wiki.debian.org/MeetBot . (v 0.1.4)
>> >> >> Minutes:
>> >> >>
>> >> >>
>> >> >>
>> >> >> http://eavesdrop.openstack.org/meetings/openstack_networking_vpn/2013/openstack_networking_vpn.2013-05-14-00.06.html
>> >> >> Minutes (text):
>> >> >>
>> >> >>
>> >> >>
>> >> >> http://eavesdrop.openstack.org/meetings/openstack_networking_vpn/2013/openstack_networking_vpn.2013-05-14-00.06.txt
>> >> >> Log:
>> >> >>
>> >> >>
>> >> >>
>> >> >> http://eavesdrop.openstack.org/meetings/openstack_networking_vpn/2013/openstack_networking_vpn.2013-05-14-00.06.log.htm
>> >> >>
>> >> >> Thanks!
>> >> >> Nachi Ueno
>> >> >>
>> >> >> 2013/5/10 Nachi Ueno :
>> >> >> > Hi Paul
>> >> >> >
>> >> >> > Thanks for your contributions! :)
>> >> >> >
>> >> >> > Nachi
>> >> >> >
>> >> >> > 2013/5/10 Paul Michali :
>> >> >> >> Sure! Glad to work with you Nachi. Anything I can do to help out
>> >> >> >> on
>> >> >> >> the
>> >> >> >> project!
>> >> >> >>
>> >> >> >> I'll start looking at strongswan and how to configure.
>> >> >> >>
>> >> >> >>
>> >> >> >> Regards,
>> >> >> >>
>> >> >> >> PCM (Paul Michali)
>> >> >> >>
>> >> >> >>
>> >> >> >> On May 10, 2013, at 12:35 PM, Nachi Ueno wrote:
>> >> >> >>
>> >> >> >> Hi Paul
>> >> >> >>
>> >> >> >> Sounds Great.
>> >> >> >>
>> >> >> >> The first driver will be strong-swan based.
>> >> >> >> http://www.strongswan.org/
>> >> >> >>
>> >> >> >> How about working with me to implement the strong-swan VPN driver?
>> >> >> >> Honestly, I'm new to strong-swan, so I would really appreciate it if
>> >> >> >> you could try strong-swan on Ubuntu and share how to configure it
>> >> >> >> based on the current API model.
>> >> >> >>
>> >> >> >> Thanks
>> >> >> >> Nachi
>> >> >> >>
>> >> >> >>
>> >> >> >>
>

Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-02 Thread Salvatore Orlando
Kyle,

I actually meant that this problem might occur if patchset 2, that you're
trying to push, is a rebase of patchset 1 on top of another patch in order
to make the commit dependent on another one. If that is the case, gerrit
won't see any difference between patchset2 and patchset1, as git diff
HEAD~1 would be the same in both cases.

If this however is not your problem just disregard this post.

Salvatore


On 2 July 2013 23:29, Kyle Mestery (kmestery)  wrote:

> On Jul 2, 2013, at 4:18 PM, Salvatore Orlando  wrote:
> >
> > Kyle,
> >
> > is this commit basically a rebase on top of 91e0850?
> > In that case the diff with the previous patchset would be empty.
> > I recall I had a similar issue; I just tweaked a comment line in my
> commit to let gerrit think it was a different patchset.
> >
> > Salvatore
> >
> Hi Salvatore:
>
> Actually, no. 91e0850 is actually the implementation for bp/ml2-gre,
> and my commit implements bp/ml2-vxlan. Since there was some
> shared code, the first commit implemented that, thus requiring my
> commit to be dependent on this other commit.
>
> Thanks,
> Kyle
>
> >
> > On 2 July 2013 23:05, Kyle Mestery (kmestery) 
> wrote:
> > On Jul 2, 2013, at 3:52 PM, Jeremy Stanley 
> >  wrote:
> > > On 2013-07-02 20:14:35 + (+), Kyle Mestery (kmestery) wrote:
> > >> I'm trying to submit a gerrit review for a commit which is
> > >> dependent on another person's commit [1].
> > > [...]
> > >> ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no
> changes made)
> > > [...]
> > >
> > > I want to say I've seen this recently when someone copy-pasted from
> > > one commit message to another and included an existing Change-Id
> > > header, so you might amend and clear that line from the top commit
> > > before reviewing again (it'll get regenerated fresh).
> > >
> > Thanks Jeremy, this was it! For some reason, the top commit had the
> > same Change-ID as the bottom commit. Once I cleared it, I was able to
> > push, though with the issue seen here:
> >
> > [kmestery@fedora-mac quantum]$ git review -R
> > You have more than one commit that you are about to submit.
> > The outstanding commits are:
> >
> > 6cca2cf (HEAD, bp/ml2-vxlan) Add VXLAN tunneling support for the ML2
> plugin
> > 91e0850 Add gre tunneling support for the ML2 plugin
> >
> > Is this really what you meant to do?
> > Type 'yes' to confirm: yes
> > Enter passphrase for key '/home/kmestery/.ssh/id_rsa':
> > remote: Resolving deltas: 100% (32/32)
> > remote: Processing changes: new: 1, updated: 1, refs: 1, done
> > remote:
> > remote: New Changes:
> > remote:   https://review.openstack.org/35384
> > remote:
> > To ssh://mest...@review.openstack.org:29418/openstack/quantum.git
> >  ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no
> changes made)
> > error: failed to push some refs to 'ssh://
> mest...@review.openstack.org:29418/openstack/quantum.git'
> > [kmestery@fedora-mac quantum]$
> >
> > In addition, my commit seen at the review URL above does not show
> > the dependency. Any ideas now?
> >
> > Thanks!
> > Kyle
> >
> > > If that doesn't work, could you push your working branch to
> > > somewhere I could pull from so I can test it myself? Feel free to
> > > follow up with me in private or open a bug against git-review on
> > > Launchpad if you don't want to run through troubleshooting
> > > back-and-forth on the list.
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-02 Thread Kyle Mestery (kmestery)
On Jul 2, 2013, at 4:18 PM, Salvatore Orlando  wrote:
> 
> Kyle,
> 
> is this commit basically a rebase on top of 91e0850?
> In that case the diff with the previous patchset would be empty.
> I recall I had a similar issue; I just tweaked a comment line in my commit to 
> let gerrit think it was a different patchset.
> 
> Salvatore
> 
Hi Salvatore:

Actually, no. 91e0850 is actually the implementation for bp/ml2-gre,
and my commit implements bp/ml2-vxlan. Since there was some
shared code, the first commit implemented that, thus requiring my
commit to be dependent on this other commit.

Thanks,
Kyle

> 
> On 2 July 2013 23:05, Kyle Mestery (kmestery)  wrote:
> On Jul 2, 2013, at 3:52 PM, Jeremy Stanley 
>  wrote:
> > On 2013-07-02 20:14:35 + (+), Kyle Mestery (kmestery) wrote:
> >> I'm trying to submit a gerrit review for a commit which is
> >> dependent on another person's commit [1].
> > [...]
> >> ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes 
> >> made)
> > [...]
> >
> > I want to say I've seen this recently when someone copy-pasted from
> > one commit message to another and included an existing Change-Id
> > header, so you might amend and clear that line from the top commit
> > before reviewing again (it'll get regenerated fresh).
> >
> Thanks Jeremy, this was it! For some reason, the top commit had the
> same Change-ID as the bottom commit. Once I cleared it, I was able to
> push, though with the issue seen here:
> 
> [kmestery@fedora-mac quantum]$ git review -R
> You have more than one commit that you are about to submit.
> The outstanding commits are:
> 
> 6cca2cf (HEAD, bp/ml2-vxlan) Add VXLAN tunneling support for the ML2 plugin
> 91e0850 Add gre tunneling support for the ML2 plugin
> 
> Is this really what you meant to do?
> Type 'yes' to confirm: yes
> Enter passphrase for key '/home/kmestery/.ssh/id_rsa':
> remote: Resolving deltas: 100% (32/32)
> remote: Processing changes: new: 1, updated: 1, refs: 1, done
> remote:
> remote: New Changes:
> remote:   https://review.openstack.org/35384
> remote:
> To ssh://mest...@review.openstack.org:29418/openstack/quantum.git
>  ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes 
> made)
> error: failed to push some refs to 
> 'ssh://mest...@review.openstack.org:29418/openstack/quantum.git'
> [kmestery@fedora-mac quantum]$
> 
> In addition, my commit seen at the review URL above does not show
> the dependency. Any ideas now?
> 
> Thanks!
> Kyle
> 
> > If that doesn't work, could you push your working branch to
> > somewhere I could pull from so I can test it myself? Feel free to
> > follow up with me in private or open a bug against git-review on
> > Launchpad if you don't want to run through troubleshooting
> > back-and-forth on the list.
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] does delete_network call delete_subnet automatically?

2013-07-02 Thread Aaron Rosen
Yes, I think this is the desired behavior. If someone deletes a network we
check to see if there are any ports on the network. If there are ports on
the network we raise. If there are no ports on the network we allow it to
be deleted. Since you cannot have a subnet without a network we should
delete that then too. I don't see any reason to complicate things by
forcing the user to delete the subnets first.
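
In rough code terms that behaviour is something like the following (a sketch
only, not the actual db_base implementation; the helper calls and the
exception are assumed from the v2 plugin API):

from quantum.common import exceptions as q_exc

def delete_network(self, context, id):
    # a network that still has ports attached cannot be deleted
    ports = self.get_ports(context, filters={'network_id': [id]})
    if ports:
        raise q_exc.NetworkInUse(net_id=id)
    # with no ports left the subnets cannot be in use either, so remove
    # them together with the network itself
    for subnet in self.get_subnets(context, filters={'network_id': [id]}):
        self.delete_subnet(context, subnet['id'])
    # ... then delete the network record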

Aaron


On Tue, Jul 2, 2013 at 2:04 PM, Edgar Magana  wrote:

> Before filing a bug, do we really want this kind of functionality?
> Is it "correct" to delete a network without really checking if the owner
> really wants to delete all subnets associated with it?
>
> Edgar
>
> From: Aaron Rosen 
> Reply-To: OpenStack List 
> Date: Tuesday, July 2, 2013 1:55 PM
>
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron] does delete_network call
> delete_subnet automatically?
>
> Good point. We should be calling delete_subnet() from delete_network() in
> the db_base class rather than deleting it directly from the database.
>
> Aaron
>
>
> On Tue, Jul 2, 2013 at 1:39 PM, Edgar Magana  wrote:
>
>> If the plugin performs operations when the subnet is created, how is it
>> possible to roll back those operations if the plugin implementation of
>> delete_subnet() is never called?
>> I don't think we should let delete_network in db_base_plugin_v2.py delete
>> all subnets; we could just ask the tenant user to delete all subnets first.
>> Is there any specific reason why we automatically delete all subnets?
>>
>> Edgar
>>
>> From: Aaron Rosen 
>> Reply-To: OpenStack List 
>> Date: Tuesday, July 2, 2013 12:40 PM
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [Neutron] does delete_network call
>> delete_subnet automatically?
>>
>> delete_network() in the db_base class handles deleting the subnets
>> associated with the networks.
>>
>>
>> https://github.com/openstack/quantum/blob/master/quantum/db/db_base_plugin_v2.py#L1033
>>
>>
>> On Tue, Jul 2, 2013 at 12:31 PM, Edgar Magana wrote:
>>
>>> Folks,
>>>
>>> When I create a network and a subnet associated to that network, I am
>>> able to delete the network without deleting the subnet first from both CLI
>>> and Horizon.
>>> The difference is that in Horizon, both APIs are called: delete_subnet()
>>> and delete_network()
>>> When I tried by CLI, only delete_network is called as you can see in
>>> these logs:
>>>
>>> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
>>> /etc/quantum/policy.json
>>> 2013-07-02 12:26:57DEBUG
>>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin]
>>> QuantumPluginPLUMgrid Status: delete_network() called
>>> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
>>> /etc/quantum/policy.json
>>> 2013-07-02 12:26:57DEBUG
>>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
>>> PLUMgrid_NOS_Server: 10.1.2.43 8080 DELETE
>>> 2013-07-02 12:26:57DEBUG
>>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
>>> PLUMgrid_NOS_Server Sending Data: {'Content-type': 'application/json',
>>> 'Accept': 'application/json'}
>>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Sending
>>> network.delete.end on notifications.info
>>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp]
>>> UNIQUE_ID is ad8c5a233bd6403ea850cde73afb720a.
>>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Making
>>> asynchronous fanout cast...
>>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp]
>>> UNIQUE_ID is dcba5e2b55bb4cb89aab17f5797882e6.
>>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
>>> Authenticating user token
>>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
>>> Removing headers from request environment:
>>> X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
>>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
>>> Storing dd0bd438481c2d010d1abc5903fa11da token in memcache
>>> 2013-07-02 12:27:23DEBUG [routes.middleware] No route matched for
>>> GET /subnets.json
>>> 2013-07-02 12:27:23DEBUG [routes.middleware] Matched GET
>>> /subnets.json
>>> 2013-07-02 12:27:23DEBUG [routes.middleware] Route path:
>>> '/subnets{.format}', defaults: {'action': u'index', 'controller': >> at 37764240 wrapping >}
>>> 2013-07-02 12:27:23DEBUG [routes.middleware] Match dict: {'action':
>>> u'index', 'controller': >>
>>> However, the subnet is not in the DB but the delete_subnet API is never
>>> called, can somebody explain what is happening here?
>>> BTW. This is Grizzly release
>>>
>>> Thanks,
>>>
>>> Edgar
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/c

Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-02 Thread Salvatore Orlando
Kyle,

is this commit basically a rebase on top of 91e0850?
In that case the diff with the previous patchset would be empty.
I recall I had a similar issue; I just tweaked a comment line in my commit
to let gerrit think it was a different patchset.

Salvatore


On 2 July 2013 23:05, Kyle Mestery (kmestery)  wrote:

> On Jul 2, 2013, at 3:52 PM, Jeremy Stanley 
>  wrote:
> > On 2013-07-02 20:14:35 + (+), Kyle Mestery (kmestery) wrote:
> >> I'm trying to submit a gerrit review for a commit which is
> >> dependent on another person's commit [1].
> > [...]
> >> ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no
> changes made)
> > [...]
> >
> > I want to say I've seen this recently when someone copy-pasted from
> > one commit message to another and included an existing Change-Id
> > header, so you might amend and clear that line from the top commit
> > before reviewing again (it'll get regenerated fresh).
> >
> Thanks Jeremy, this was it! For some reason, the top commit had the
> same Change-ID as the bottom commit. Once I cleared it, I was able to
> push, though with the issue seen here:
>
> [kmestery@fedora-mac quantum]$ git review -R
> You have more than one commit that you are about to submit.
> The outstanding commits are:
>
> 6cca2cf (HEAD, bp/ml2-vxlan) Add VXLAN tunneling support for the ML2 plugin
> 91e0850 Add gre tunneling support for the ML2 plugin
>
> Is this really what you meant to do?
> Type 'yes' to confirm: yes
> Enter passphrase for key '/home/kmestery/.ssh/id_rsa':
> remote: Resolving deltas: 100% (32/32)
> remote: Processing changes: new: 1, updated: 1, refs: 1, done
> remote:
> remote: New Changes:
> remote:   https://review.openstack.org/35384
> remote:
> To ssh://mest...@review.openstack.org:29418/openstack/quantum.git
>  ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes
> made)
> error: failed to push some refs to 'ssh://
> mest...@review.openstack.org:29418/openstack/quantum.git'
> [kmestery@fedora-mac quantum]$
>
> In addition, my commit seen at the review URL above does not show
> the dependency. Any ideas now?
>
> Thanks!
> Kyle
>
> > If that doesn't work, could you push your working branch to
> > somewhere I could pull from so I can test it myself? Feel free to
> > follow up with me in private or open a bug against git-review on
> > Launchpad if you don't want to run through troubleshooting
> > back-and-forth on the list.
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-02 Thread Kyle Mestery (kmestery)
On Jul 2, 2013, at 3:52 PM, Jeremy Stanley 
 wrote:
> On 2013-07-02 20:14:35 + (+), Kyle Mestery (kmestery) wrote:
>> I'm trying to submit a gerrit review for a commit which is
>> dependent on another person's commit [1].
> [...]
>> ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes 
>> made)
> [...]
> 
> I want to say I've seen this recently when someone copy-pasted from
> one commit message to another and included an existing Change-Id
> header, so you might amend and clear that line from the top commit
> before reviewing again (it'll get regenerated fresh).
> 
Thanks Jeremy, this was it! For some reason, the top commit had the
same Change-ID as the bottom commit. Once I cleared it, I was able to
push, though with the issue seen here:

[kmestery@fedora-mac quantum]$ git review -R
You have more than one commit that you are about to submit.
The outstanding commits are:

6cca2cf (HEAD, bp/ml2-vxlan) Add VXLAN tunneling support for the ML2 plugin
91e0850 Add gre tunneling support for the ML2 plugin

Is this really what you meant to do?
Type 'yes' to confirm: yes
Enter passphrase for key '/home/kmestery/.ssh/id_rsa': 
remote: Resolving deltas: 100% (32/32)
remote: Processing changes: new: 1, updated: 1, refs: 1, done
remote: 
remote: New Changes:
remote:   https://review.openstack.org/35384
remote: 
To ssh://mest...@review.openstack.org:29418/openstack/quantum.git
 ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes made)
error: failed to push some refs to 
'ssh://mest...@review.openstack.org:29418/openstack/quantum.git'
[kmestery@fedora-mac quantum]$ 

In addition, my commit seen at the review URL above does not show
the dependency. Any ideas now?

Thanks!
Kyle

> If that doesn't work, could you push your working branch to
> somewhere I could pull from so I can test it myself? Feel free to
> follow up with me in private or open a bug against git-review on
> Launchpad if you don't want to run through troubleshooting
> back-and-forth on the list.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] does delete_network call delete_subnet automatically?

2013-07-02 Thread Edgar Magana
Before filing a bug, do we really want this kind of functionality?
Is it "correct" to delete a network without really checking if the owner
really wants to delete all subnets associated with it?

Edgar

From:  Aaron Rosen 
Reply-To:  OpenStack List 
Date:  Tuesday, July 2, 2013 1:55 PM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Neutron] does delete_network call
delete_subnet automatically?

Good point. We should be calling delete_subnet() from delete_network() in
the db_base class rather than deleting it directly from the database.

Aaron


On Tue, Jul 2, 2013 at 1:39 PM, Edgar Magana  wrote:
> If the plugin performs operations when the subnet is created, how is it
> possible to roll back those operations if the plugin implementation of
> delete_subnet() is never called?
> I don't think we should let delete_network in db_base_plugin_v2.py delete
> all subnets; we could just ask the tenant user to delete all subnets first.
> Is there any specific reason why we automatically delete all subnets?
> 
> Edgar
> 
> From:  Aaron Rosen 
> Reply-To:  OpenStack List 
> Date:  Tuesday, July 2, 2013 12:40 PM
> To:  OpenStack List 
> Subject:  Re: [openstack-dev] [Neutron] does delete_network call delete_subnet
> automatically?
> 
> delete_network() in the db_base class handles deleting the subnets associated
> with the networks.
> 
> https://github.com/openstack/quantum/blob/master/quantum/db/db_base_plugin_v2.
> py#L1033
> 
> 
> On Tue, Jul 2, 2013 at 12:31 PM, Edgar Magana  wrote:
>> Folks,
>> 
>> When I create a network and a subnet associated to that network, I am able to
>> delete the network without deleting the subnet first from both CLI and
>> Horizon.
>> The difference is that in Horizon, both APIs are called: delete_subnet() and
>> delete_network()
>> When I tried by CLI, only delete_network is called as you can see in these
>> logs:
>> 
>> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
>> /etc/quantum/policy.json
>> 2013-07-02 12:26:57DEBUG
>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin]
>> QuantumPluginPLUMgrid Status: delete_network() called
>> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
>> /etc/quantum/policy.json
>> 2013-07-02 12:26:57DEBUG
>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
>> PLUMgrid_NOS_Server: 10.1.2.43 8080 DELETE
>> 2013-07-02 12:26:57DEBUG
>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
>> PLUMgrid_NOS_Server Sending Data: {'Content-type': 'application/json',
>> 'Accept': 'application/json'}
>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Sending
>> network.delete.end on notifications.info 
>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is
>> ad8c5a233bd6403ea850cde73afb720a.
>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Making
>> asynchronous fanout cast...
>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is
>> dcba5e2b55bb4cb89aab17f5797882e6.
>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
>> Authenticating user token
>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token] Removing
>> headers from request environment:
>> X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Pro
>> ject-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X
>> -User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,
>> X-Tenant,X-Role
>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token] Storing
>> dd0bd438481c2d010d1abc5903fa11da token in memcache
>> 2013-07-02 12:27:23DEBUG [routes.middleware] No route matched for GET
>> /subnets.json
>> 2013-07-02 12:27:23DEBUG [routes.middleware] Matched GET /subnets.json
>> 2013-07-02 12:27:23DEBUG [routes.middleware] Route path:
>> '/subnets{.format}', defaults: {'action': u'index', 'controller': > 37764240 wrapping >}
>> 2013-07-02 12:27:23DEBUG [routes.middleware] Match dict: {'action':
>> u'index', 'controller': > 
>> However, the subnet is not in the DB but the delete_subnet API is never
>> called, can somebody explain what is happening here?
>> BTW. This is Grizzly release
>> 
>> Thanks,
>> 
>> Edgar
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> ___ OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___ OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack

Re: [openstack-dev] [Nova] Criteria for compute drivers

2013-07-02 Thread Alessandro Pilotti


On 02.07.2013, at 20:43, "Russell Bryant"  wrote:

> Greetings,
> 
> Nova includes various compute drivers today, but the test coverage they
> receive varies quite a bit.  This is documented on the following wiki
> page.  The drivers are broken up into groups A, B, and C.
> 
>https://wiki.openstack.org/wiki/HypervisorSupportMatrix
> 
> We have two new compute drivers in the queue for Havana: docker [1] and
> z/vm [2].  I'd like to propose as a piece of criteria for inclusion that
> new drivers go into groups A or B.
> 
> Further, I would like to see *all* drivers move into groups A or B by
> the release of Icehouse.  I've been told that this is already in the
> works for VMware and baremetal, at least.
> 

There's also work going on on the Hyper-V side.


> I feel like if there isn't enough interest and willingness to raise
> the bar on testing a given compute driver, then we're just wasting our
> time and effort having it in the tree.
> 
> Feedback welcome!
> 
> Thanks,
> 
> [1] https://blueprints.launchpad.net/nova/+spec/new-hypervisor-docker
> [2] https://blueprints.launchpad.net/nova/+spec/zvm-plugin
> 
> -- 
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] No meeting this week

2013-07-02 Thread Mark Washenberger
Hi folks,

Due to the number of core members on vacation, we will not be having a
glance team meeting this week.

We will pick up again for Thursday of next week (July 11 2013), at the 2000
UTC timeslot.

Thanks,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] does delete_network call delete_subnet automatically?

2013-07-02 Thread Aaron Rosen
Good point. We should be calling delete_subnet() from delete_network() in
the db_base class rather than deleting it directly from the database.

Aaron


On Tue, Jul 2, 2013 at 1:39 PM, Edgar Magana  wrote:

> If the plugin performs operations when the subnet is created, how is it
> possible to roll back those operations if the plugin implementation of
> delete_subnet() is never called?
> I don't think we should let delete_network in db_base_plugin_v2.py delete
> all subnets; we could just ask the tenant user to delete all subnets first.
> Is there any specific reason why we automatically delete all subnets?
>
> Edgar
>
> From: Aaron Rosen 
> Reply-To: OpenStack List 
> Date: Tuesday, July 2, 2013 12:40 PM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron] does delete_network call
> delete_subnet automatically?
>
> delete_network() in the db_base class handles deleting the subnets
> associated with the networks.
>
>
> https://github.com/openstack/quantum/blob/master/quantum/db/db_base_plugin_v2.py#L1033
>
>
> On Tue, Jul 2, 2013 at 12:31 PM, Edgar Magana wrote:
>
>> Folks,
>>
>> When I create a network and a subnet associated to that network, I am
>> able to delete the network without deleting the subnet first from both CLI
>> and Horizon.
>> The difference is that in Horizon, both APIs are called: delete_subnet()
>> and delete_network()
>> When I tried by CLI, only delete_network is called as you can see in
>> these logs:
>>
>> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
>> /etc/quantum/policy.json
>> 2013-07-02 12:26:57DEBUG
>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin]
>> QuantumPluginPLUMgrid Status: delete_network() called
>> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
>> /etc/quantum/policy.json
>> 2013-07-02 12:26:57DEBUG
>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
>> PLUMgrid_NOS_Server: 10.1.2.43 8080 DELETE
>> 2013-07-02 12:26:57DEBUG
>> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
>> PLUMgrid_NOS_Server Sending Data: {'Content-type': 'application/json',
>> 'Accept': 'application/json'}
>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Sending
>> network.delete.end on notifications.info
>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp]
>> UNIQUE_ID is ad8c5a233bd6403ea850cde73afb720a.
>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Making
>> asynchronous fanout cast...
>> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp]
>> UNIQUE_ID is dcba5e2b55bb4cb89aab17f5797882e6.
>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
>> Authenticating user token
>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
>> Removing headers from request environment:
>> X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
>> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
>> Storing dd0bd438481c2d010d1abc5903fa11da token in memcache
>> 2013-07-02 12:27:23DEBUG [routes.middleware] No route matched for GET
>> /subnets.json
>> 2013-07-02 12:27:23DEBUG [routes.middleware] Matched GET /subnets.json
>> 2013-07-02 12:27:23DEBUG [routes.middleware] Route path:
>> '/subnets{.format}', defaults: {'action': u'index', 'controller': > at 37764240 wrapping >}
>> 2013-07-02 12:27:23DEBUG [routes.middleware] Match dict: {'action':
>> u'index', 'controller': >
>> However, the subnet is not in the DB but the delete_subnet API is never
>> called, can somebody explain what is happening here?
>> BTW. This is Grizzly release
>>
>> Thanks,
>>
>> Edgar
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___ OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-02 Thread Jeremy Stanley
On 2013-07-02 20:14:35 + (+), Kyle Mestery (kmestery) wrote:
> I'm trying to submit a gerrit review for a commit which is
> dependent on another person's commit [1].
[...]
>  ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes 
> made)
[...]

I want to say I've seen this recently when someone copy-pasted from
one commit message to another and included an existing Change-Id
header, so you might amend and clear that line from the top commit
before reviewing again (it'll get regenerated fresh).

If that doesn't work, could you push your working branch to
somewhere I could pull from so I can test it myself? Feel free to
follow up with me in private or open a bug against git-review on
Launchpad if you don't want to run through troubleshooting
back-and-forth on the list.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-02 Thread Aaron Rosen
Hi Kyle,

I wonder if it works if you drop the -R ?

I've seen this error:

 ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes
made)

if I try to push a change that has already been pushed and nothing has
changed. Any chance that may have happened?

Best,

Aaron



On Tue, Jul 2, 2013 at 1:14 PM, Kyle Mestery (kmestery)
wrote:

> I'm trying to submit a gerrit review for a commit which is dependent on
> another person's commit [1]. I've followed the instructions here [2], and
> had no problems doing the branch create and the cherry pick of both the
> dependent and my own commits. However, when I go to commit, I get a gerrit
> error. Trying again yielded another commit going to the dependent commit,
> likely because I had to rebase both my change and the dependent commit. Has
> anyone seen anything like this? The error I see is below:
>
> [kmestery@fedora-mac quantum]$ git review -R
> You have more than one commit that you are about to submit.
> The outstanding commits are:
>
> 5929516 (HEAD, bp/ml2-vxlan) Add VXLAN tunneling support for the ML2 plugin
> 91e0850 Add gre tunneling support for the ML2 plugin
>
> Is this really what you meant to do?
> Type 'yes' to confirm: yes
> Enter passphrase for key '/home/kmestery/.ssh/id_rsa':
> remote: Resolving deltas: 100% (32/32)
> remote: Processing changes: updated: 1, refs: 2, done
> To ssh://mest...@review.openstack.org:29418/openstack/quantum.git
>  ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes
> made)
> error: failed to push some refs to 'ssh://
> mest...@review.openstack.org:29418/openstack/quantum.git'
> [kmestery@fedora-mac quantum]$
>
>
> Thanks,
> Kyle
>
> [1] https://review.openstack.org/#/c/33297/
> [2] https://wiki.openstack.org/wiki/Gerrit_Workflow
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] does delete_network call delete_subnet automatically?

2013-07-02 Thread Edgar Magana
If the plugin performs operations when the subnet is created, how is it
possible to roll back those operations if the plugin implementation of
delete_subnet() is never called?
I don't think we should let delete_network in db_base_plugin_v2.py delete all
subnets; we could just ask the tenant user to delete all subnets first. Is
there any specific reason why we automatically delete all subnets?

Edgar

From:  Aaron Rosen 
Reply-To:  OpenStack List 
Date:  Tuesday, July 2, 2013 12:40 PM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Neutron] does delete_network call
delete_subnet automatically?

delete_network() in the db_base class handles deleting the subnets
associated with the networks.

https://github.com/openstack/quantum/blob/master/quantum/db/db_base_plugin_v
2.py#L1033


On Tue, Jul 2, 2013 at 12:31 PM, Edgar Magana  wrote:
> Folks,
> 
> When I create a network and a subnet associated to that network, I am able to
> delete the network without deleting the subnet first from both CLI and
> Horizon.
> The difference is that in Horizon, both APIs are called: delete_subnet() and
> delete_network()
> When I tried by CLI, only delete_network is called as you can see in these
> logs:
> 
> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
> /etc/quantum/policy.json
> 2013-07-02 12:26:57DEBUG
> [quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin]
> QuantumPluginPLUMgrid Status: delete_network() called
> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
> /etc/quantum/policy.json
> 2013-07-02 12:26:57DEBUG
> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
> PLUMgrid_NOS_Server: 10.1.2.43 8080 DELETE
> 2013-07-02 12:26:57DEBUG
> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
> PLUMgrid_NOS_Server Sending Data: {'Content-type': 'application/json',
> 'Accept': 'application/json'}
> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Sending
> network.delete.end on notifications.info 
> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is
> ad8c5a233bd6403ea850cde73afb720a.
> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Making
> asynchronous fanout cast...
> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is
> dcba5e2b55bb4cb89aab17f5797882e6.
> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
> Authenticating user token
> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token] Removing
> headers from request environment:
> X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Proj
> ect-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-U
> ser-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-T
> enant,X-Role
> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token] Storing
> dd0bd438481c2d010d1abc5903fa11da token in memcache
> 2013-07-02 12:27:23DEBUG [routes.middleware] No route matched for GET
> /subnets.json
> 2013-07-02 12:27:23DEBUG [routes.middleware] Matched GET /subnets.json
> 2013-07-02 12:27:23DEBUG [routes.middleware] Route path:
> '/subnets{.format}', defaults: {'action': u'index', 'controller':  37764240 wrapping >}
> 2013-07-02 12:27:23DEBUG [routes.middleware] Match dict: {'action':
> u'index', 'controller':  
> However, the subnet is not in the DB but the delete_subnet API is never
> called, can somebody explain what is happening here?
> BTW. This is Grizzly release
> 
> Thanks,
> 
> Edgar
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___ OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Jarret Raim
On 7/2/13 12:43 PM, "Simo Sorce"  wrote:


>On Tue, 2013-07-02 at 16:55 +, Tiwari, Arvind wrote:
>> Hi Simo,
>> 
>> I am lost.
>>  
>> Is Barbican a product that came out of the
>> https://wiki.openstack.org/wiki/KeyManager BP?
>
>Yes Barbican is an implementation of this Blueprint afaik.

Barbican is based on the goals of this blueprint. We revised a lot of it,
but the goals are the same. The current documentation can be found here:

https://github.com/cloudkeep/barbican/wiki

https://github.com/cloudkeep/barbican


>
>> If yes, then why is it deviating from the BP, which says the Key Manager
>> will be a separate service and not a part of Keystone?
>
>Sorry I don't follow, Barbican is separated from Keystone.

Correct. Barbican is a separate service. We use Keystone for auth
(obviously), but we have our own infrastructure.


Jarret


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Issues with "git review" for a dependent commit

2013-07-02 Thread Kyle Mestery (kmestery)
I'm trying to submit a gerrit review for a commit which is dependent on another 
person's commit [1]. I've followed the instructions here [2], and had no 
problems doing the branch create and the cherry pick of both the dependent and 
my own commits. However, when I go to commit, I get a gerrit error. Trying 
again yielded another commit going to the dependent commit, likely because I 
had to rebase both my change and the dependent commit. Has anyone seen anything 
like this? The error I see is below:

[kmestery@fedora-mac quantum]$ git review -R
You have more than one commit that you are about to submit.
The outstanding commits are:

5929516 (HEAD, bp/ml2-vxlan) Add VXLAN tunneling support for the ML2 plugin
91e0850 Add gre tunneling support for the ML2 plugin

Is this really what you meant to do?
Type 'yes' to confirm: yes
Enter passphrase for key '/home/kmestery/.ssh/id_rsa': 
remote: Resolving deltas: 100% (32/32)
remote: Processing changes: updated: 1, refs: 2, done
To ssh://mest...@review.openstack.org:29418/openstack/quantum.git
 ! [remote rejected] HEAD -> refs/publish/master/bp/ml2-vxlan (no changes made)
error: failed to push some refs to 
'ssh://mest...@review.openstack.org:29418/openstack/quantum.git'
[kmestery@fedora-mac quantum]$ 


Thanks,
Kyle

[1] https://review.openstack.org/#/c/33297/
[2] https://wiki.openstack.org/wiki/Gerrit_Workflow
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch

2013-07-02 Thread Doug Hellmann
On Tue, Jul 2, 2013 at 2:30 PM, Monty Taylor  wrote:

>
>
> On 07/02/2013 05:46 AM, Doug Hellmann wrote:
> >
> >
> >
> > On Tue, Jul 2, 2013 at 5:52 AM, Robert Collins
> > mailto:robe...@robertcollins.net>> wrote:
> >
> > On 2 July 2013 21:32, Thierry Carrez  > > wrote:
> > > Thierry Carrez wrote:
> > >> """
> > >> 'OpenStack Programs' are efforts which are essential to the
> > completion
> > >> of our mission. Programs can create any code repository and
> > produce any
> > >> deliverable they deem necessary to achieve their goals.
> > >>
> > >> Programs are placed under the oversight of the Technical
> > Committee, and
> > >> contributing to one of their code repositories grants you ATC
> status.
> > >>
> > >> Current efforts or teams which want to be recognized as an
> 'OpenStack
> > >> Program' should place a request to the Technical Committee,
> > including a
> > >> clear mission statement describing how they help the OpenStack
> > general
> > >> mission and how that effort is essential to the completion of our
> > >> mission. If programs have a goal that includes the production of
> > >> a server 'integrated' deliverable, that specific project would
> still
> > >> need to go through an Incubation period.
> > >>
> > >> The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron',
> > >> 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer',
> > 'Documentation',
> > >> 'Infrastructure', 'QA' and 'Oslo'. 'Trove' and 'Ironic' are in
> > >> incubation. Those programs should retroactively submit a mission
> > >> statement and initial lead designation, if they don't have one
> > already.
> > >> """
> > >
> > > Oops. In this variant, Trove and Ironic, as programs, would not be
> "in
> > > incubation" (only one of their deliverables would). That last
> > paragraph
> > > should be fixed as:
> > >
> > > """
> > > The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron',
> > > 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer',
> > 'Documentation',
> > > 'Infrastructure', 'QA', 'Oslo', 'Trove' and 'Ironic'. Those
> programs
> > > should retroactively submit a mission statement and initial lead
> > > designation, if they don't have one already.
> > > """
> > >
> > > Maybe Ironic should be merged into the TripleO program when it's
> > considered.
> >
> > Certainly; with our focus on deploy and operations, Ironic is very
> > much something we'll care about forever :). OTOH, baremetal machine
> > provisioning is a distinct concern from OpenStack deployment and
> > operations. I don't know that there is a better place for Ironic;
> it's
> > certainly got significant tentacles into other areas than just Nova
> > [hence it being split out in the first place]. Nevertheless : clearly
> > Ironic is a Project, and Incubated. I think whether it is
> incorporated
> > into it's own Program, or TripleO, isn't a very interesting question.
> > ATC membership is decoupled from things now, so \o/.
> >
> > On proposal 3, I wonder if it makes things too vague : if a Program
> > can have one or more integrated Projects, it sort of suggests that
> > perhaps Neutron be a Project of the Nova Program?
> >
> >
> > I like option 3 because it lets us move ahead without having to revisit
> > what may just have been an unfortunate narrowness of vision in the
> > original charter (who knew we would grow so quickly?). We have been
> > letting the projects evolve around feature sets in a way that helps us
> > manage code and feature complexity, e.g. breaking networking and block
> > storage out of nova. The addition of programs as groups of one or more
> > projects is a natural way to manage changes in the community's size and
> > complexity as we continue to grow.
>
> I'm fine with this as long as a program can be a group of 0 or more
> projects. On the chance that we decide to use the concept to refer to
> horizontal efforts (I do not think we need to decide on that right now)
> I would hate to be hide-bound and exclude security or release or
> translations because they don't have their own repo or project deliverable.
>

Works for me.

Doug


>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Bryan D. Payne
> If you do not trust keystone to give you the right information you have
> already lost as keystone is used (afaik) to check for authorization
> anyway.
>

This is true.


> Can you be a little bit more explicit on the threat model you have in
> mind and what guarantees Barbican would give you that would make it more
> suitable to store public key than Keystone ?


I'm concerned about malicious tampering with the keys.  If the keys are
then used for validating that a user is presenting the correct private key,
this could result in an instance compromise.  Yes, if someone tampers with
other data in keystone then it could result in a compromise as well.  This
is true.

As I think about this some more, I think the best way to frame it is that
-- for me -- key data and user / password data are two different classes
that may have different security requirements.  It is nice to not mix the
two, IMHO.  However, I can appreciate the simplicity that comes with just
not using Barbican and throwing everything in Keystone.

With this in mind, I do like the idea of having Keystone return a pointer
to the key location as a URL.  This can be a ref back to a Keystone route,
or it can be a ref to a Barbican route.  This would be most flexible and
allow people to fulfill different security and auditing requirements.

-bryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Distro defaults [was Re: [oslo.config] Config files overriding CLI: The path of most surprise.]

2013-07-02 Thread Mark McLoughlin
On Tue, 2013-07-02 at 17:48 +, Jeremy Stanley wrote:
> On 2013-07-01 15:10:26 -0700 (-0700), Mark Washenberger wrote:
> [...]
> > The talk about permanence confuses me, unless we mean that more
> > permanent values are overridden by less permanent ones.
> [...]
> 
> I think the "permanence" counter argument (which I don't agree with,
> just recounting it for completeness) was that command-line arguments
> may be embedded in init scripts by some distributions and then
> administrators would be surprised when their modifications to the
> configuration files weren't respected.

Yes, that was what the "permanence" discussion related to. The example I
was thinking of was '--logfile /var/log/nova/api.log' which doesn't seem
like a ridiculous thing to pass via the command line.

Since we've clearly moved on, I'm not sure replaying old points is very
constructive, but you have hit on an interesting topic, so ... :)

> Ultimately, however, any time
> distribution defaults which could be set in packaged configuration
> are instead being set with the service command-line in packaged init
> scripts, I would tend to just consider that a (serious) packaging
> bug and certainly nothing we should be catering to as a project.

That's very ... stringent. But I do mostly agree. Distros shouldn't
stick a tonne of distro defaults on the command line of services.

The two principles that matter IMHO are:

  1) users should be able to override defaults

  2) if a user deletes their config file, they get back to the defaults

Something we've experimented with in Red Hat OpenStack is to put distro
defaults in e.g. /usr/share/nova/nova-dist.conf. You can see some of the
thinking here, for example:

  https://bugzilla.redhat.com/show_bug.cgi?id=887334#c4
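
For what it's worth, a minimal sketch of the idea (not the actual Nova startup
code). It assumes oslo.config's behaviour that when several config files are
given, values from later files override values from earlier ones, and it uses
the example paths above:

import sys

from oslo.config import cfg

CONF = cfg.CONF


def main():
    # Distro defaults first, admin-editable file last, so /etc/nova/nova.conf
    # overrides /usr/share/nova/nova-dist.conf, which overrides the defaults
    # baked into the code. Deleting nova.conf falls back to the dist defaults,
    # which keeps principle 2) above intact. An init script could equivalently
    # pass two --config-file arguments in the same order.
    CONF(sys.argv[1:],
         project='nova',
         default_config_files=['/usr/share/nova/nova-dist.conf',
                               '/etc/nova/nova.conf'])
    # ... start the service ...

Where explicit command-line arguments sit relative to those files is, of
course, exactly what the rest of this thread is debating.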

The alternative approach would be to patch the code with distro
defaults.

Now, you could say that distros shouldn't need to modify the defaults.
That's fair, but I don't think there's anything too crazy in our distro
defaults either:

  https://github.com/redhat-openstack/openstack-nova/blob/master/nova.conf

Anyone got other thoughts on how distros should handle this?

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday July 2nd at 19:00 UTC

2013-07-02 Thread Elizabeth Krumbach Joseph
On Tue, Jul 2, 2013 at 12:41 PM, Elizabeth Krumbach Joseph
 wrote:
> http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-02-19.03.log.htm

Lost a letter there:
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-02-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday July 2nd at 19:00 UTC

2013-07-02 Thread Elizabeth Krumbach Joseph
On Mon, Jul 1, 2013 at 9:33 AM, Elizabeth Krumbach Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting tomorrow, Tuesday July 2nd, at 19:00 UTC in #openstack-meeting

Meeting minutes and logs:

http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-02-19.03.html
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-02-19.03.txt
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-02-19.03.log.htm

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] does delete_network call delete_subnet automatically?

2013-07-02 Thread Aaron Rosen
delete_network() in the db_base class handles deleting the subnets
associated with the networks.

https://github.com/openstack/quantum/blob/master/quantum/db/db_base_plugin_v2.py#L1033
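
Roughly, the base plugin does something like the following -- a paraphrase for
illustration only (the real method at the link above also deals with ports and
other cleanup), with names taken from my reading of that file:

def delete_network(self, context, net_id):
    # Paraphrased sketch of QuantumDbPluginV2.delete_network(), not the
    # actual implementation.
    with context.session.begin(subtransactions=True):
        network = self._get_network(context, net_id)
        # Subnet rows are removed here, directly in the DB session, which is
        # why no separate delete_subnet() API call or notification shows up
        # in the logs when the network is deleted from the CLI.
        for subnet in list(network.subnets):
            context.session.delete(subnet)
        context.session.delete(network)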


On Tue, Jul 2, 2013 at 12:31 PM, Edgar Magana  wrote:

> Folks,
>
> When I create a network and a subnet associated to that network, I am able
> to delete the network without deleting the subnet first from both CLI and
> Horizon.
> The difference is that in Horizon, both APIs are called: delete_subnet()
> and delete_network()
> When I tried by CLI, only delete_network is called as you can see in these
> logs:
>
> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
> /etc/quantum/policy.json
> 2013-07-02 12:26:57DEBUG
> [quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin]
> QuantumPluginPLUMgrid Status: delete_network() called
> 2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
> /etc/quantum/policy.json
> 2013-07-02 12:26:57DEBUG
> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
> PLUMgrid_NOS_Server: 10.1.2.43 8080 DELETE
> 2013-07-02 12:26:57DEBUG
> [quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
> PLUMgrid_NOS_Server Sending Data: {'Content-type': 'application/json',
> 'Accept': 'application/json'}
> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Sending
> network.delete.end on notifications.info
> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID
> is ad8c5a233bd6403ea850cde73afb720a.
> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Making
> asynchronous fanout cast...
> 2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID
> is dcba5e2b55bb4cb89aab17f5797882e6.
> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
> Authenticating user token
> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
> Removing headers from request environment:
> X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
> 2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
> Storing dd0bd438481c2d010d1abc5903fa11da token in memcache
> 2013-07-02 12:27:23DEBUG [routes.middleware] No route matched for GET
> /subnets.json
> 2013-07-02 12:27:23DEBUG [routes.middleware] Matched GET /subnets.json
> 2013-07-02 12:27:23DEBUG [routes.middleware] Route path:
> '/subnets{.format}', defaults: {'action': u'index', 'controller':  at 37764240 wrapping >}
> 2013-07-02 12:27:23DEBUG [routes.middleware] Match dict: {'action':
> u'index', 'controller': 
> However, the subnet is not in the DB but the delete_subnet API is never
> called, can somebody explain what is happening here?
> BTW. This is Grizzly release
>
> Thanks,
>
> Edgar
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] does delete_network call delete_subnet automatically?

2013-07-02 Thread Edgar Magana
Folks,

When I create a network and a subnet associated to that network, I am able
to delete the network without deleting the subnet first from both CLI and
Horizon.
The difference is that in Horizon, both APIs are called: delete_subnet() and
delete_network()
When I tried by CLI, only delete_network is called as you can see in these
logs:

2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
/etc/quantum/policy.json
2013-07-02 12:26:57DEBUG
[quantum.plugins.plumgrid.plumgrid_nos_plugin.plumgrid_plugin]
QuantumPluginPLUMgrid Status: delete_network() called
2013-07-02 12:26:57DEBUG [quantum.policy] loading policy file at
/etc/quantum/policy.json
2013-07-02 12:26:57DEBUG
[quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
PLUMgrid_NOS_Server: 10.1.2.43 8080 DELETE
2013-07-02 12:26:57DEBUG
[quantum.plugins.plumgrid.plumgrid_nos_plugin.rest_connection]
PLUMgrid_NOS_Server Sending Data: {'Content-type': 'application/json',
'Accept': 'application/json'}
2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Sending
network.delete.end on notifications.info
2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID
is ad8c5a233bd6403ea850cde73afb720a.
2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] Making
asynchronous fanout cast...
2013-07-02 12:26:57DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID
is dcba5e2b55bb4cb89aab17f5797882e6.
2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token]
Authenticating user token
2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token] Removing
headers from request environment:
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Pr
oject-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id
,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Na
me,X-Tenant,X-Role
2013-07-02 12:27:23DEBUG [keystoneclient.middleware.auth_token] Storing
dd0bd438481c2d010d1abc5903fa11da token in memcache
2013-07-02 12:27:23DEBUG [routes.middleware] No route matched for GET
/subnets.json
2013-07-02 12:27:23DEBUG [routes.middleware] Matched GET /subnets.json
2013-07-02 12:27:23DEBUG [routes.middleware] Route path:
'/subnets{.format}', defaults: {'action': u'index', 'controller': >}
2013-07-02 12:27:23DEBUG [routes.middleware] Match dict: {'action':
u'index', 'controller': ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move key pair management out of Nova and into Keystone?

2013-07-02 Thread Bhandaru, Malini K
Greetings Simo, Jay, Bryan, Jarret, Dolph, Phil, Nachi, Jamie, Thierry, Arvind 
and others!



1)  The key manager, barbican, under development supports the blueprint we 
developed and discussed on this mailing list.

https://wiki.openstack.org/wiki/KeyManager#Key_Manager

2)  Its full featured version is to hold all things "key" related, 
including public keys and certificates, their renewal and supporting necessary 
KMIP interfaces.

3)  Only authenticated users/services can access the keys in barbican; 
barbican itself uses keystone for authentication and authorization like other 
OpenStack services.

4)  The first use case to support is volume encryption (Johns Hopkins Applied 
Physics Lab team)

https://review.openstack.org/30976

5)  Rackspace (Jarret and his team) and Intel have been working hard to 
meet Havana release milestones.



At the Portland summit we did discuss whether to keep the key management 
functionality as a separate entity or as a part of keystone.

Participants included Adam Young, Dolph, and several other keystone cores and 
the Rackspace and Intel folks.

1) pro - if part of keystone, less of an incubation hurdle.

2) cons - keystone is already feature rich and this is a separate piece of 
functionality. Should we later want to pull it out and float it as a separate 
service, that would be a lot of work. (The need for a key manager has been felt 
as more of us seek to provide greater security for user data at rest (volumes, 
objects).)

3) Key manager would be a pluggable module for folks who might want an HSM.

4) We did mention at the summit that storing nova ssl keys to access 
instances could be shifted to the key manager, given a broader scope as a 
repository of all things used to encrypt/decrypt data.

5) Saving the users', OpenStack service endpoints', and instance public keys 
and/or certificates intersects with Keystone's identity credential storage.

 All things identity related are the prerogative of keystone.

     This is where Jarret's comment fits in: a pointer to the 
certificate or public key could be saved in keystone, with the public key, 
certificate, or even private key kept inside the key manager. To meet compliance 
needs, more audit logging will be present in the key manager. Certainly, more 
audit logging is feasible wherever keys are stored; this is just a logical 
divide over where to build in the functionality.

6) Today keystone provides a catalog of service endpoints (including the 
key manager), so it is logical to extend this to include access to their 
certificates.  This would then serve as a central point to determine how to 
securely communicate with the endpoint - assuming neither keystone nor barbican 
is compromised.











-Original Message-
From: Simo Sorce [mailto:s...@redhat.com]
Sent: Tuesday, July 02, 2013 10:43 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Move keypair management out of Nova and into 
Keystone?



On Tue, 2013-07-02 at 16:55 +, Tiwari, Arvind wrote:

> Hi Simo,

>

> I am lost.

>

> Is Barbican the product that came out of the 
> > https://wiki.openstack.org/wiki/KeyManager BP?



Yes Barbican is an implementation of this Blueprint afaik.



> If yes, then why is it deviating from the BP, which says the Key Manager 
> > will be a separate service and not a part of Keystone?



Sorry I don't follow, Barbican is separated from Keystone.



> If no, then why are we thinking about a new Key manager (which seems to me a 
> > subset of the above BP)?



New ?



Simo.



--

Simo Sorce * Red Hat, Inc * New York





___

OpenStack-dev mailing list

OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Criteria for compute drivers

2013-07-02 Thread Dan Smith
> We have two new compute drivers in the queue for Havana: docker [1]
> and z/vm [2].  I'd like to propose as a piece of criteria for
> inclusion that new drivers go into groups A or B.

I think this is a really good idea. As we continue to absorb new and
more complex drivers into the tree, the amount of stuff not being
functionally tested is growing too fast, IMHO.
 
> Further, I would like to see *all* drivers move into groups A or B by
> the release of Icehouse.  I've been told that this is already in the
> works for VMware and baremetal, at least.

I think Hyper-V promised this as well a while back.

Major +1 from me :)

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Ryan Lane
On Tue, Jul 2, 2013 at 8:12 AM, Bryan D. Payne  wrote:

>
>  > I don't understand. Users already have custody of their own keys. The
>> > only thing that Keystone/Nova has is the public key fingerprint [1], not
>> > the private key...
>>
>> You actually have the public key, not just the fingerprint, but indeed
>> I do not see why barbican should be involved here.  A public key does not
>> need the same level of protection as a private key or a symmetric
>> encryption key, so by storing this data in barbican we would only
>> needlessly expose barbican to more access patterns and more
>> logging/auditing volume than is needed.
>>
>
> I believe you're confusing a couple of points here.  In this case, for
> public keys, what matters is integrity.  For the other cases that you
> mentioned, both integrity and confidentiality matter.  I believe that given
> the high integrity requirements that it *does* make sense to store these in
> a more protected location.
>
> +1 for using Barbican
>
>
This would make Barbican a required service for running Nova. Keystone is
already required and it has the necessary functionality.

- Ryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Jarret Raim
Wrote this answer this morning, but Simo beat me to it. Answer below sent
for posterity.



TL;DR:

Jay - it seems like we are on the same page. Barbican can be helpful for
generation and storage (if needed) of various types of keying material.
However, if your use case is better served by storing the keys yourself,
then that seems fine.



On 7/2/13 9:07 AM, "Jay Pipes"  wrote:

>On 07/02/2013 09:49 AM, Jarret Raim wrote:
>> I've spent some time thinking about how Barbican (Key Management) can
>>help
>> in this workflow.
>>
>> We will have the ability to generate SSH keys (and a host of other key &
>> certificate types). This is backed by cryptographically sound code and
>> we've spent some time figuring out the entropy problem and HSM support.
>>If
>> the keys are stored in Barbican, we'd get the audit / logging and other
>> functionality needed for compliance.
>
>What does the above mean? What about Barbican is audited/logged that
>isn't in Keystone and why wouldn't such auditing/logging be added to
>Keystone if it were needed for compliance? I'm trying to figure out why
>there is yet another OpenStack-related project for storing
>keys/credentials when Keystone already exists.

Barbican has a wider scope than Keystone. Keystone is the source for auth
and will store the keys that are associated with a particular user.
However, there are many reasons why keystone might not want to generate or
store keys. These include the various compliance regimes - almost all of
which have some requirements around key management (and identity). One of
Barbican's main goals is to provide the logging, auditing and reporting
needed for customers to meet their compliance obligations. Additionally,
high volume key creation can be tricky from an entropy point of view. We
will offer plugins that allow for the use of various entropy sources
(including the Intel chip stuff when it comes out) as well as support for
using a full HSM for key generation.

None of this means that Keystone couldn't be the API that other services
use to get their public SSH keys. It just means that Keystone might want
to use Barbican for key creation / storage. If we think the SSH pub key is
narrow enough use case, I don't have a problem with Keystone just storing
it.


> > We also get federation which will
>> allow customers of public Clouds (or shared private Clouds) to maintain
>> custody of their own keys rather than storing them in the provider.
>
>I don't understand. Users already have custody of their own keys. The
>only thing that Keystone/Nova has is the public key fingerprint [1], not
>the private key...

This is true for SSH key access to nova. There will be other use cases
where we might want full certificates or some other keying material tied
to a user. 

>> There seem to be a couple of ways to take advantage of this
>>functionality.
>> If a key is specific to a user, then Keystone could store a URI to the
>>key
>> in Barbican and Nova could request it on server creation. Alternatively,
>> the user could pass a URI to a key into Nova directly. If we want to
>>move
>> to always enabling SSH key access only on boot, Nova could create a key
>> under the requesting tenant in Barbican and use it on server create.
>
>OK, so the above would basically be a "driver" in Keystone parlance for
>the credentials module, where Keystone would just store the key in
>Barbican and retrieve said key.
>
>At this point, though, what exactly is the point of Barbican over a
>simple database or KVS driver?

From Keystone's point of view, that's probably true. You can just use us as
a dumb store if that makes the most sense for the use case.

For something like public SSH keys only, there is probably nothing wrong
with any type of storage (though there might be some requirements for key
auditing & rotation that need to be met). However, Barbican offers several
benefits over an internally maintained key storage service.

First, a single secure key storage service is better than each product
storing their own. This doesn't matter as much for Keystone as it is
already going to have to be secure, etc. but it does matter for all the
other Barbican customers.

Second, Barbican will always offer a free and open source implementation.
This allows any customer access to high quality, secure crypto without
having to go to a vendor.

Third, Barbican will support hardware security modules, the Intel TPM and
rand stuff and other solutions for better quality / more secure crypto
products.

Fourth, Barbican is a simple ReST API that is open and doesn't require
custom code for a particular provider.

There is lots more, but you get the idea.

>> Things get more interesting when we are talking about IPSec certificates
>> and the like. Barbican seems a more logical place to generate / store /
>> share these types of keys than Keystone.
>
>Generate...perhaps. Store... I doubt it. Share...I think Keystone is the
>most logical place to share credentials. After all, it's the
>authentication

Re: [openstack-dev] [horizon] Removing the .mo files from Horizon git

2013-07-02 Thread Thomas Goirand
On 07/03/2013 02:15 AM, Monty Taylor wrote:
> 
> 
> On 07/02/2013 01:13 AM, Mark McLoughlin wrote:
>> On Tue, 2013-07-02 at 09:58 +0200, Thierry Carrez wrote:
>>> Thomas Goirand wrote:
 So, shouldn't the .mo files be generated at build time only, and be kept
 out of the Git?
>>>
>>> +1
>>
>> Yep, agree too.
>>
>> Interestingly, last time I checked, devstack doesn't actually compile
>> the message catalogs (python setup.py compile_catalog).
>>
>> I've been meaning to fix that for a while now, but it's fallen by the
>> wayside. I've unassigned myself from the bug for now:
>>
>>   https://bugs.launchpad.net/devstack/+bug/995287
> 
> Should we make python setup.py install do this if gettext is installed?
> Or keep it as a separate step for people who care?

FYI: https://review.openstack.org/#/c/35330/

Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Criteria for compute drivers

2013-07-02 Thread Russell Bryant
Greetings,

Nova includes various compute drivers today, but the test coverage they
receive varies quite a bit.  This is documented on the following wiki
page.  The drivers are broken up into groups A, B, and C.

https://wiki.openstack.org/wiki/HypervisorSupportMatrix

We have two new compute drivers in the queue for Havana: docker [1] and
z/vm [2].  I'd like to propose, as a criterion for inclusion, that
new drivers go into groups A or B.

Further, I would like to see *all* drivers move into groups A or B by
the release of Icehouse.  I've been told that this is already in the
works for VMware and baremetal, at least.

I feel like if there isn't enough interest and willingness to raise
the bar on testing a given compute driver, then we're just wasting our
time and effort having it in the tree.

Feedback welcome!

Thanks,

[1] https://blueprints.launchpad.net/nova/+spec/new-hypervisor-docker
[2] https://blueprints.launchpad.net/nova/+spec/zvm-plugin

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch

2013-07-02 Thread Monty Taylor


On 07/02/2013 05:46 AM, Doug Hellmann wrote:
> 
> 
> 
> On Tue, Jul 2, 2013 at 5:52 AM, Robert Collins
> <robe...@robertcollins.net> wrote:
> 
> On 2 July 2013 21:32, Thierry Carrez  wrote:
> > Thierry Carrez wrote:
> >> """
> >> 'OpenStack Programs' are efforts which are essential to the
> completion
> >> of our mission. Programs can create any code repository and
> produce any
> >> deliverable they deem necessary to achieve their goals.
> >>
> >> Programs are placed under the oversight of the Technical
> Committee, and
> >> contributing to one of their code repositories grants you ATC status.
> >>
> >> Current efforts or teams which want to be recognized as an 'OpenStack
> >> Program' should place a request to the Technical Committee,
> including a
> >> clear mission statement describing how they help the OpenStack
> general
> >> mission and how that effort is essential to the completion of our
> >> mission. If programs have a goal that includes the production of
> >> a server 'integrated' deliverable, that specific project would still
> >> need to go through an Incubation period.
> >>
> >> The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron',
> >> 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer',
> 'Documentation',
> >> 'Infrastructure', 'QA' and 'Oslo'. 'Trove' and 'Ironic' are in
> >> incubation. Those programs should retroactively submit a mission
> >> statement and initial lead designation, if they don't have one
> already.
> >> """
> >
> > Oops. In this variant, Trove and Ironic, as programs, would not be "in
> > incubation" (only one of their deliverables would). That last
> paragraph
> > should be fixed as:
> >
> > """
> > The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron',
> > 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer',
> 'Documentation',
> > 'Infrastructure', 'QA', 'Oslo', 'Trove' and 'Ironic'. Those programs
> > should retroactively submit a mission statement and initial lead
> > designation, if they don't have one already.
> > """
> >
> > Maybe Ironic should be merged into the TripleO program when it's
> considered.
> 
> Certainly; with our focus on deploy and operations, Ironic is very
> much something we'll care about forever :). OTOH, baremetal machine
> provisioning is a distinct concern from OpenStack deployment and
> operations. I don't know that there is a better place for Ironic; it's
> certainly got significant tentacles into other areas than just Nova
> [hence it being split out in the first place]. Nevertheless : clearly
> Ironic is a Project, and Incubated. I think whether it is incorporated
> into its own Program, or TripleO, isn't a very interesting question.
> ATC membership is decoupled from things now, so \o/.
> 
> On proposal 3, I wonder if it makes things too vague : if a Program
> can have one or more integrated Projects, it sort of suggests that
> perhaps Neutron be a Project of the Nova Program?
> 
> 
> I like option 3 because it lets us move ahead without having to revisit
> what may just have been an unfortunate narrowness of vision in the
> original charter (who knew we would grow so quickly?). We have been
> letting the projects evolve around feature sets in a way that helps us
> manage code and feature complexity, e.g. breaking networking and block
> storage out of nova. The addition of programs as groups of one or more
> projects is a natural way to manage changes in the community's size and
> complexity as we continue to grow.

I'm fine with this as long as a program can be a group of 0 or more
projects. On the chance that we decide to use the concept to refer to
horizontal efforts (I do not think we need to decide on that right now)
I would hate to be hide-bound and exclude security or release or
translations because they don't have their own repo or project deliverable.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] No meeting this week (2013-07-04)

2013-07-02 Thread Boris Pavlovic
Hi,

+1

Best regards,
Boris Pavlovic


On Tue, Jul 2, 2013 at 10:04 PM, Russell Bryant  wrote:

> Greetings,
>
> Let's skip the weekly nova meeting this week.  It falls on a US holiday,
> so many people would not be able to make it.
>
> The biggest item of discussion right now is havana-2 status.  Please
> check and make sure the status of your blueprints is accurate.  Let's
> work hard on a final push toward havana-2.  There's lots of stuff not
> finished, and a whole lot that needs review.
>
> https://launchpad.net/nova/+milestone/havana-2
>
> Thanks,
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Removing the .mo files from Horizon git

2013-07-02 Thread Monty Taylor


On 07/02/2013 01:13 AM, Mark McLoughlin wrote:
> On Tue, 2013-07-02 at 09:58 +0200, Thierry Carrez wrote:
>> Thomas Goirand wrote:
>>> So, shouldn't the .mo files be generated at build time only, and be kept
>>> out of the Git?
>>
>> +1
> 
> Yep, agree too.
> 
> Interestingly, last time I checked, devstack doesn't actually compile
> the message catalogs (python setup.py compile_catalog).
> 
> I've been meaning to fix that for a while now, but it's fallen by the
> wayside. I've unassigned myself from the bug for now:
> 
>   https://bugs.launchpad.net/devstack/+bug/995287

Should we make python setup.py install do this if gettext is installed?
Or keep it as a separate step for people who care?
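
For reference, the build step being discussed just turns each translator-edited
.po file into the binary .mo file that gettext loads at runtime. A minimal
sketch of that conversion using Babel directly (the locale path is only an
example and may not match the real tree layout):

from babel.messages.mofile import write_mo
from babel.messages.pofile import read_po

po_path = 'horizon/locale/de/LC_MESSAGES/django.po'   # example path
mo_path = 'horizon/locale/de/LC_MESSAGES/django.mo'

with open(po_path, 'rb') as po_file:
    catalog = read_po(po_file)

with open(mo_path, 'wb') as mo_file:
    write_mo(mo_file, catalog)

python setup.py compile_catalog does the same thing for every locale via
Babel's distutils command, which is why it makes a natural build-time step.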

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] No meeting this week (2013-07-04)

2013-07-02 Thread Russell Bryant
Greetings,

Let's skip the weekly nova meeting this week.  It falls on a US holiday,
so many people would not be able to make it.

The biggest item of discussion right now is havana-2 status.  Please
check and make sure the status of your blueprints is accurate.  Let's
work hard on a final push toward havana-2.  There's lots of stuff not
finished, and a whole lot that needs review.

https://launchpad.net/nova/+milestone/havana-2

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-02 Thread Ben Nemec
One small addition I would suggest is a step to remove the unused 
sqlalchemy-migrate code once this is all done.  That's my main concern 
with moving it to Oslo right now.


Also, is this a formal blueprint(s)?  Seems like it should be.

-Ben

On 2013-07-02 12:50, Boris Pavlovic wrote:

###
 Goal
###

We should fix work with DB, unify it in all projects and use oslo code
for all common things.

In more words:

DB API

 *) Fully cover by tests.

 *) Run tests against all backends (now they are runed only against
sqlite).

 *) Unique constraints (instead of select + insert)
 a) Provide unique constraints.
 b) Add missing unique constraints.

 *) DB Archiving
 a) create shadow tables
 b) add tests that checks that shadow and main table are synced.
 c) add code that work with shadow tables.

 *) DB API performance optimization
 a) Remove unused joins.
 b) 1 query instead of N (where it is possible).
 c) Add methods that could improve performance.
 d) Drop unused methods.

 *) DB reconnect
 a) Don't break a huge task if we lose the connection for a moment; just
retry the DB query.

 *) DB Session cleanup
 a) do not use session parameter in public DB API methods.
 b) fix places where we are doing N queries in N transactions instead
of 1.
 c) get only data that is used (e.g. len(query.all()) =>
query.count()).



DB Migrations

 *) Test DB Migrations against all backends and real data.

 *) Fix: DB schemas after Migrations should be the same in different
backends

 *) Fix: hidden bugs that are caused by wrong migrations:
 a) fix indexes, e.g. the 152 migration in Nova drops all indexes that have a
deleted column
 b) fix wrong types
 c) drop unused tables

 *) Switch from sqlalchemy-migrate to something that is not dead
(e.g. alembic).



DB Models

 *) Fix: Schema that is created by Models should be the same as after
migrations.

 *) Fix: Unit tests should be run on a DB that was created by Models,
not migrations.

 *) Add test that checks that Models are synced with migrations.



Oslo Code

 *) Base Sqlalchemy Models.

 *) Work around engine and session.

 *) SqlAlchemy Utils - that helps us with migrations and tests.

 *) Test migrations Base.

 *) Use common test wrapper that allows us to run tests on different
backends.

###
 Implementation
###


 This is a really, really huge task. And we are almost done with Nova =).

 In OpenStack there is only one approach for such work ("baby steps"
driven development). So we are making tons of patches that can be
easily reviewed. But there are also minuses to such an approach: it is
pretty hard to track the work at a high level, and sometimes there are
misunderstandings.

 For example, with the oslo code: in a few words, at this moment we would
like to add (for some time) monkey patching for sqlalchemy-migrate in oslo.
And I got a reasonable question from Doug Hellmann: why? My answer is:
because of our "baby steps". But if you don't have the list of baby
steps it is pretty hard to understand why our baby steps need this
thing, and why we don't switch to alembic first. So I would like to
describe our road map and write the list of "baby steps".

---

OSLO

 *) (Merged) Base code for Models and sqlalchemy engine (session)

 *) (On review) Sqlalchemy utils that are used to:
 1. Fix bugs in sqlalchemy-migrate
 2. Base code for migrations that provides Unique Constraints.
 3. Utils for db.archiving helps us to create and check shadow tables.

 *) (On review) Testtools wrapper
 We should have only one testtool wrapper in all projects. And this is
one of the base steps in the task of running tests against all backends.

 *) (On review) Test migrations base
 Base classes that allow us to test our migrations against all
backends on real data

 *) (On review, not finished yet) DB Reconnect.

 *) (Not finished) Test that checks that schemas and models are synced

---

${PROJECT_NAME}

In different projects we could work absolutely simultaneously, and
first candidates are Glance and Cinder. But inside project we could
also work simultaneously. Here is the workflow:

 1) (SYNC) Use base code for Models and sqlalchemy engines (from oslo)

 2) (SYNC) Use test migrations base (from oslo)

 3) (SYNC) Use SqlAlchemy utils (from oslo)

 4) (1 patch) Switch to OSLO DB code

 5) (1 patch) Remove ported test migrations

 6) (1 Migration) Provide unique constraints (change type of "deleted"
column)

 7) (1 Migration) Add shadow tables
 a) Create shadow tables
 b) Add test that checks that they are synced always

 8) (N Migrations) UniqueConstraint/Session/Optimization workflow:
 a) (1 patch) Add/Improve/Refactor tests for part of api (that is
connected with model)
 b) (1 patch) Fix session
 c) (

[openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-02 Thread Boris Pavlovic
###
Goal
###

We should fix work with DB, unify it in all projects and use oslo code for
all common things.

In more words:

DB API

  *) Fully cover by tests.

  *) Run tests against all backends (now they are runed only against
sqlite).

  *) Unique constraints (instead of select + insert)
 a) Provide unique constraints.
 b) Add missing unique constraints.

  *) DB Archiving
 a) create shadow tables
 b) add tests that checks that shadow and main table are synced.
 c) add code that work with shadow tables.

  *) DB API performance optimization
a) Remove unused joins.
b) 1 query instead of N (where it is possible).
c) Add methods that could improve performance.
d) Drop unused methods.

  *) DB reconnect
a) Don't break a huge task if we lose the connection for a moment; just retry
the DB query (see the retry sketch after this list).

  *) DB Session cleanup
a) do not use session parameter in public DB API methods.
b) fix places where we are doing N queries in N transactions instead of
1.
c) get only data that is used (e.g. len(query.all()) => query.count()).
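
Two minimal sketches of what the items above mean in practice (illustrative
only, not the oslo or Nova code). First, the DB reconnect idea: retry an
idempotent query when the connection drops instead of failing the whole task.
A real implementation would also have to verify that the error actually is a
disconnect before retrying:

import time

from sqlalchemy.exc import OperationalError


def with_db_retry(func, retries=5, interval=1):
    """Wrap an idempotent DB call so a dropped connection is retried."""
    def wrapper(*args, **kwargs):
        for attempt in range(retries):
            try:
                return func(*args, **kwargs)
            except OperationalError:
                # Assumed to be a lost connection; re-raise on the last try.
                if attempt == retries - 1:
                    raise
                time.sleep(interval)
    return wrapper

Second, item c) of the session cleanup: let the database count rows instead of
materializing all of them (toy model on in-memory sqlite, not the real Nova
model):

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    deleted = Column(Integer, default=0)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Wasteful: loads every row into memory just to count them.
slow = len(session.query(Instance).filter_by(deleted=0).all())

# Better: a single SELECT COUNT executed by the database.
fast = session.query(Instance).filter_by(deleted=0).count()
assert slow == fast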



DB Migrations

  *) Test DB Migrations against all backends and real data.

  *) Fix: DB schemas after Migrations should be the same in different backends

  *) Fix: hidden bugs that are caused by wrong migrations:
 a) fix indexes, e.g. the 152 migration in Nova drops all indexes that have a
deleted column
 b) fix wrong types
 c) drop unused tables

  *) Switch from sqlalchemy-migrate to something that is not dead (e.g.
alembic).



DB Models

  *) Fix: Schema that is created by Models should be the same as after
migrations.

  *) Fix: Unit tests should be run on a DB that was created by Models, not
migrations.

  *) Add test that checks that Models are synced with migrations.



Oslo Code

  *) Base Sqlalchemy Models.

  *) Work around engine and session.

  *) SqlAlchemy Utils - that helps us with migrations and tests.

  *) Test migrations Base.

  *) Use common test wrapper that allows us to run tests on different
backends.


###
   Implementation
###

  This is a really, really huge task. And we are almost done with Nova =).

  In OpenStack there is only one approach for such work ("baby steps"
driven development). So we are making tons of patches that can be easily
reviewed. But there are also minuses to such an approach: it is pretty hard to
track the work at a high level, and sometimes there are misunderstandings.

  For example, with the oslo code: in a few words, at this moment we would
like to add (for some time) monkey patching for sqlalchemy-migrate in oslo.
And I got a reasonable question from Doug Hellmann: why? My answer is:
because of our "baby steps". But if you don't have the list of baby steps it
is pretty hard to understand why our baby steps need this thing, and why we
don't switch to alembic first. So I would like to describe our road map and
write the list of "baby steps".


---

OSLO

  *) (Merged) Base code for Models and sqlalchemy engine (session)

  *) (On review) Sqlalchemy utils that are used to:
  1. Fix bugs in sqlalchemy-migrate
  2. Base code for migrations that provides Unique Constraints.
  3. Utils for db.archiving helps us to create and check shadow tables.

  *) (On review) Testtools wrapper
   We should have only one testtool wrapper in all projects. And this
is one of the base steps in the task of running tests against all backends.

  *) (On review) Test migrations base
   Base classes that allow us to test our migrations against all
backends on real data

  *) (On review, not finished yet) DB Reconnect.

  *) (Not finished) Test that checks that schemas and models are synced

---

${PROJECT_NAME}


In different projects we could work absolutely simultaneously, and first
candidates are Glance and Cinder. But inside project we could also work
simultaneously. Here is the workflow:


  1) (SYNC) Use base code for Models and sqlalchemy engines (from oslo)

  2) (SYNC) Use test migrations base (from oslo)

  3) (SYNC) Use SqlAlchemy utils (from oslo)

  4) (1 patch) Switch to OSLO DB code

  5) (1 patch) Remove ported test migrations

  6) (1 Migration) Provide unique constraints (change type of "deleted"
column; see the sketch at the end of this message)

  7) (1 Migration) Add shadow tables
a) Create shadow tables
b) Add test that checks that they are synced always

  8) (N Migrations) UniqueConstraint/Session/Optimization workflow:
a) (1 patch) Add/Improve/Refactor tests for part of api (that is
connected with model)
b) (1 patch) Fix session
c) (1 patch)  Optimize method
d) if required (1 Migration) Add missin
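
For step 6 above, my understanding of the "change the type of the deleted
column" trick is sketched below (a toy model, not the real Nova migration):
make `deleted` an integer that is 0 for live rows and is set to the row's own
id on soft delete, so a unique constraint over (name, deleted) replaces the
racy select + insert check while still allowing the name of a soft-deleted row
to be reused.

from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Flavor(Base):
    __tablename__ = 'flavors'
    __table_args__ = (UniqueConstraint('name', 'deleted'),)

    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    deleted = Column(Integer, default=0)   # 0 = live, row id once soft-deleted

    def soft_delete(self):
        # Two live rows can never share a name (both have deleted=0), but a
        # new live row may reuse the name of any soft-deleted row, because
        # each soft-deleted row carries its own distinct `deleted` value.
        self.deleted = self.id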

Re: [openstack-dev] Email is not registered problem

2013-07-02 Thread Jeremy Stanley
On 2013-07-01 23:29:30 + (+), Qing He wrote:
> The emails from this list stopped coming to my email address, is
> this related?

Changing contact information in Gerrit (and Launchpad for that
matter) has no bearing on Mailman mailing list subscriptions on
lists.openstack.org. Perhaps the list moderators can check whether
your subscribed address is bouncing deliveries back?
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Config files overriding CLI: The path of most surprise.

2013-07-02 Thread Jeremy Stanley
On 2013-07-01 15:10:26 -0700 (-0700), Mark Washenberger wrote:
[...]
> The talk about permanence confuses me, unless we mean that more
> permanent values are overridden by less permanent ones.
[...]

I think the "permanence" counter argument (which I don't agree with,
just recounting it for completeness) was that command-line arguments
may be embedded in init scripts by some distributions and then
administrators would be surprised when their modifications to the
configuration files weren't respected. Ultimately, however, any time
distribution defaults which could be set in packaged configuration
are instead being set with the service command-line in packaged init
scripts, I would tend to just consider that a (serious) packaging
bug and certainly nothing we should be catering to as a project.
-- 
{ PGP( 48F9961143495829 ); FINGER( fu...@cthulhu.yuggoth.org );
WWW( http://fungi.yuggoth.org/ ); IRC( fu...@irc.yuggoth.org#ccl );
WHOIS( STANL3-ARIN ); MUD( kin...@katarsis.mudpy.org:6669 ); }

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Simo Sorce
On Tue, 2013-07-02 at 16:55 +, Tiwari, Arvind wrote:
> Hi Simo,
> 
> I am lost.
>  
> Is Barbican the product that came out of the 
> https://wiki.openstack.org/wiki/KeyManager BP?

Yes Barbican is an implementation of this Blueprint afaik.

> If yes, then why is it deviating from the BP, which says the Key Manager will 
> be a separate service and not a part of Keystone?

Sorry I don't follow, Barbican is separated from Keystone.

> If no, then why are we thinking about a new Key manager (which seems to me a 
> subset of the above BP)? 

New ?

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Simo Sorce
On Tue, 2013-07-02 at 08:12 -0700, Bryan D. Payne wrote:
> 
> > I don't understand. Users already have custody of their own
> keys. The
> > only thing that Keystone/Nova has is the public key
> fingerprint [1], not
> > the private key...
> 
> 
> You actually have the public key, not just the fingerprint,
> but indeed
> I do not see why barbican should be involved here.  A public
> key does not
> need the same level of protection as a private key or a
> symmetric
> encryption key, so by storing this data in barbican we would
> only
> needlessly expose barbican to more access patterns and more
> logging/auditing volume than is needed.
> 
> 
> I believe you're confusing a couple of points here.  In this case, for
> public keys, what matters is integrity.  For the other cases that you
> mentioned, both integrity and confidentiality matter.  I believe that
> given the high integrity requirements that it *does* make sense to
> store these in a more protected location.
> 
> 
> +1 for using Barbican
> 
If you do not trust keystone to give you the right information you have
already lost as keystone is used (afaik) to check for authorization
anyway.

Can you be a little bit more explicit on the threat model you have in
mind and what guarantees Barbican would give you that would make it more
suitable to store public key than Keystone ?

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Question about locking

2013-07-02 Thread Vishvananda Ishaya

On Jul 2, 2013, at 1:49 AM, "Rosa, Andrea (HP Cloud Services)" 
 wrote:

> Hi Vish,
> 
>> Were other commands working on the compute node? It seems much more
>> likely that the node had a hung connection to rabbit. If you are not using 
>> tcp
>> keepalives, a network hiccup (or failover) can cause half open connections
>> where the server thinks the connection is still active so it sends the 
>> message
>> but the compute node never receives it.
> 
> The compute node is fine, messages are delivered and when I send a new 
> delete for the same instance, I can see the message received by the compute 
> node.
> As I said I don't see that very often, it's a rare case but I'd like to know 
> if the hanging lock could be an explanation.

Definitely seems like a possibility given your explanation, but I haven't seen 
it happen myself.

Vish

> 
>>> PS: As you are on this topic I submitted a fix to complete the 
>>> "pending" deletion when the compute service starts, it would be great 
>>> if you can have a look at it: https://review.openstack.org/33265
> 
> Regards
> --
> Andrea Rosa
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Tiwari, Arvind
Hi Simo,

I am lost.
 
Is Barbican the product that came out of the 
https://wiki.openstack.org/wiki/KeyManager BP?

If yes, then why is it deviating from the BP, which says the Key Manager will be 
a separate service and not a part of Keystone?

If no, then why are we thinking about a new Key manager (which seems to me a 
subset of the above BP)? 


Arvind

-Original Message-
From: Simo Sorce [mailto:s...@redhat.com] 
Sent: Tuesday, July 02, 2013 8:57 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Move keypair management out of Nova and into 
Keystone?

On Tue, 2013-07-02 at 10:07 -0400, Jay Pipes wrote:
> On 07/02/2013 09:49 AM, Jarret Raim wrote:
> > I've spent some time thinking about how Barbican (Key Management) can help
> > in this workflow.
> >
> > We will have the ability to generate SSH keys (and a host of other key &
> > certificate types). This is backed by cryptographically sound code and
> > we've spent some time figuring out the entropy problem and HSM support. If
> > the keys are stored in Barbican, we'd get the audit / logging and other
> > functionality needed for compliance.
> 
> What does the above mean? What about Barbican is audited/logged that 
> isn't in Keystone and why wouldn't such auditing/logging be added to 
> Keystone if it were needed for compliance? I'm trying to figure out why 
> there is yet another OpenStack-related project for storing 
> keys/credentials when Keystone already exists.

Barbican is meant to store primarily private/symmetric keys in a way
that allows users to get access to them after proper keystone
integration. This material is particularly sensitive and it was felt
that we needed a specific service that would have a higher level of
scrutiny and security. This is better achieved if the tool does only one
thing with the smallest code footprint than if it is merged together with
unrelated code. Auditing of code is expensive (mostly in terms of
time/eyes), so keeping a specialized service for these keys makes sense.

>  > We also get federation which will
> > allow customers of public Clouds (or shared private Clouds) to maintain
> > custody of their own keys rather than storing them in the provider.
> 
> I don't understand. Users already have custody of their own keys. The 
> only thing that Keystone/Nova has is the public key fingerprint [1], not 
> the private key...

You actually have the public key, not just the fingerprint, but indeed
I do not see why barbican should be involved here.  A public key does not
need the same level of protection as a private key or a symmetric
encryption key, so by storing this data in barbican we would only
needlessly expose barbican to more access patterns and more
logging/auditing volume than is needed. 

> > There seem to be a couple of ways to take advantage of this functionality.
> > If a key is specific to a user, then Keystone could store a URI to the key
> > in Barbican and Nova could request it on server creation. Alternatively,
> > the user could pass a URI to a key into Nova directly. If we want to move
> > to always enabling SSH key access only on boot, Nova could create a key
> > under the requesting tenant in Barbican and use it on server create.
> 
> OK, so the above would basically be a "driver" in Keystone parlance for 
> the credentials module, where Keystone would just store the key in 
> Barbican and retrieve said key.
> 
> At this point, though, what exactly is the point of Barbican over a 
> simple database or KVS driver?

Not much, and it perhaps even worsens the situation as I hinted above, but I
think Jarret assumed you were talking about generating/storing private
keys, and as you noted that is not the case.

> > Things get more interesting when we are talking about IPSec certificates
> > and the like. Barbican seems a more logical place to generate / store /
> > share these types of keys than Keystone.
> 
> Generate...perhaps. Store... I doubt it. Share...I think Keystone is the 
> most logical place to share credentials. After all, it's the 
> authentication/identity component in OpenStack.

Nope, if you need to store private keys that you need to routinely
retrieve and re-distribute then barbican is the right and only place.

> While encryption and key generation are interesting topics, they are 
> tangential to the fact that credentials are an attribute of the 
> identity/user, and that information is in Keystone.

If 'access credentials' remain buried (as in they can never be
retrieved) in Keystone (or whatever IdM service it bridges to) then it
is probably the right place as it performs authentication anyway and
needs direct access to these credentials internally in some cases.

But Keystone is not the right place to function as a storage and retrieval
system for private keys; that's barbican's turf.

So for the nova keypairs I think Keystone is the natural place, as that
information doesn't need strong protection; it's just public keys.
For private keys Keystone wouldn't do, and a URL redir

Re: [openstack-dev] [networking] Changes to the OVS agent tunneling

2013-07-02 Thread Alan Kavanagh
+1 eagerly awaiting, sounds good.

Alan

-Original Message-
From: Edgar Magana [mailto:emag...@plumgrid.com] 
Sent: July-02-13 12:15 PM
To: OpenStack List
Subject: Re: [openstack-dev] [networking] Changes to the OVS agent tunneling

Hi Kyle,

It seems that the document is locked, could you provide the access code?

Thanks,

Edgar

On 7/2/13 8:32 AM, "Kyle Mestery (kmestery)"  wrote:

>I've been spending a fair amount of time working with the OVS agent 
>recently, and I've written up a small Google Document [1] detailing the 
>end goal of all of this work. The short story is that I am introducing 
>changes into the OVS agent to add support for multiple tunnel_types 
>when the agent is run with the ML2 plugin. The ML2 plugin will support 
>both GRE and VXLAN tunnels at the same time, for example. I'd 
>appreciate feedback from folks on this document.
>
>Thanks,
>Kyle
>
>[1]
>https://docs.google.com/a/mestery.com/document/d/1NT3JVn2lNk_Hp7lP7spc3
>ysW
>gSyHa4V0pYELAiePD1s/edit?usp=sharing
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-02 Thread Christopher Armstrong
On Tue, Jul 2, 2013 at 11:10 AM, Kurt Griffiths
 wrote:
> The idea has merit; my main concern is that we would be duplicating
> significant chunks of code/logic between the fakes and the real services.
>
> How can we do this in a DRY way?


I've done it a few different ways for libraries I've worked on.

Usually, the fakes don't actually duplicate much code from the real
implementation. But in the cases they do, I've had situations like
this:


class RealImplementation(object):

  def do_network_stuff(self, stuff):
...

  def low_level_operation(self):
return self.do_network_stuff("GET /integer")

  def high_level_operation(self):
return self.low_level_operation() + 5


I'd just create a subclass like this:

class FakeImplementation(RealImplementation):

  def do_network_stuff(self, stuff):
raise NotImplementedError("This should never be called!")

  def low_level_operation(self):
return self.integer # or however you implement your fake


This has two interesting properties:

1. I don't have to reimplement the high_level_operation
2. If I forget to implement a fake version of some method that invokes
do_network_stuff, then it will blow up with a NotImplementedError so
my test doesn't accidentally do real network stuff.
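
A hypothetical test using the classes above (the names and the `integer`
attribute come from the sketch; it assumes the classes are defined in, or
imported into, the test module):

import unittest


class FakeImplementationTest(unittest.TestCase):

    def test_high_level_operation_uses_the_fake(self):
        fake = FakeImplementation()
        fake.integer = 37
        # high_level_operation() is inherited unchanged from
        # RealImplementation and exercised against the fake.
        self.assertEqual(42, fake.high_level_operation())

    def test_forgotten_fake_blows_up_instead_of_hitting_the_network(self):
        fake = FakeImplementation()
        self.assertRaises(NotImplementedError,
                          fake.do_network_stuff, "GET /integer")


if __name__ == '__main__':
    unittest.main()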


This is just an example from some recent work I did on a simple RPC
client with an HTTP API (unrelated to OpenStack client libraries), but
that just so happens to be the case that Alex is discussing, so I
think it can work well.

--
IRC: radix
Christopher Armstrong
Rackspace

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking] Changes to the OVS agent tunneling

2013-07-02 Thread Kyle Mestery (kmestery)
Fixed, sorry about that!

On Jul 2, 2013, at 11:14 AM, Edgar Magana 
 wrote:

> Hi Kyle,
> 
> It seems that the document is locked, could you provide the access code?
> 
> Thanks,
> 
> Edgar
> 
> On 7/2/13 8:32 AM, "Kyle Mestery (kmestery)"  wrote:
> 
>> I've been spending a fair amount of time working with the OVS agent
>> recently, and I've written up a small Google Document [1] detailing the
>> end goal of all of this work. The short story is that I am introducing
>> changes into the OVS agent to add support for multiple tunnel_types when
>> the agent is run with the ML2 plugin. The ML2 plugin will support both
>> GRE and VXLAN tunnels at the same time, for example. I'd appreciate
>> feedback from folks on this document.
>> 
>> Thanks,
>> Kyle
>> 
>> [1] 
>> https://docs.google.com/a/mestery.com/document/d/1NT3JVn2lNk_Hp7lP7spc3ysW
>> gSyHa4V0pYELAiePD1s/edit?usp=sharing
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking] Changes to the OVS agent tunneling

2013-07-02 Thread Edgar Magana
Hi Kyle,

It seems that the document is locked, could you provide the access code?

Thanks,

Edgar

On 7/2/13 8:32 AM, "Kyle Mestery (kmestery)"  wrote:

>I've been spending a fair amount of time working with the OVS agent
>recently, and I've written up a small Google Document [1] detailing the
>end goal of all of this work. The short story is that I am introducing
>changes into the OVS agent to add support for multiple tunnel_types when
>the agent is run with the ML2 plugin. The ML2 plugin will support both
>GRE and VXLAN tunnels at the same time, for example. I'd appreciate
>feedback from folks on this document.
>
>Thanks,
>Kyle
>
>[1] 
>https://docs.google.com/a/mestery.com/document/d/1NT3JVn2lNk_Hp7lP7spc3ysW
>gSyHa4V0pYELAiePD1s/edit?usp=sharing
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-02 Thread Kurt Griffiths
The idea has merit; my main concern is that we would be duplicating
significant chunks of code/logic between the fakes and the real services.

How can we do this in a DRY way?

/kg



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-v meeting

2013-07-02 Thread Peter Pouliot
Hi all,

I'm canceling the meeting for today.

Multiple key individuals are traveling and unable to attend.

We will resume next week.

P


Sent from my Verizon Wireless 4G LTE Smartphone
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Propose to add copying the reference images when creating a volume

2013-07-02 Thread Mate Lakat
Hi Sheng,

You can upload a raw (qemu-img recognised) type image to glance, and ask
cinder to create a volume from that. This way you end up with a bootable
volume. At the end of the day, your instance will just see a block
device. The default cinder driver should also recognise other formats
that are understood by qemu-img.

As an advertisement, I just added a patch to make it able to recognise
XenServer type images:

https://review.openstack.org/34336

Mate

On Mon, Jul 01, 2013 at 06:35:03AM -0400, Sheng Bo Hou wrote:
> Hi Mate,
> 
> First, thanks for answering.
> I was trying to find the way to prepare the bootable volume.
> Take the default image downloaded by devstack, there are three images: 
> cirros-0.3.0-x86_64-uec, cirros-0.3.0-x86_64-uec-kernel and 
> cirros-0.3.0-x86_64-uec-ramdisk.
> cirros-0.3.0-x86_64-uec-kernel is referred as the kernel image and 
> cirros-0.3.0-x86_64-uec-ramdisk is referred as the ramdisk image.
> 
> Issue: If only the image (cirros-0.3.0-x86_64-uec) is copied to the volume 
> when creating a volume from an image, this volume is unable to boot an 
> instance without the references to the kernel and the ramdisk images. The 
> current cinder only copies the image cirros-0.3.0-x86_64-uec to one 
> targeted volume (Vol-1), which is marked as bootable but unable to do a 
> successful boot with the current nova code, even if image-id is removed from 
> the parameters.
> 
> Possible solutions: There are two ways in my mind to resolve it. One is we 
> just need the code change in Nova to let it find the reference images for 
> the bootable volume(Vol-1) and there is no need to change anything in 
> cinder, since the kernel and ramdisk id are saved in the 
> volume_glance_metadata, where the references point to the images(kernel 
> and ramdisk) for the volume(Vol-1). 
> 
> The other is that if we need multiple images to boot an instance, we need 
> a new way to create the bootable volume. For example, we can create three 
> separate volumes for three of the images and set the new references in 
> volume_glance_metadata with the kernel_volume_id and ramdisk_volume_id. 
> The benefit of this approach is that the volume can live independently of 
> the existence of the original images. Even if the images get lost 
> accidentally, the volumes are still sufficient to boot an instance, 
> because all the information has been copied to the Cinder side.
> 
> I am trying to look for the "another way to prepare your bootable 
> volume" that you mentioned and am asking for suggestions. 
> I think the second approach could be one way. Do you think it is a 
> good approach?
> 
> Best wishes,
> Vincent Hou (侯胜博)
> 
> Staff Software Engineer, Open Standards and Open Source Team, Emerging 
> Technology Institute, IBM China Software Development Lab
> 
> Tel: 86-10-82450778 Fax: 86-10-82453660
> Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com 
> Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
> West Road, Haidian District, Beijing, P.R.C.100193
> 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193
> 
> 
> 
> Mate Lakat  
> 2013/07/01 04:18
> Please respond to
> OpenStack Development Mailing List 
> 
> 
> To
> OpenStack Development Mailing List , 
> cc
> jsbry...@us.ibm.com, "Duncan Thomas  John 
> Griffith" 
> Subject
> Re: [openstack-dev] [cinder] Propose to add copying the reference images 
> when creating a volume
> 
> 
> 
> 
> 
> 
> Hi,
> 
> I just proposed a patch for the boot_from_volume_exercise.sh to get rid
> of --image. To be honest, I did not look at the various execution paths.
> My initial thought is that boot from volume means you boot from volume.
> If you only have a kernel + ramdisk image, I simply assumed that you
> can't do it. 
> 
> I would not do any magic. Boot from volume should boot from volume. If
> you only have 3 part images, you need to find another way to prepare
> your bootable volume.
> 
> btw, here is my change:
> 
> https://review.openstack.org/34761
> 
> Cheers,
> Mate
> 
> On Mon, Jul 01, 2013 at 01:25:23AM -0400, Sheng Bo Hou wrote:
> > Hi Cinder folks,
> > 
> > I am currently fixing the bugs related to booting the instance from the 
> > volume. I found there are bugs both in Nova and 
> > Cinder.
> > 
> > Cinder: https://bugs.launchpad.net/cinder/+bug/1159824
> > Nova: https://bugs.launchpad.net/nova/+bug/1191069
> > 
> > For the volumes created from the image, I propose to copy the reference 
> > image during the creation of
> > the main image. For example, an image may refer to a kernel image and a 
> > ramdisk image. When we create a volume
> > from this image, we only copied this one to the volume. The kernel and 
> > ramdisk images are still in glance, and
> > the volume still refers to the kernel and ramdisk images.
> > 
> > I think if an image has other reference images, the reference images 
> also 
> > need to be copied to the volumes(kernel volume and ramdisk volume),
> > and then set the volume referring to the kernel volume a

Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Bryan D. Payne
>  +1 for using Barbican
>>
>
> Simo just got finished saying Barbican was *not* the correct place to put
> this information...


Understood.  I'm disagreeing with Simo.  And I'm agreeing with Jarret Raim.

-bryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking] Changes to the OVS agent tunneling

2013-07-02 Thread Kyle Mestery (kmestery)
I've been spending a fair amount of time working with the OVS agent recently, 
and I've written up a small Google Document [1] detailing the end goal of all 
of this work. The short story is that I am introducing changes into the OVS 
agent to add support for multiple tunnel_types when the agent is run with the 
ML2 plugin. The ML2 plugin will support both GRE and VXLAN tunnels at the same 
time, for example. I'd appreciate feedback from folks on this document.
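To make the shape of the change concrete, here is a rough sketch (not the actual agent code; option, group and method names are illustrative assumptions) of an agent that advertises a list of tunnel types rather than a single one and provisions one tunnel port per encapsulation:

    from oslo.config import cfg

    agent_opts = [
        cfg.ListOpt('tunnel_types', default=[],
                    help="Tunnel encapsulations the agent supports, "
                         "e.g. gre,vxlan"),
    ]
    cfg.CONF.register_opts(agent_opts, 'AGENT')


    class TunnelingOVSAgent(object):
        def __init__(self, conf):
            # e.g. ['gre', 'vxlan'] when run with the ML2 plugin
            self.tunnel_types = conf.AGENT.tunnel_types

        def provision_tunnels(self, remote_ip):
            # One tunnel port per supported encapsulation to each remote agent.
            for tunnel_type in self.tunnel_types:
                self.add_tunnel_port(remote_ip, tunnel_type)

        def add_tunnel_port(self, remote_ip, tunnel_type):
            print("would add %s tunnel to %s" % (tunnel_type, remote_ip))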

Thanks,
Kyle

[1] 
https://docs.google.com/a/mestery.com/document/d/1NT3JVn2lNk_Hp7lP7spc3ysWgSyHa4V0pYELAiePD1s/edit?usp=sharing
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Keystone store-quota-data blueprint

2013-07-02 Thread Martins, Tiago
That's a relief! As you said, we designed the Quota API using the Trust API as 
an example, so it is not in the default pipeline, and we hope to submit it soon 
for review. The design is open for review and feedback: 
https://wiki.openstack.org/wiki/DomainQuotaManagementAndEnforcement

Regards,
Tiago

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: terça-feira, 2 de julho de 2013 11:33
To: OpenStack Development Mailing List
Cc: Dmitry Stepanenko
Subject: Re: [openstack-dev] [Openstack] Keystone store-quota-data blueprint


On Monday, July 1, 2013, Jamie Lennox wrote:
On Tue, 2013-07-02 at 02:03 +, Everett Toews wrote:
> This topic came up at the last summit in Portland at [1] and [2].
>
>
> Yehia and another colleague of his from HP had a design that was
> discussed and it seemed like they were going to start work on it.
> Another developer from CERN expressed interest too. I'm not sure if
> anything ever really got started on it.
>
>
> I think you'll want to wait until Dolph gets back (next week?) before
> doing any major work on it. Ask him before moving forward.

Also know that Dolph has said that nothing that affects the API will be
accepted after h-2 so it would have to be finished, reviewed and
committed by the 16th. Given that he's away and the discussion that would
be around this I'd say it's very tight.

"away" ... apparently they have the internet on islands, too.

Henry Nash brought up the issue of extensions that are not included in the 
default pipeline (which is how I think quota storage should be spec'd and 
implemented) vs the API feature freeze. The goal of the API feature freeze is 
to avoid crunch time (and producing new bugs) on release critical / core API 
functionality while we should be focusing on stability and polish there. I 
don't think extension development necessarily conflicts with that goal (besides 
sapping review bandwidth), so I'm happy to see them merge during milestone 3.

Granted, a whole bunch of m2 bp's already in review / approaching review fall 
into that category, but we should still try and get them in before m3 :)




> Regards,
> Everett
>
>
> [1] 
> http://openstacksummitapril2013.sched.org/event/c0c6befcb4361e54d5c7e45b2f772de7
> [2] 
> http://openstacksummitapril2013.sched.org/event/7bf2cdde2dfad733b499d9c2a3f60b08
>
>
> P.S. This email really belongs in openstack-dev
>
> On Jul 1, 2013, at 10:24 AM, Dmitry Stepanenko wrote:
>
> > Hi folks,
> >
> >
> > we're going to work on store-quota-data blueprint
> > (https://blueprints.launchpad.net/keystone/+spec/store-quota-data).
> > Did anyone already work on it?
> >
> >
> > Thanks & regards,
> > Dmitry
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openst...@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openst...@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Jay Pipes

On 07/02/2013 11:12 AM, Bryan D. Payne wrote:


 > I don't understand. Users already have custody of their own keys. The
 > only thing that Keystone/Nova has is the public key fingerprint
[1], not
 > the private key...

You actually have the public key, not just the fingerprint, but indeed
I do not see why Barbican should be involved here. A public key does not
need the same level of protection as a private key or a symmetric
encryption key, so by storing this data in Barbican we would only
needlessly expose Barbican to more access patterns and more
logging/auditing volume than is needed.


I believe you're confusing a couple of points here.  In this case, for
public keys, what matters is integrity.  For the other cases that you
mentioned, both integrity and confidentiality matter.  I believe that,
given the high integrity requirements, it *does* make sense to store
these in a more protected location.

+1 for using Barbican

-bryan


Simo just got finished saying Barbican was *not* the correct place to 
put this information...


-jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Propose to add copying the reference images when creating a volume

2013-07-02 Thread John Griffith
On Mon, Jul 1, 2013 at 11:14 AM, Vishvananda Ishaya
wrote:

>
> On Jul 1, 2013, at 3:35 AM, Sheng Bo Hou  wrote:
>
> Hi Mate,
>
> First, thanks for answering.
> I was trying to find the way to prepare the bootable volume.
> Take the default image downloaded by devstack, there are three images:
> cirros-0.3.0-x86_64-uec, cirros-0.3.0-x86_64-uec-kernel and
> cirros-0.3.0-x86_64-uec-ramdisk.
> cirros-0.3.0-x86_64-uec-kernel is referred as the kernel image and
> cirros-0.3.0-x86_64-uec-ramdisk is referred as the ramdisk image.
>
> *Issue:* If only the image(cirros-0.3.0-x86_64-uec) is copied to the
> volume when creating a volume from an image, this volume is unable to boot
> an instance without the references to the kernel and the ramdisk images.
> The current cinder only copies the image cirros-0.3.0-x86_64-uec to one
> targeted volume(*Vol-1*), which is marked as bootable but unable to do a
> successful boot with the current nova code, even if image-id is removed in
> the parameter.
>
> *Possible solutions:* There are two ways in my mind to resolve it. One is
> we just need the code change in Nova to let it find the reference images
> for the bootable volume(*Vol-1*) and there is no need to change anything
> in cinder, since the kernel and ramdisk id are saved in the
> volume_glance_metadata, where the references point to the images(kernel and
> ramdisk) for the volume(*Vol-1*).
>
>
> You should be able to create an image in glance that references the volume
> in block device mapping but also has a kernel_id and ramdisk_id parameter
> so it can boot properly. I know this is kind of an odd way to do things,
> but this seems like an edge case and I think it is a valid workaround.
>
> Vish
>
> The other is that if we need multiple images to boot an instance, we need
> a new way to create the bootable volume. For example, we can create three
> separate volumes for three of the images and set the new references in
> volume_glance_metadata with the kernel_volume_id and ramdisk_volume_id. The
> benefit of this approach is that the volume can live independently of the
> existence of the original images. Even if the images get lost accidentally,
> the volumes are still sufficient to boot an instance, because all the
> information has been copied to the Cinder side.
>
> I am trying to look for the "another way to prepare your bootable
> volume" that you mentioned and am asking for suggestions.
> And I think the second approach could be one way. Do you think it is a
> good approach?
>
> Best wishes,
> Vincent Hou (侯胜博)
>
> Staff Software Engineer, Open Standards and Open Source Team, Emerging
> Technology Institute, IBM China Software Development Lab
>
> Tel: 86-10-82450778 Fax: 86-10-82453660
> Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com
> Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
> West Road, Haidian District, Beijing, P.R.C.100193
> 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193
>
>
>  *Mate Lakat *
>
> 2013/07/01 04:18
>  Please respond to
> OpenStack Development Mailing List 
>
>   To
> OpenStack Development Mailing List ,
> cc
> jsbry...@us.ibm.com, "Duncan Thomas  John
> Griffith" 
> Subject
> Re: [openstack-dev] [cinder] Propose to add copying the reference images
> when creating a volume
>
>
>
>
> Hi,
>
> I just proposed a patch for the boot_from_volume_exercise.sh to get rid
> of --image. To be honest, I did not look at the various execution paths.
> My initial thought is that boot from volume means you boot from volume.
> If you only have a kernel + ramdisk image, I simply assumed that you
> can't do it.
>
> I would not do any magic. Boot from volume should boot from volume. If
> you only have 3 part images, you need to find another way to prepare
> your bootable volume.
>
> btw, here is my change:
>
> https://review.openstack.org/34761
>
> Cheers,
> Mate
>
> On Mon, Jul 01, 2013 at 01:25:23AM -0400, Sheng Bo Hou wrote:
> > Hi Cinder folks,
> >
> > I am currently fixing the bugs related to booting the instance from the
> > volume. I found there are bugs both in Nova and
> > Cinder.
> >
> > Cinder: https://bugs.launchpad.net/cinder/+bug/1159824
> > Nova: https://bugs.launchpad.net/nova/+bug/1191069
> >
> > For the volumes created from the image, I propose to copy the reference
> > image during the creation of
> > the main image. For example, an image may refer to a kernel image and a
> > ramdisk image. When we create a volume
> > from this image, we only copied this one to the volume. The kernel and
> > ramdisk images are still in glance, and
> > the volume still refers to the kernel and ramdisk images.
> >
> > I think if an image has other reference images, the reference images
> also
> > need to be copied to the volumes(kernel volume and ramdisk volume),
> > and then set the volume referring to the kernel volume and the ramdisk
> > volume. This feature will make booting from
> > a volume completely independent of the existence of the glance image.
> >
> > Do you t

Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Bryan D. Payne
>  > I don't understand. Users already have custody of their own keys. The
> > only thing that Keystone/Nova has is the public key fingerprint [1], not
> > the private key...
>
> You actually have the public key, not just the fingerprint, but indeed
> I do not see why Barbican should be involved here. A public key does not
> need the same level of protection as a private key or a symmetric
> encryption key, so by storing this data in Barbican we would only
> needlessly expose Barbican to more access patterns and more
> logging/auditing volume than is needed.
>

I believe you're confusing a couple of points here.  In this case, for
public keys, what matters is integrity.  For the other cases that you
mentioned, both integrity and confidentiality matter.  I believe that, given
the high integrity requirements, it *does* make sense to store these in
a more protected location.

+1 for using Barbican

-bryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3

2013-07-02 Thread David Kranz

On 07/02/2013 10:49 AM, Russell Bryant wrote:

On 07/02/2013 07:24 AM, John Garbutt wrote:

On 1 July 2013 15:49, Andrew Laski  wrote:

On 07/01/13 at 11:23am, Mauro S M Rodrigues wrote:

One more thought, about os-multiple-create: I was also thinking to remove
it. I don't see any real advantage to using it since it doesn't offer any kind
of flexibility like choosing different flavors, images and other attributes, so
anyone creating multiple servers would probably prefer an external
automation tool instead, IMHO.

So, is anyone using it? Is there a good reason to keep it? Did I miss
something about this extension?

I would like to see this extension go away, but only by a small margin.
Just because it complicates the boot/spawn workflow a bit, which really
isn't a big deal.

I am +1 for not moving os-multiple-create into v3.

I think this work is a better way to look at spawning multiple VMs:
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension

The CLI can then get updated so users are still able to spawn multiple
VMs in a nice way.

Heat, presumably, will control this, eventually.

Instance groups seems different.  That's about putting multiple
instances in a group for the purposes of applying policy.  I don't see
anything in there that replicates booting multiple instances with a
single API call.

I think that's right. Also, we can't change the EC2 API, which supports 
multiple instances, so wouldn't multiple boot still need to be supported? 
When I switched from the OpenStack EC2 API to the Nova API a long time ago I was 
surprised to learn that multiple create was an extension and not part of the core API.
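For reference, this is roughly what the extension (and the EC2 MinCount/MaxCount parameters) boils down to through python-novaclient -- a sketch with placeholder credentials and UUIDs; argument names are from memory, so verify against your client version:

    from novaclient.v1_1 import client

    nova = client.Client('demo', 'secret', 'demo',
                         'http://keystone.example.com:5000/v2.0')

    # A single POST /servers call asking the scheduler for 3 to 5 identical
    # instances; without os-multiple-create the caller would loop instead.
    nova.servers.create('worker', '<image-uuid>', '1',
                        min_count=3, max_count=5)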


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Jay Pipes

On 07/02/2013 10:56 AM, Simo Sorce wrote:


If 'access credentials' remain buried (as in they can never be
retrieved) in Keystone (or whatever IdM service it bridges to) then it
is probably the right place, as it performs authentication anyway and
needs direct access to these credentials internally in some cases.

But Keystone is not the right place to function as a storage and retrieval
system for private keys; that's Barbican's turf.


No disagreement at all from me on this one! :)


So for the nova keypairs I think Keystone is the natural place, as that
information doesn't need strong protection, it's just public keys.
For private keys Keystone wouldn't do, and a URL redirection scheme as
proposed by Jarret makes a lot of sense in this case.


++

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Simo Sorce
On Tue, 2013-07-02 at 10:07 -0400, Jay Pipes wrote:
> On 07/02/2013 09:49 AM, Jarret Raim wrote:
> > I've spent some time thinking about how Barbican (Key Management) can help
> > in this workflow.
> >
> > We will have the ability to generate SSH keys (and a host of other key &
> > certificate types). This is backed by cryptographically sound code and
> > we've spent some time figuring out the entropy problem and HSM support. If
> > the keys are stored in Barbican, we'd get the audit / logging and other
> > functionality needed for compliance.
> 
> What does the above mean? What about Barbican is audited/logged that 
> isn't in Keystone and why wouldn't such auditing/logging be added to 
> Keystone if it were needed for compliance? I'm trying to figure out why 
> there is yet another OpenStack-related project for storing 
> keys/credentials when Keystone already exists.

Barbican is meant to store primarily private/symmetric keys in a way
that allows users to get access to them after proper Keystone
integration. This material is particularly sensitive, and it was felt
that we needed a specific service that would have a higher level of
scrutiny and security. This is better achieved if the tool does only one
thing with the smallest possible code footprint than if it is merged
together with unrelated code. Auditing of code is expensive (mostly in
terms of time/eyes), so keeping a specialized service for these keys
makes sense.

>  > We also get federation which will
> > allow customers of public Clouds (or shared private Clouds) to maintain
> > custody of their own keys rather than storing them in the provider.
> 
> I don't understand. Users already have custody of their own keys. The 
> only thing that Keystone/Nova has is the public key fingerprint [1], not 
> the private key...

You actually have the public key, not just the fingerprint, but indeed
I do not see why Barbican should be involved here. A public key does not
need the same level of protection as a private key or a symmetric
encryption key, so by storing this data in Barbican we would only
needlessly expose Barbican to more access patterns and more
logging/auditing volume than is needed. 

> > There seem to be a couple of ways to take advantage of this functionality.
> > If a key is specific to a user, then Keystone could store a URI to the key
> > in Barbican and Nova could request it on server creation. Alternatively,
> > the user could pass a URI to a key into Nova directly. If we want to move
> > to always enabling SSH key access only on boot, Nova could create a key
> > under the requesting tenant in Barbican and use it on server create.
> 
> OK, so the above would basically be a "driver" in Keystone parlance for 
> the credentials module, where Keystone would just store the key in 
> Barbican and retrieve said key.
> 
> At this point, though, what exactly is the point of Barbican over a 
> simple database or KVS driver?

Not much, and it would perhaps even worsen the situation as I hinted above,
but I think Jarret assumed you were talking about generating/storing private
keys, and as you noted that is not the case.

> > Things get more interesting when we are talking about IPSec certificates
> > and the like. Barbican seems a more logical place to generate / store /
> > share these types of keys than Keystone.
> 
> Generate...perhaps. Store... I doubt it. Share...I think Keystone is the 
> most logical place to share credentials. After all, it's the 
> authentication/identity component in OpenStack.

Nope; if you need to store private keys that you need to routinely
retrieve and re-distribute, then Barbican is the right and only place.

> While encryption and key generation are interesting topics, they are 
> tangential to the fact that credentials are an attribute of the 
> identity/user, and that information is in Keystone.

If 'access credentials' remain buried (as in they can never be
retrieved) in Keystone (or whatever IdM service it bridges to) then it
is probably the right place, as it performs authentication anyway and
needs direct access to these credentials internally in some cases.

But Keystone is not the right place to function as a storage and retrieval
system for private keys; that's Barbican's turf.

So for the nova keypairs I think Keystone is the natural place, as that
information doesn't need strong protection, it's just public keys.
For private keys Keystone wouldn't do, and a URL redirection scheme as
proposed by Jarret makes a lot of sense in this case.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3

2013-07-02 Thread Russell Bryant
On 07/02/2013 07:24 AM, John Garbutt wrote:
> On 1 July 2013 15:49, Andrew Laski  wrote:
>> On 07/01/13 at 11:23am, Mauro S M Rodrigues wrote:
>>> One more thought, about os-multiple-create: I was also thinking to remove
>>> it. I don't see any real advantage to using it since it doesn't offer any kind
>>> of flexibility like choosing different flavors, images and other attributes, so
>>> anyone creating multiple servers would probably prefer an external
>>> automation tool instead, IMHO.
>>>
>>> So, is anyone using it? Is there a good reason to keep it? Did I miss
>>> something about this extension?
>>
>> I would like to see this extension go away, but only by a small margin.
>> Just because it complicates the boot/spawn workflow a bit, which really
>> isn't a big deal.
> 
> I am +1 for not moving os-multiple-create into v3.
> 
> I think this work is a better way to look at spawning multiple VMs:
> https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension
> 
> The CLI can then get updated so users are still able to spawn multiple
> VMs in a nice way.
> 
> Heat, presumably, will control this, eventually.

Instance groups seems different.  That's about putting multiple
instances in a group for the purposes of applying policy.  I don't see
anything in there that replicates booting multiple instances with a
single API call.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A patch review

2013-07-02 Thread Dolph Mathews
On Monday, July 1, 2013, Wenhao Xu wrote:

> Hi guys,
>
> The review (https://review.openstack.org/#/c/34652/) has been idle
> for a while. I am wondering if anyone has a free time slot to review it?
> Thanks.
>

Please tag the name of the relevant project in the subject line, especially
when emailing the list with requests like this. Thanks!


> Regards,
> Wenhao
>


-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Email is not registered problem

2013-07-02 Thread Ben Nemec

On 2013-07-01 18:29, Qing He wrote:
The emails from this list stopped coming to my email address; is this 
related?


I don't see how it could be.  To my knowledge there's no connection 
between Gerrit and openstack-dev.  I actually use different addresses 
for both.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Dolph Mathews
On Monday, July 1, 2013, Jamie Lennox wrote:

> On Mon, 2013-07-01 at 14:09 -0700, Nachi Ueno wrote:
> > Hi folks
> >
> > I'm interested in it too.
> > I'm working on VPN support for Neutron.
> > Public key authentication is one of feature milestone in the IPsec
> > implementation.
> > But I believe key-pair management api and the implementation will be
> > quite similar in Key for IPsec and Nova.
> >
> > so I'm +1 for moving key management for Keystone.
> >
> > Best
> > Nachi
>
> I don't know how nova's keypair management works but i assume we are
> talking about keys for ssh-ing into new virtual machines rather than
> keys for authentication against nova.
>
> Keystone's v3 api has credentials storage (see
>
> https://github.com/openstack/identity-api/blob/master/openstack-identity-api/src/markdown/identity-api-v3.md),
>  is this sufficient on behalf of keystone? There is some support in the
> current master of keystoneclient for working with these credentials.


+1; I'd like to know what the gap is from Identity API v3's /credentials to
nova key pair API, if any. The credential API was intended to avoid making
too many assumptions about how it would be used, so hopefully it can be
adopted as it is for EC2 creds today.
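To make the comparison concrete, a rough sketch of the two calls side by side (client method names from memory; the endpoints, tokens, paths and UUIDs are placeholders, and the 'ssh' credential type is just an assumption -- Keystone treats type and blob as opaque):

    from keystoneclient.v3 import client as ks_client
    from novaclient.v1_1 import client as nova_client

    pub_key = open('/home/me/.ssh/id_rsa.pub').read()

    # What Nova's keypair extension stores today:
    nova = nova_client.Client('demo', 'secret', 'demo',
                              'http://keystone.example.com:5000/v2.0')
    nova.keypairs.create('mykey', public_key=pub_key)

    # What the equivalent might look like against Identity API v3 /credentials:
    keystone = ks_client.Client(token='ADMIN',
                                endpoint='http://keystone.example.com:35357/v3')
    keystone.credentials.create(user='<user-uuid>', type='ssh', blob=pub_key)

The obvious gaps are the key-pair name/fingerprint fields and server-side keypair generation, which would need a convention on top of the opaque blob.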


>
> Otherwise would the upcoming barbican be a more appropriate place?
>
> If i've got this wrong and we are using these keys to actually
> authenticate against nova then if someone can point me to the code i'll
> see how hard it is to transfer to keystone.
>
> >
> >
> > 2013/7/1 Thierry Carrez >:
> > > Russell Bryant wrote:
> > >> On 07/01/2013 01:10 PM, Jay Pipes wrote:
> > >>> On 07/01/2013 12:23 PM, Mauro S M Rodrigues wrote:
> >  +1.. make sense to me, I always thought that was weird hehe
> >  Say the word and we will remove it from v3.
> > >>>
> > >>> Well, it's not weird, per-se... I mean I understand why it is the
> way it
> > >>> is. Nova, of course, preceded Keystone.
> > >>>
> > >>> But, it sounds like this would be something to put on the Icehouse
> > >>> horizon? Can the Nova and Keystone PTLs comment if there is interest
> in
> > >>> this?
> > >>
> > >> There is interest from me.  Dolph?
> > >
> > > Dolph is not around this week, so the answer may take a while :)
> > >
> > > --
> > > Thierry Carrez (ttx)
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org 
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Keystone store-quota-data blueprint

2013-07-02 Thread Dolph Mathews
On Monday, July 1, 2013, Jamie Lennox wrote:

> On Tue, 2013-07-02 at 02:03 +, Everett Toews wrote:
> > This topic came up at the last summit in Portland at [1] and [2].
> >
> >
> > Yehia and another colleague of his from HP had a design that was
> > discussed and it seemed like they were going to start work on it.
> > Another developer from CERN expressed interest too. I'm not sure if
> > anything ever really got started on it.
> >
> >
> > I think you'll want to wait until Dolph gets back (next week?) before
> > doing any major work on it. Ask him before moving forward.
>
> Also know that Dolph has said that nothing that affects the API will be
> accepted after h-2 so it would have to be finished, reviewed and
> committed by the 16th. Given that he's away and the discussion that would
> be around this I'd say it's very tight.


"away" ... apparently they have the internet on islands, too.

Henry Nash brought up the issue of extensions that are not included in the
default pipeline (which is how I think quota storage should be spec'd and
implemented) vs the API feature freeze. The goal of the API feature freeze
is to avoid crunch time (and producing new bugs) on release critical / core
API functionality while we should be focusing on stability and polish
there. I don't think extension development necessarily conflicts with that
goal (besides sapping review bandwidth), so I'm happy to see them merge
during milestone 3.

Granted, a whole bunch of m2 bp's already in review / approaching review fall
into that category, but we should still try and get them in before m3 :)



>
>
> > Regards,
> > Everett
> >
> >
> > [1]
> http://openstacksummitapril2013.sched.org/event/c0c6befcb4361e54d5c7e45b2f772de7
> > [2]
> http://openstacksummitapril2013.sched.org/event/7bf2cdde2dfad733b499d9c2a3f60b08
> >
> >
> > P.S. This email really belongs in openstack-dev
> >
> > On Jul 1, 2013, at 10:24 AM, Dmitry Stepanenko wrote:
> >
> > > Hi folks,
> > >
> > >
> > > we're going to work on store-quota-data blueprint
> > > (https://blueprints.launchpad.net/keystone/+spec/store-quota-data).
> > > Did anyone already work on it?
> > >
> > >
> > > Thanks & regards,
> > > Dmitry
> > > ___
> > > Mailing list: https://launchpad.net/~openstack
> > > Post to : openst...@lists.launchpad.net 
> > > Unsubscribe : https://launchpad.net/~openstack
> > > More help   : https://help.launchpad.net/ListHelp
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openst...@lists.launchpad.net 
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Config files overriding CLI: The path of most surprise.

2013-07-02 Thread Mark McLoughlin
tl;dr - this has been fixed on master for a while and the fix is
included in oslo.config-1.2.0a3

  http://docs.openstack.org/developer/oslo.config/#a3

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSLO] Current DB status

2013-07-02 Thread Mark McLoughlin
On Mon, 2013-07-01 at 17:35 +0100, Mark McLoughlin wrote:
> On Mon, 2013-07-01 at 19:33 +0300, Gary Kotton wrote:
> > On 07/01/2013 07:07 PM, Mark McLoughlin wrote:
> > > On Mon, 2013-07-01 at 18:59 +0300, Gary Kotton wrote:
> > >> On 07/01/2013 06:49 PM, Mark McLoughlin wrote:
> > >>> On Mon, 2013-07-01 at 18:32 +0300, Gary Kotton wrote:
> >  On 07/01/2013 06:13 PM, Mark McLoughlin wrote:
> > > (Oslo is not an acronym)
> >  ok.
> > 
> >  can someone please clarify what the database status is in Oslo?
> > >>> What apart from the oslo.config issue do you want clarified?
> > >> Boris had concerns about the slave database support. He mentioned that
> > >> the blueprint may be blocked -
> > >> https://blueprints.launchpad.net/nova/+spec/db-slave-handle.
> > > It's blocked in the sense that Nova can't yet use oslo.config because of
> > > the aforementioned issue.
> > >
> > >> This patch
> > >> updates to oslo.config-1.2.0a2 - so it may address the blueprint's
> > >> blocking item.
> > >>
> > >> My concern here is about the process. I was under the assumption, and
> > >> may be wrong here, that if code is approved in Oslo then it can be
> > >> consumed by other projects. Over the last few months we have had a large
> > >> number of database issues in Neutron and moved to the Oslo DB code to
> > >> address them.
> > > Yes, we've had (and continue to have) problems - no project has been
> > > able to use new oslo.config features because of a series of python
> > > packaging related issues.
> > >
> > > If you're suggesting that e.g. the DB code in oslo-incubator should
> > > avoid requiring oslo-config-1.2.0, that wouldn't help quantum much since
> > > the dependency on 1.2.0 was added in order to enable behaviour that
> > > quantum required.
> > 
> > The Oslo DB code requires oslo.config-1.2.0a2. This config version has 
> > support for 'deprecated_opts', which enabled the common DB code to 
> > support the Neutron and Nova DB configuration variables.
> > 
> > I was not aware that the slave blueprint was blocked because it was 
> > waiting for a specific config version (hopefully the Nova DB patch deals 
> > with this, as it uses the required config). My concern was that Oslo code was 
> > approved in one project (Neutron) while in another it raises questions 
> > (Nova). Basically it would be nice to know if there are issues with the 
> > Oslo DB support and, if so, what needs to be done to address them.
> 
> The blueprint says:
> 
>   https://blueprints.launchpad.net/nova/+spec/db-slave-handle
> 
>   "This is blocked until we can get oslo.config 1.2.0a2 into nova. 
>Currently it breaks unit tests when you run with tox."
> 
> Which is the same issue as the one described here:
> 
>   https://bugs.launchpad.net/oslo/+bug/1194807

Ok, I think I've everything lined up to unblock this:

  https://review.openstack.org/#/q/I6f3eb5fd2c75615d9a1cae172aed859b36b27d4c,n,z

Cheers,
Mark.
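For anyone following along, the 'deprecated_opts' support mentioned above is what lets the shared DB code accept both projects' historical option names for one setting -- roughly like this (a sketch assuming oslo.config >= 1.2.0a2; the exact option layout in oslo-incubator may differ):

    from oslo.config import cfg

    connection_opt = cfg.StrOpt(
        'connection',
        default='sqlite://',
        secret=True,
        # Old names still accepted so Nova/Neutron configs keep working.
        deprecated_opts=[cfg.DeprecatedOpt('sql_connection', group='DEFAULT'),
                         cfg.DeprecatedOpt('sql_connection', group='DATABASE')],
        help='The SQLAlchemy connection string used to connect to the database')

    cfg.CONF.register_opt(connection_opt, group='database')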


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Jay Pipes

On 07/02/2013 09:49 AM, Jarret Raim wrote:

I've spent some time thinking about how Barbican (Key Management) can help
in this workflow.

We will have the ability to generate SSH keys (and a host of other key &
certificate types). This is backed by cryptographically sound code and
we've spent some time figuring out the entropy problem and HSM support. If
the keys are stored in Barbican, we'd get the audit / logging and other
functionality needed for compliance.


What does the above mean? What about Barbican is audited/logged that 
isn't in Keystone and why wouldn't such auditing/logging be added to 
Keystone if it were needed for compliance? I'm trying to figure out why 
there is yet another OpenStack-related project for storing 
keys/credentials when Keystone already exists.


> We also get federation which will

allow customers of public Clouds (or shared private Clouds) to maintain
custody of their own keys rather than storing them in the provider.


I don't understand. Users already have custody of their own keys. The 
only thing that Keystone/Nova has is the public key fingerprint [1], not 
the private key...



There seem to be a couple of ways to take advantage of this functionality.
If a key is specific to a user, then Keystone could store a URI to the key
in Barbican and Nova could request it on server creation. Alternatively,
the user could pass a URI to a key into Nova directly. If we want to move
to always enabling SSH key access only on boot, Nova could create a key
under the requesting tenant in Barbican and use it on server create.


OK, so the above would basically be a "driver" in Keystone parlance for 
the credentials module, where Keystone would just store the key in 
Barbican and retrieve said key.


At this point, though, what exactly is the point of Barbican over a 
simple database or KVS driver?



Things get more interesting when we are talking about IPSec certificates
and the like. Barbican seems a more logical place to generate / store /
share these types of keys than Keystone.


Generate...perhaps. Store... I doubt it. Share...I think Keystone is the 
most logical place to share credentials. After all, it's the 
authentication/identity component in OpenStack.


While encryption and key generation are interesting topics, they are 
tangential to the fact that credentials are an attribute of the 
identity/user, and that information is in Keystone.


Best,
-jay

[1] 
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L611
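For quick reference, the columns that model carries, reproduced from memory (plus the usual timestamp/soft-delete columns; treat the linked file as authoritative):

    from sqlalchemy import Column, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    BASE = declarative_base()


    class KeyPair(BASE):
        """Simplified mirror of nova's key_pairs table."""
        __tablename__ = 'key_pairs'
        id = Column(Integer, primary_key=True)
        name = Column(String(255))
        user_id = Column(String(255))
        fingerprint = Column(String(255))
        public_key = Column(Text)   # the whole public key, not just the fingerprint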



I'm open to other options - we are going to build this type of
functionality and I'm interested in how people would like to use it.




Jarret



On 7/2/13 7:46 AM, "Jay Pipes"  wrote:


On 07/02/2013 08:26 AM, Simo Sorce wrote:

On Mon, 2013-07-01 at 21:03 -0400, Jay Pipes wrote:

On 07/01/2013 07:49 PM, Jamie Lennox wrote:

On Mon, 2013-07-01 at 14:09 -0700, Nachi Ueno wrote:

Hi folks

I'm interested in it too.
I'm working on VPN support for Neutron.
Public key authentication is one of feature milestone in the IPsec
implementation.
But I believe key-pair management api and the implementation will be
quite similar in Key for IPsec and Nova.

so I'm +1 for moving key management for Keystone.

Best
Nachi


I don't know how nova's keypair management works but i assume we are
talking about keys for ssh-ing into new virtual machines rather than
keys for authentication against nova.

Keystone's v3 api has credentials storage (see

https://github.com/openstack/identity-api/blob/master/openstack-identit
y-api/src/markdown/identity-api-v3.md ), is this sufficient on behalf
of keystone? There is some support in the current master of
keystoneclient for working with these credentials.

Otherwise would the upcoming barbican be a more appropriate place?

If i've got this wrong and we are using these keys to actually
authenticate against nova then if someone can point me to the code
i'll
see how hard it is to transfer to keystone.


Actually, no, I think you have it right (though the correct link is

https://github.com/openstack/identity-api/blob/master/openstack-identity
-api/v3/src/markdown/identity-api-v3.md)

I think the main work, though, has to be in removing/replacing the Nova
API /keypairs stuff with calls to Keystone's v3/credentials API.

Would the appropriate way to do this be to add an API shim into Nova's
API that simply calls out to the Keystone v3/credentials API IFF
Keystone's v3 API is enabled in the deployment? Then, deprecate the old
code and when Keystone v2 API is sunsetted, then remove the old Nova
keypairs API codepaths?


I guess you also need to handle a migration of the data from one store
to the other ?
Or are these data migrations left as an exercise to the admins ?


No, you are correct, a migration script should be included as part of
the code.
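A very rough sketch of what such a script might look like (nova's key_pairs columns are from memory; the keystone credential type/blob layout is purely an assumption, and connection strings/tokens are placeholders):

    import json

    from keystoneclient.v3 import client as ks_client
    from sqlalchemy import create_engine

    keystone = ks_client.Client(token='ADMIN',
                                endpoint='http://keystone.example.com:35357/v3')
    nova_db = create_engine('mysql://nova:secret@dbhost/nova')

    rows = nova_db.execute(
        "SELECT user_id, name, fingerprint, public_key "
        "FROM key_pairs WHERE deleted = 0")

    # Copy each active keypair into a keystone v3 credential owned by the
    # same user; the blob layout here is made up for illustration.
    for user_id, name, fingerprint, public_key in rows:
        keystone.credentials.create(
            user=user_id,
            type='ssh',
            blob=json.dumps({'name': name,
                             'fingerprint': fingerprint,
                             'public_key': public_key}))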

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/open

Re: [openstack-dev] [oslo.config] Config files overriding CLI: The path of most surprise.

2013-07-02 Thread John Dennis
On 07/01/2013 05:52 PM, Clint Byrum wrote:
> Last week I went to use oslo.config in a utility I am writing called
> os-collect-config[1]...
> 
> While running unit tests on the main() method that is used for the CLI,
> I was surprised to find that my unit tests were picking up values from
> a config file I had created just as a test. The tests can be fixed to
> disable config file lookups, but what was more troublesome was that the
> config file was overriding values I was passing in as sys.argv.
> 
> I have read the thread[2] which suggest that CLI should defer to config
> file because config files are somehow less permanent than the CLI.
> 
> I am writing today to challenge that notion, and also to suggest that even
> if that is the case, it is inappropriate to have oslo.config operate in
> such a profoundly different manner than basically any other config library
> or system software in general use. CLI options are _for config files_
> and if packagers are shipping configurations in systemd unit files,
> upstart jobs, or sysvinits, they are doing so to control the concerns
> of that particular invocation of whatever command they are running,
> and not to configure the software entirely.
> 
> CLI args are by definition ephemeral, even if somebody might make them
> "permanent" in their system, I doubt any packager would then expect that
> these CLI args would be overridden by any config files. This default is
> just wrong, and needs to be fixed.

+1

When I read "Option values in config files override those on the command
line." in the cfg.py docstring I thought surely that must be a typo
because it's the opposite of years of established practice.

I think the following captures the expected behavior

> I was also confused by the ordering in this list, though when I read more 
> carefully it seems to agree with me:
> 
>> - Default value in source code
>> - Overridden by value in config file
>> - Overridden by value in environment variable
>> - Overridden by value given as command line option
> 
> I'd like to rewrite that list as
> 
> - Value given as a command line option
> - Failing that, value in environment variable
> - Failing that, value in config file
> - Failing that, default value in source code 
> 
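A quick way to check which precedence an installed oslo.config actually gives (a throwaway test script, assuming a config file exists at /tmp/test.conf; with the old behaviour this prints 'from-config-file', with the ordering in the list above it prints 'from-command-line'):

    import sys

    from oslo.config import cfg

    cfg.CONF.register_cli_opts([
        cfg.StrOpt('answer', default='from-source-default'),
    ])

    # /tmp/test.conf contains:
    #   [DEFAULT]
    #   answer = from-config-file
    cfg.CONF(sys.argv[1:] + ['--answer', 'from-command-line'],
             default_config_files=['/tmp/test.conf'])

    print(cfg.CONF.answer)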

John



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Jarret Raim
I've spent some time thinking about how Barbican (Key Management) can help
in this workflow.

We will have the ability to generate SSH keys (and a host of other key &
certificate types). This is backed by cryptographically sound code and
we've spent some time figuring out the entropy problem and HSM support. If
the keys are stored in Barbican, we'd get the audit / logging and other
functionality needed for compliance. We also get federation which will
allow customers of public Clouds (or shared private Clouds) to maintain
custody of their own keys rather than storing them in the provider.

There seem to be a couple of ways to take advantage of this functionality.
If a key is specific to a user, then Keystone could store a URI to the key
in Barbican and Nova could request it on server creation. Alternatively,
the user could pass a URI to a key into Nova directly. If we want to move
to always enabling SSH key access only on boot, Nova could create a key
under the requesting tenant in Barbican and use it on server create.
 
Things get more interesting when we are talking about IPSec certificates
and the like. Barbican seems a more logical place to generate / store /
share these types of keys than Keystone.

I'm open to other options - we are going to build this type of
functionality and I'm interested in how people would like to use it.




Jarret



On 7/2/13 7:46 AM, "Jay Pipes"  wrote:

>On 07/02/2013 08:26 AM, Simo Sorce wrote:
>> On Mon, 2013-07-01 at 21:03 -0400, Jay Pipes wrote:
>>> On 07/01/2013 07:49 PM, Jamie Lennox wrote:
 On Mon, 2013-07-01 at 14:09 -0700, Nachi Ueno wrote:
> Hi folks
>
> I'm interested in it too.
> I'm working on VPN support for Neutron.
> Public key authentication is one of feature milestone in the IPsec
> implementation.
> But I believe key-pair management api and the implementation will be
> quite similar in Key for IPsec and Nova.
>
> so I'm +1 for moving key management for Keystone.
>
> Best
> Nachi

 I don't know how nova's keypair management works but i assume we are
 talking about keys for ssh-ing into new virtual machines rather than
 keys for authentication against nova.

 Keystone's v3 api has credentials storage (see
 
https://github.com/openstack/identity-api/blob/master/openstack-identit
y-api/src/markdown/identity-api-v3.md ), is this sufficient on behalf
of keystone? There is some support in the current master of
keystoneclient for working with these credentials.

 Otherwise would the upcoming barbican be a more appropriate place?

 If i've got this wrong and we are using these keys to actually
 authenticate against nova then if someone can point me to the code
i'll
 see how hard it is to transfer to keystone.
>>>
>>> Actually, no, I think you have it right (though the correct link is
>>> 
>>>https://github.com/openstack/identity-api/blob/master/openstack-identity
>>>-api/v3/src/markdown/identity-api-v3.md)
>>>
>>> I think the main work, though, has to be in removing/replacing the Nova
>>> API /keypairs stuff with calls to Keystone's v3/credentials API.
>>>
>>> Would the appropriate way to do this be to add an API shim into Nova's
>>> API that simply calls out to the Keystone v3/credentials API IFF
>>> Keystone's v3 API is enabled in the deployment? Then, deprecate the old
>>> code and when Keystone v2 API is sunsetted, then remove the old Nova
>>> keypairs API codepaths?
>>
>> I guess you also need to handle a migration of the data from one store
>> to the other ?
>> Or are these data migrations left as an exercise to the admins ?
>
>No, you are correct, a migration script should be included as part of
>the code.
>
>Best,
>-jay
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Swift 1.9.0 released: global clusters and more

2013-07-02 Thread John Dickinson
I'm pleased to announce that Swift 1.9.0 has been released. This has
been a great release with major features added, thanks to the combined
effort of 37 different contributors.

Full release notes: https://github.com/openstack/swift/blob/master/CHANGELOG
Download: https://launchpad.net/swift/havana/1.9.0

Feature Summary
===

Full global clusters support


With this release, Swift fully supports global clusters. A single
Swift cluster can now be deployed across a wide geographic area (e.g. across an
ocean or continent) and still provide high durability and
availability. This feature has four major parts:

* Region tier for data placement
* Adjustable replica counts
* Separate replication network support
* Affinity on reads and writes

Improvements in disk performance


The object server can now be configured to use threadpools to increase
performance and smooth out latency on storage nodes. Also, many disk
operations were reordered to increase reliability and improve
performance. This work is a direct result of the design summit
sessions in Portland.

Support for config directories
--

Swift now supports conf.d style config directories. This allows config
snippets to be managed independently and composed into the full config
for a Swift process. For example, a deployer can have a config snippet
for each piece of proxy middleware.

Multiple TempURL keys
-

The TempURL feature (temporary, signed URLs) now supports two signing
keys. This allows users to safely rotate keys without invalidating
existing signed URLs.
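For anyone wanting to try the rotation: the tempurl signature itself is the usual HMAC-SHA1 over method, expiry and path, something like the sketch below (the account metadata key names in the comment are how I understand the feature; double-check against the middleware docs):

    import hmac
    import time
    from hashlib import sha1

    def make_temp_url(key, method, path, lifetime=3600):
        expires = int(time.time() + lifetime)
        body = '%s\n%s\n%s' % (method, expires, path)
        sig = hmac.new(key, body, sha1).hexdigest()
        return '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)

    # Old and new keys live in account metadata (X-Account-Meta-Temp-URL-Key
    # and X-Account-Meta-Temp-URL-Key-2); a URL signed with either validates
    # during the rotation window, so clients can be re-signed at leisure.
    print(make_temp_url('old-secret', 'GET', '/v1/AUTH_demo/container/object'))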

Other
-

There's a ton of "other" stuff in this release including features,
security fixes, general polishing, and bug fixes. I encourage you to
check out the full release notes for more info
(https://github.com/openstack/swift/blob/master/CHANGELOG).

New Contributors


Twelve of the 37 total contributors are first-time contributors to
Swift. They are:

* Fabien Boucher (fabien.bouc...@enovance.com)
* Brian D. Burns (ios...@gmail.com)
* Alex Gaynor (alex.gay...@gmail.com)
* Edward Hope-Morley (opentas...@gmail.com)
* Matthieu Huin (m...@enovance.com)
* Shri Javadekar (shrin...@maginatics.com)
* Sergey Kraynev (skray...@mirantis.com)
* Dieter Plaetinck (die...@vimeo.com)
* Chuck Short (chuck.sh...@canonical.com)
* Dmitry Ukov (du...@mirantis.com)
* Vladimir Vechkanov (vvechka...@mirantis.com)
* niu-zglinux (niu.zgli...@gmail.com)

Thank you to everyone who contributed.

--John




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch

2013-07-02 Thread Doug Hellmann
On Tue, Jul 2, 2013 at 5:52 AM, Robert Collins wrote:

> On 2 July 2013 21:32, Thierry Carrez  wrote:
> > Thierry Carrez wrote:
> >> """
> >> 'OpenStack Programs' are efforts which are essential to the completion
> >> of our mission. Programs can create any code repository and produce any
> >> deliverable they deem necessary to achieve their goals.
> >>
> >> Programs are placed under the oversight of the Technical Committee, and
> >> contributing to one of their code repositories grants you ATC status.
> >>
> >> Current efforts or teams which want to be recognized as an 'OpenStack
> >> Program' should place a request to the Technical Committee, including a
> >> clear mission statement describing how they help the OpenStack general
> >> mission and how that effort is essential to the completion of our
> >> mission. If programs have a goal that includes the production of
> >> a server 'integrated' deliverable, that specific project would still
> >> need to go through an Incubation period.
> >>
> >> The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron',
> >> 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer', 'Documentation',
> >> 'Infrastructure', 'QA' and 'Oslo'. 'Trove' and 'Ironic' are in
> >> incubation. Those programs should retroactively submit a mission
> >> statement and initial lead designation, if they don't have one already.
> >> """
> >
> > Oops. In this variant, Trove and Ironic, as programs, would not be "in
> > incubation" (only one of their deliverables would). That last paragraph
> > should be fixed as:
> >
> > """
> > The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron',
> > 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer', 'Documentation',
> > 'Infrastructure', 'QA', 'Oslo', 'Trove' and 'Ironic'. Those programs
> > should retroactively submit a mission statement and initial lead
> > designation, if they don't have one already.
> > """
> >
> > Maybe Ironic should be merged into the TripleO program when it's
> considered.
>
> Certainly; with our focus on deploy and operations, Ironic is very
> much something we'll care about forever :). OTOH, baremetal machine
> provisioning is a distinct concern from OpenStack deployment and
> operations. I don't know that there is a better place for Ironic; it's
> certainly got significant tentacles into other areas than just Nova
> [hence it being split out in the first place]. Nevertheless: clearly
> Ironic is a Project, and Incubated. I think whether it is incorporated
> into its own Program, or TripleO, isn't a very interesting question.
> ATC membership is decoupled from things now, so \o/.
>
> On proposal 3, I wonder if it makes things too vague : if a Program
> can have one or more integrated Projects, it sort of suggests that
> perhaps Neutron be a Project of the Nova Program?
>

I like option 3 because it lets us move ahead without having to revisit
what may just have been an unfortunate narrowness of vision in the original
charter (who knew we would grow so quickly?). We have been letting the
projects evolve around feature sets in a way that helps us manage code and
feature complexity, e.g. breaking networking and block storage out of nova.
The addition of programs as groups of one or more projects is a natural way
to manage changes in the community's size and complexity as we continue to
grow.

Doug


>
> -Rob
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Cloud Services
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Jay Pipes

On 07/02/2013 08:26 AM, Simo Sorce wrote:

On Mon, 2013-07-01 at 21:03 -0400, Jay Pipes wrote:

On 07/01/2013 07:49 PM, Jamie Lennox wrote:

On Mon, 2013-07-01 at 14:09 -0700, Nachi Ueno wrote:

Hi folks

I'm interested in it too.
I'm working on VPN support for Neutron.
Public key authentication is one of feature milestone in the IPsec
implementation.
But I believe key-pair management api and the implementation will be
quite similar in Key for IPsec and Nova.

so I'm +1 for moving key management for Keystone.

Best
Nachi


I don't know how nova's keypair management works but i assume we are
talking about keys for ssh-ing into new virtual machines rather than
keys for authentication against nova.

Keystone's v3 api has credentials storage (see
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/src/markdown/identity-api-v3.md
 ), is this sufficient on behalf of keystone? There is some support in the 
current master of keystoneclient for working with these credentials.

Otherwise would the upcoming barbican be a more appropriate place?

If i've got this wrong and we are using these keys to actually
authenticate against nova then if someone can point me to the code i'll
see how hard it is to transfer to keystone.


Actually, no, I think you have it right (though the correct link is
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md)

I think the main work, though, has to be in removing/replacing the Nova
API /keypairs stuff with calls to Keystone's v3/credentials API.

Would the appropriate way to do this be to add an API shim into Nova's
API that simply calls out to the Keystone v3/credentials API IFF
Keystone's v3 API is enabled in the deployment? Then, deprecate the old
code and when Keystone v2 API is sunsetted, then remove the old Nova
keypairs API codepaths?


I guess you also need to handle a migration of the data from one store
to the other ?
Or are these data migrations left as an exercise to the admins ?


No, you are correct, a migration script should be included as part of 
the code.
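
For illustration, a minimal sketch of what such a migration might look
like, reading Nova's keypair table with SQLAlchemy and writing Keystone v3
credentials with python-keystoneclient. The table and column names, the
credential type, the blob layout and the credentials.create() keyword
arguments are assumptions for the sketch, not verified against either
project:

    import json

    from keystoneclient.v3 import client as ks_client
    from sqlalchemy import MetaData, Table, create_engine, select

    # Assumed connection details -- adjust for a real deployment.
    nova_engine = create_engine("mysql://nova:nova@localhost/nova")
    meta = MetaData(bind=nova_engine)
    key_pairs = Table("key_pairs", meta, autoload=True)

    keystone = ks_client.Client(token="ADMIN",
                                endpoint="http://localhost:35357/v3")

    for row in select([key_pairs]).execute():
        # Store each Nova keypair as a Keystone v3 credential blob.
        keystone.credentials.create(
            user=row.user_id,
            type="ssh-keypair",   # illustrative credential type
            blob=json.dumps({"name": row.name,
                             "public_key": row.public_key,
                             "fingerprint": row.fingerprint}))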


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Simo Sorce
On Mon, 2013-07-01 at 21:03 -0400, Jay Pipes wrote:
> On 07/01/2013 07:49 PM, Jamie Lennox wrote:
> > On Mon, 2013-07-01 at 14:09 -0700, Nachi Ueno wrote:
> >> Hi folks
> >>
> >> I'm interested in it too.
> >> I'm working on VPN support for Neutron.
> >> Public key authentication is one of the feature milestones in the IPsec
> >> implementation.
> >> But I believe the key-pair management API and implementation will be
> >> quite similar for IPsec keys and for Nova.
> >>
> >> so I'm +1 for moving key management to Keystone.
> >>
> >> Best
> >> Nachi
> >
> > I don't know how nova's keypair management works but i assume we are
> > talking about keys for ssh-ing into new virtual machines rather than
> > keys for authentication against nova.
> >
> > Keystone's v3 api has credentials storage (see
> > https://github.com/openstack/identity-api/blob/master/openstack-identity-api/src/markdown/identity-api-v3.md
> >  ), is this sufficient on behalf of keystone? There is some support in the 
> > current master of keystoneclient for working with these credentials.
> >
> > Otherwise would the upcoming barbican be a more appropriate place?
> >
> > If i've got this wrong and we are using these keys to actually
> > authenticate against nova then if someone can point me to the code i'll
> > see how hard it is to transfer to keystone.
> 
> Actually, no, I think you have it right (though the correct link is 
> https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md)
> 
> I think the main work, though, has to be in removing/replacing the Nova 
> API /keypairs stuff with calls to Keystone's v3/credentials API.
> 
> Would the appropriate way to do this be to add an API shim into Nova's 
> API that simply calls out to the Keystone v3/credentials API IFF 
> Keystone's v3 API is enabled in the deployment? Then, deprecate the old 
> code and when Keystone v2 API is sunsetted, then remove the old Nova 
> keypairs API codepaths?

I guess you also need to handle a migration of the data from one store
to the other ?
Or are these data migrations left as an exercise to the admins ?

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova scheduler code refactoring

2013-07-02 Thread Alexey Ovchinnikov
Hi everyone,

it appears that various redundant and ambiguously designed pieces of
code are lying hidden in different parts of Nova. I have recently
started hunting them down by proposing and implementing a blueprint
for refactoring the scheduler's host manager:

https://blueprints.launchpad.net/nova/+spec/host-manager-overhaul
https://review.openstack.org/#/c/33621

Please consider these changes, your suggestions and
criticism are welcome and appreciated!

Alexey.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Discussion about resize and migrate status separation

2013-07-02 Thread John Garbutt
The confirm will happen automatically after a period of time.
Maybe 0 is a valid option there; I can't remember.
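
If memory serves, this is driven by a resize_confirm_window style option
polled from a periodic task in the compute manager. A rough sketch of that
auto-confirm pattern (the helper names and the meaning of 0 here are
illustrative, not Nova's actual code):

    import datetime

    # 0 means "never auto-confirm" in this sketch; the real option's
    # semantics may differ.
    CONFIRM_WINDOW_SECONDS = 60


    def poll_unconfirmed_resizes(list_unconfirmed, confirm):
        """Confirm any resize left unconfirmed past the window.

        list_unconfirmed() should yield dicts with an 'updated_at'
        datetime; confirm() performs the equivalent of the user's
        confirm call.
        """
        if not CONFIRM_WINDOW_SECONDS:
            return
        cutoff = (datetime.datetime.utcnow()
                  - datetime.timedelta(seconds=CONFIRM_WINDOW_SECONDS))
        for migration in list_unconfirmed():
            # Only confirm resizes that have been sitting in
            # VERIFY_RESIZE longer than the configured window.
            if migration["updated_at"] < cutoff:
                confirm(migration)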

John

On 1 July 2013 03:18, guohliu  wrote:
> On 06/28/2013 05:58 PM, John Garbutt wrote:
>>
>> I am not sure its worth the extra calls.
>>
>> Hopefully once we have live-migrate and cold-migrate/resize refactor
>> done we can add a new single API that better combines the different
>> approaches, which should remove the confusion.
>>
>> Thinking about it, that should sove the issue about the different states
>> too.
>>
>> John
>>
>> On 28 June 2013 09:18, guohliu  wrote:
>>>
>>> On 06/27/2013 07:05 PM, John Garbutt wrote:

 I haven't seen any plans to change this.

 The way I see it, the states make most sense for resize, which is the
 end-user facing operation.

 Personally I see migrate as a more admin focused operation.
 So to help simplify the code, I am OK with slightly confusing states
 for those users.
 The exception, I guess, is when users want to "re-balance" their
 servers between availability zones.

 With any luck, post refactor, it should be easier to re-visit this,
 and perhaps add those extra states.

 John

 On 27 June 2013 09:14, guohliu  wrote:
>
> Greetings,
>
> I apologize if this question was already covered and I missed it. As we
> know, migrate and resize share the same code path, as well as the same
> instance status, notification messages, etc. in the current code base.
> This might confuse users who perform a migrate but see a VERIFY_RESIZE
> status, and as far as I know this isn't targeted by the migrate refactor
> work. Do you think this is an issue? How about separating migrate and
> resize with different instance statuses and notification messages, as
> well as task states? Any comments would be appreciated.
>
> Best Regards
> Guohliu
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>> Hi John,
>>>
>>> Thanks for your comments, that makes sense to me. Resize is all good;
>>> we might just need to slightly separate the migrate status from the
>>> resize code logic. One exception is that we might need to add confirm
>>> migrate and revert migrate to the admin actions. Thoughts?
>>>
>>>
>>> Regards
>>> Guohliu
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> I prefer that we not require confirm and revert for migrate, because
> unlike resize, the server's location is not user-visible information the
> way the flavor type is; when the migrate finishes, that's it. Thoughts?
> I would like to propose a patch for more detailed discussion.
>
>
> Regards
> Guohliu
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3

2013-07-02 Thread John Garbutt
On 1 July 2013 15:49, Andrew Laski  wrote:
> On 07/01/13 at 11:23am, Mauro S M Rodrigues wrote:
>> One more thought, about os-multiple-create: I was also thinking of
>> removing it. I don't see any real advantage to using it, since it doesn't
>> offer any kind of flexibility such as choosing different flavors, images
>> and other attributes. So anyone creating multiple servers would probably
>> prefer an external automation tool instead of multiple-create, IMHO.
>>
>> So is anyone using it? Is there a good reason to keep it? Did I miss
>> something about this extension?
>
> I would like to see this extension go away, but only by a small margin.
> Just because it complicates the boot/spawn workflow a bit, which really
> isn't a big deal.

I am +1 for not moving os-multiple-create into v3.

I think this work is a better way to look at spawning multiple VMs:
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension

The CLI can then get updated so users are still able to spawn multiple
VMs in a nice way.
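
As a point of reference, the client-side fan-out is already trivial; a
rough sketch with python-novaclient (the credentials, image and flavor
names are placeholders, and this is not the proposed CLI change itself):

    from novaclient.v1_1 import client

    nova = client.Client("demo", "secret", "demo",
                         "http://keystone.example.com:5000/v2.0")

    image = nova.images.find(name="cirros-0.3.1")
    flavor = nova.flavors.find(name="m1.tiny")

    # Boot five servers one request at a time instead of relying on the
    # server-side min_count/max_count behaviour of os-multiple-create.
    servers = [nova.servers.create(name="web-%02d" % i,
                                   image=image,
                                   flavor=flavor)
               for i in range(5)]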

Heat, presumably, will control this, eventually.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSN][OSSG] Nova Baremetal Exposes Previous Tenant Data

2013-07-02 Thread Clark, Robert Graham
Nova Baremetal Exposes Previous Tenant Data
-

### Summary ###
Data of previous tenants may be exposed to new ones when using Nova Baremetal

### Affected Services / Software ###
Keystone, Databases

### Discussion ###
Nova Baremetal is intended for testing and development only; it is not intended 
to be production ready. Experience has shown that, despite that warning, the 
OpenStack community is keen to embrace new technologies and deploy at-risk. 
This OSSN serves to signpost some of the risks.

Without secure boot, and without full openflow hardware networking during the 
boot process, it is impossible to trust multiple tenants on baremetal at all - 
because the vectors for attack are so low level that instances may be running 
in a virtual environment and unaware of it, with the virtual environment 
capturing secrets, forcing entropy pools to be predictable and other such 
hostile behaviour.

### Recommended Actions ###
Do not use Nova Baremetal where secure separation of tenants on hardware is a 
requirement unless you have a fully verifiable boot chain and network hardware.

### Contacts / References ###
This OSSN : https://bugs.launchpad.net/ossn/+bug/1174153
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Question about fixed_ip_bulk_create() db.api function.

2013-07-02 Thread Jay Pipes

On 07/02/2013 05:32 AM, Victor Sergeyev wrote:

Hello All.

I have a question about the patch `Add unique constraint to FixedIp`
(see https://review.openstack.org/#/c/29364/), which is a part of
blueprint `Complete db unique key enforcement on all tables dbs`
(https://blueprints.launchpad.net/nova/+spec/db-enforce-unique-keys).

There is a discussion about fixed_ip_bulk_create() behavior in this
patch review. At the moment this function receives a list of fixed_ips
and tries to insert these IPs to database one-by-one. If
DBDuplicateEntry exception raised, the function raises an exception with
the first duplicate IP in exception message. The same is true for
floating_ip_bulk_create() function.

Currently we can get only one duplicate IP error message per request.
So, if we add 1000 IP addresses, including 100 duplicates, we have to
call this function 100 times to find out which IPs aren't unique.

A few ways of modifying of this function have been proposed:
1) we can try to add all IPs to the database using a bulk insert. If one of
those IPs violates the unique constraint, the exception is raised
containing the first duplicate IP address, the transaction is rolled
back and no IPs are saved to DB.
This keeps current behavior, but is significantly faster when a large
number of rows are inserted (Oslo integrity error handling code should
be modified a bit to retrieve the duplicated value from an exception
message).

2) we can add all unique IPs to the database and write duplicate IPs to the
log (see the sketch after this list).
This is much slower, because we have to create a separate transaction
for each IP (as the first integrity error would cancel the current
transaction until ROLLBACK was emitted).
On the other hand, this allows us to save valid IPs to database and
provide the caller with a list of duplicate IPs.

3) we can try to add all IPs to the database. All duplicate IPs are
collected. If there is at least one duplicate, no IPs are saved to the
database, but the caller receives a list with all duplicate IPs.
(A separate transaction is needed for each IP. The rollback must be done
manually by issuing DELETE statements for the matching duplicate IPs.)
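
To make option 2 above concrete, a rough standalone sketch. It catches
SQLAlchemy's raw IntegrityError rather than oslo's DBDuplicateEntry
wrapper, and the table handling is illustrative only:

    import logging

    from sqlalchemy.exc import IntegrityError

    LOG = logging.getLogger(__name__)


    def fixed_ip_bulk_create_best_effort(engine, fixed_ips_table, addresses):
        """Insert what we can; log and return duplicates (option 2)."""
        duplicates = []
        for addr in addresses:
            try:
                # One transaction per row, so a duplicate only aborts
                # its own insert.
                with engine.begin() as conn:
                    conn.execute(fixed_ips_table.insert(),
                                 {"address": addr})
            except IntegrityError:
                duplicates.append(addr)
        if duplicates:
            LOG.warning("Skipped duplicate fixed IPs: %s",
                        ", ".join(duplicates))
        return duplicates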


Or option #4:

Do a single query before starting any transaction that finds the 
intersection of inserted floating IPs with existing floating IPs. Let's 
call that set A.


if len(A) > 0:
    raise SomeException("Message with list of existing floating IPs")
else:
    try:
        session.begin()
        # execute single query to insert records
        session.commit()
    except:
        session.rollback()

That way you get a fail-fast scenario and the only time you have any 
issues (i.e. a rollback) is the rare occasion when you have a matching 
floating IP inserted into the database in between your original query to 
get the intersection and when you issue the call to insert the records.
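
Fleshing that out, a rough standalone sketch of option #4 with plain
SQLAlchemy (the table and column names are illustrative, not Nova's actual
models, and the oslo session handling is omitted):

    from sqlalchemy import MetaData, Table, create_engine, select

    engine = create_engine("mysql://nova:nova@localhost/nova")
    meta = MetaData(bind=engine)
    fixed_ips = Table("fixed_ips", meta, autoload=True)


    def fixed_ip_bulk_create(addresses):
        # Fail fast: report every requested address that already exists.
        existing = [r.address for r in
                    select([fixed_ips.c.address])
                    .where(fixed_ips.c.address.in_(addresses)).execute()]
        if existing:
            raise ValueError("Fixed IPs already exist: %s"
                             % ", ".join(existing))

        # Insert everything in one transaction; a concurrent duplicate
        # slipped in after the check above still causes a rollback here.
        with engine.begin() as conn:
            conn.execute(fixed_ips.insert(),
                         [{"address": addr} for addr in addresses])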


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move keypair management out of Nova and into Keystone?

2013-07-02 Thread Day, Phil
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 02 July 2013 02:04
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Move keypair management out of Nova and into
> Keystone?
> 
> On 07/01/2013 07:49 PM, Jamie Lennox wrote:
> > On Mon, 2013-07-01 at 14:09 -0700, Nachi Ueno wrote:
> >> Hi folks
> >>
> >> I'm interested in it too.
> >> I'm working on VPN support for Neutron.
> >> Public key authentication is one of the feature milestones in the IPsec
> >> implementation.
> >> But I believe the key-pair management API and implementation will be
> >> quite similar for IPsec keys and for Nova.
> >>
> >> so I'm +1 for moving key management to Keystone.
> >>
> >> Best
> >> Nachi
> >
> > I don't know how nova's keypair management works but i assume we are
> > talking about keys for ssh-ing into new virtual machines rather than
> > keys for authentication against nova.
> >
> > Keystone's v3 api has credentials storage (see
> > https://github.com/openstack/identity-api/blob/master/openstack-identity-
> api/src/markdown/identity-api-v3.md ), is this sufficient on behalf of 
> keystone?
> There is some support in the current master of keystoneclient for working with
> these credentials.
> >
> > Otherwise would the upcoming barbican be a more appropriate place?
> >
> > If i've got this wrong and we are using these keys to actually
> > authenticate against nova then if someone can point me to the code
> > i'll see how hard it is to transfer to keystone.
> 
> Actually, no, I think you have it right (though the correct link is
> https://github.com/openstack/identity-api/blob/master/openstack-identity-
> api/v3/src/markdown/identity-api-v3.md)
> 
> I think the main work, though, has to be in removing/replacing the Nova API
> /keypairs stuff with calls to Keystone's v3/credentials API.
> 
> Would the appropriate way to do this be to add an API shim into Nova's API 
> that
> simply calls out to the Keystone v3/credentials API IFF Keystone's v3 API is
> enabled in the deployment? Then, deprecate the old code and when Keystone
> v2 API is sunsetted, then remove the old Nova keypairs API codepaths?
> 
> Best,
> -jay
> 

Yep - following the pattern set by things like the floating IP and SecGroups 
APIs as these moved to Quantum would definitely be the way to go.

Beyond just the Nova API shim we'd need to change the logic in the server 
creation to get the key value from Keystone rather than the Nova DB.
There is a KeypairAPI in compute/api.py but not everything is abstracted to use 
it at the moment.   If we tidy that up then that would provide the redirection 
point to Keystone. 
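
A very rough sketch of what that redirection point might look like. The
config option name, class layout and backend objects are all hypothetical;
this is not Nova's actual KeypairAPI:

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.BoolOpt("keypairs_in_keystone", default=False,
                                  help="Store keypairs as Keystone v3 "
                                       "credentials instead of in the "
                                       "Nova DB"))


    class KeypairAPI(object):
        """Single entry point that hides where keypairs actually live."""

        def __init__(self, db_backend, keystone_backend):
            self._db = db_backend              # today's Nova-DB backend
            self._keystone = keystone_backend  # hypothetical Keystone wrapper

        def _backend(self):
            return (self._keystone if CONF.keypairs_in_keystone
                    else self._db)

        def get_key_pair(self, context, user_id, name):
            return self._backend().get_key_pair(context, user_id, name)

        def create_key_pair(self, context, user_id, name):
            return self._backend().create_key_pair(context, user_id, name)

        def delete_key_pair(self, context, user_id, name):
            return self._backend().delete_key_pair(context, user_id, name)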
  

It would also be really good if there was a migration tool developed at the 
same time to migrate existing keys from Nova to Keystone.

Phil

> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Basic definition of OpenStack Programs and first batch

2013-07-02 Thread Robert Collins
On 2 July 2013 21:32, Thierry Carrez  wrote:
> Thierry Carrez wrote:
>> """
>> 'OpenStack Programs' are efforts which are essential to the completion
>> of our mission. Programs can create any code repository and produce any
>> deliverable they deem necessary to achieve their goals.
>>
>> Programs are placed under the oversight of the Technical Committee, and
>> contributing to one of their code repositories grants you ATC status.
>>
>> Current efforts or teams which want to be recognized as an 'OpenStack
>> Program' should place a request to the Technical Committee, including a
>> clear mission statement describing how they help the OpenStack general
>> mission and how that effort is essential to the completion of our
>> mission. If programs have a goal that includes the production of
>> a server 'integrated' deliverable, that specific project would still
>> need to go through an Incubation period.
>>
>> The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron',
>> 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer', 'Documentation',
>> 'Infrastructure', 'QA' and 'Oslo'. 'Trove' and 'Ironic' are in
>> incubation. Those programs should retroactively submit a mission
>> statement and initial lead designation, if they don't have one already.
>> """
>
> Oops. In this variant, Trove and Ironic, as programs, would not be "in
> incubation" (only one of their deliverables would). That last paragraph
> should be fixed as:
>
> """
> The initial Programs are 'Nova', 'Swift', 'Cinder', 'Neutron',
> 'Horizon', 'Glance', 'Keystone', 'Heat', 'Ceilometer', 'Documentation',
> 'Infrastructure', 'QA', 'Oslo', 'Trove' and 'Ironic'. Those programs
> should retroactively submit a mission statement and initial lead
> designation, if they don't have one already.
> """
>
> Maybe Ironic should be merged into the TripleO program when it's considered.

Certainly; with our focus on deploy and operations, Ironic is very
much something we'll care about forever :). OTOH, baremetal machine
provisioning is a distinct concern from OpenStack deployment and
operations. I don't know that there is a better place for Ironic; it's
certainly got significant tentacles into other areas than just Nova
[hence it being split out in the first place]. Nevertheless : clearly
Ironic is a Project, and Incubated. I think whether it is incorporated
into its own Program, or TripleO, isn't a very interesting question.
ATC membership is decoupled from things now, so \o/.

On proposal 3, I wonder if it makes things too vague : if a Program
can have one or more integrated Projects, it sort of suggests that
perhaps Neutron be a Project of the Nova Program?

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

