[openstack-dev] I18n meeting tomorrow

2013-09-03 Thread Ying Chun Guo


Hi,


There will be an OpenStack I18n team meeting at 0700 UTC on Thursday (September
5th) in IRC channel #openstack-meeting.
This time slot is friendly to Asia/Europe participants. You are welcome to join the meeting.

Over the past several weeks we have made good progress setting up the
infrastructure in Transifex.
We now have a common glossary shared across all OpenStack projects, and
Horizon is ready for translation.
Tomorrow is the string freeze date, so now is a critical time for
translation work.
We want to make sure Horizon has a high-quality internationalized
release for the Havana version.
If you are interested in translations or tools, you are welcome to join us.

We will cover the following topics this time:

   Action items from the last meeting
   Horizon I18n version release process
   Translated document publish process
   Open discussion


For more details, please look into
https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting.


You can also contact us through the IRC channel #openstack-translation, or
the mailing list: openstack-i...@list.openstack.org.
Please refer to our wiki page for more details:
https://wiki.openstack.org/wiki/I18nTeam


Best regards
Daisy


[openstack-dev] [openstack][nova] a unit test problem

2013-09-03 Thread Wangpan
Hi experts,

I have an odd unit test issue in the commit 
https://review.openstack.org/#/c/44639/
The test results are here:
http://logs.openstack.org/39/44639/7/check/gate-nova-python27/4ddc671/testr_results.html.gz

The failing test is: 
nova.tests.compute.test_compute_api.ComputeCellsAPIUnitTestCase.test_delete_in_resized
I have two questions about this issue:
1) Why does it pass when I run it with 'testr run 
nova.tests.compute.test_compute_api.ComputeCellsAPIUnitTestCase.test_delete_in_resized'
and also with 'nosetests' in my local venv?
2) Why does the other test 
nova.tests.compute.test_compute_api.ComputeAPIUnitTestCase.test_delete_in_resized
pass, even though it also inherits from the class '_ComputeAPIUnitTestMixIn'?

Because it works in my local venv, I have no idea how to fix it. Can anybody 
give me some advice?
Thanks a lot!
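
For what it's worth, a common cause of this "passes alone, fails in the full
run" pattern is state shared between tests. A generic illustration of the
idea, with invented names (this is not the actual nova code):

    import unittest


    class FakeComputeRPCAPI(object):
        # Class-level attribute: shared by every instance, so a mutation
        # made by one test leaks into every test that runs after it in
        # the same process.
        behaviors = {}

        def confirm_resize(self, ctxt, instance):
            return self.behaviors.get('confirm_resize', 'ok')


    class TestA(unittest.TestCase):
        def test_mutates_shared_state(self):
            # Mutates the class attribute instead of an instance attribute.
            FakeComputeRPCAPI.behaviors['confirm_resize'] = 'boom'
            self.assertEqual(
                'boom', FakeComputeRPCAPI().confirm_resize(None, None))


    class TestB(unittest.TestCase):
        def test_delete_in_resized(self):
            # Passes when run alone; fails after TestA has run in the
            # same process, which is what a full test run does.
            self.assertEqual(
                'ok', FakeComputeRPCAPI().confirm_resize(None, None))

Running TestB by itself passes, while running both classes in one process
fails, which matches the "passes locally in isolation, fails in the gate"
symptom.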

2013-09-04



Wangpan


[openstack-dev] [ceilometer] Wait a minute... I thought we were going to remove Alembic until Icehouse-1?

2013-09-03 Thread Jay Pipes
So I went to do the work I said I was going to do at last week's 
Ceilometer meeting -- translate the 2 Alembic migrations in the 
Ceilometer source into SA-migrate migrations -- and then rebased my 
branch only to find 2 more Alembic migrations added in the last few days:


https://review.openstack.org/#/c/42716/
https://review.openstack.org/#/c/42715/

I will note that there is no unit testing of either of these migrations, 
because neither of them runs on SQLite, which is what the unit tests use 
(improperly, IMHO). There is a unique constraint name in one of them 
(only apparently used in the PostgreSQL driver) that is inconsistent 
with the naming of unique constraints that is used in the other 
migration. Note that I am not in favor of the unique constraint naming 
convention of table_columnA0columnB0columnC0, as I've noted in the 
upstream oslo.db patch that adds a linter-style check for this convention:


https://review.openstack.org/#/c/42307/

I thought we were going to translate the existing 2 Alembic migrations 
to SA-migrate migrations, and then do a switch to Alembic (removing the 
old SA-migrate versioning) in Icehouse-1? This was supposed to get us 
past the current mess of having both SA-migrate and Alembic migrations 
in the same source code base -- which is confusing a bunch of 
contributors who have written SA-migrate migrations.


Can we have a decision on this please?

I thought the plan from last week was:

1) Translate the 2 Alembic migrations to SA-Migrate migrations
2) Remove Alembic support from Ceilometer
3) Add unit tests (pretty much as-is from Glance) that would test the 
SA-migrate migrations in the unit tests as well as the MySQL and 
PostgreSQL testers in the gate

4) Add SA-migrate migrations for the remainder of Havana
5) Immediately after the cut of Havana final, do a cutover to Alembic 
from SA-migrate that would:
 a) Create an initial Alembic migration that would be the schema state 
of the Ceilometer database at the last cut of Havana
 b) Write a simple check for the migrate_version table in the database 
to determine whether the database was under SA-migrate control. If so, do 
nothing other than remove the migrate_version table (a sketch of this check 
follows below)

 c) Remove all the ceilometer/storage/sqlalchemy/migrate_repo/*
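
For step 5b, a minimal sketch of what the check could look like, using plain
SQLAlchemy; the function name and engine URL are invented, and this is not
the actual Ceilometer code:

    import sqlalchemy as sa


    def drop_sa_migrate_versioning(engine):
        """If the DB was under SA-migrate control, drop its tracking table."""
        conn = engine.connect()
        try:
            # migrate_version is the table SA-migrate uses to record the
            # current schema revision; its presence is the tell-tale sign.
            if engine.dialect.has_table(conn, 'migrate_version'):
                # The schema itself is left untouched; only the old tool's
                # version bookkeeping is removed before Alembic takes over.
                conn.execute('DROP TABLE migrate_version')
        finally:
            conn.close()

    # The engine URL is a placeholder.
    drop_sa_migrate_versioning(
        sa.create_engine('mysql://user:password@localhost/ceilometer'))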


Thanks,
-jay



Re: [openstack-dev] [keystone][heat] Question re deleting trusts via trust token

2013-09-03 Thread Clint Byrum
Excerpts from Dolph Mathews's message of 2013-09-03 16:12:00 -0700:
> On Tue, Sep 3, 2013 at 5:52 PM, Steven Hardy  wrote:
> 
> > Hi,
> >
> > I have a question for the keystone folks re the expected behavior when
> > deleting a trust.
> >
> > Is it expected that you can only ever delete a trust as the user who
> > created it, and that you can *not* delete the trust when impersonating that
> > user using a token obtained via that trust?
> >
> 
> We have some tests in keystone somewhat related to this scenario, but
> nothing that asserts that specific behavior-
> 
> https://github.com/openstack/keystone/blob/master/keystone/tests/test_auth.py#L737-L763
> 
> > The reason for this question, is for the Heat use-case, this may represent
> > a significant operational limitation, since it implies that the user who
> > creates the stack is the only one who can ever delete it.
> >
> 
> I don't follow this implication-- can you explain further? I don't see how
> the limitation above (if it exists) would impact this behavior or be a
> blocker for the design below.
> 

The way heatclient works right now, it will obtain a trust from
keystone, and then give that trust to Heat to use while it is managing
the stack. However, if this user was just one user in a team of users
who manage that stack, then when the stack is deleted, neither Heat nor
the user who is deleting the stack will be able to delete the trust
that was given to Heat.

This presents an operational hurdle for Heat users, as they will have to
have a stack "owner" user that is shared amongst a team. Otherwise they
may be stuck in a situation where the creating user is not available to
delete a stack that must be deleted for some reason.

Ideally, as a final operation with the trust, Heat (or the user doing the
delete) would be able to use the trust to delete the trust itself.



Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Joe Gordon
On Tue, Sep 3, 2013 at 6:44 PM, John Griffith
wrote:

>
>
>
> On Tue, Sep 3, 2013 at 7:27 PM, Bryan D. Payne  wrote:
>
>>
>>   > How can someone use your code without a key manager?

 Some key management mechanism is required although it could be
 simplistic. For example, we’ve tested our code internally with an
 implementation of the key manager interface that returns a single, constant
 key.

>>> That works for testing but doesn't address: "the current dearth of key
>>> management within OpenStack does not preclude the use of our existing work
>>> within a production environment"
>>>
>>
>> My understanding here is that users are free to use any key management
>> mechanism that they see fit.  This can be a simple "return a static key"
>> option.  Or it could be using something more feature rich like Barbican.
>>  Or it could be something completely home grown that is suited to a
>> particular OpenStack deployment.
>>
>> I don't understand why we are getting hung up on having a key manager as
>> part of OpenStack in order to accept this work.  Clearly there are other
>> pieces of OpenStack that have external dependencies (message queues, to
>> name one).
>>
>>
As Russell so eloquently said, "I generally want *everything* we merge to
be usable with the code in the tree." That doesn't mean something cannot
have external dependencies; it just needs to be usable with those
external dependencies, and no additional integration work should be required.



>> I, for one, am looking forward to using this feature and would be very
>> disappointed to see it pushed back for yet another release.
>>
>
>>
>>
>>>  Is a feature complete if no one can use it?
>>>
>>> I am happy with a less than secure but fully functional key manager.
>>>  But with no key manager that can be used in a real deployment, what is the
>>> value of including this code?
>>>
>>
>> Of course people can use it.  They just need to integrate with some
>> solution of the deployment's choosing that provides key management
>> capabilities.  And, of course, if you choose to not use the volume
>> encryption then you don't need to worry about it at all.
>>
>> I've watched this feature go through many, many iterations throughout
>> both the Grizzly and Havana release cycles.  The authors have been working
>> hard to address everyone's concerns.  In fact, they have navigated quite a
>> gauntlet to get this far.  And what they have now is an excellent, working
>> solution.  Let's accept this nice security enhancement and move forward.
>>
>
>> Cheers,
>> -bryan
>>
>>
>>
> Do you have any docs or guides describing a reference implementation that
> would be able to use this in the manner you describe?
>

++


>
>
>
>


Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread John Griffith
On Tue, Sep 3, 2013 at 7:27 PM, Bryan D. Payne  wrote:

>
>   > How can someone use your code without a key manager?
>>>
>>> Some key management mechanism is required although it could be
>>> simplistic. For example, we’ve tested our code internally with an
>>> implementation of the key manager interface that returns a single, constant
>>> key.
>>>
>> That works for testing but doesn't address: "the current dearth of key
>> management within OpenStack does not preclude the use of our existing work
>> within a production environment"
>>
>
> My understanding here is that users are free to use any key management
> mechanism that they see fit.  This can be a simple "return a static key"
> option.  Or it could be using something more feature rich like Barbican.
>  Or it could be something completely home grown that is suited to a
> particular OpenStack deployment.
>
> I don't understand why we are getting hung up on having a key manager as
> part of OpenStack in order to accept this work.  Clearly there are other
> pieces of OpenStack that have external dependencies (message queues, to
> name one).
>
> I, for one, am looking forward to using this feature and would be very
> disappointed to see it pushed back for yet another release.
>
>
>
>>  Is a feature complete if no one can use it?
>>
>> I am happy with a less than secure but fully functional key manager.  But
>> with no key manager that can be used in a real deployment, what is the
>> value of including this code?
>>
>
> Of course people can use it.  They just need to integrate with some
> solution of the deployment's choosing that provides key management
> capabilities.  And, of course, if you choose to not use the volume
> encryption then you don't need to worry about it at all.
>
> I've watched this feature go through many, many iterations throughout both
> the Grizzly and Havana release cycles.  The authors have been working hard
> to address everyone's concerns.  In fact, they have navigated quite a
> gauntlet to get this far.  And what they have now is an excellent, working
> solution.  Let's accept this nice security enhancement and move forward.
>
> Cheers,
> -bryan
>
>
>
Do you have any docs or guides describing a reference implementation that
would be able to use this in the manner you describe?


Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Bryan D. Payne
>  > How can someone use your code without a key manager?
>>
>> Some key management mechanism is required although it could be
>> simplistic. For example, we’ve tested our code internally with an
>> implementation of the key manager interface that returns a single, constant
>> key.
>>
> That works for testing but doesn't address: "the current dearth of key
> management within OpenStack does not preclude the use of our existing work
> within a production environment"
>

My understanding here is that users are free to use any key management
mechanism that they see fit.  This can be a simple "return a static key"
option.  Or it could be using something more feature rich like Barbican.
 Or it could be something completely home grown that is suited to a
particular OpenStack deployment.

I don't understand why we are getting hung up on having a key manager as
part of OpenStack in order to accept this work.  Clearly there are other
pieces of OpenStack that have external dependencies (message queues, to
name one).

I, for one, am looking forward to using this feature and would be very
disappointed to see it pushed back for yet another release.



>  Is a feature complete if no one can use it?
>
> I am happy with a less than secure but fully functional key manager.  But
> with no key manager that can be used in a real deployment, what is the
> value of including this code?
>

Of course people can use it.  They just need to integrate with some
solution of the deployment's choosing that provides key management
capabilities.  And, of course, if you choose to not use the volume
encryption then you don't need to worry about it at all.

I've watched this feature go through many, many iterations throughout both
the Grizzly and Havana release cycles.  The authors have been working hard
to address everyone's concerns.  In fact, they have navigated quite a
gauntlet to get this far.  And what they have now is an excellent, working
solution.  Let's accept this nice security enhancement and move forward.

Cheers,
-bryan


[openstack-dev] Join the online meeting for moderators of Ask OpenStack

2013-09-03 Thread Stefano Maffulli
We are holding an online meeting for all the people who use Ask OpenStack.

On Thursday Sept. 5 from 6PM to 7PM Pacific Time on IRC freenode.net
#openstack-community

The main intention is to share best practices and, in the long term,
further the objective of Ask OpenStack:

 to provide the best place on the Internet for people to find
 solutions (answers) to common problems (questions) related to
 OpenStack.

Since anybody can become a moderator with only 100 karma points, this
meeting is for every user of Ask OpenStack.

We will go through the existing recommendations on the wiki and do some
"live" practice together on existing questions.

Please spread the word about this meeting among your fellow members of
local user groups.

Regards,
Stef

-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] [Ceilometer] Need help with HBase backend

2013-09-03 Thread Stas Maksimov
Hi Thomas,

Not yet, sorry. But I'm working on it (in parallel!); I was having a bit of an
issue setting up a new env with devstack.

Will update you as soon as I have some results.

Thanks,
Stas





On 3 September 2013 23:00, Thomas Maddox wrote:

>  Hey Stas,
>
>  Were you ever able to get any answers on this? :)
>
>  Thanks!
>
>  -Thomas
>
>   On 8/12/13 9:42 AM, "Thomas Maddox"  wrote:
>
>   Happens all of the time. I haven't been able to get a single meter
> stored. :(
>
>   From: Stas Maksimov 
> Reply-To: OpenStack Development Mailing List <
> openstack-dev@lists.openstack.org>
> Date: Monday, August 12, 2013 9:34 AM
> To: OpenStack Development Mailing List 
> Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend
>
>  Is it sporadic or happens all the time?
>
>  In my case my Ceilometer VM was different from HBase VM, so I'm not sure
> if DHCP issues can affect localhost connections.
>
>  Thanks,
> Stas
>
> On 12 August 2013 15:29, Thomas Maddox wrote:
>
>>  Hmmm, that's interesting.
>>
>>  That would affect an all-in-one deployment? It's referencing localhost
>> right now; not distributed. My Thrift server is hbase://127.0.0.1:9090/.
>> Or would it still be affected, because it's a software-facilitated
>> localhost reference and I'm doing dev inside of a VM (in the cloud) rather
>> than a hardware host?
>>
>>  I really appreciate your help!
>>
>>  -Thomas
>>
>>   From: Stas Maksimov 
>> Reply-To: OpenStack Development Mailing List <
>> openstack-dev@lists.openstack.org>
>> Date: Monday, August 12, 2013 9:17 AM
>> To: OpenStack Development Mailing List > >
>> Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend
>>
>>  Aha, so here it goes. The problem was not caused by monkey-patching or
>> multithreading issues, it was caused by the DevStack VM losing its
>> connection and getting a new address from the DHCP server. Once I fixed the
>> connection issues, the problem with eventlet disappeared.
>>
>> Hope this helps,
>> Stas
>>
>> On 12 August 2013 14:49, Stas Maksimov  wrote:
>>
>>> Hi Thomas,
>>>
>>> I definitely saw this before, iirc it was caused by monkey-patching
>>> somewhere else in ceilometer. It was fixed in the end before i submitted
>>> hbase implementation.
>>>
>>> At this moment unfortunately that's all I can recollect on the subject.
>>> I'll get back to you if I have an 'aha' moment on this. Feel free to
>>> contact me off-list regarding this hbase driver.
>>>
>>> Thanks,
>>> Stas.
>>>   Hey team,
>>>
>>>  I am working on a fix for retrieving the latest metadata on a resource
>>> rather than the first with the HBase implementation, and I'm running into
>>> some trouble when trying to get my dev environment to work with HBase. It
>>> looks like a concurrency issue when it tries to store the metering data.
>>> I'm getting the following error in my logs (summary):
>>>
>>>  2013-08-11 18:52:33.980 2445 ERROR
>>> ceilometer.collector.dispatcher.database
>>> [req-3b3c65c9-1a1b-4b5d-bba5-8224f074b176 None None] Second
>>> simultaneous read on fileno 7 detected.  Unless you really know what you're
>>> doing, make sure that only one greenthread can read any particular socket.
>>>  Consider using a pools.Pool. If you do know what you're doing and want to
>>> disable this error, call eventlet.debug.hub_prevent_multiple_readers(False)
>>>
>>>  Full traceback: http://paste.openstack.org/show/43872/
>>>
>>>  Has anyone else run into this lovely little problem? It looks like the
>>> implementation needs to use happybase.ConnectionPool, unless I'm missing
>>> something.
>>>
>>>  Thanks in advance for help! :)
>>>
>>>  -Thomas
>>>
>>>
>>>
>>
>>
>>
>
>
>
>
>


Re: [openstack-dev] [Neutron] Security groups with OVS instead of iptables?

2013-09-03 Thread Ravi Chunduru
It is possible to enforce security groups on OVS, provided you have an
OpenFlow controller, instead of the neutron agent, managing the OVS switches.


On Tue, Sep 3, 2013 at 10:29 AM, Scott Devoid  wrote:

> +1 for an answer to this.
>
> The reference documentation suggests running Neutron OVS with a total of 6
> software switches between the VM and public NAT addresses. [1]
> What performance differences do folks see with this configuration
> vs. the two-software-switch configuration for linux bridge?
>
> [1]
> http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html#d6e1178
>
>
> On Tue, Sep 3, 2013 at 8:34 AM, Lorin Hochstein 
> wrote:
>
>> (Also asked at
>> https://ask.openstack.org/en/question/4718/security-groups-with-ovs-instead-of-iptables/
>> )
>>
>> The only security group implementations in neutron seem to be
>> iptables-based. Is it technically possible to implement security groups
>> using openvswitch flow rules, instead of iptables rules?
>>
>> It seems like this would cut down on the complexity associated with the
>> current OVSHybridIptablesFirewallDriver implementation, where we need to
>> create an extra linux bridge and veth pair to work around the
>> iptables-openvswitch issues. (This also breaks if the user happens to
>> install the openvswitch brcompat module).
>>
>> Lorin
>> --
>> Lorin Hochstein
>> Lead Architect - Cloud Services
>> Nimbis Services, Inc.
>> www.nimbisservices.com
>>
>>
>>
>
>
>


-- 
Ravi


[openstack-dev] [keystone][heat] Question re deleting trusts via trust token

2013-09-03 Thread Steven Hardy
Hi,

I have a question for the keystone folks re the expected behavior when
deleting a trust.

Is it expected that you can only ever delete a trust as the user who
created it, and that you can *not* delete the trust when impersonating that
user using a token obtained via that trust?
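
To make the scenario concrete, here is a minimal reproduction against the v3
OS-TRUST API; the endpoint and IDs below are placeholders, and the status
codes in the comments describe the behavior I am asking about rather than
confirmed keystone behavior:

    import requests

    KEYSTONE = 'http://localhost:5000/v3'  # placeholder endpoint


    def delete_trust(trust_id, token):
        """Attempt to delete a trust using the given token."""
        resp = requests.delete(
            '%s/OS-TRUST/trusts/%s' % (KEYSTONE, trust_id),
            headers={'X-Auth-Token': token})
        return resp.status_code

    # A token belonging to the trustor is expected to succeed (204); the
    # question is whether a token obtained *via* the trust gets 204 too,
    # or is rejected (403).
    print(delete_trust('TRUST_ID', 'TRUSTOR_TOKEN'))
    print(delete_trust('TRUST_ID', 'TRUST_SCOPED_TOKEN'))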

The reason for this question, is for the Heat use-case, this may represent
a significant operational limitation, since it implies that the user who
creates the stack is the only one who can ever delete it.

Current Heat behavior is to allow any user in the same tenant, provided
they have the requisite roles, to delete the stack, which AFAICT atm will
not be possible when using trusts.

Clarification as to whether this is as-designed or a bug somewhere much
appreciated, thanks!

Steve



Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Russell Bryant
On 09/03/2013 06:26 PM, Bhandaru, Malini K wrote:
> The issue here is that the key manager, Barbican, which is under
> development, is in incubation.
> Folks can download and use Barbican. The Barbican team has worked diligently
> to produce the system.
> In fact, folks can download and use it, and vote for Joel's patch to be merged.
> And do give us feedback on Barbican.
> 
> The chicken-and-egg problem, and the desire to keep the key manager as a
> separate service, entails the incubation requirement.

Couple of things...

Barbican is not yet in incubation.  That is an official project status
that must be applied for and is reviewed by the OpenStack Technical
Committee.

Even if it was, we don't have a patch ready to go along with this
feature to make use of the Barbican API.

-- 
Russell Bryant



Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Russell Bryant
On 09/03/2013 05:41 PM, Coffman, Joel M. wrote:
>> How can someone use your code without a key manager?
> 
> Some key management mechanism is required although it could be
> simplistic. For example, we’ve tested our code internally with an
> implementation of the key manager interface that returns a single,
> constant key.

I understand Joe's concern.  I've used a similar argument to turn down
other features.  I generally want *everything* we merge to be usable
with the code in the tree.  If it's not usable, I push to have it wait
until it is.

In this case, it's obviously something we should have caught and brought
up earlier.  If there is any possible way a simple implementation of the
key manager interface could be included, then that could probably save
this for Havana.  We could consider a feature freeze exception to give
it a few extra days, but not more than that.

Otherwise, as much as I really hate to say it, this will probably have
to get deferred.

-- 
Russell Bryant



Re: [openstack-dev] [Neutron] Security groups with OVS instead of iptables?

2013-09-03 Thread Salvatore Orlando
I am not entirely sure that any of the open source plugins available in the
neutron source tree currently provides the ability to enforce security
groups through OVS flow management.
But I might be missing some out-of-tree plugin, of which I have little to
no knowledge.

To answer the initial question - yes, it's technically possible, but it's
also cumbersome.
Nova security group rules can easily generate a situation in which
thousands of flow rules are needed. If not properly handled by adopting
appropriate strategies, such as masking bits for network addresses, the
number can easily spiral. This means that it is likely that most packets
will miss the kernel-level flow table and require a context switch to user
mode (it might be even worse if you store the rules on the controller and
you ask your switch to fetch them with OpenFlow) - in this case the
performance would be even worse than the double bridge we traverse now.
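
As a purely illustrative sketch (the rule fields and the flow syntax here
are invented, not any plugin's actual code), each ingress rule expands to at
least one flow per port, which is where the rule count explodes:

    def sg_rule_to_flow(port, rule):
        """Render one ingress rule as an ovs-ofctl add-flow match string."""
        match = 'priority=100,in_port=%d,dl_dst=%s,%s,tp_dst=%d' % (
            port['ofport'], port['mac'], rule['protocol'], rule['port'])
        if rule.get('remote_cidr'):
            # Without CIDR masking, one flow per remote address is needed.
            match += ',nw_src=%s' % rule['remote_cidr']
        return match + ',actions=normal'

    # "Allow SSH from 10.0.0.0/8" for a single VM port:
    print(sg_rule_to_flow({'ofport': 5, 'mac': 'fa:16:3e:00:00:01'},
                          {'protocol': 'tcp', 'port': 22,
                           'remote_cidr': '10.0.0.0/8'}))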

That said, it's not impossible. It's just that so far nobody has seriously
tackled this issue.

Regards,
Salvatore


On 3 September 2013 23:54, Ravi Chunduru  wrote:

> It is possible to enforce security groups on OVS provided you have
> Openflow Controller instead of neutron agent managing the OVS switches.
>
>
> On Tue, Sep 3, 2013 at 10:29 AM, Scott Devoid  wrote:
>
>> +1 for an answer to this.
>>
>> The reference documentation suggests running Neutron OVS with a total of
>> 6 software switches between the VM and public NAT addresses. [1]
>> What performance differences do folks see with this configuration
>> vs. the two-software-switch configuration for linux bridge?
>>
>> [1]
>> http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html#d6e1178
>>
>>
>> On Tue, Sep 3, 2013 at 8:34 AM, Lorin Hochstein > > wrote:
>>
>>> (Also asked at
>>> https://ask.openstack.org/en/question/4718/security-groups-with-ovs-instead-of-iptables/
>>> )
>>>
>>> The only security group implementations in neutron seem to be
>>> iptables-based. Is it technically possible to implement security groups
>>> using openvswitch flow rules, instead of iptables rules?
>>>
>>> It seems like this would cut down on the complexity associated with the
>>> current OVSHybridIptablesFirewallDriver implementation, where we need to
>>> create an extra linux bridge and veth pair to work around the
>>> iptables-openvswitch issues. (This also breaks if the user happens to
>>> install the openvswitch brcompat module).
>>>
>>> Lorin
>>> --
>>> Lorin Hochstein
>>> Lead Architect - Cloud Services
>>> Nimbis Services, Inc.
>>> www.nimbisservices.com
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Ravi
>
>
>


Re: [openstack-dev] [keystone][heat] Question re deleting trusts via trust token

2013-09-03 Thread Dolph Mathews
On Tue, Sep 3, 2013 at 5:52 PM, Steven Hardy  wrote:

> Hi,
>
> I have a question for the keystone folks re the expected behavior when
> deleting a trust.
>
> Is it expected that you can only ever delete a trust as the user who
> created it, and that you can *not* delete the trust when impersonating that
> user using a token obtained via that trust?
>

We have some tests in keystone somewhat related to this scenario, but
nothing that asserts that specific behavior-

https://github.com/openstack/keystone/blob/master/keystone/tests/test_auth.py#L737-L763


> The reason for this question, is for the Heat use-case, this may represent
> a significant operational limitation, since it implies that the user who
> creates the stack is the only one who can ever delete it.
>

I don't follow this implication-- can you explain further? I don't see how
the limitation above (if it exists) would impact this behavior or be a
blocker for the design below.


>
> Current Heat behavior is to allow any user in the same tenant, provided
> they have the requisite roles, to delete the stack


That seems like a reasonable design. With trusts, any user who has been
delegated the requisite role on the same tenant should be able to delete
the stack.


> which AFAICT atm will
> not be possible when using trusts.
>

Similar to the above, I don't understand how trusts presents a blocker?


>
> Clarification as to whether this is as-designed or a bug somewhere much
> appreciated, thanks!
>
> Steve
>
>



-- 

-Dolph


Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Bhandaru, Malini K
The issue here is that the key manager, Barbican, which is under development,
is in incubation.
Folks can download and use Barbican. The Barbican team has worked diligently to
produce the system.
In fact, folks can download and use it, and vote for Joel's patch to be merged.
And do give us feedback on Barbican.

The chicken-and-egg problem, and the desire to keep the key manager as a
separate service, entails the incubation requirement.
Regards
Malini

From: Joe Gordon [joe.gord...@gmail.com]
Sent: Tuesday, September 03, 2013 6:06 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] key management and Cinder volume
encryption

On Tue, Sep 3, 2013 at 5:41 PM, Coffman, Joel M. <joel.coff...@jhuapl.edu> wrote:

> How can someone use your code without a key manager?

Some key management mechanism is required although it could be simplistic. For 
example, we’ve tested our code internally with an implementation of the key 
manager interface that returns a single, constant key.

That works for testing but doesn't address: "the current dearth of key 
management within OpenStack does not preclude the use of our existing work 
within a production environment"




I think the underlying issue is how to handle interrelated features – if Nova 
doesn’t want to accept the volume encryption feature without a full-fledged key 
manager, then why accept a key manager (or its interface stubs) unless it 
already has a feature that requires it (e.g., volume encryption)? And 
round-and-round it goes.

You can propose both patches at the same time, one being dependent on the other, 
so we can merge both at the same time.




I’d also like to point out that the volume encryption feature is “complete” and 
won’t require changes when a full-fledged key manager becomes available. All 
that’s needed is to specify the key manager via a configuration option. So this 
request is definitely *not* a case of trying to land a feature that isn’t 
finished and is disabled by default (see [1], [2], and [3]).

Is a feature complete if no one can use it?

I am happy with a less than secure but fully functional key manager.  But with 
no key manager that can be used in a real deployment, what is the value of 
including this code?




[1] http://lists.openstack.org/pipermail/openstack-dev/2013-April/008244.html

[2] http://lists.openstack.org/pipermail/openstack-dev/2013-April/008315.html

[3] http://lists.openstack.org/pipermail/openstack-dev/2013-April/008268.html





From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Tuesday, September 03, 2013 4:48 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] key management and Cinder volume encryption







On Tue, Sep 3, 2013 at 4:38 PM, Coffman, Joel M. <joel.coff...@jhuapl.edu> wrote:

We have fully implemented support for transparently encrypting Cinder 
volumes 
from within Nova (see  https://review.openstack.org/#/c/30976/), but the lack 
of a secure key manager within OpenStack currently precludes us from 
integrating our work with that piece of the overall architecture. Instead, a 
key manager interface (see  https://review.openstack.org/#/c/30973/) abstracts 
this interaction. We would appreciate the consideration of the Nova core team 
regarding merging our existing work because 1) there is nothing immediately 
available with which to integrate; 2) services such as 
Barbican are on the path to 
incubation and alternative key management schemes (e.g., KMIP Client for volume 
encryption key 
management)
 have also been proposed; 3) we avoid the hassle of rebasing until the 
aforementioned services become available; and 4) our code does not directly 
depend upon a particular key manager but upon the aforementioned interface, 
which should be simple for key managers to implement. Furthermore, the current 
dearth of key management within OpenStack does not preclude the use of our 
existing work within a production environment; although the security is 
diminished, our implementation provides protection against certain attacks like 
intercepting the iSCSI communication between the compute and storage host.





How can someone use your code without a key manager?





Feedback regarding the possibility of merging our work would be appreciated.



Joel








Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Joe Gordon
On Tue, Sep 3, 2013 at 5:41 PM, Coffman, Joel M. wrote:

> > How can someone use your code without a key manager?
>
> Some key management mechanism is required although it could be simplistic.
> For example, we’ve tested our code internally with an implementation of the
> key manager interface that returns a single, constant key.
>
That works for testing but doesn't address: "the current dearth of key
management within OpenStack does not preclude the use of our existing work
within a production environment"


>
> I think the underlying issue is how to handle interrelated features – if
> Nova doesn’t want to accept the volume encryption feature without a
> full-fledged key manager, then why accept a key manager (or its interface
> stubs) unless it already has a feature that requires it (e.g., volume
> encryption)? And round-and-round it goes.
>

You can propose both patches at the same time, one being dependent on the
other, so we can merge both at the same time.


>
> I’d also like to point out that the volume encryption feature is
> “complete” and won’t require changes when a full-fledged key manager
> becomes available. All that’s needed is to specify the key manager via a
> configuration option. So this request is definitely *not* a case of
> trying to land a feature that isn’t finished and is disabled by default
> (see [1], [2], and [3]).
>
Is a feature complete if no one can use it?

I am happy with a less than secure but fully functional key manager.  But
with no key manager that can be used in a real deployment, what is the
value of including this code?


>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2013-April/008244.html
>
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2013-April/008315.html
>
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2013-April/008268.html
>
> From: Joe Gordon [mailto:joe.gord...@gmail.com]
> Sent: Tuesday, September 03, 2013 4:48 PM
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [nova] key management and Cinder volume
> encryption
>
>
> On Tue, Sep 3, 2013 at 4:38 PM, Coffman, Joel M. 
> wrote:
>
> We have fully implemented support for transparently encrypting Cinder
> volumes from within Nova (see
> https://review.openstack.org/#/c/30976/), but the lack of a secure key
> manager within OpenStack currently precludes us from integrating our work
> with that piece of the overall architecture. Instead, a key manager
> interface (see  https://review.openstack.org/#/c/30973/) abstracts this
> interaction. We would appreciate the consideration of the Nova core team
> regarding merging our existing work because 1) there is nothing immediately
> available with which to integrate; 2) services such as 
> Barbican are on the path to
> incubation and alternative key management schemes (e.g., KMIP
> Client for volume encryption key 
> management)
> have also been proposed; 3) we avoid the hassle of rebasing until the
> aforementioned services become available; and 4) our code does not directly
> depend upon a particular key manager but upon the aforementioned interface,
> which should be simple for key managers to implement. Furthermore, the
> current dearth of key management within OpenStack does not preclude the use
> of our existing work within a production environment; although the security
> is diminished, our implementation provides protection against certain
> attacks like intercepting the iSCSI communication between the compute and
> storage host.
>
>
> How can someone use your code without a key manager?
>
>
> Feedback regarding the possibility of merging our work would be
> appreciated.
>
> Joel


Re: [openstack-dev] [Ceilometer] Need help with HBase backend

2013-09-03 Thread Thomas Maddox
Hey Stas,

Were you ever able to get any answers on this? :)

Thanks!

-Thomas

On 8/12/13 9:42 AM, "Thomas Maddox" <thomas.mad...@rackspace.com> wrote:

Happens all of the time. I haven't been able to get a single meter stored. :(

From: Stas Maksimov <maksi...@gmail.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Monday, August 12, 2013 9:34 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend

Is it sporadic or happens all the time?

In my case my Ceilometer VM was different from HBase VM, so I'm not sure if 
DHCP issues can affect localhost connections.

Thanks,
Stas

On 12 August 2013 15:29, Thomas Maddox <thomas.mad...@rackspace.com> wrote:
Hmmm, that's interesting.

That would affect an all-in-one deployment? It's referencing localhost right 
now; not distributed. My Thrift server is 
hbase://127.0.0.1:9090/. Or would it still be affected, 
because it's a software-facilitated localhost reference and I'm doing dev 
inside of a VM (in the cloud) rather than a hardware host?

I really appreciate your help!

-Thomas

From: Stas Maksimov <maksi...@gmail.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Monday, August 12, 2013 9:17 AM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend

Aha, so here it goes. The problem was not caused by monkey-patching or 
multithreading issues, it was caused by the DevStack VM losing its connection 
and getting a new address from the DHCP server. Once I fixed the connection 
issues, the problem with eventlet disappeared.

Hope this helps,
Stas

On 12 August 2013 14:49, Stas Maksimov <maksi...@gmail.com> wrote:

Hi Thomas,

I definitely saw this before, iirc it was caused by monkey-patching somewhere 
else in ceilometer. It was fixed in the end before i submitted hbase 
implementation.

At this moment unfortunately that's all I can recollect on the subject. I'll 
get back to you if I have an 'aha' moment on this. Feel free to contact me 
off-list regarding this hbase driver.

Thanks,
Stas.

Hey team,

I am working on a fix for retrieving the latest metadata on a resource rather 
than the first with the HBase implementation, and I'm running into some trouble 
when trying to get my dev environment to work with HBase. It looks like a 
concurrency issue when it tries to store the metering data. I'm getting the 
following error in my logs (summary):

2013-08-11 18:52:33.980 2445 ERROR ceilometer.collector.dispatcher.database 
[req-3b3c65c9-1a1b-4b5d-bba5-8224f074b176 None None] Second simultaneous read 
on fileno 7 detected.  Unless you really know what you're doing, make sure that 
only one greenthread can read any particular socket.  Consider using a 
pools.Pool. If you do know what you're doing and want to disable this error, 
call eventlet.debug.hub_prevent_multiple_readers(False)

Full traceback: http://paste.openstack.org/show/43872/

Has anyone else run into this lovely little problem? It looks like the 
implementation needs to use happybase.ConnectionPool, unless I'm missing 
something.
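
A minimal sketch of the ConnectionPool idea (the pool size, table name, and
helper function are assumptions for illustration, not the actual ceilometer
code):

    import happybase

    # One pool per process; each greenthread checks out its own connection,
    # so no two greenthreads ever read the same socket, which is exactly
    # the condition the eventlet error above complains about.
    pool = happybase.ConnectionPool(size=10, host='127.0.0.1', port=9090)


    def record_meter(row_key, columns):
        with pool.connection() as conn:
            conn.table('meter').put(row_key, columns)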

Thanks in advance for help! :)

-Thomas










Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Coffman, Joel M.
> How can someone use your code without a key manager?
Some key management mechanism is required although it could be simplistic. For 
example, we've tested our code internally with an implementation of the key 
manager interface that returns a single, constant key.
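
For illustration, a minimal sketch of what such a constant-key implementation
might look like; the class and method names are invented for this example and
are not the interface proposed in the patch:

    class ConstantKeyManager(object):
        """Key manager that always returns one fixed key (testing only).

        Offers no real security: every volume is encrypted under the
        same key.
        """

        _KEY = b'\x00' * 32  # a fixed 256-bit key

        def create_key(self, context, **kwargs):
            # Every "created" key resolves to the same constant key.
            return 'constant-key-id'

        def get_key(self, context, key_id):
            return self._KEY

        def delete_key(self, context, key_id):
            # Nothing is stored, so there is nothing to delete.
            pass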

I think the underlying issue is how to handle interrelated features - if Nova 
doesn't want to accept the volume encryption feature without a full-fledged key 
manager, then why accept a key manager (or its interface stubs) unless it 
already has a feature that requires it (e.g., volume encryption)? And 
round-and-round it goes.

I'd also like to point out that the volume encryption feature is "complete" and 
won't require changes when a full-fledged key manager becomes available. All 
that's needed is to specify the key manager via a configuration option. So this 
request is definitely *not* a case of trying to land a feature that isn't 
finished and is disabled by default (see [1], [2], and [3]).

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-April/008244.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2013-April/008315.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2013-April/008268.html


From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Tuesday, September 03, 2013 4:48 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] key management and Cinder volume encryption



On Tue, Sep 3, 2013 at 4:38 PM, Coffman, Joel M. <joel.coff...@jhuapl.edu> wrote:
We have fully implemented support for transparently encrypting Cinder 
volumes 
from within Nova (see  https://review.openstack.org/#/c/30976/), but the lack 
of a secure key manager within OpenStack currently precludes us from 
integrating our work with that piece of the overall architecture. Instead, a 
key manager interface (see  https://review.openstack.org/#/c/30973/) abstracts 
this interaction. We would appreciate the consideration of the Nova core team 
regarding merging our existing work because 1) there is nothing immediately 
available with which to integrate; 2) services such as 
Barbican are on the path to 
incubation and alternative key management schemes (e.g., KMIP Client for volume 
encryption key 
management)
 have also been proposed; 3) we avoid the hassle of rebasing until the 
aforementioned services become available; and 4) our code does not directly 
depend upon a particular key manager but upon the aforementioned interface, 
which should be simple for key managers to implement. Furthermore, the current 
dearth of key management within OpenStack does not preclude the use of our 
existing work within a production environment; although the security is 
diminished, our implementation provides protection against certain attacks like 
intercepting the iSCSI communication between the compute and storage host.


How can someone use your code without a key manager?


Feedback regarding the possibility of merging our work would be appreciated.

Joel




Re: [openstack-dev] [Ceilometer] Documentation and patches

2013-09-03 Thread Anne Gentle
On Tue, Sep 3, 2013 at 10:14 AM, Thierry Carrez wrote:

> Anne Gentle wrote:
> > Nova is the overall "winner" (cough) with 110 doc bugs followed by
> > keystone with 26. 110 doc bugs indicates a serious need.
>
> Ouch.
>
> > Teams, please follow through on docs and see how you can help.
> >
> > https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=nova
> > https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=keystone
> > https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=neutron
> > https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=swift
> > https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=glance
> > https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=cinder
> > https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=xen
>
> One way to raise awareness would be to crash the project-specific
> meetings and attract everyone's attention to those bugs, pasting the
> corresponding link. It just takes one doc-team member to be around, and
> should be more efficient than filing extra bugtasks or mentioning it on
> the release meeting (which is not attended by that many developers
> nowadays).
>
> Then nothing will beat getting personal, identifying the best person and
> tracking them down on IRC :) But that takes much more time.
>
>
Looking at the ical feed, I know it's not possible for one person to be the
doc nag if they want to get any other work done in a week, as there are now
nearly 30 hour-long meetings each week. :) One thought is to identify doc
liaisons per project that serve the purpose of bringing docs up at each
weekly meeting.
https://wiki.openstack.org/wiki/Documentation/ProjectDocLeads

Honestly, though, my sense is that the biggest difficulty is for devs to
step back and look at the big picture of OpenStack and write docs that way.
We also need more deployers writing docs for each other, and users writing
docs for consuming OpenStack clouds. Ideas on enabling those writers are
also welcome.

Anne


> --
> Thierry Carrez (ttx)
>
>



-- 
Anne Gentle
annegen...@justwriteclick.com


[openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Coffman, Joel M.
We have fully implemented support for transparently encrypting Cinder 
volumes 
from within Nova (see  https://review.openstack.org/#/c/30976/), but the lack 
of a secure key manager within OpenStack currently precludes us from 
integrating our work with that piece of the overall architecture. Instead, a 
key manager interface (see  https://review.openstack.org/#/c/30973/) abstracts 
this interaction. We would appreciate the consideration of the Nova core team 
regarding merging our existing work because 1) there is nothing immediately 
available with which to integrate; 2) services such as 
Barbican are on the path to 
incubation and alternative key management schemes (e.g., KMIP Client for volume 
encryption key 
management)
 have also been proposed; 3) we avoid the hassle of rebasing until the 
aforementioned services become available; and 4) our code does not directly 
depend upon a particular key manager but upon the aforementioned interface, 
which should be simple for key managers to implement. Furthermore, the current 
dearth of key management within OpenStack does not preclude the use of our 
existing work within a production environment; although the security is 
diminished, our implementation provides protection against certain attacks like 
intercepting the iSCSI communication between the compute and storage host.

Feedback regarding the possibility of merging our work would be appreciated.

Joel


Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Joe Gordon
On Tue, Sep 3, 2013 at 4:38 PM, Coffman, Joel M. wrote:

> We have fully implemented support for transparently encrypting Cinder
> volumes from within Nova (see
> https://review.openstack.org/#/c/30976/), but the lack of a secure key
> manager within OpenStack currently precludes us from integrating our work
> with that piece of the overall architecture. Instead, a key manager
> interface (see  https://review.openstack.org/#/c/30973/) abstracts this
> interaction. We would appreciate the consideration of the Nova core team
> regarding merging our existing work because 1) there is nothing immediately
> available with which to integrate; 2) services such as 
> Barbicanare on the path to 
> incubation and alternative key management schemes (e.g., KMIP
> Client for volume encryption key 
> management)
> have also been proposed; 3) we avoid the hassle of rebasing until the
> aforementioned services become available; and 4) our code does not directly
> depend upon a particular key manager but upon the aforementioned interface,
> which should be simple for key managers to implement. Furthermore, the
> current dearth of key management within OpenStack does not preclude the use
> of our existing work within a production environment; although the security
> is diminished, our implementation provides protection against certain
> attacks like intercepting the iSCSI communication between the compute and
> storage host.
>

How can someone use your code without a key manager?


>
> Feedback regarding the possibility of merging our work would be
> appreciated.
>
> ** **
>
> Joel
>
>
>


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-03 Thread Michael Still
On Wed, Sep 4, 2013 at 1:54 AM, Vishvananda Ishaya
 wrote:
>
> +1 I think we should be reconstructing data where we can, but keeping track of
> deleted data in a backup table so that we can restore it on a downgrade seems
> like overkill.

I guess it comes down to use case... Do we honestly expect admins to
regret an upgrade and downgrade instead of just restoring from
backup? If so, then we need to have backup tables for the cases where
we can't reconstruct the data (i.e. it was provided by users and
therefore not something we can calculate).
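
For the cases where we do keep backup tables, a hedged sketch of the shape
such a migration could take, with invented table and column names and
MySQL-flavoured SQL (this is not an actual nova migration):

    def upgrade(migrate_engine):
        # Preserve the user-supplied column in a shadow table before
        # dropping it, so a later downgrade can restore it.
        migrate_engine.execute(
            'CREATE TABLE instances_hostname_backup AS '
            'SELECT id, hostname FROM instances')
        migrate_engine.execute(
            'ALTER TABLE instances DROP COLUMN hostname')


    def downgrade(migrate_engine):
        migrate_engine.execute(
            'ALTER TABLE instances ADD COLUMN hostname VARCHAR(255)')
        migrate_engine.execute(
            'UPDATE instances, instances_hostname_backup b '
            'SET instances.hostname = b.hostname '
            'WHERE instances.id = b.id')
        migrate_engine.execute('DROP TABLE instances_hostname_backup')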

Michael

-- 
Rackspace Australia



[openstack-dev] Recent Keystone OpenLDAP install documentation

2013-09-03 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Hello,

I am looking for recent OpenLDAP installation and configuration documentation 
to use with Keystone Havana H2. Please let me know if you have a pointer to 
some.

Regards,

Mark Miller



Re: [openstack-dev] Neutron + Grenade (or other upgrade testing)

2013-09-03 Thread Sean Dague

On 08/27/2013 01:20 PM, Maru Newby wrote:


On Aug 26, 2013, at 10:23 AM, Dean Troyer  wrote:


On Mon, Aug 26, 2013 at 10:50 AM, Maru Newby  wrote:
Is anyone working on/planning on adding support for neutron to grenade?  Or is 
there any other automated upgrade testing going on for neutron?

We deliberately avoided migrations in Grenade (like Nova Volume -> Cinder) as we 
wanted to focus on upgrades within projects.  Migrations will necessarily be much 
more complicated, especially Nova Network -> Neutron.  At some point Neutron 
should be added to Grenade, but only as a release upgrade step for some basic 
configuration.

That said, I'm sure there would be great appreciation for a recipe to duplicate 
an existing Nova Network config in Neutron.  We can debate if that belongs in 
Grenade should it ever exist…


I was referring to upgrades within projects - in this case Quantum to Neutron.  
I'm assuming that belongs in grenade?


That would be totally fair game for grenade.

That being said, there is no neutron or quantum support in grenade at 
all at this point. So it would be new features for people to bring in.


-Sean

--
Sean Dague
http://dague.net



[openstack-dev] Teaching me to fish

2013-09-03 Thread Lloyd Dewolf
Hi OpenStack Developers,

I started working with OpenStack in November 2011, coming from five years
working on WordPress. Part of my motivation was to get deeper in the
stack.

Wow, in these 1.5 years I have gone deep! Thank you for your patience
in providing me the best education. Working with you and Piston Cloud
has been among my greatest experiences in learning and growth. Thank
you.

The OpenStack development and test processes are setting very high
standards for open source!

It's incredible to see how far OpenStack has come and what's on the
horizon. What an incredible project!

I'm now on to other things. If you want to stay in touch, please
update your address book to foolswis...@gmail.com .

Thanks for the fishing,
Lloyd

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Q&A section added to incubation request

2013-09-03 Thread Kurt Griffiths
Folks,

I've attempted to consolidate recent questions and answers re Marconi at
the bottom of our incubation request:

http://goo.gl/msB8l0

Please take a look and let me know if there's anything I missed or if you
have any corrections.

Cheers,
Kurt





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate breakage process - Let's fix! (related but not specific to neutron)

2013-09-03 Thread Sean Dague

On 08/17/2013 12:26 AM, Clint Byrum wrote:

Excerpts from Maru Newby's message of 2013-08-16 16:42:23 -0700:


On Aug 16, 2013, at 11:44 AM, Clint Byrum  wrote:


Excerpts from Maru Newby's message of 2013-08-16 11:25:07 -0700:

Neutron has been in and out of the gate for the better part of the past month, 
and it didn't slow the pace of development one bit.  Most Neutron developers 
kept on working as if nothing was wrong, blithely merging changes with no 
guarantees that they weren't introducing new breakage.  New bugs were indeed 
merged, greatly increasing the time and effort required to get Neutron back in 
the gate.  I don't think this is sustainable, and I'd like to make a suggestion 
for how to minimize the impact of gate breakage.

For the record, I don't think consistent gate breakage in one project should be 
allowed to hold up the development of other projects.  The current approach of 
skipping tests or otherwise making a given job non-voting for innocent projects 
should continue.  It is arguably worth taking the risk of relaxing gating for 
those innocent projects rather than halting development unnecessarily.

However, I don't think it is a good idea to relax a broken gate for the 
offending project.  So if a broken job/test is clearly Neutron related, it 
should continue to gate Neutron, effectively preventing merges until the 
problem is fixed.  This would both raise the visibility of breakage beyond the 
person responsible for fixing it, and prevent additional breakage from slipping 
past were the gating to be relaxed.

Thoughts?



I think this is a cultural problem related to the code review discussing
from earlier in the week.

We are not looking at finding a defect and reverting as a good thing where
high fives should be shared all around. Instead, "you broke the gate"
seems to mean "you are a bad developer". I have been a bad actor here too,
getting frustrated with the gate-breaker and saying the wrong thing.

The problem really is "you _broke_ the gate". It should be "the gate has
found a defect, hooray!". It doesn't matter what causes the gate to stop,
it is _always_ a defect. Now, it is possible the defect is in tempest,
or jenkins, or HP/Rackspace's clouds where the tests run. But it is
always a defect that what worked before, does not work now.

Defects are to be expected. None of us can write perfect code. We should
be happy to revert commits and go forward with an enabled gate while
the team responsible for the commit gathers information and works to
correct the issue.


You're preaching to the choir, and I suspect that anyone with an interest in 
software quality is likely to prefer problem solving to finger pointing.  
However, my intent with this thread was not to promote more constructive 
thinking about defect detection.  Rather, I was hoping to communicate a flaw in 
the existing process and seek consensus on how that process could best be 
modified to minimize the cost of resolving gate breakage.



I believe that the process is a symptom of the culture. If we were
more eager to revert/discover/fix/re-submit on failure, we wouldn't
be turning off the gate for things. Instead we cling to whatever has
had the requisite "+2/approval" as if passing the stringent review has
imparted our code with magical powers which will eventually morph into
a passing gate.

In a perfect world we could make our CI infrastructure bisect the failures
to try and isolate the commits that did them so at least anybody can see
the commit that did the damage and revert it quickly. Realistically, most
of the time we remove from the gate because the failures are intermittent
and take _forever_ to discover, so that may not even be possible.

I am suggesting that we all change our perspective and embrace "revert
this immediately" as "thank you for finding that defect" not "you jerk
why did you revert my code". It may still be hard to find which commit
to revert, but at least one can spend that time with the idea that they
will be rewarded, rather than punished, for their efforts.


Late on the thread (was out), but an important clarification here is to 
realize that most gate breaks aren't 100% fails, they are 5% or 2% or 1% 
(or less) fails.


For a patch to land in Nova it's got to pass tempest 3 times 
in a gate run (and it probably won't have been pushed there until it 
passed in the check run), which means it's got to work at least 90% of 
the time.


Neutron, because it only runs 1 configuration of tempest, and only in 
smoke mode, means that a patch that only works 50% of the time can 
easily land.


Bisection of patches to the failure point only works if you have a 
binary test for success and failure. In the Tempest gate, with a real 
devstack environment, running 20 services asynchronously on variable-
performance guests in a cloud, if we had a consistent binary test we'd 
never have landed the code in the first place.


So there isn't an automatic bisection solution.

 

Re: [openstack-dev] [Infra] Meeting Tuesday September 3rd at 19:00 UTC

2013-09-03 Thread Elizabeth Krumbach Joseph
On Mon, Sep 2, 2013 at 9:53 AM, Elizabeth Krumbach Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting tomorrow, Tuesday September 3rd, at 19:00 UTC in
> #openstack-meeting

Meeting minutes and logs:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-09-03-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-09-03-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-09-03-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] REST API proposal

2013-09-03 Thread Jay Pipes

On 08/30/2013 04:37 AM, Nikolay Starodubtsev wrote:

Hi, everyone!
We have created a proposal for Climate REST API
https://docs.google.com/document/d/1U36k5wk0sOUyLl-4Cz8tmk8RQFQGWKO9dVhb87ZxPC8/
And we would like to discuss it with everyone.


If you enable commenting on the proposal, then we can put comments into 
the document for you to respond to.


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Security groups with OVS instead of iptables?

2013-09-03 Thread Scott Devoid
+1 for an answer to this.

The reference documentation suggests running Neutron OVS with a total of 6
software switches between the VM and public NAT addresses. [1]
What are the performance differences folks see with this configuration vs.
the 2 software switch configuration for linux bridge?

[1]
http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html#d6e1178


On Tue, Sep 3, 2013 at 8:34 AM, Lorin Hochstein wrote:

> (Also asked at
> https://ask.openstack.org/en/question/4718/security-groups-with-ovs-instead-of-iptables/
> )
>
> The only security group implementations in neutron seem to be
> iptables-based. Is it technically possible to implement security groups
> using openvswitch flow rules, instead of iptables rules?
>
> It seems like this would cut down on the complexity associated with the
> current OVSHybridIptablesFirewallDriver implementation, where we need to
> create an extra linux bridge and veth pair to work around the
> iptables-openvswitch issues. (This also breaks if the user happens to
> install the openvswitch brcompat module).
>
> Lorin
> --
> Lorin Hochstein
> Lead Architect - Cloud Services
> Nimbis Services, Inc.
> www.nimbisservices.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] REST API proposal

2013-09-03 Thread Sergey Lukjanov
Done

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Sep 3, 2013, at 21:53, Jay Pipes  wrote:

> On 08/30/2013 04:37 AM, Nikolay Starodubtsev wrote:
>> Hi, everyone!
>> We have created a proposal for Climate REST API
>> https://docs.google.com/document/d/1U36k5wk0sOUyLl-4Cz8tmk8RQFQGWKO9dVhb87ZxPC8/
>> And we would like to discuss it with everyone.
> 
> If you enable commenting on the proposal, then we can put comments into the 
> document for you to respond to.
> 
> Best,
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][vmware] VMwareAPI sub-team reviews update 2013-09-03

2013-09-03 Thread Shawn Hartsock


Feature freeze is on the 5th! Let's try and get these last few reviews through 
ASAP! To that end, here's the reviews first in priority order based on 
blueprint ... then again in order based on up/down votes. If there's something 
that needs discussion and you're stuck, feel free to reach out to me 
personally. Lots of merges means developers need to stay vigilant and rebase 
often! Keep at it!

 reviews by priority 
* https://review.openstack.org/#/c/30282/ + needs core review after refresh
* https://review.openstack.org/#/c/40245/ + needs core review
* https://review.openstack.org/#/c/41387/ + needs core review
* https://review.openstack.org/#/c/34903/ + needs reviews
* https://review.openstack.org/#/c/37659/ - revise code please!
* https://review.openstack.org/#/c/37819/ + needs reviews
* https://review.openstack.org/#/c/33100/ + needs core reviews

 reviews by fitness 

Needs one more core review/approval:
* NEW, https://review.openstack.org/#/c/33504/ ,'VMware: nova-compute crashes 
if VC not available'
https://bugs.launchpad.net/nova/+bug/1192016
core votes,1, non-core votes,5, down votes, 0

Ready for core reviewer:
* NEW, https://review.openstack.org/#/c/30628/ ,'Fix VCDriver to pick the 
datastore that has capacity'
https://bugs.launchpad.net/nova/+bug/1171930
core votes,0, non-core votes,9, down votes, 0
* NEW, https://review.openstack.org/#/c/41657/ ,'Fix VMwareVCDriver to support 
multi-datastore'
https://bugs.launchpad.net/nova/+bug/1104994
core votes,0, non-core votes,5, down votes, 0
* NEW, https://review.openstack.org/#/c/37819/ ,'VMware image clone strategy 
settings and overrides'
https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
core votes,0, non-core votes,5, down votes, 0
* NEW, https://review.openstack.org/#/c/43721/ ,'VMware: handle exceptions from 
RetrievePropertiesEx correctly'
https://bugs.launchpad.net/nova/+bug/1216961
core votes,0, non-core votes,4, down votes, 0
* NEW, https://review.openstack.org/#/c/33100/ ,'Fixes host stats for 
VMWareVCDriver'
https://bugs.launchpad.net/nova/+bug/1190515
core votes,0, non-core votes,8, down votes, 0
* NEW, https://review.openstack.org/#/c/43994/ ,'VMwareVCDriver: Fix instance 
create with sparse disk image'
https://bugs.launchpad.net/nova/+bug/1171226
core votes,0, non-core votes,6, down votes, 0
* NEW, https://review.openstack.org/#/c/43641/ ,'VMware: Fix ensure_vlan_bridge 
to work properly with existing DVS'
https://bugs.launchpad.net/nova/+bug/1194018
core votes,0, non-core votes,5, down votes, 0
* NEW, https://review.openstack.org/#/c/40298/ ,'Fix snapshot in VMwareVCDriver'
https://bugs.launchpad.net/nova/+bug/1184807
core votes,0, non-core votes,7, down votes, 0
* NEW, https://review.openstack.org/#/c/43268/ ,'VMware: enable VNC access 
without user having to enter password'
https://bugs.launchpad.net/nova/+bug/1215352
core votes,0, non-core votes,3, down votes, 0
* NEW, https://review.openstack.org/#/c/30282/ ,'Multiple Clusters using single 
compute service'

https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
core votes,0, non-core votes,3, down votes, 0

Needs VMware API expert review:
* NEW, https://review.openstack.org/#/c/43665/ ,'VMware: Validate the returned 
object data prior to accessing'
https://bugs.launchpad.net/nova/+bug/1215958
core votes,0, non-core votes,4, down votes, 0
* NEW, https://review.openstack.org/#/c/35633/ ,'Enhance the vCenter driver to 
support FC volume attach'
https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver
core votes,0, non-core votes,2, down votes, 0
* NEW, https://review.openstack.org/#/c/34903/ ,'Deploy vCenter templates'

https://blueprints.launchpad.net/nova/+spec/deploy-vcenter-templates-from-vmware-nova-driver
core votes,0, non-core votes,2, down votes, 0
* NEW, https://review.openstack.org/#/c/43621/ ,'VMware: Handle case when there 
are no hosts in cluster'
https://bugs.launchpad.net/nova/+bug/1197041
core votes,0, non-core votes,2, down votes, 0

Needs discussion/work (has -1):
* NEW, https://review.openstack.org/#/c/37659/ ,'Enhance VMware instance disk 
usage'
https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
core votes,0, non-core votes,2, down votes, -1
* NEW, https://review.openstack.org/#/c/42024/ ,'VMWare: Disabling linked clone 
doesn't cache images'
https://bugs.launchpad.net/nova/+bug/1207064
core votes,0, non-core votes,1, down votes, -3
 
Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI
* If anything is missing, add 'hartsocks' as a reviewer to the patch so I can 
examine it.
* We hang out in #openstack-vmware if you need to chat.

Happy stacking!


# Shawn Hartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin

[openstack-dev] Hyper-V Meeting Minutes

2013-09-03 Thread Peter Pouliot
Hi everyone,

Here are the minutes from today's Hyper-V meeting.


Meeting ended Tue Sep  3 17:01:15 2013 UTC.  Information about MeetBot at 
http://wiki.debian.org/MeetBot . (v 0.1.4)
Minutes:
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-09-03-16.01.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-09-03-16.01.txt
Log:
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-09-03-16.01.log.html


Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive,Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] cluster scaling on the 0.2 branch

2013-09-03 Thread Jon Maron
Found an error in the HDP validation code affecting the node count of the 
additional (new) node group.  Looking at the savanna core code, I realized 
how the node groups were being scaled up (both existing and additional), and 
that pointed me to the issue.

-- Jon

On Aug 30, 2013, at 3:47 PM, Jon Maron  wrote:

> I've done some additional debugging/testing, and the issue is definitely in 
> the savanna provisioning code.
> 
> I have verified that the correct inputs are provided to the validate_scaling 
> method invocation, and that those references remain unaltered.  The scaling 
> request involves adding one node of a new node group named 'another', and 
> adding one node to the existing 'slave' node group:
> 
> cluster.node_groups:
> 
> [ {created=datetime.datetime(2013, 8, 30, 19, 20, 49, 857213), 
> updated=datetime.datetime(2013, 8, 30, 19, 20, 49, 857222), 
> id=u'effcc91c-d0de-4508-84ba-9cedc7e321f6', name=u'master', flavor_id=u'3', 
> image_id=None, node_processes=[u'NAMENODE', u'SECONDARY_NAMENODE', 
> u'GANGLIA_SERVER', u'GANGLIA_MONITOR', u'AMBARI_SERVER', u'AMBARI_AGENT', 
> u'JOBTRACKER', u'NAGIOS_SERVER'], node_configs={}, volumes_per_node=0, 
> volumes_size=10, volume_mount_prefix=u'/volumes/disk', count=1, 
> cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
> node_group_template_id=u'15344a5c-5e83-496a-9648-d7b58f40ad1f'}>, 
>  {created=datetime.datetime(2013, 8, 30, 19, 20, 49, 860178), 
> updated=datetime.datetime(2013, 8, 30, 19, 20, 49, 860184), 
> id=u'b56a2e69-58d9-4e95-a54f-d9b994bc8515', name=u'slave', flavor_id=u'3', 
> image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', 
> u'GANGLIA_MONITOR', u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], 
> node_configs={}, volumes_per_node=0, volumes_size=10, 
> volume_mount_prefix=u'/volumes/disk', count=1, 
> cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
> node_group_template_id=u'5dd6aa5a-496c-4dda-b94c-3b3752eb0efb'}>]
> 
> additional:
> 
> { updated=None, id=None, name=u'another', flavor_id=u'3', image_id=None, 
> node_processes=[u'DATANODE', u'HDFS_CLIENT', u'GANGLIA_MONITOR', 
> u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], node_configs={}, 
> volumes_per_node=0, volumes_size=10, volume_mount_prefix=u'/volumes/disk', 
> count=1, cluster_id=None, 
> node_group_template_id=u'f7f2ddc3-18ca-439f-9c08-570ff9307baf'}>: 1}
> 
> existing:
> 
> {u'slave': 2}
> 
> Once the scale_cluster() call is made, the cluster does have the additional 
> node group, but the list of instances isn't correct:
> 
> cluster.node_groups (note the addition of the 'another' node group):
> 
> - [ {created=datetime.datetime(2013, 8, 30, 19, 20, 49, 857213), 
> updated=datetime.datetime(2013, 8, 30, 19, 20, 49, 857222), 
> id=u'effcc91c-d0de-4508-84ba-9cedc7e321f6', name=u'master', flavor_id=u'3', 
> image_id=None, node_processes=[u'NAMENODE', u'SECONDARY_NAMENODE', 
> u'GANGLIA_SERVER', u'GANGLIA_MONITOR', u'AMBARI_SERVER', u'AMBARI_AGENT', 
> u'JOBTRACKER', u'NAGIOS_SERVER'], node_configs={}, volumes_per_node=0, 
> volumes_size=10, volume_mount_prefix=u'/volumes/disk', count=1, 
> cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
> node_group_template_id=u'15344a5c-5e83-496a-9648-d7b58f40ad1f'}>, 
> -  {created=datetime.datetime(2013, 8, 30, 19, 20, 49, 860178), 
> updated=datetime.datetime(2013, 8, 30, 19, 34, 51, 39463), 
> id=u'b56a2e69-58d9-4e95-a54f-d9b994bc8515', name=u'slave', flavor_id=u'3', 
> image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', 
> u'GANGLIA_MONITOR', u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], 
> node_configs={}, volumes_per_node=0, volumes_size=10, 
> volume_mount_prefix=u'/volumes/disk', count=2, 
> cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
> node_group_template_id=u'5dd6aa5a-496c-4dda-b94c-3b3752eb0efb'}>, 
> -  {created=datetime.datetime(2013, 8, 30, 19, 34, 49, 309577), 
> updated=datetime.datetime(2013, 8, 30, 19, 34, 49, 309584), 
> id=u'b8ea4e37-68d1-471d-9ddf-b74c2c533892', name=u'another', flavor_id=u'3', 
> image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', 
> u'GANGLIA_MONITOR', u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], 
> node_configs={}, volumes_per_node=0, volumes_size=10, 
> volume_mount_prefix=u'/volumes/disk', count=1, 
> cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
> node_group_template_id=u'f7f2ddc3-18ca-439f-9c08-570ff9307baf'}>]
> 
> However, only the instance for the existing node group is passed in:
> 
> [ {created=datetime.datetime(2013, 8, 30, 19, 34, 50, 727467), 
> updated=datetime.datetime(2013, 8, 30, 19, 35, 36, 853529), extra=None, 
> node_group_id=u'b56a2e69-58d9-4e95-a54f-d9b994bc8515', 
> instance_id=u'59e4f689-5124-4205-8629-ad90ffc913d5', 
> instance_name=u'scale-slave-002', internal_ip=u'192.168.32.8', 
> management_ip=u'172.18.3.9', volumes=[]}>]
> 
> So clearly, the list of instances passed in is lacking the instance reference 
> for the 'another' node group.  
> 
> I have filed https://b

[openstack-dev] Hyper-V meeting

2013-09-03 Thread Peter Pouliot
Hi All,

Today's Hyper-V meeting agenda  will be the following.


* Current review backlog

* Puppet Hyper-V module status


Best,




Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive,Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-03 Thread Vishvananda Ishaya

On Sep 2, 2013, at 4:24 AM, Joshua Hesketh  wrote:

> Howdy,
> 
> At the moment database migrations are required to implement a downgrade
> method to ensure updates can be rolled back. Currently downgrades are only
> being tested by Jenkins/tox against sqlite databases. MySQL and Postgresql
> are not tested and often have special edge cases. I have been working on
> fixing broken downgrades to work correctly for these engines[0].
> 
> However, separate to that, there are cases where a migration may upgrade in a
> way that is not backwards compatible. For example, a column or even whole
> table may be dropped. The question of what to do in these circumstances is
> unclear with many migrations taking different approaches.
> 
> For example, migration 209 creates a dump_table and populates it with data
> that will be dropped in the upgrade. This then sits there until the migration
> is downgraded at which point the data is copied back into place and the
> dump_table is removed. A similar approach (in [1]) is also being reviewed
> where lost data is copied into backup_table_migration_214 and restored on
> downgrade. Then we have the example [2] where no lost data is preserved.
> 
> These all pose a few different questions. Namely:
> 
> 1) Do we want to require migrations to downgrade data, or just schema?
> 2) If we do downgrade data then how should it look?
> 
> There is perhaps a secondary discussion around the necessity of downgrades
> given most sysadmins would snapshot/backup a restore point before upgrading
> themselves but I want to suspend that for the purpose of this discussion.
> 
> I think with #2 we should at least be listing the migration number in the
> table name for reference sake. This will also give administrators an idea of
> when they can safely drop backup tables (as a side note, perhaps this should
> be documented).
> 
> We also have examples where data can be reconstructed without a backup table,
> usually when the migration moves data into a new table (for example [3]). So
> perhaps another option is we only require downgrading for data that can be
> reconstructed without backups.

+1 I think we should be reconstructing data where we can, but keeping track of
deleted data in a backup table so that we can restore it on a downgrade seems
like overkill.
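
(For readers who haven't seen the pattern under discussion: a backup-table
migration looks roughly like the sqlalchemy-migrate sketch below. The
'instances' table, the 'legacy_flag' column and migration number 214 are
hypothetical stand-ins, and this is a simplification rather than the exact
Nova code.)

    import migrate  # noqa: activates Table.create_column / drop_column
    from sqlalchemy import Column, MetaData, Table, select

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        # Stash the doomed column's data in a numbered backup table so a
        # downgrade can restore it; the migration number in the name tells
        # operators when the backup is safe to drop.
        backup = Table(
            'backup_instances_migration_214', meta,
            Column('id', instances.c.id.type, primary_key=True),
            Column('legacy_flag', instances.c.legacy_flag.type))
        backup.create()
        for row in migrate_engine.execute(
                select([instances.c.id, instances.c.legacy_flag])):
            migrate_engine.execute(
                backup.insert().values(id=row[0], legacy_flag=row[1]))
        instances.drop_column('legacy_flag')

    def downgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        backup = Table('backup_instances_migration_214', meta, autoload=True)
        # Re-create the column, restore the saved data, then drop the backup.
        instances.create_column(Column('legacy_flag',
                                       backup.c.legacy_flag.type))
        for row in migrate_engine.execute(select([backup])):
            migrate_engine.execute(
                instances.update().where(instances.c.id == row[0])
                .values(legacy_flag=row[1]))
        backup.drop()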

Vish

> 
> Cheers,
> Josh
> 
> [0] https://review.openstack.org/#/c/40137/
> [1] https://review.openstack.org/#/c/39685/
> [2] https://review.openstack.org/#/c/42736/
> [3] Migration 186
> 
> --
> Rackspace Australia
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Questions related to live migration without target host

2013-09-03 Thread Vishvananda Ishaya
I made a comment on the review, but don't we still need this check if we do 
live migration with a target host?
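
(The check in question is essentially a free-RAM comparison on the
destination host. Conceptually it amounts to something like the sketch
below, a simplification with illustrative names rather than the exact
Nova code:)

    def check_destination_memory(instance, dest_host_state):
        """Refuse a live migration the destination cannot fit in RAM."""
        avail_mb = dest_host_state.free_ram_mb
        needed_mb = instance['memory_mb']
        if avail_mb <= needed_mb:
            raise RuntimeError(
                'Unable to migrate %s: not enough memory on destination '
                '(available: %d MB, needed: %d MB)'
                % (instance['uuid'], avail_mb, needed_mb))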

Vish

On Sep 2, 2013, at 3:59 PM, Guangya Liu  wrote:

> Greetings,
> 
> There is an issue related to "live migration without target host" that might 
> warrant more discussion/feedback from you experts: 
> https://bugs.launchpad.net/nova/+bug/1214943.
> 
> I have proposed a fix for this issue 
> (https://review.openstack.org/#/c/43213/). The fix directly removes the 
> check for free RAM and always trusts the result from the nova scheduler, as 
> the scheduler already selects the best host for live migration based on the 
> function filter_scheduler.py:select_hosts.
> 
> Please share your comments if you have any; you can also append them 
> directly to https://review.openstack.org/#/c/43213/.
> 
> Thanks,
> 
> Jake Liu
> UnitedStack Inc
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Documentation and patches

2013-09-03 Thread Thierry Carrez
Anne Gentle wrote:
> Nova is the overall "winner" (cough) with 110 doc bugs followed by
> keystone with 26. 110 doc bugs indicates a serious need.

Ouch.

> Teams, please follow through on docs and see how you can help.
> 
> https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=nova
> https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=keystone
> https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=neutron
> https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=swift
> https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=glance
> https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=cinder
> https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=xen

One way to raise awareness would be to crash the project-specific
meetings and attract everyone's attention to those bugs, pasting the
corresponding link. It just takes one doc-team member to be around, and
should be more efficient than filing extra bugtasks or mentioning it on
the release meeting (which is not attended by that many developers
nowadays).

Then nothing will beat getting personal, identifying the best person and
tracking them down on IRC :) But that takes much more time.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Clarification of XenServer versions

2013-09-03 Thread Bob Ball
I've received a number of questions recently about XenServer and XCP and how 
they are different - particularly now that XS 6.2 has been released - so I 
thought I should try and get the answers all in one place.

XenServer:
As of XS 6.2, it was announced that it will be fully open sourced[1] and all 
features are entirely free to use (i.e. there are no paid editions).
There is a new website at www.xenserver.org where the build and code is 
available.

XCP:
As XS 6.2 is fully open source, no new versions of XCP will be released[2].  
You can install XS in XCP mode by selecting that option at install time, and 
there is an upgrade path from XCP to XS 6.2.  We'd recommend that all XCP 
users upgrade to XenServer 6.2.

Xenserver-core[3]: 
This is a method for building the core packages in a xenserver installation on 
an existing RPM-based system.  Initial support for this configuration (notably 
running nova services in domain 0) was recently added in Havana[4].

Xcp-xapi/Kronos:
This is a method of installing an old snapshot of a xenserver-core-like system 
on Debian-based systems.  Xcp-xapi is not updated with the latest changes in 
xapi.  A version of xenserver-core for Debian/Ubuntu, built from the main 
branch and therefore continuously up to date, will be announced in the next 
few weeks.

I hope this answers any questions in one place, but if there are more 
questions, just let me know!  I have updated the page at 
https://wiki.openstack.org/wiki/XenServer/Install to make it a little clearer 
and we will of course be making similar clarifications in the install guide in 
the next few weeks.

Thanks,

Bob

[1] Some source code repositories may not be available until the end of 
September: 
http://xenserver.org/overview-xenserver-open-source-virtualization/source-code.html
[2] 
http://xenserver.org/discuss-virtualization/q-and-a/i-am-a-user-of-xcp-how-will-i-be-impacted.html
 
[3] 
http://xenserver.org/discuss-virtualization/virtualization-blog/entry/making-sense-of-xenserver-vs-xenserver-core-vs-citrix-xenserver.html
 and 
http://xenserver.org/discuss-virtualization/virtualization-blog/entry/tech-preview-of-xenserver-libvirt-ceph.html
 - although if you intend to use xenserver-core, I'd recommend using the latest 
snapshot described in 
https://lists.xenserver.org/sympa/arc/xs-devel/2013-08/msg00027.html 
[4] https://blueprints.launchpad.net/nova/+spec/xenserver-core 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Smokestack] Review 40296 hit by Torpedo

2013-09-03 Thread Salvatore Orlando
Hi,

We are removing the schema autogeneration capability from Neutron [1].

The patch however did not pass Torpedo tests on Smokestack; looking at the
logs it seems Firestack deploys Neutron but the DB is empty, which is
consistent with Firestack leveraging schema auto-generation capabilities.

I've tried to look at Firestack to see how this can be fixed; from what I
gather it depends on RDO packaging, but I'm not yet sure what should be
done to that end; any help would be greatly appreciated. The same will
probably apply to Ubuntu and Debian packaging.
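
(For packagers unfamiliar with the term: schema auto-generation means
creating tables directly from the SQLAlchemy models at service start,
roughly as in the self-contained sketch below, rather than via explicit
migrations. Once it is removed, packaging has to run the migration tooling
to populate the database. The model here is an illustrative stand-in, not
the real Neutron code.)

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Network(Base):  # stand-in for the real models
        __tablename__ = 'networks'
        id = Column(Integer, primary_key=True)
        name = Column(String(255))

    engine = create_engine('sqlite://')
    # Auto-generation: create any missing table straight from the models,
    # with no migration scripts involved.
    Base.metadata.create_all(engine)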

Regards,
Salvatore

[1] Why? https://bugs.launchpad.net/neutron/+bug/1207402
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Documentation and patches

2013-09-03 Thread Anne Gentle
On Mon, Sep 2, 2013 at 3:51 AM, Thierry Carrez wrote:

> Lorin Hochstein wrote:
> > Would it help  if doc bugs were associated with the relevant OpenStack
> > project, in addition to the openstack-manauls project? For example, we
> > could mark nova-related doc bugs as "nova" project bugs in addition to
> > "openstack-manuals" project bugs.
>
> I'm not a big fan of this idea.
>
> The goal of task tracking is to have actionable tasks that can be
> completed by committing a solution to one given repository. So if the
> fix is supposed to land in an docs repo (and not, say, in the nova
> repo), then the bug should appear in the docs task tracking, and be
> automatically closed when the commit mentioning the bug lands into that
> repository.
>
> The problem if you add a 'nova' task for the same bug is that there will
> be no way of closing that bug automatically, since there is actually
> nothing to land in that repository. You're using task tracking to do a
> notification and an easy search for the developers. That is not what the
> 'project' means in task tracking. You're using a 'nova' task only to
> attract 'nova devs' attention. This makes the whole task tracking more
> inaccurate.
>
> I would rather use tagging to link a doc bug (or a QA bug) to a wanted
> audience, and encourage people to search for tasks being tagged with the
> name of their project, which is where their help is required. Or use
> subscription. Something that is linked to the people and not the code
> repository.
>
>
Okay, thanks for this input. I agree we need people and their attention.

Nova is the overall "winner" (cough) with 110 doc bugs followed by keystone
with 26. 110 doc bugs indicates a serious need.

Teams, please follow through on docs and see how you can help.

https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=nova
https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=keystone
https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=neutron
https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=swift
https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=glance
https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=cinder
https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=xen

Thanks,
Anne


>  --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-03 Thread Simon Pasquier

I made a copy-and-paste mistake; see the correction inline.

On 03/09/2013 12:34, Simon Pasquier wrote:

Hello,

Thanks for the reply.

First of all, do you agree that the current documentation for these
filters is inaccurate?

My test environment has 2 compute nodes: compute1 and compute3. First, I
launch 1 instance (not being tied to any group) on each node:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute1 vm-compute1-nogroup
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup

So far so good, everything's active:
$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+


Then I try to launch one instance in group 'foo' but it fails:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --availability-zone nova:compute3 vm-compute3-nogroup


The command is:

$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name 
local --hint group=foo vm1-foo



$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
| 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+


I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
will see, the log message is there but it looks like group_hosts() [3]
is returning all my hosts instead of only the ones that run instances
from the group.

Concerning GroupAffinityFilter, I understood that it couldn't work
simultaneously with GroupAntiAffinityFilter, but since I had missed the
multiple-schedulers work, I couldn't figure out how it would be useful. Now
I get it.

Best regards,

Simon

[1] http://paste.openstack.org/show/45672/
[2] http://paste.openstack.org/show/45671/
[3]
https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L137

On 03/09/2013 10:49, Gary Kotton wrote:

Hi,
Hopefully I will be able to address your questions. First let's start with
the group anti-affinity. This was added towards the end of the Grizzly
release cycle as a scheduling hint. At the last summit we sat and agreed
on a more formal approach to deal with this, and we proposed and developed
https://blueprints.launchpad.net/openstack/?searchtext=instance-group-api-extension
(https://wiki.openstack.org/wiki/GroupApiExtension).
At the moment the following are still in review and I hope that we will
make the feature freeze deadline:
Api support:
https://review.openstack.org/#/c/30028/

Scheduler support:
https://review.openstack.org/#/c/33956/

Client support:
https://review.openstack.org/#/c/32904/

In order to make use of the above you need to add GroupAntiAffinityFilter
to the filters that will be active (this is not one of the default
filters). When you deploy the first instance of a group you need to
specify that it is part of the group. This information is used for
additional VMs that are being deployed.
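
(For reference, the anti-affinity check itself is conceptually tiny: the
scheduler hands each filter the set of hosts already used by the group,
and the filter rejects any candidate in that set. A simplified sketch,
not the exact Nova code:)

    from nova.scheduler import filters

    class GroupAntiAffinityFilter(filters.BaseHostFilter):
        """Reject hosts that already run an instance of the group."""

        def host_passes(self, host_state, filter_properties):
            group_hosts = filter_properties.get('group_hosts') or []
            return host_state.host not in group_hosts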

Can you please provide some extra details so that I can help you debug the
issues that you have encountered (I did not encounter the problems that
you have described):
1. Please provide the commands that you used with the deploying of the
instance
2. Please provide the nova configuration file
3. Can you please look at the debug traces and see if you see the log
message on line 97
(https://review.openstack.org/#/c/21070/8/nova/scheduler/filters/affinity_filter.py)

Now regarding the AffinityFilter. At this stage this does not work with
the AntiAffinity filter. We were banking on this being used with the
multiple scheduler policies (https://review.openstack.org/#/c/37407/)

Thanks
Gary



On 9/3/13 10:16 AM, "Simon Pasquier"  wrote

Re: [openstack-dev] [savanna] Fwd: Change in stackforge/savanna-extra[master]: Add diskimage-creating script, elements for mirrors

2013-09-03 Thread Ivan Berezovskiy
Hi,

1. We used mvn to create the tar.gz, and I'll add documentation for that.
2. The command is simple: "parameter1='some_value1' ...
parameterN='some_valueN' disk-image-create element1 ... elementM -o
image_name".
Some files in ~/.cache/image-create/ are owned by root (for example, the
SHA256SUMS* files), so we need to use 'sudo' to clean this directory.
3. This script should not be run concurrently with itself, and neither
should the "disk-image-create" command.

Thanks, Ivan.


2013/9/3 Matthew Farrellee 

> Long weekend here in the US, so I didn't get a chance to comment before
> this was merged, so...
>
> Re Oozie - How did you create the oozie-3.3.2.tar.gz?
>
> Re sudo image-cache - That's not the case for me, the wget is run without
> sudo. How are you running disk-image-create?
>
> Re DIB_work - it's best practice to use /tmp for temporary work, and
> mktemp. This script running concurrently with itself will result in unknown
> output.
>
> Best,
>
>
> matt
>
>  Original Message 
> Subject: Change in stackforge/savanna-extra[master]: Add
> diskimage-creating script, elements for mirrors
> Date: Thu, 29 Aug 2013 14:37:36 +
> From: Ivan Berezovskiy (Code Review) 
> Reply-To: iberezovs...@mirantis.com
> CC: Sergey Lukjanov ,Dmitry Mescheryakov <
> dmescherya...@mirantis.com>,Nadya Privalova <
> nprival...@mirantis.com>,Matthew Farrellee 
>
> Ivan Berezovskiy has posted comments on this change.
>
>
> Change subject: Add diskimage-creating script, elements for mirrors
> ......
>
>
> Patch Set 6: (16 inline comments)
>
>
> ....
> File diskimage-create/diskimage-create.sh
> Line 11: export OOZIE_DOWNLOAD_URL="http://a8e0dce84b3f00ed7910-a5806ff0396addabb148d230fde09b7b.r31.cf1.rackcdn.com/oozie-3.3.2.tar.gz"
> We don't use a custom tarball; it is our own tarball. Please show me a link,
> if you know one, where I can download oozie with all binary files.
>
>
> Line 15: if [ $str = 'NAME="Ubuntu"' ]; then
> Package 'redhat-lsb' is not preinstalled in some Fedora images, like the
> cloud image, so we can't use this command.
> In DIB you can see the script 02-lsb
> ('https://github.com/openstack/diskimage-builder/blob/master/elements/fedora/pre-install.d/02-lsb')
> that installs this package.
>
> Line 21: fi
> Done
>
>
> Line 24:   sudo rm -rf /home/$USER/.cache/image-create/*
> Image caching executes under 'sudo'. You can try to delete images without
> sudo and you'll see 'permission denied'.
>
> Line 31: cd DIB_work
> Why? This directory will be removed after creating images.
>
>
> Line 41: export DIB_COMMIT_ID=`git show --format=%H | head -1`
> https://github.com/stackforge/savanna-extra/blob/master/elements/savanna-version/install.d/01-savanna-version
>
> Line 42: cd ../
> Done
>
>
> Line 48: export SAVANNA_ELEMENTS_COMMIT_ID=`git show --format=%H | head -1`
> https://github.com/stackforge/savanna-extra/blob/master/elements/savanna-version/install.d/01-savanna-version
>
> Line 49: cd ../
> Done
>
> Line 64: fi
> We can't use 'lsb_release' as I said before.
>
>
> ....
> File diskimage-create/README.rst
> Line 7: 1. If you want to change build parameters, you should edit this
> script at 'export' commands.
> Done
>
>
> Line 9: 2. If you want to use your local mirrors, you can specify urls for
> Fedora and Ubuntu mirrors using parameters 'FEDORA_MIRROR' and
> 'UBUNTU_MIRROR' like this:
> Done
>
>
> Line 15: 3. If you want to add your element to this repository, you should
> edit this script in your commit (you should export variables for your
> element and add name of element to variables 'element_sequence').
> Done
>
>
> ....
> File elements/apt-mirror/root.d/0-check
> Line 2: if [ -z "$UBUNTU_MIRROR" ]; then
> Done
>
>
> ....
> File elements/yum-mirror/root.d/0-check
> Line 2: if [ -z "$FEDORA_MIRROR" ]; then
> Done
>
>
> ....
> File README.rst
> Line 10: * Script for creating Fedora and Ubuntu cloud images with our
> elements and default parameters. You should run command only:
> Done
>
>
> --
> To view, visit https://review.openstack.org/43916
> To unsubscribe, visit https://review.openstack.org/settings
>
> 

[openstack-dev] [Neutron] Security groups with OVS instead of iptables?

2013-09-03 Thread Lorin Hochstein
(Also asked at
https://ask.openstack.org/en/question/4718/security-groups-with-ovs-instead-of-iptables/
)

The only security group implementations in neutron seem to be
iptables-based. Is it technically possible to implement security groups
using openvswitch flow rules, instead of iptables rules?

It seems like this would cut down on the complexity associated with the
current OVSHybridIptablesFirewallDriver implementation, where we need to
create an extra linux bridge and veth pair to work around the
iptables-openvswitch issues. (This also breaks if the user happens to
install the openvswitch brcompat module).

Lorin
-- 
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Backup documentation - to which doc it should go?

2013-09-03 Thread Anne Gentle
On Tue, Sep 3, 2013 at 7:07 AM, Ronen Kat  wrote:

> Hi Emilien,
>
> I have looked at the block guide - in the master branch it is mostly empty;
> most of the content was moved to the OpenStack Configuration Reference.
> Any suggestion on what should be in the Configuration Reference
> vs. the block admin guide, which is now practically empty?
>
>
Yes, your observations are accurate.

The Admin manuals are for day-to-day operations and running block storage.
The Config Reference is for one-time configuration and tuning. We have been
discussing these changes on the -dev mailing list. [1]

So, here are the placements:

1. Backup configuration

Configuration Reference

2. General description of Cinder backup (commands, features, etc)

Block Storage Administration Guide

3. Description of the available backup drivers

Configuration Reference

I believe this is the best for our audience.

Thanks,
Anne

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-August/013347.html



> Regards,
> __
> Ronen I. Kat
> Storage Research
> IBM Research - Haifa
> Phone: +972.3.7689493
> Email: ronen...@il.ibm.com
>
>
>
>
> From:   Emilien Macchi 
> To: OpenStack Development Mailing List
> ,
> Cc: Ronen Kat/Haifa/IBM@IBMIL
> Date:   03/09/2013 02:57 PM
> Subject:Re: [openstack-dev] Cinder Backup documentation - to which
> doc
> it should go?
>
>
>
> Hi Ronen,
>
> The best place for your documentation is in the OpenStack Block Storage
> guide [1].
> If you need help with OpenStack manuals, we have a dedicated wiki page [2].
>
> Please let us know if you need support.
>
>
> Regards,
>
> [1]
>
> https://github.com/openstack/openstack-manuals/tree/master/doc/src/docbkx/openstack-block-storage-admin
>
> [2] https://wiki.openstack.org/wiki/Documentation/HowTo
>
> Emilien Macchi
> 
> # OpenStack Engineer
> // eNovance Inc.  http://enovance.com
> // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
> // 10 rue de la Victoire 75009 Paris
>
> On 09/03/2013 12:50 PM, Ronen Kat wrote:
> >
> > I noticed the complaints about code submission without appropriate
> > documentation submission, so I am ready to do my part for Cinder backup.
> > I have just one little question.
> > Not being up to date on the current set of OpenStack manuals, and as I
> > noticed that the block storage admin guide lost a lot of content, to which
> > document(s) should I add the Cinder backup documentation?
> >
> > The documentation includes:
> > 1. Backup configuration
> > 2. General description of Cinder backup (commands, features, etc)
> > 3. Description of the available backup drivers
> >
> > Should all three go to the same place? Or different documents?
> >
> > Thanks,
> >
> > Regards,
> > __
> > Ronen I. Kat
> > Storage Research
> > IBM Research - Haifa
> > Phone: +972.3.7689493
> > Email: ronen...@il.ibm.com
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> [attachment "signature.asc" deleted by Ronen Kat/Haifa/IBM]
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Fwd: Change in stackforge/savanna-extra[master]: Add diskimage-creating script, elements for mirrors

2013-09-03 Thread Matthew Farrellee
Long weekend here in the US, so I didn't get a chance to comment before 
this was merged, so...


Re Oozie - How did you create the oozie-3.3.2.tar.gz?

Re sudo image-cache - That's not the case for me, the wget is run 
without sudo. How are you running disk-image-create?


Re DIB_work - it's best practice to use /tmp for temporary work, and 
mktemp. This script running concurrently with itself will result in 
unknown output.


Best,


matt

 Original Message 
Subject: Change in stackforge/savanna-extra[master]: Add 
diskimage-creating script, elements for mirrors

Date: Thu, 29 Aug 2013 14:37:36 +
From: Ivan Berezovskiy (Code Review) 
Reply-To: iberezovs...@mirantis.com
CC: Sergey Lukjanov ,Dmitry Mescheryakov 
,Nadya Privalova 
,Matthew Farrellee 


Ivan Berezovskiy has posted comments on this change.

Change subject: Add diskimage-creating script, elements for mirrors
..


Patch Set 6: (16 inline comments)


File diskimage-create/diskimage-create.sh
Line 11: export 
OOZIE_DOWNLOAD_URL="http://a8e0dce84b3f00ed7910-a5806ff0396addabb148d230fde09b7b.r31.cf1.rackcdn.com/oozie-3.3.2.tar.gz"
We don't use a custom tarball; it is our own tarball. Please show me a link, 
if you know one, where I can download oozie with all binary files.


Line 15: if [ $str = 'NAME="Ubuntu"' ]; then
Package 'redhat-lsb' is not preinstalled in some Fedora images, like the 
cloud image, so we can't use this command.
In DIB you can see the script 02-lsb 
('https://github.com/openstack/diskimage-builder/blob/master/elements/fedora/pre-install.d/02-lsb') 
that installs this package.


Line 21: fi
Done

Line 24:   sudo rm -rf /home/$USER/.cache/image-create/*
Image caching executes under 'sudo'. You can try to delete images without 
sudo and you'll see 'permission denied'.


Line 31: cd DIB_work
Why? This directory will be removed after creating images.

Line 41: export DIB_COMMIT_ID=`git show --format=%H | head -1`
https://github.com/stackforge/savanna-extra/blob/master/elements/savanna-version/install.d/01-savanna-version

Line 42: cd ../
Done

Line 48: export SAVANNA_ELEMENTS_COMMIT_ID=`git show --format=%H | head -1`
https://github.com/stackforge/savanna-extra/blob/master/elements/savanna-version/install.d/01-savanna-version

Line 49: cd ../
Done

Line 64: fi
We can't use 'lsb_release' as I said before.


File diskimage-create/README.rst
Line 7: 1. If you want to change build parameters, you should edit this 
script at 'export' commands.

Done

Line 9: 2. If you want to use your local mirrors, you can specify urls 
for Fedora and Ubuntu mirrors using parameters 'FEDORA_MIRROR' and 
'UBUNTU_MIRROR' like this:

Done

Line 15: 3. If you want to add your element to this repository, you 
should edit this script in your commit (you should export variables for 
your element and add name of element to variables 'element_sequence').

Done


File elements/apt-mirror/root.d/0-check
Line 2: if [ -z "$UBUNTU_MIRROR" ]; then
Done


File elements/yum-mirror/root.d/0-check
Line 2: if [ -z "$FEDORA_MIRROR" ]; then
Done


File README.rst
Line 10: * Script for creating Fedora and Ubuntu cloud images with our 
elements and default parameters. You should run command only:

Done

--
To view, visit https://review.openstack.org/43916
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I12632b5cee42b1dbfd79b7b7c3a7b26962ace625
Gerrit-PatchSet: 6
Gerrit-Project: stackforge/savanna-extra
Gerrit-Branch: master
Gerrit-Owner: Ivan Berezovskiy 
Gerrit-Reviewer: Dmitry Mescheryakov 
Gerrit-Reviewer: Ivan Berezovskiy 
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Matthew Farrellee 
Gerrit-Reviewer: Nadya Privalova 
Gerrit-Reviewer: Sergey Lukjanov 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Backup documentation - to which doc it should go?

2013-09-03 Thread Ronen Kat
Hi Emilien,

I have looked at the block guide - in the master branch it is mostly empty;
most of the content was moved to the OpenStack Configuration Reference.
Any suggestion on what should be in the Configuration Reference
vs. the block admin guide, which is now practically empty?

Regards,
__
Ronen I. Kat
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com




From:   Emilien Macchi 
To: OpenStack Development Mailing List
,
Cc: Ronen Kat/Haifa/IBM@IBMIL
Date:   03/09/2013 02:57 PM
Subject:Re: [openstack-dev] Cinder Backup documentation - to which doc
it should go?



Hi Ronen,

The best place for your documentation is in the OpenStack Block Storage
guide [1].
If you need help with OpenStack manuals, we have a dedicated wiki page [2].

Please let us know if you need support.


Regards,

[1]
https://github.com/openstack/openstack-manuals/tree/master/doc/src/docbkx/openstack-block-storage-admin

[2] https://wiki.openstack.org/wiki/Documentation/HowTo

Emilien Macchi

# OpenStack Engineer
// eNovance Inc.  http://enovance.com
// ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
// 10 rue de la Victoire 75009 Paris

On 09/03/2013 12:50 PM, Ronen Kat wrote:
>
> I noticed the complaints about code submission without appropriate
> documentation submission, so I am ready to do my part for Cinder backup.
> I have just one little question.
> Not being up to date on the current set of OpenStack manuals, and as I
> noticed that the block storage admin guide lost a lot of content, to which
> document(s) should I add the Cinder backup documentation?
>
> The documentation includes:
> 1. Backup configuration
> 2. General description of Cinder backup (commands, features, etc)
> 3. Description of the available backup drivers
>
> Should all three go to the same place? Or different documents?
>
> Thanks,
>
> Regards,
> __
> Ronen I. Kat
> Storage Research
> IBM Research - Haifa
> Phone: +972.3.7689493
> Email: ronen...@il.ibm.com
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[attachment "signature.asc" deleted by Ronen Kat/Haifa/IBM]
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Backup documentation - to which doc it should go?

2013-09-03 Thread Emilien Macchi
Hi Ronen,

The best place for your documentation is in the OpenStack Block Storage
guide [1].
If you need help with OpenStack manuals, we have a dedicated wiki page [2].

Please let us know if you need support.


Regards,

[1]
https://github.com/openstack/openstack-manuals/tree/master/doc/src/docbkx/openstack-block-storage-admin
[2] https://wiki.openstack.org/wiki/Documentation/HowTo

Emilien Macchi

# OpenStack Engineer
// eNovance Inc.  http://enovance.com
// ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
// 10 rue de la Victoire 75009 Paris

On 09/03/2013 12:50 PM, Ronen Kat wrote:
>
> I noticed the complaints about code submission without appropriate
> documentation submission, so I am ready to do my part for Cinder backup.
> I have just one little question.
> Not being up to date on the current set of OpenStack manuals, and as I
> noticed that the block storage admin guide lost a lot of content, to which
> document(s) should I add the Cinder backup documentation?
>
> The documentation includes:
> 1. Backup configuration
> 2. General description of Cinder backup (commands, features, etc)
> 3. Description of the available backup drivers
>
> Should all three go to the same place? Or different documents?
>
> Thanks,
>
> Regards,
> __
> Ronen I. Kat
> Storage Research
> IBM Research - Haifa
> Phone: +972.3.7689493
> Email: ronen...@il.ibm.com
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] review request for bp improve-block-device-handling

2013-09-03 Thread Nikola Đipanov
Hi folks,

I'd greatly appreciate it if we could get the remaining two patches
reviewed and hopefully merged for the upcoming freeze this Wednesday.

Both patches have been up for a long time and have seen a number of
revisions and several +2 already. The patches in question are:

https://review.openstack.org/#/c/40229/
https://review.openstack.org/#/c/42474/

If you feel there are some major issues with them - of course - please
do bring them up, but in case the issues are smaller, I'd be more than
happy to raise individual bugs and take care of them as soon as humanly
possible, so that the patches will land for H-3.

Many thanks in advance,

Kind regards,

Nikola

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Instance naming in IG/ASG and problems related to UpdatePolicy

2013-09-03 Thread Steven Hardy
Hi Winson,

On Fri, Aug 30, 2013 at 02:23:18PM +, Chan, Winson C wrote:
> Regarding the last set of comments on the UpdatePolicy, I want to bring your 
> attention to a few items.  I already submitted a new patch set and didn't 
> want to reply on the old patch set so that's why I emailed.

Sorry for the slow response on this, you've already had some good feedback
from Zane and Clint, and I've just reviewed your latest patch:

https://review.openstack.org/#/c/43571/

I'd really like input from the other core guys on this, particularly my
complaints about the changes to the internal interfaces (where we start
passing lists-of-names around again instead of a number)

I'm opposed to spreading this interface change around as in the current
patch (as I said in Patch Set 3, prompting your mail) - the problem is the
interfaces just get much less obvious and far more error-prone IMHO (as has
been proven previously).

> As you are aware, IG/ASG currently create instances by appending group name 
> and #.  On resize, it identifies the newest instances to remove by sorting on 
> the name string and removing from the end of the list.

So I agree with Zane that this is not a requirement for your patch, merely
an implementation detail of the old code.  However if we can support this
mode of replacement (either now or in a subsequent patch) then fine.

> 
> Based on your comments, in the new patch set I have changed the naming of the 
> instances to just a # without prefixing the group name (or self.name).  I 
> also remove the name ranges stuff.  But we still have the following problems…
> 
>   1.  On a decrease in size where the oldest instances should be removed…  
> Since the naming is still number based, this means we'll have to remove 
> instances starting from 0 (since 0 is the oldest).  This leaves a gap in the 
> beginning of the list.  So on the next resize to increase, where to increase? 
>  Continue the numbering from the end?
>   2.  On replace, I let the UpdateReplace handle the batch replacement.  
> However, for the use case where we need to support MinInstancesInService (min 
> instances in service = 2, batch size = 2, current size = 2), this means we 
> need to create the new instances first before deleting the old ones instead 
> of letting the instance update to handle it.  Also, with the naming 
> restriction, this means I will have to create the 2 new replacements as '2' 
> and '3'.  After I delete the original '0' and '1', there's a gap in the 
> numbering of the instances…  Then this leads to the same question as above.  
> What happens on a resize after that?
> 
> The ideal I think is to just use some random short id for the name of the 
> instances and then store a creation timestamp somewhere with the resource and 
> use the timestamp to determine the age of the instances for removal.  
> Thoughts?

+1, using the random short ID seems like a good plan, as does using the DB
timestamp instead of string sorting to decide ordering.

However, I still don't think this means we need to pass lists of resource
names around inside the autoscaling implementation - the only place where
we care about the resource names should be in the function creating the
template, e.g. not in resize/replace; we just pass the number to replace and
the number to create into _create_template, then update the nested stack
with the new template.
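
To make the short-ID and timestamp idea concrete, here is a minimal sketch
(my own illustration, not Heat code; it assumes group members expose a
created_at timestamp coming from the DB):

    import uuid

    def short_id():
        # Random short resource name, e.g. 'a3f9c2d1'; names carry no
        # ordering information at all.
        return uuid.uuid4().hex[:8]

    def members_to_remove(members, count):
        # Remove the oldest members first, ordered by DB timestamp
        # rather than by sorting the name strings.
        oldest_first = sorted(members, key=lambda m: m.created_at)
        return oldest_first[:count]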

Thanks for the work so far on this, looks like it will be a nice new
feature when we get these final issues sorted out! :)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cinder Backup documentation - to which doc it should go?

2013-09-03 Thread Ronen Kat


I noticed the complaints about code submissions arriving without the
corresponding documentation, so I am ready to do my part for Cinder backup.
I have just one little question.
I am not up to date on the current set of OpenStack manuals, and I noticed
that the block storage admin guide lost a lot of content. To which
document(s) should I add the Cinder backup documentation?

The documentation includes:
1. Backup configuration
2. General description of Cinder backup (commands, features, etc)
3. Description of the available backup drivers

Should all three go to the same place? Or different documents?

Thanks,

Regards,
__
Ronen I. Kat
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-03 Thread Simon Pasquier

Hello,

Thanks for the reply.

First of all, do you agree that the current documentation for these 
filters is inaccurate?


My test environment has 2 compute nodes: compute1 and compute3. First, I 
launch one instance (not tied to any group) on each node:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name 
local --availability-zone nova:compute1 vm-compute1-nogroup
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name 
local --availability-zone nova:compute3 vm-compute3-nogroup


So far so good, everything's active:
$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+

Then I try to launch one instance in group 'foo' but it fails:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
local --hint group=foo vm1-foo

$ nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks         |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+
| 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
| c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
| 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
+--------------------------------------+---------------------+--------+------------+-------------+------------------+

I've pasted the scheduler logs [1] and my nova.conf file [2]. As you 
will see, the log message is there but it looks like group_hosts() [3] 
is returning all my hosts instead of only the ones that run instances 
from the group.
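
For reference, what I would expect group_hosts() to compute is roughly the
following (my own sketch with a hypothetical lookup helper, not the actual
nova code):

    def expected_group_hosts(context, group_name):
        # Only the hosts that currently run at least one instance tagged
        # with this group; find_instances_in_group is a hypothetical
        # helper used purely for illustration.
        instances = find_instances_in_group(context, group_name)
        return set(inst['host'] for inst in instances)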


Concerning GroupAffinityFilter, I understood that it couldn't work
simultaneously with GroupAntiAffinityFilter, but since I had missed the
multiple scheduler policies work, I couldn't figure out how it would be
useful. Now I get it.


Best regards,

Simon

[1] http://paste.openstack.org/show/45672/
[2] http://paste.openstack.org/show/45671/
[3] 
https://github.com/openstack/nova/blob/master/nova/scheduler/driver.py#L137


On 03/09/2013 10:49, Gary Kotton wrote:

Hi,
Hopefully I will be able to address your questions. First let's start with
group anti-affinity. This was added towards the end of the Grizzly
release cycle as a scheduling hint. At the last summit we sat and agreed
on a more formal approach to deal with this, and we proposed and developed
https://blueprints.launchpad.net/openstack/?searchtext=instance-group-api-extension
(https://wiki.openstack.org/wiki/GroupApiExtension).
At the moment the following are still in review and I hope that we will
make the feature freeze deadline:
Api support:
https://review.openstack.org/#/c/30028/

Scheduler support:
https://review.openstack.org/#/c/33956/

Client support:
https://review.openstack.org/#/c/32904/

In order to make use of the above you need to add GroupAntiAffinityFilter
to the filters that will be active (this is not one of the default
filters). When you deploy the first instance of a group you need to
specify that it is part of the group. This information is used for
additional VMs that are being deployed.

Can you please provide some extra details so that I can help you debug the
issues that you have encountered (I did not encounter the problems that
you have described):
1. Please provide the commands that you used with the deploying of the
instance
2. Please provide the nova configuration file
3. Can you please look at the debug traces and see if you see the log
message on line 97
(https://review.openstack.org/#/c/21070/8/nova/scheduler/filters/affinity_filter.py)

Now regarding the AffinityFilter. At this stage this does not work with
the AntiAffinity filter. We were banking on this being used with the
multiple scheduler policies (https://review.openstack.org/#/c/37407/)

Thanks
Gary



On 9/3/13 10:16 AM, "Simon Pasquier"  wrote:


Reposting to openstack-dev as I got no answer on the general mailing list.


 Original Message 
Subject: [Openstack] Confused about GroupAntiAffinityFilter and
GroupAffinityFilter
Date: Mon, 2 Sep 2013 11:19:58 +0200

Re: [openstack-dev] Openstack Folsom + Quantum + XenServer 6.1.0 + Windows VM

2013-09-03 Thread Mate Lakat
Hi Dbarros,

Could you look at this blog post:

http://blogs.citrix.com/2013/06/14/openstack-networking-quantum-on-xenserver-from-notworking-to-networking/

I would not expect Grizzly to work with XenServer + Quantum without some
extra patches.

Let me know if you have further questions:
Mate


On Mon, Sep 02, 2013 at 04:41:00PM -0300, Dbarros wrote:
> Hi,
> 
> I'm having a network problem when spawning a Windows VM on XenServer.
> A 'tap' and a 'vif' interface are created while the VM boots; after boot
> completes, the 'tap' interface is removed and the 'vif' interface loses
> its id tag. From what I have found, this is why the Windows VM can't
> configure its network correctly.
> 
> PS: I can see the DHCP requests on the host, but they aren't being sent
> to the network node over the GRE tunnel.
> 
> I've spent weeks trying to find a solution, with no success. Can someone
> please help me with this issue?
> I followed the link below to configure XenServer for OpenStack:
> 
> https://github.com/openstack/nova/blob/master/plugins/xenserver/doc/networking.rst
> 
> Thanks in advance
> -- 
> Dbarros
> 5585 9919 6279

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Mate Lakat

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-03 Thread Gary Kotton
Hi,
Hopefully I will be able to address your questions. First let's start with
group anti-affinity. This was added towards the end of the Grizzly
release cycle as a scheduling hint. At the last summit we sat and agreed
on a more formal approach to deal with this, and we proposed and developed
https://blueprints.launchpad.net/openstack/?searchtext=instance-group-api-extension
(https://wiki.openstack.org/wiki/GroupApiExtension).
At the moment the following are still in review and I hope that we will
make the feature freeze deadline:
Api support:
https://review.openstack.org/#/c/30028/

Scheduler support:
https://review.openstack.org/#/c/33956/

Client support:
https://review.openstack.org/#/c/32904/

In order to make use of the above you need to add GroupAntiAffinityFilter
to the list of active filters (it is not one of the default filters). When
you deploy the first instance of a group you need to specify that it is
part of the group. This information is then used for the additional VMs
that are being deployed.
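
To illustrate, the core of the anti-affinity check boils down to roughly
the following (a simplified sketch, not the exact filter code; it assumes
the scheduler has already resolved the group hint into 'group_hosts', the
list of hosts currently running the group's instances):

    from nova.scheduler import filters

    class GroupAntiAffinitySketch(filters.BaseHostFilter):
        """Reject any host that already runs an instance of the group."""

        def host_passes(self, host_state, filter_properties):
            group_hosts = filter_properties.get('group_hosts') or []
            return host_state.host not in group_hosts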

Can you please provide some extra details so that I can help you debug the
issues that you have encountered (I did not encounter the problems that
you have described):
1. Please provide the commands that you used with the deploying of the
instance
2. Please provide the nova configuration file
3. Can you please look at the debug traces and see if you see the log
message on line 97 
(https://review.openstack.org/#/c/21070/8/nova/scheduler/filters/affinity_filter.py)

Now regarding the AffinityFilter. At this stage this does not work with
the AntiAffinity filter. We were banking on this being used with the
multiple scheduler policies (https://review.openstack.org/#/c/37407/)

Thanks
Gary



On 9/3/13 10:16 AM, "Simon Pasquier"  wrote:

>Reposting to openstack-dev as I got no answer on the general mailing list.
>
>
> Original Message 
>Subject: [Openstack] Confused about GroupAntiAffinityFilter and
>GroupAffinityFilter
>Date: Mon, 2 Sep 2013 11:19:58 +0200
>From: Simon Pasquier 
>Organization: Bull SAS
>To: 
>
>Hello,
>
>I tried to play with GroupAntiAffinityFilter and GroupAffinityFilter
>filters but it looks like the documentation is misleading [1]. Looking
>more precisely at the commits that introduced these filters [2][3], my
>assumption is that to use these filters, one would boot a first instance
>with '--hint group=foo' and the scheduler would update the
>instance_system_metadata table with {key: 'group', value: 'foo'}. Then when
>starting other instances with the same hint option, the scheduler would
>filter the candidate hosts by querying the instance_system_metadata table.
>
>Still this doesn't work for me. In my tests with
>GroupAntiAffinityFilter, I have 3 compute nodes, each running one
>instance not in any group. Then when I launch a VM specifying a group
>hint, the scheduler fails to find a valid host because
>GroupAntiAffinityFilter filter returns 0 hosts.
>
>Could someone provide some guidance on how to use this filter?
>
>Regards,
>
>[1]
>http://docs.openstack.org/trunk/openstack-compute/admin/content/scheduler-
>filters.html#groupaffinityfilter
>[2] https://review.openstack.org/#/c/21070/
>[3] https://review.openstack.org/#/c/35788/
>
>-- 
>Simon Pasquier
>Software Engineer
>Bull, Architect of an Open World
>Phone: + 33 4 76 29 71 49
>http://www.bull.com
>
>___
>Mailing list: 
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>Post to : openst...@lists.openstack.org
>Unsubscribe : 
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Murano v0.2.5 release planning

2013-09-03 Thread Denis Koryavov
Hello folks,

The development process for Murano v0.2 is finished. Thus, today I would like
to start a discussion about the next version, Murano v0.2.5 (while we are
fixing bugs in v0.2 and polishing our documentation).

Here is our ROADMAP [1]. As you can see, I already created a section for
Murano v0.2.5 and filled in several items; the items are links to
blueprints. I suggest we discuss two things:

* Scope for v0.2.5.
* Release schedule.

For the release schedule, I suggest creating a table like this [2].

[1] https://wiki.openstack.org/wiki/Murano/Roadmap
[2] https://wiki.openstack.org/wiki/Havana_Release_Schedule

--
Denis
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review

2013-09-03 Thread Peter Liljenberg
Ah,

Will do!
Thx!

/Peter


On 3 September 2013 08:56, Flavio Percoco  wrote:

> On 03/09/13 08:28 +0200, Peter Liljenberg wrote:
>
>> Hi,
>>
>> Could someone review this change (Added support for JaCoCo plugin Publisher)?
>>
>> https://review.openstack.org/#/c/44705/
>>
>> /Peter
>>
>
>
> Hi,
>
> When asking for Code Reviews, please tag the email subject with the
> project in question.
>
> Cheers,
> FF
>
> --
> @flaper87
> Flavio Percoco
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder: Project & release status meeting - 21:00 UTC

2013-09-03 Thread Thierry Carrez
Today in the project/release status meeting, we are one day away from
the dreaded FeatureFreeze. We'll look into havana-3 project roadmaps and
check what can still make it in the next day, what might require a
feature freeze exception, and what is likely to be deferred. The "almost
feature complete" Havana-3 milestone should be published by Friday.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All Technical Leads for integrated programs should be present (if you
can't make it, please name a substitute on [1]). Other program leads and
everyone else are very welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130903T21

See you there,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Questions related to live migration without target host

2013-09-03 Thread Alex Glikson
I tend to agree with Jake that this check is likely to conflict with the 
scheduler, and should be removed.

Regards,
Alex




From:   Guangya Liu 
To: openstack-dev@lists.openstack.org, 
Date:   03/09/2013 02:03 AM
Subject:[openstack-dev] Questions related to live migration 
without target host



Greetings,

There is an issue related to "live migration without target host" that
might need more discussion/feedback from you experts:
https://bugs.launchpad.net/nova/+bug/1214943.

I have proposed a fix for this issue
(https://review.openstack.org/#/c/43213/): the fix directly removes the
check for free RAM and always trusts the result from the nova scheduler, as
the scheduler already selects the best host for live migration via the
function filter_scheduler.py:select_hosts.
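
To put the redundancy in simple terms: the removed code repeats a test the
scheduler's RAM filter has already applied when picking the target host, so
conceptually it is just (a simplified sketch, not the exact nova code):

    def has_enough_free_ram(host_free_ram_mb, instance_memory_mb):
        # The scheduler already applies this test when selecting the
        # target host; repeating it in the live migration path can only
        # disagree with the scheduler's view of the host.
        return host_free_ram_mb >= instance_memory_mb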

Please share your comments if you have any; you can also append them
directly to https://review.openstack.org/#/c/43213/.

Thanks,

Jake Liu
UnitedStack Inc

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-03 Thread Simon Pasquier

Reposting to openstack-dev as I got no answer on the general mailing list.


 Original Message 
Subject: [Openstack] Confused about GroupAntiAffinityFilter and
GroupAffinityFilter

Date: Mon, 2 Sep 2013 11:19:58 +0200
From: Simon Pasquier 
Organization: Bull SAS
To: 

Hello,

I tried to play with GroupAntiAffinityFilter and GroupAffinityFilter
filters but it looks like the documentation is misleading [1]. Looking
more precisely at the commits that introduced these filters [2][3], my
assumption is that to use these filters, one would boot a first instance
with '--hint group=foo' and the scheduler would update the
instance_system_metadata table with {key: 'group', value: 'foo'}. Then when
starting other instances with the same hint option, the scheduler would
filter the candidate hosts by querying the instance_system_metadata table.

Still this doesn't work for me. In my tests with
GroupAntiAffinityFilter, I have 3 compute nodes, each running one
instance not in any group. Then when I launch a VM specifying a group
hint, the scheduler fails to find a valid host because
GroupAntiAffinityFilter filter returns 0 hosts.

Could someone provide some guidance on how to use this filter?

Regards,

[1]
http://docs.openstack.org/trunk/openstack-compute/admin/content/scheduler-filters.html#groupaffinityfilter
[2] https://review.openstack.org/#/c/21070/
[3] https://review.openstack.org/#/c/35788/

--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Code review

2013-09-03 Thread Flavio Percoco

On 03/09/13 08:28 +0200, Peter Liljenberg wrote:

Hi,

Could someone review this change (Added support for JaCoCo plugin Publisher)?

https://review.openstack.org/#/c/44705/

/Peter



Hi,

When asking for Code Reviews, please tag the email subject with the
project in question.

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev