Re: [openstack-dev] Help needed

2015-04-09 Thread Deepika Agrawal
This is the full log :-
 python update.py /opt/stack/keystone
Traceback (most recent call last):
  File "update.py", line 274, in <module>
    main(options, args)
  File "update.py", line 259, in main
    _copy_requires(options.suffix, options.softupdate, args[0])
  File "update.py", line 219, in _copy_requires
    source_reqs = _parse_reqs('global-requirements.txt')
  File "update.py", line 140, in _parse_reqs
    reqs[_parse_pip(pip)] = pip
  File "update.py", line 101, in _parse_pip
    elif install_require.url:
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 128, in url
    return self.link.url
AttributeError: 'NoneType' object has no attribute 'url'
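For readers hitting the same crash: the property at the bottom of the trace dereferences self.link without checking for None. A stdlib-only sketch of the failing pattern and a defensive guard (the class and function names below are illustrative stand-ins, not pip's actual code):

```python
class Link(object):
    def __init__(self, url):
        self.url = url


class InstallRequirement(object):
    """Minimal stand-in for pip.req.InstallRequirement."""

    def __init__(self, link=None):
        self.link = link

    @property
    def url(self):
        # The failing pattern: when self.link is None, this raises
        # AttributeError: 'NoneType' object has no attribute 'url'.
        return self.link.url


def requirement_url(req):
    """Defensive variant: return the URL, or None when there is no link."""
    link = getattr(req, "link", None)
    return None if link is None else link.url


print(requirement_url(InstallRequirement()))  # None
print(requirement_url(InstallRequirement(Link("git+https://example/x"))))
```

A requirement line with no associated link (a plain name-and-version line) no longer raises; the caller can then decide how to handle the missing URL.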

On Thu, Apr 9, 2015 at 3:03 PM, Abhishek Shrivastava abhis...@cloudbyte.com
 wrote:

 Can you give the full log.


 On Thu, Apr 9, 2015 at 2:57 PM, Deepika Agrawal deepika...@gmail.com
 wrote:

 Hi guys!
 I am getting "AttributeError: 'NoneType' object has no attribute 'url'" from
 python update.py /opt/stack/keystone when I run stack.sh.
  Please help!
 --
 Deepika Agrawal


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --


 *Thanks & Regards,*
 *Abhishek*
 *Cloudbyte Inc. http://www.cloudbyte.com*

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Deepika Agrawal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] how to send messages (and events) to our users

2015-04-09 Thread Flavio Percoco

On 08/04/15 20:23 -0700, Joshua Harlow wrote:

Hope this helps:

'Let's do away with' == 'remove it/no longer use it'

vs 'let's use RPC-over-AMQP' which means continue using it/use it more.


a-ha... That explains it. My obviously non-native English
translation failed to parse that. Thanks for explaining :)

Feel free to ignore my last 2 emails :P

Cheers,
Flavio



Flavio Percoco wrote:

On 08/04/15 15:35 -0700, Min Pae wrote:

Uh sorry to nitpick, I think he said “let’s do away with” not “let’s use”
RPC-over-AMQP


How is that different? I honestly don't see the difference, but I'm
sure I'm missing something in my translation.



On Wed, Apr 8, 2015 at 10:56 AM, Flavio Percoco fla...@redhat.com
wrote:

On 08/04/15 16:38 +, Sandy Walsh wrote:



From: Clint Byrum cl...@fewbar.com
Sent: Wednesday, April 8, 2015 1:15 PM

There's this:

https://wiki.openstack.org/wiki/Cue


Hmm, that looks interesting. Will read.


I also want to point out that what I'd actually rather see is that
all
of the services provide functionality like this. Users would be
served
by having an event stream from Nova telling them when their
instances
are active, deleted, stopped, started, error, etc.

Also, I really liked Sandy's suggestion to use the notifications on
the
backend, and then funnel them into something that the user can
consume.
The project they have, yagi, for putting them into atom feeds is
pretty
interesting. If we could give people a simple API that says
subscribe
to Nova/Cinder/Heat/etc. notifications for instance X, and put them
in an atom feed, that seems like something that would make sense
as
an under-the-cloud service that would be relatively low cost and
would
ultimately reduce load on API servers.
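The "subscribe to notifications for instance X and put them in a feed" idea above can be sketched in a few lines. This is purely illustrative and stdlib-only: the event shape and field names are made up for the example, not Nova's actual notification schema, and a real deployment would consume the stream via oslo.messaging or yagi instead of a list.

```python
# A stand-in for a backend notification stream.
events = [
    {"resource_id": "inst-1", "event_type": "compute.instance.create.end",
     "timestamp": "2015-04-09T10:00:00Z"},
    {"resource_id": "inst-2", "event_type": "compute.instance.delete.end",
     "timestamp": "2015-04-09T10:01:00Z"},
    {"resource_id": "inst-1", "event_type": "compute.instance.power_off.end",
     "timestamp": "2015-04-09T10:02:00Z"},
]


def feed_for(resource_id, events):
    """Filter the stream to one resource and render Atom-style entries."""
    return [
        "<entry><title>%s</title><updated>%s</updated></entry>"
        % (e["event_type"], e["timestamp"])
        for e in events
        if e["resource_id"] == resource_id
    ]


for entry in feed_for("inst-1", events):
    print(entry)
```

The point of the sketch: the under-the-cloud service only filters and reformats an existing stream, so it never touches the API servers that would otherwise be polled.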


THIS!

Yes. It would be so good to pull apart the state-machine that is Nova
and
just emit completed actions via notifications. Then, have something
like
TaskFlow externalize the orchestration. Do away with RPC-over-AMQP.


Sorry for being nitpicky but, saying RPC-over-AMQP is way too
generic. What AMQP version? On top of what technology?

Considering all the issues OPs have with our current broker story, I
think considering implementing this on top of pure AMQP (which is how
that phrase reads) would not be good.

If you meant RPC-over-messaging then I think you should just keep
using oslo.messaging, which abstracts the problem of picking one
broker.

Unfortunately, this means users will need to consume these messages
from the messaging source using oslo.messaging as well. I say
unfortunately because I believe the API - or even the protocol - as
it is exposed through this library - or simply the broker - is not
something users should deal with. There are services that try to make
this interaction simpler - yes, Zaqar.

Flavio




And, anyone that is interested in the transitions can eavesdrop on the
notifications.

In our transition from StackTach.v2 to StackTach.v3 in production we
simply
cloned the notification feeds so the two systems can run in parallel*.
No
changes to OpenStack, no disruption of service. Later, we'll just kill
off
the v2 queues.

-S

* we did this in Yagi, since oslo.messaging doesn't support multiple
queues from one routing key.
__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?
subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco
__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [neutron] New version of python-neutronclient release for Kilo: 2.4.0

2015-04-09 Thread Kyle Mestery
The Neutron team is proud to announce the release of the latest version of
python-neutronclient. This release includes the following bug fixes and
improvements:

aa1215a Merge "Fix one remaining E125 error and remove it from ignore list"
cdfcf3c Fix one remaining E125 error and remove it from ignore list
b978f90 Add Neutron subnetpool API
d6cfd34 Revert "Remove unused AlreadyAttachedClient"
5b46457 Merge "Fix E265 block comment should start with '# '"
d32298a Merge "Remove author tag"
da804ef Merge "Update hacking to 0.10"
8aa2e35 Merge "Make secgroup rules more readable in security-group-show"
a20160b Merge "Support fwaasrouterinsertion extension"
ddbdf6f Merge "Allow passing None for subnetpool"
5c4717c Merge "Add Neutron subnet-create with subnetpool"
c242441 Allow passing None for subnetpool
6e10447 Add Neutron subnet-create with subnetpool
af3fcb7 Adding VLAN Transparency support to neutronclient
052b9da 'neutron port-create' missing help info for --binding:vnic-type
6588c42 Support fwaasrouterinsertion extension
ee929fd Merge "Prefer argparse mutual exclusion"
f3e80b8 Prefer argparse mutual exclusion
9c6c7c0 Merge "Add HA router state to l3-agent-list-hosting-router"
e73f304 Add HA router state to l3-agent-list-hosting-router
07334cb Make secgroup rules more readable in security-group-show
639a458 Merge "Updated from global requirements"
631e551 Fix E265 block comment should start with '# '
ed46ba9 Remove author tag
e2ca291 Update hacking to 0.10
9b5d397 Merge "security-group-rule-list: show all info of rules briefly"
b56c6de Merge "Show rules in handy format in security-group-list"
c6bcc05 Merge "Fix failures when calling list operations using Python binding"
0c9cd0d Updated from global requirements
5f0f280 Fix failures when calling list operations using Python binding
c892724 Merge "Add commands from extensions to available commands"
9f4dafe Merge "Updates pool session persistence options"
ce93e46 Merge "Added client calls for the lbaas v2 agent scheduler"
c6c788d Merge "Updating lbaas cli for TLS"
4e98615 Updates pool session persistence options
a3d46c4 Merge "Change Creates to Create in help text"
4829e25 security-group-rule-list: show all info of rules briefly
5a6e608 Show rules in handy format in security-group-list
0eb43b8 Add commands from extensions to available commands
6e48413 Updating lbaas cli for TLS
942d821 Merge "Remove unused AlreadyAttachedClient"
a4a5087 Copy functional tests from tempest cli
dd934ce Merge "exec permission to port_test_hook.sh"
30b198e Remove unused AlreadyAttachedClient
a403265 Merge "Reinstate Max URI length checking to V2_0 Client"
0e9d1e5 exec permission to port_test_hook.sh
4b6ed76 Reinstate Max URI length checking to V2_0 Client
014d4e7 Add post_test_hook for functional tests
9b3b253 First pass at tempest-lib based functional testing
09e27d0 Merge "Add OS_TEST_PATH to testr"
7fcb315 Merge "Ignore order of query parameters when compared in MyUrlComparator"
ca52c27 Add OS_TEST_PATH to testr
aa0042e Merge "Fixed pool and health monitor create bugs"
45774d3 Merge "Honor allow_names in *-update command"
17f0ca3 Ignore order of query parameters when compared in MyUrlComparator
aa0c39f Fixed pool and health monitor create bugs
6ca9a00 Added client calls for the lbaas v2 agent scheduler
c964a12 Merge "Client command extension support"
e615388 Merge "Fix lbaas-loadbalancer-create with no --name"
c61b1cd Merge "Make some auth error messages more verbose"
779b02e Client command extension support
e5e815c Fix lbaas-loadbalancer-create with no --name
7b8c224 Honor allow_names in *-update command
b9a7d52 Updated from global requirements
62a8a5b Make some auth error messages more verbose
8903cce Change Creates to Create in help text

For more details on the release, please see the LP page and the detailed
git log history.

https://launchpad.net/python-neutronclient/2.4/2.4.0

Please report any bugs in LP.

Thanks!
Kyle
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help needed

2015-04-09 Thread Abhishek Shrivastava
You can find the log in the /opt/stack/logs folder, as stack.sh.log.

On Thu, Apr 9, 2015 at 3:03 PM, Abhishek Shrivastava abhis...@cloudbyte.com
 wrote:

 Can you give the full log.


 On Thu, Apr 9, 2015 at 2:57 PM, Deepika Agrawal deepika...@gmail.com
 wrote:

 Hi guys!
 I am getting "AttributeError: 'NoneType' object has no attribute 'url'" from
 python update.py /opt/stack/keystone when I run stack.sh.
  Please help!
 --
 Deepika Agrawal


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --


 *Thanks & Regards,*
 *Abhishek*
 *Cloudbyte Inc. http://www.cloudbyte.com*




-- 


*Thanks & Regards,*
*Abhishek*
*Cloudbyte Inc. http://www.cloudbyte.com*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Advice on a Neutron ACL kludge

2015-04-09 Thread Neil Jerram
I think that people often mean different things by ACLs, so can you be
more precise?

Thanks,
Neil



From: Rich Wellner r...@objenv.com
Sent: 09 April 2015 01:29
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Advice on a Neutron ACL kludge

We are pursuing getting some sort of ACLs into neutron in the near term
(and then continuing to work with people here on a longer term solution
for Liberty).

For the short term, I think our needs will be met by taking the
IptablesManager class and modifying it (or overriding it, or creating a
plugin) so that the apply_synchronized call goes out to our switch
instead of to iptables. I was wondering if anyone else had tried a
similar kludge, or if people had other recommendations for how to
approach this kind of thing.
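A rough, illustrative sketch of the override idea: keep the manager's rule bookkeeping but swap the enforcement point. The base class here is a simplified stand-in, not Neutron's actual neutron.agent.linux.iptables_manager.IptablesManager, and SwitchClient's push_acls API is hypothetical.

```python
class IptablesManager(object):
    """Simplified stand-in for Neutron's iptables manager."""

    def __init__(self):
        self.rules = []

    def apply_synchronized(self):
        # Upstream behaviour: build the ruleset, hand it to iptables-restore.
        return "iptables-restore <- %d rules" % len(self.rules)


class SwitchBackedManager(IptablesManager):
    """The kludge: same rule bookkeeping, different enforcement point."""

    def __init__(self, switch_client):
        super(SwitchBackedManager, self).__init__()
        self.switch = switch_client

    def apply_synchronized(self):
        # Instead of calling out to iptables, push the synchronized
        # ruleset to the switch's (hypothetical) API.
        return self.switch.push_acls(self.rules)


class FakeSwitchClient(object):
    def push_acls(self, rules):
        return "switch <- %d rules" % len(rules)


mgr = SwitchBackedManager(FakeSwitchClient())
mgr.rules.append("allow tcp/22")
print(mgr.apply_synchronized())  # switch <- 1 rules
```

Because only apply_synchronized changes, everything upstream that populates the rule tables keeps working unchanged, which is what makes this a plausible short-term kludge.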

rw2


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] The specs process, effective operators feedback and product management

2015-04-09 Thread Assaf Muller
The Neutron specs process was introduced during the Juno cycle. At the
time it was mostly a bureaucratic bottleneck (the ability to say no) to
ease the pain of cores and manage workloads throughout a cycle. Perhaps
this is a somewhat naive outlook, but I see other positives, such as more
upfront design (some is better than none), less high-level talk during
the implementation review process and more focus on the details, and
'free' documentation for every major change to the project (some would
say this is kind of a big deal; what better way to write documentation
than to force the developers to do it in order for their features to get
merged).

That being said, you can only get a feature merged if you propose a spec,
and the only people largely proposing specs are developers. This ingrains
the open source culture of developer-focused evolution, which, while
empowering and great for developers, is bad for product managers and
users (who are sometimes under-represented, as is the case I'm trying to
make) and generally causes a lack of a cohesive vision. Like it or not,
the specs process and the drivers team approval process form a sort of
product management, deciding what features will ultimately go into
Neutron and in what time frame.

We shouldn't ignore the fact that we clearly have people and product
managers pulling the strings in the background, often deciding where
developers will spend their time and what specs to propose. For the
purpose of this discussion, I argue that managers often don't have the
tools to understand what is important to the project, only to their own
customers. The Neutron drivers team, on the other hand, doesn't have a
clear incentive (or, I suspect, the will) to spend enormous amounts of
time doing 'product management', as being a driver is essentially your
third or fourth job by this point, and the same people are also solving
gate issues, merging code, triaging bugs and so on. I'd like to avoid
going into a discussion of what's wrong with the current specs process,
as I'm sure people have heard me complain about this in
#openstack-neutron plenty of times before. Instead, I'd like to suggest a
system that would perhaps get us to implement specs that are currently
not being proposed, and give an additional form of input that would make
sure that the development community is spending its time in the right
places.

While 'super users' have been given more exposure, and operator summits
give operators an additional tool to provide feedback, from a developer's
point of view the input is non-empiric and scattered. I also have a hunch
that operators still feel their voice is not being heard.

I propose an upvote/downvote system (think Reddit), where everyone
(operators especially) would upload paragraph-long explanations of what
they think is missing in Neutron. The proposals have to be actionable (so
'Neutron sucks', while of great humorous value, isn't something I can do
anything about), and I suspect the downvote system will help
self-regulate that anyway. The proposals are not specs but are more like
product RFEs, so for example there would not be a 'testing' section;
these proposals would not replace the specs process but augment it as an
additional form of input. Proposals can range from new features
(role-based access control for Neutron resources, dynamic routing,
Neutron availability zones, QoS, ...) to quality-of-life improvements
(missing logs, too many DEBUG-level logs, poor troubleshooting areas with
an explanation of what could be improved, ...) to long-standing bugs,
Nova network parity issues, and whatever else may be irking the operators
community. The proposals would have to be moderated (closing duplicates,
low-quality submissions and implemented proposals, for example), and if
that is a concern then I volunteer to do so.

This system will also give drivers a 'way out': the last cycle we spent
time refactoring this and that, and developers love doing that, so it's
easy to get behind. I think that as we move back to features in the next
cycles, friction will rise and the process will reveal its flaws.

Something to consider: maybe the top proposal takes a day to implement.
Maybe some low-priority bug is actually the second-highest proposal.
Maybe all of the currently marked 'critical' bugs don't even appear on
the list. Maybe we aren't spending our time where we should be.

And now a word from our legal team: in order for this to be viable, the
system would have to be a *non-binding*, *additional* form of input. The
top proposal *could* be declined for the same reasons that specs are
currently being declined. It would not replace any of our current systems
or processes.
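As a thought experiment, the mechanics of such a board are tiny. A stdlib-only sketch, with all names, titles and scores purely illustrative:

```python
class Proposal(object):
    def __init__(self, title, body):
        self.title = title
        self.body = body      # the paragraph-long explanation
        self.up = 0
        self.down = 0
        self.closed = False   # moderation: duplicate/implemented/low quality

    @property
    def score(self):
        return self.up - self.down


def ranked(proposals):
    """Open proposals, highest net score first."""
    live = [p for p in proposals if not p.closed]
    return sorted(live, key=lambda p: p.score, reverse=True)


qos = Proposal("QoS API", "Rate-limit traffic per port ...")
logs = Proposal("Fewer DEBUG logs", "Operators drown in DEBUG noise ...")
dup = Proposal("QoS again", "duplicate of the QoS proposal")
dup.closed = True  # moderated out

qos.up, qos.down = 40, 5
logs.up, logs.down = 25, 2

print([p.title for p in ranked([qos, logs, dup])])
# ['QoS API', 'Fewer DEBUG logs']
```

The hard part of the proposal is of course the social side (moderation, keeping items actionable), not the ranking itself.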


Assaf Muller, Cloud Networking Engineer
Red Hat

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Glance] PTL Candidacy

2015-04-09 Thread Tristan Cacqueray
confirmed

On 04/09/2015 12:55 AM, Nikhil Komawar wrote:
 Hello everyone,
 
 I would like to announce my candidacy as PTL for Glance for Liberty cycle.
 
 I have been part of the Glance program since Folsom release and have seen it 
 grow and get better over the years. As the PTL for Kilo, I have helped this 
 program achieve a steady forward momentum. In the process, quite a few 
 developers have joined the program, become core reviewers, and have been 
 providing excellent feedback on reviews, specs and development of libraries. 
 It has been a pleasure to collaborate with everyone and get the newer members 
 up to speed on the Glance project's development patterns and processes. Also, 
 we have been able to accomplish good progress on the newly added features 
 namely Artifacts and the Catalog Index Service. At the same time, other 
 blueprints have had good attention both from code contributors as well as 
 reviewers side. Although we began with a relatively small review team at the 
 advent of Kilo, over a dozen blueprints were implemented for Glance.  
 Additionally, we have been making continued progress in improving the 
 glance_store and python-glanceclient libraries. Also, we've made a lot of progress in Glance for incubating
cross-project changes like adopting graduated oslo policy, support of multiple 
sort keys and directions in glance and in the client, etc. We have also seen 
better feedback time on reviews, better awareness on review guidelines, more 
IRC presence, better collaboration in the meetings and on the Mailing Lists, 
and prompt attention to security bugs.  There have also been improvements in 
subtle aspects of the program like more Documentation support, feature updates 
in the client, as well as increased frequency of releases of the libraries. All 
of this great work has been made possible by the support of a group of really 
talented and proactive developers who have made the Glance ATCs into a vibrant 
community.
 
 For Liberty I would like to encourage the team to help Artifacts and the 
 Catalog Index Service achieve feature completion early in the first phase of 
 the cycle.  This will put us in a good position to address stability concerns 
 and make stability the primary focus of the Liberty cycle. As we have made 
 good progress in reducing the technical debt of promised features and 
 improving cross project experience within Glance, I think it's time to put 
 more focus in stabilizing the code bases. glance_store was introduced in the 
 Juno cycle, saw great improvements in Kilo, but requires more work to become 
 a mature repository. So, for Liberty, as a part of stability goal I would 
 like to work with the team in raising its Development Status to level 4 at the
 very least. Also, there have been some really great feature proposals in the 
 later phase of Kilo that couldn't be implemented in the short window 
 available. I would like to help these proposal get some feedback as well as 
 work with their respective development teams in building solutions cohesive to the Glance program.
 
 During Kilo, Glance made the full transition from the old blueprint style of 
 design specifications to the more rigorous specs-based system.  From that 
 experience, the team has learned what did and didn't work well and has asked 
 for a better process for managing blueprints, specs and documentation. I 
 would like to partner with developers as well as operators in understanding 
 and addressing the pain points therein. I would like to partner with a wider 
 group in being able to setup documentation for the same. I would also like to 
 help the team get a healthier review speed and start implementing the 
 rotation policy for core reviewers.
 
 I have enjoyed working with all the Glance contributors and have learned a 
 lot while serving as PTL for Kilo. I'd appreciate the opportunity to apply 
 what I've learned to the Liberty release.
 
 Thanks for your consideration and support,
 -Nikhil Komawar
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Adding the Puppet modules to OpenStack

2015-04-09 Thread Emilien Macchi
It has been quite some time now that Puppet OpenStack contributors have
wanted to be part of the big tent so we would become an official project.

We talked about that over our last IRC meetings and decided to elect a
PTL so we would fit OpenStack requirements.

Today, we officially ask to the OpenStack TC to consider our candidacy
to be an official project: https://review.openstack.org/#/c/172112/

Please let us know any feedback in the review,
-- 
Emilien Macchi on behalf of Puppet OpenStack contributors



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] [Sahara] Kilo RC1 available

2015-04-09 Thread Thierry Carrez
Hello everyone,

Next to reach the release candidate stage, we have Cinder and Sahara.
Their RC1 tarballs, as well as lists of last-minute features and fixed
bugs since kilo-3, are available at:

https://launchpad.net/cinder/kilo/kilo-rc1
https://launchpad.net/sahara/kilo/kilo-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as the 2015.1.0
final version on April 30. You are therefore strongly encouraged to
test and validate these tarballs!

Alternatively, you can directly test the proposed/kilo branches at:
https://github.com/openstack/cinder/tree/proposed/kilo
https://github.com/openstack/sahara/tree/proposed/kilo

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/cinder/+filebug
or
https://bugs.launchpad.net/sahara/+filebug

and tag it *kilo-rc-potential* to bring it to the release crew's attention.

Note that the master branches of Cinder and Sahara are now open for
Liberty development, and feature freeze restrictions no longer apply there!

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [stable] Request stable freeze exception: 162112 and 162113 libvirt live migration progress loop

2015-04-09 Thread Ihar Hrachyshka

This one is very suspicious for a last-minute merge. Also, no nova
cores nor nova-stable-maint cores have commented on the backport.
Without that, there is no way it will be merged at all. Please reach
out to the nova folks on the matter.

On 04/06/2015 11:44 PM, David Medberry wrote:
 Change requests https://review.openstack.org/162113 and
 https://review.openstack.org/162112 are affecting a lot of
 operators (any who perform live migration) in Juno.
 
 Request stable/juno freeze exception to get these two added.
 
 David Medberry
 
 
 
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Kilo stable branches for other libraries

2015-04-09 Thread Thierry Carrez
Doug Hellmann wrote:
 Excerpts from Dean Troyer's message of 2015-04-08 09:42:31 -0500:
 On Wed, Apr 8, 2015 at 8:55 AM, Thierry Carrez thie...@openstack.org
 wrote:

 The question is, how should we proceed there? This is a new procedure, so
 I'm a bit unclear on the best way forward and would like to pick our
 collective brain. Should we just push requirements cap for all OpenStack
 libs and create stable branches from the last tagged release everywhere
 ? What about other libraries ? Should we push a cap there too ? Should
 we just ignore the whole thing for the Kilo release for all non-Oslo stuff
 ?

 Provided that represents the code being used for testing at this point, and
 I believe it does, this seems like a sensible default action.  Next cycle
 we can make a bit more noise about when this default action will occur,
 probably pick one of the other existing dates late in the cycle such as RC
 or string freeze or whatever. (Maybe that already happened and I can't
 remember?)
 
 I had hoped to have the spec approved in time to cut releases around
 the time Oslo did (1 week before feature freeze for applications,
 to allow us to merge the requirements cap before applications
 generate their RC1). At this point, I agree that we should go with
 the most recently tagged versions where possible. It sounds like
 we have a couple of libs that need releases, and we should evaluate
 those on a case-by-case basis, defaulting to not updating the stable
 requirements unless absolutely necessary.

OK, here is a plan, let me know if it makes sense.

If necessary:
Cinder releases python-cinderclient 1.1.2
Designate releases python-designateclient 1.1.2
Horizon releases django_openstack_auth 1.2.0
Ironic releases python-ironicclient 0.5.1

Then we cap in requirements stable/kilo branch (once it's cut, when all
RC1s are done):

python-barbicanclient>=3.0.1,<3.1.0
python-ceilometerclient>=1.0.13,<1.1.0
python-cinderclient>=1.1.0,<1.2.0
python-designateclient>=1.0.0,<1.2.0
python-heatclient>=0.3.0,<0.5.0
python-glanceclient>=0.15.0,<0.18.0
python-ironicclient>=0.2.1,<0.6.0
python-keystoneclient>=1.1.0,<1.4.0
python-neutronclient>=2.3.11,<2.4.0
python-novaclient>=2.22.0,<2.24.0
python-saharaclient>=0.8.0,<0.9.0
python-swiftclient>=2.2.0,<2.5.0
python-troveclient>=1.0.7,<1.1.0
glance_store>=0.3.0,<0.5.0
keystonemiddleware>=1.5.0,<1.6.0
pycadf>=0.8.0,<0.9.0
django_openstack_auth>=1.1.7,!=1.1.8,<1.3.0

As discussed we'll add openstackclient while we are at it:

python-openstackclient>=1.0.0,<1.1.0
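For anyone sanity-checking caps of this `>=lower,<upper` (optionally `!=excluded`) shape by hand, the check is simple to state. A rough stdlib-only sketch for plain dotted versions; real tooling should use pip/pkg_resources requirement specifiers instead:

```python
def version_tuple(v):
    # Only handles plain dotted numeric versions like "1.3.2";
    # pre-release tags and the like are out of scope for this sketch.
    return tuple(int(part) for part in v.split("."))


def in_cap_range(version, lower, upper, excluded=()):
    """True when lower <= version < upper and version is not excluded."""
    v = version_tuple(version)
    return (version_tuple(lower) <= v < version_tuple(upper)
            and version not in excluded)


# python-keystoneclient>=1.1.0,<1.4.0
print(in_cap_range("1.3.2", "1.1.0", "1.4.0"))   # True
print(in_cap_range("1.4.0", "1.1.0", "1.4.0"))   # False
# django_openstack_auth>=1.1.7,!=1.1.8,<1.3.0
print(in_cap_range("1.1.8", "1.1.7", "1.3.0", excluded=("1.1.8",)))  # False
```

Tuple comparison is what makes "2.3.11 < 2.4.0" come out right, where naive string comparison would not.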

That should trickle down to multiple syncs in multiple projects, which
we'd merge in a RC2. Next time we'll do it all the same time Oslo did
it, to avoid creating unnecessary respins (live and learn).

Anything I missed?

Bonus question: will the openstack proposal bot actually propose
stable/kilo g-r changes to proposed/kilo branches ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][zeromq] Some backports to stable/kilo

2015-04-09 Thread Li Ma
Hi oslo all,

Currently devstack master relies on the 1.8.1 release due to the
requirements freeze (>=1.8.0, <1.9.0); however, the ZeroMQ driver is
able to run on the 1.9.0 release. The result is that you cannot deploy
the ZeroMQ driver using devstack master now, due to some incompatibility
between oslo.messaging 1.8.1 and the devstack master source.

So I am trying to backport 4 recent reviews [1-4] to stable/kilo to
make sure it keeps working. I would appreciate these backports being
allowed and made into a 1.8.2 release.

[1] https://review.openstack.org/172038
[2] https://review.openstack.org/172061
[3] https://review.openstack.org/172062
[4] https://review.openstack.org/172063

Best regards,
-- 
Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [all] django_openstack_auth potential release

2015-04-09 Thread Thierry Carrez
David Lyle wrote:
 So we have a couple of options. First, leave django_openstack_auth at
 1.1.9 and let deployers and distros rationalize which version of Django
 they want to use and negotiate the dependency issues independently. Or
 second, release a new version of django_openstack_auth and determine if
 we want to pin the version of django_openstack_auth in
 global-requirements.txt or leave the upper cap unbound.

I pinged packagers to get their feel on the issue.

It's sad that we have to do this at this point, but I feel like bumping
to 1.2.0 with the correct cap is the right solution here.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-09 Thread Victor Sergeyev
Thanks for your work on this! :)

On Thu, Apr 9, 2015 at 7:25 PM, Victor Stinner vstin...@redhat.com wrote:

 Hi,

 During the last OpenStack Summit at Paris, we discussed how we can port
 OpenStack to Python 3, because eventlet was not compatible with Python 3.
 There are multiple approaches: port eventlet to Python 3, replace eventlet
 with asyncio, replace eventlet with threads, etc. We decided to not take a
 decision and instead investigate all options.

 I fixed 4 issues with monkey-patching in Python 3 (importlib, os.open(),
 threading.RLock, threading.Thread). Good news: the just released eventlet
 0.17.3 includes these fixes and it is now fully compatible with Python 3!
 For example, the Oslo Messaging test suite now pass with this eventlet
 version! Currently, eventlet is disabled in Oslo Messaging on Python 3
 (eventlet tests are skipped).
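For readers less familiar with the mechanism being discussed, the stdlib-only sketch below illustrates the core idea eventlet relies on: replacing a module-level function at runtime so that existing callers transparently pick up the replacement. `eventlet.monkey_patch()` does this (far more carefully) for socket, time, threading, and friends; this is a conceptual illustration, not eventlet's actual code.

```python
# Stdlib-only sketch of the monkey-patching idea that eventlet relies on:
# replace a module-level function at runtime so existing callers pick up
# the replacement. eventlet.monkey_patch() does this (far more carefully)
# for socket, time, threading, and friends.
import time

_original_sleep = time.sleep
calls = []


def traced_sleep(seconds):
    # Record the requested duration instead of actually waiting.
    calls.append(seconds)


time.sleep = traced_sleep      # the "monkey patch"
time.sleep(5)                  # existing code now hits our replacement
time.sleep = _original_sleep   # restore the real implementation
print(calls)  # [5]
```

The Python 3 fixes mentioned above matter precisely because this kind of attribute swapping interacts with importlib, os, and threading internals that changed between Python 2 and 3.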

 I just sent a patch for requirements and Oslo Messaging to bump to
 eventlet 0.17.3, but it will have to wait until everyone has moved
 master to Liberty.

https://review.openstack.org/#/c/172132/
https://review.openstack.org/#/c/172135/

 It becomes possible to port more projects depending on eventlet to Python
 3!

 Liberty cycle will be a good opportunity to port more OpenStack components
 to Python 3. Most OpenStack clients and Common Libraries are *already*
 Python 3 compatible, see the wiki page:

https://wiki.openstack.org/wiki/Python3

 --

 To replace eventlet, I wrote a spec to replace it with asyncio:

https://review.openstack.org/#/c/153298/

 Joshua Harlow wrote a spec to replace eventlet with threads:

https://review.openstack.org/#/c/156711/

 But then he wrote a single spec Replace eventlet + monkey-patching with
 ?? which covers threads and asyncio:

https://review.openstack.org/#/c/164035/

 Victor

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] RC1 state of play

2015-04-09 Thread Sylvain Bauza



On 09/04/2015 08:01, Michael Still wrote:

There are a few bugs still outstanding for nova's RC1. Here's a quick
summary. For each of these we need to either merge the fix, or bump
the bug from being release blocking.

-

https://bugs.launchpad.net/nova/+bug/1427351
cells: hypervisor API extension can't find compute_node services

This still has review https://review.openstack.org/#/c/160506/
outstanding, but a related review has landed. Do we need to land the
outstanding review as well?


We're in good shape for merging the hypervisor-related issues that are 
fixed in https://review.openstack.org/#/c/160506/, so yes, we need to land 
it too.


Unfortunately, the cells job recently regressed due to some Tempest 
changes ([1] and others) related to networks, so a couple of 
last-minute patches have been uploaded:
https://review.openstack.org/171865 for fixing the devstack side - 
creating a network in the child cell
https://review.openstack.org/171911 and 
https://review.openstack.org/171912 for fixing the Nova side, because it 
misses a case (not using the UUID passed on the CLI if provided)


Both corresponding cell-related bugs are raised as Critical and marked 
for RC1:

https://bugs.launchpad.net/nova/+bug/1427351
https://bugs.launchpad.net/nova/+bug/1441931

[1] https://github.com/openstack/tempest/commit/4bbc199


-

https://bugs.launchpad.net/nova/+bug/1430239
Hyper-V: *DataRoot paths are not set for instances

This one has https://review.openstack.org/#/c/162999 proposed as a
fix, which has one +2. Does anyone want to review a Hyper-V driver
change?

-

https://bugs.launchpad.net/nova/+bug/1431291
Scheduler Failures are no longer logged with enough detail for a site
admin to do problem determination

Two reviews outstanding here --
https://review.openstack.org/#/c/170421/ and its dependent (and WIP)
https://review.openstack.org/#/c/170472/ -- these seem to be not
really ready. What's the plan here?


Unscoped from the RC1 milestone as it probably requires some big behaviour 
change for the scheduler RPC API (not raising a NoValidHost exception 
but rather returning a list of failing filters).


-Sylvain


-

https://bugs.launchpad.net/nova/+bug/1313573
nova backup fails to backup an instance with attached volume
(libvirt, LVM backed)

For this we've merged a change which raises an exception if you try to
do this, so I think this is no longer release critical? It's still a
valid bug though so this shouldn't be closed.

-

https://bugs.launchpad.net/nova/+bug/1438238
Several concurrent scheduling requests for CPU pinning may fail due to
racy host_state handling

The fix is https://review.openstack.org/#/c/169245/, which needs more reviews.
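As an aside, the toy sketch below illustrates the general class of race this bug describes: two scheduling requests doing an unsynchronized check-then-claim on shared host state can pick overlapping pinned CPUs. This is a simplified illustration under assumptions (the class and method names are invented for the example), not Nova's actual scheduler code; the general remedy is to make the check-then-claim atomic, shown here with a plain lock.

```python
# Toy illustration of a check-then-claim race on shared host state.
# Names here (HostState, claim_pinned) are invented for this sketch;
# this is NOT Nova's actual scheduler code.
import threading


class HostState:
    def __init__(self, cpus):
        self.free_cpus = set(cpus)
        self._lock = threading.Lock()

    def claim_pinned(self, count):
        # Without the lock, two threads could both pass the len() check
        # and end up claiming overlapping or nonexistent CPU sets.
        with self._lock:
            if len(self.free_cpus) < count:
                return None  # NoValidHost-style failure
            return {self.free_cpus.pop() for _ in range(count)}


host = HostState(range(4))
a = host.claim_pinned(2)   # succeeds
b = host.claim_pinned(2)   # succeeds, disjoint from a
c = host.claim_pinned(2)   # no CPUs left -> None
print(a, b, c)
```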




Michael




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][barbican] default certificate manager

2015-04-09 Thread Ihar Hrachyshka
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi lbaas folks,

I've realized recently that the default certificate manager for lbaas
advanced service is now barbican based. Does it mean that to make
default configuration working as is, users will need to deploy
barbican service? If that's really the case, the default choice seems
to be unfortunate. I think it would be better not to rely on external
service in default setup, using local certificate manager.
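For context, the difference boils down to a single configuration knob. The fragment below is only a hedged sketch: the section and option names are assumptions about how neutron-lbaas exposes its certificate manager choice, not copied from this thread, so verify them against the shipped sample configuration before relying on them.

```ini
# Hypothetical neutron-lbaas configuration sketch -- section/option names
# are assumptions, not taken from this thread; verify against the shipped
# sample configuration.
[certificates]
# 'barbican' stores TLS certificates in the Barbican service (requires a
# deployed Barbican); 'local' keeps them on the local filesystem.
cert_manager_type = local
```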

Is there a particular reason behind the default choice?

Thanks,
/Ihar
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJVJprKAAoJEC5aWaUY1u57OEcIANdh8uBUcHKxBqjYFwQWoJRx
jLLlH6uxivP3i9nBiYFTZG8uwFhwCzL5rl9uatB7+Wsu41uOTJZeUlCM4dN+xOIz
J9KujLv1oGD/FvgpVGP/arJ6SoCeiINmezwQAziid6dmtH1iYePFCCTCJedbMmND
KampF+RXmHIwXvwVN1jK/tDfGsMHOoGKjy4jmgw48jBWFch1PBWQnRn4ooxZDbmI
VGQvSbpDwkQ3+N3ELZHx0m7l9kGmRKQl/8Vwml6pJKtcrGObkQGGGPeTPYj8Y/NO
Peht83x+HkrIupXZpkm3ybyHWSQdJw+RdKquGWKPTrcNGL1zZTl46rHWF79rhxA=
=C8+6
-----END PGP SIGNATURE-----

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Kilo stable branches for other libraries

2015-04-09 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2015-04-08 10:49:15 -0400:
 On 04/08/2015 10:42 AM, Dean Troyer wrote:
  On Wed, Apr 8, 2015 at 8:55 AM, Thierry Carrez thie...@openstack.org wrote:
  
  The question is, how should we proceed there ? This is new procedure, so
  I'm a bit unclear on the best way forward and would like to pick our
  collective brain. Should we just push requirements cap for all OpenStack
  libs and create stable branches from the last tagged release everywhere
  ? What about other libraries ? Should we push a cap there too ? Should
  we just ignore the whole thing for the Kilo release for all non-Oslo
  stuff ?
  
  
  Provided that represents the code being used for testing at this point,
  and I believe it does, this seems like a sensible default action.  Next
  cycle we can make a bit more noise about when this default action will
  occur, probably pick one of the other existing dates late in the cycle
  such as RC or string freeze or whatever. (Maybe that already happened
  and I can't remember?)
 
 Yes, due to the way we're testing client libraries, that's the right
 approach. In future we should fully cap GR as the first Milestone 3
 task. And everything after that should be managed as an exception to get
 bumps.

This cycle we actually froze Oslo libs a week before K3, which gave
us time to create the stable branches and update the requirements
caps for K3. I recommend other library managers consider using the
same rough schedule for their freeze. I'll also note that freezing
early works much much better if you are releasing updates to the
library frequently, so we shouldn't be shy about releasing new
client libraries more than once or twice per cycle.

Given the cascading test jobs triggered by landing requirements
changes, we should try to consolidate the updates to the global
list for all of the caps (one patch would be ideal, but may not be
realistic).

Doug

 
  All other non-Oslo libs in the OpenStack world do not seem to be
  directly consumed by projects that have stable branches, and are
  therefore likely to not maintain stable branches. Please report any
  glaring omission there.
  
  
  OSC is not used by any of the integrated release projects but due to its
  dependencies on the other client libs and use in DevStack I would like
  to follow the same process for it here.  The current 1.0.3 release is
  the one that should be used for stable.
 
 Agreed.
 
 -Sean
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Kilo stable branches for other libraries

2015-04-09 Thread Doug Hellmann
Excerpts from Devananda van der Veen's message of 2015-04-08 14:37:26 +:
 Thierry,
 
 You left out python-ironicclient, which isn't a surprise as it isn't
 actually listed in Nova's requirements.txt file. I don't have a link handy
 to cite the previous discussions, but Nova felt that it was not appropriate
 to list a driver's dependency in their project's requirements file.
 
 As such, it is installed from pip in devstack/lib/ironic right now.
 
 I've tagged a 0.5.0 version two days ago, and plan a quick fix (0.5.1)
 today. I think it's reasonable for this library to be capped just like the
 other python-*clients, but I'm not sure how to express that, due to Nova
 not allowing this dependency in their requirements.txt file.

Caps are applied in the global requirements list and merged into
projects. If no projects explicitly list the dependency, we at least
configure the gate jobs properly and distro packagers have an indication
of which versions we mean to support. So, I recommend setting
ironicclient to:

python-ironicclient>=0.2.1,<0.6.0

If the lower bound needs to be raised, we should go ahead and do that,
too.
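For anyone reading these specifiers by eye: a cap like `>=0.2.1,<0.6.0` means the lower bound is inclusive and the upper bound is exclusive. The toy check below illustrates which versions such a cap admits; it uses plain tuple comparison on `X.Y.Z` strings only and is not a substitute for the full PEP 440 semantics that pip implements.

```python
# Toy illustration of what a requirements cap such as ">=0.2.1,<0.6.0"
# admits. Real tools implement full PEP 440 version semantics; this
# simple tuple comparison only handles plain X.Y.Z version strings.
def parse(version):
    return tuple(int(part) for part in version.split("."))


def satisfies(version, lower, upper):
    # Lower bound inclusive (>=), upper bound exclusive (<).
    return parse(lower) <= parse(version) < parse(upper)


for v in ["0.2.0", "0.2.1", "0.5.1", "0.6.0"]:
    print(v, satisfies(v, "0.2.1", "0.6.0"))
```

Run as-is, this reports that 0.2.1 and 0.5.1 are admitted while 0.2.0 and 0.6.0 are not, which is exactly why bumping only the lower bound (as suggested above) never loosens the cap.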

Doug

 
 -Devananda
 
 On Wed, Apr 8, 2015 at 7:18 AM Matthias Runge mru...@redhat.com wrote:
 
  On 08/04/15 15:55, Thierry Carrez wrote:
 
   I'm especially worried with python-cinderclient, python-designateclient
   and django_openstack_auth which are more than 2 months old and may well
   contemplate another kilo release that could be disrupting at this point.
  
  In general: a great idea, and I've been expecting this for a long time
  now. That would save some work.
 
  django_openstack_auth: it's quite stable now, although it would make
  sense to cut a newer release for kilo. We will merge that into horizon
  eventually, since it's a helper and we never saw anyone use it
  outside of horizon.
 
  Matthias
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management

2015-04-09 Thread Salvatore Orlando
On 9 April 2015 at 17:04, Kyle Mestery mest...@mestery.com wrote:

 On Thu, Apr 9, 2015 at 9:52 AM, Assaf Muller amul...@redhat.com wrote:

 The Neutron specs process was introduced during the Juno cycle. At
 the time it
 was mostly a bureaucratic bottleneck (The ability to say no) to ease the
 pain of cores
 and manage workloads throughout a cycle. Perhaps this is a somewhat naive
 outlook,
 but I see other positives, such as more upfront design (Some is better
 than none),
 less high level talk during the implementation review process and more
 focus on the details,
 and 'free' documentation for every major change to the project (Some
 would say this
 is kind of a big deal; What better way to write documentation than to
 force the developers
 to do it in order for their features to get merged).

 Right. Keep in mind that for Liberty we're making changes to this
 process. For instance, I've already indicated that specs which were
 approved for Kilo but failed to land were moved to kilo-backlog. To get
 them into Liberty, you just propose a patch which moves the spec into the
 liberty directory. We already have a bunch that have taken this path. I
 hope we can merge the patches for these specs in Liberty-1.


It was never meant to be a bureaucratic bottleneck, although the ability to
move out, early in the process, blueprints that did not fit in the scope of
the current release (or in the scope of the project altogether) was a goal.
However, it became a bureaucratic step - it has surely been perceived as one.
Fast-tracking blueprints which were already approved makes sense.
I believe the process should be made even slimmer, removing the deadlines
for spec proposal and approval, and making the approval process simpler -
with reviewers being a lot less pedantic on one side, and proposers not
expecting approval of a spec to be a binding contract on the other.




 That being said, you can only get a feature merged if you propose a spec,
 and the only
 people largely proposing specs are developers. This ingrains the open
 source culture of
 developer focused evolution, that, while empowering and great for
 developers, is bad
 for product managers, users (that are sometimes under-represented, as is
 the case I'm trying
 to make) and generally causes a lack of a cohesive vision. Like it or
 not, the specs process
 and the driver's team approval process form a sort of product management,
 deciding what
 features will ultimately go in to Neutron and in what time frame.

 We haven't done anything to limit reviews of specs by these other users,
 and in fact, I would love for more users to review these specs.


I think your analysis is correct. Neutron is a developer-led community, and
that's why the drivers, also acting as product managers, approve
specifications.
I don't want to discuss here the merits of the drivers team - that probably
deserves another discussion thread - but as Kyle says, no one has been
discouraged from reviewing specs and influencing the decision process. The
neutron-drivers meetings were very open, in my opinion. However, if this
meant - as you say - that users, operators, and product managers (yes, them
too ;) ) were left out of this process, I'm happy to hear proposals to
improve it.




 We shouldn't ignore the fact that we clearly have people and product
 managers pulling the strings
 in the background, often deciding where developers will spend their time
 and what specs to propose,
 for the purpose of this discussion. I argue that managers often don't
 have the tools to understand
 what is important to the project, only to their own customers. The
 Neutron drivers team, on the other hand,
 don't have a clear incentive (Or I suspect the will) to spend enormous
 amounts of time doing 'product management',
 as being a driver is essentially your third or fourth job by this point,
 and are the same people
 solving gate issues, merging code, triaging bugs and so on. I'd like to
 avoid going into a discussion of what's
 wrong with the current specs process as I'm sure people have heard me
 complain about this in
 #openstack-neutron plenty of times before.


Yes I have heard you complaining. Ideally I would borrow concepts from
anarchism to define an ideal way in which the various contributors could
take over the different responsibilities. However, I am afraid this would
quickly translate into a sort of extreme neo-liberalism which would probably
lead the project to self-destruction. But I'm all up for a change in the
process, since what we have now is drifting towards Soviet-style bureaucracy.
Jokes apart, I think you are right: the process as it is just adds
responsibilities to a subset of people who are already busy with other
duties, increasing frustration in the people who depend on them (being one of
these people, I am fully aware of that!)


 Instead, I'd like to suggest a system that would perhaps
 get us to implement specs that are currently not being proposed, and give
 an additional form of
 input that would make sure 

[openstack-dev] [Nova] [Horizon] Insufficient (?) features in current Nova API

2015-04-09 Thread Timur Sufiev
Hello!

While analyzing Horizon behavior on a large scale we faced some performance
issues which are most probably caused by inefficient calls to Nova API from
Horizon, more specifically described at
https://bugs.launchpad.net/nova/+bug/1442310

Since my knowledge of Nova's existing APIs is not very comprehensive, I am not
quite sure whether the current Nova API indeed doesn't support requesting the
details of multiple instances limited by their instance_ids (passed as
part of the `search_opts` parameter), or I just failed to find the proper REST
call at http://developer.openstack.org/api-ref-compute-v2.1.html

Nova developers, could you please help me on that matter?
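In the meantime, one client-side mitigation is to batch the lookups rather than issue one call per instance. The sketch below is purely illustrative: `fetch_details()` is a stand-in invented for this example, not a real novaclient method, representing whatever REST call would return details for a set of instance IDs; if the API grew a multi-ID filter, each batch would become a single request.

```python
# Purely illustrative client-side batching sketch. fetch_details() is a
# stand-in invented for this example -- NOT a real novaclient method --
# representing whatever REST call would return details for a set of
# instance IDs.
def chunked(seq, size):
    # Yield successive fixed-size slices of seq.
    for i in range(0, len(seq), size):
        yield seq[i:i + size]


def fetch_details(instance_ids):
    # Hypothetical batched lookup; here it just echoes fake records.
    return [{"id": iid, "status": "ACTIVE"} for iid in instance_ids]


def get_all_details(instance_ids, batch_size=20):
    details = []
    for batch in chunked(list(instance_ids), batch_size):
        details.extend(fetch_details(batch))  # one call per batch
    return details


servers = get_all_details(["uuid-%d" % i for i in range(45)])
print(len(servers))
```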

-- 
Timur Sufiev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] oslo.messaging 1.9.0

2015-04-09 Thread Li Ma
Hi Doug,

In the global requirements.txt, the oslo.messaging version is still >=
1.8.0 but < 1.9.0. As a result, some bugs fixed in 1.9.0 are still
there when I deploy with the devstack master branch.

I submitted a review for the update.

On Wed, Mar 25, 2015 at 10:22 PM, Doug Hellmann d...@doughellmann.com wrote:
 We are content to announce the release of:

 oslo.messaging 1.9.0: Oslo Messaging API

 This is the first release of the library for the Liberty development cycle.

 For more details, please see the git log history below and:

 http://launchpad.net/oslo.messaging/+milestone/1.9.0

 Please report issues through launchpad:

 http://bugs.launchpad.net/oslo.messaging

 Changes in oslo.messaging 1.8.0..1.9.0
 --

 8da14f6 Use the oslo_utils stop watch in decaying timer
 ec1fb8c Updated from global requirements
 84c0d3a Remove 'UNIQUE_ID is %s' logging
 9f13794 rabbit: fix ipv6 support
 3f967ef Create a unique transport for each server in the functional tests
 23dfb6e Publish tracebacks only on debug level
 53fde06 Add pluggability for matchmakers
 b92ea91 Make option [DEFAULT]amqp_durable_queues work
 cc618a4 Reconnect on connection lost in heartbeat thread
 f00ec93 Imported Translations from Transifex
 0dff20b cleanup connection pool return
 2d1a019 rabbit: Improves logging
 0ec536b fix up verb tense in log message
 b9e134d rabbit: heartbeat implementation
 72a9984 Fix changing keys during iteration in matchmaker heartbeat
 cf365fe Minor improvement
 5f875c0 ZeroMQ deployment guide
 410d8f0 Fix a couple typos to make it easier to read.
 3aa565b Tiny problem with notify-server in simulator
 0f87f5c Fix coverage report generation
 3be95ad Add support for multiple namespaces in Targets
 513ce80 tools: add simulator script
 0124756 Deprecates the localcontext API
 ce7d5e8 Update to oslo.context
 eaa362b Remove obsolete cross tests script
 1958f6e Fix the bug redis do not delete the expired keys
 9f457b4 Properly distinguish between server index zero and no server
 0006448 Adjust tests for the new namespace

 Diffstat (except docs and test files)
 -

 .coveragerc|   7 +
 openstack-common.conf  |   6 +-
 .../locale/de/LC_MESSAGES/oslo.messaging.po|  48 ++-
 .../locale/en_GB/LC_MESSAGES/oslo.messaging.po |  48 ++-
 .../locale/fr/LC_MESSAGES/oslo.messaging.po|  40 ++-
 oslo.messaging/locale/oslo.messaging.pot   |  50 ++-
 oslo_messaging/_drivers/amqp.py|  55 +++-
 oslo_messaging/_drivers/amqpdriver.py  |  15 +-
 oslo_messaging/_drivers/common.py  |  20 +-
 oslo_messaging/_drivers/impl_qpid.py   |   4 +-
 oslo_messaging/_drivers/impl_rabbit.py | 357 ++---
 oslo_messaging/_drivers/impl_zmq.py|  32 +-
 oslo_messaging/_drivers/matchmaker.py  |   2 +-
 oslo_messaging/_drivers/matchmaker_redis.py|   7 +-
 oslo_messaging/localcontext.py |  16 +
 oslo_messaging/notify/dispatcher.py|   4 +-
 oslo_messaging/notify/middleware.py|   2 +-
 oslo_messaging/openstack/common/_i18n.py   |  45 +++
 oslo_messaging/openstack/common/versionutils.py| 253 +++
 oslo_messaging/rpc/dispatcher.py   |   6 +-
 oslo_messaging/target.py   |   9 +-
 requirements-py3.txt   |  13 +-
 requirements.txt   |  15 +-
 setup.cfg  |   6 +
 test-requirements-py3.txt  |   4 +-
 test-requirements.txt  |   4 +-
 tools/simulator.py | 207 
 tox.ini|   3 +-
 43 files changed, 1673 insertions(+), 512 deletions(-)


 Requirements updates
 

 diff --git a/requirements-py3.txt b/requirements-py3.txt
 index 05cb050..4ec18c6 100644
 --- a/requirements-py3.txt
 +++ b/requirements-py3.txt
 @@ -5,5 +5,6 @@
 -oslo.config>=1.9.0  # Apache-2.0
 -oslo.serialization>=1.2.0   # Apache-2.0
 -oslo.utils>=1.2.0   # Apache-2.0
 -oslo.i18n>=1.3.0  # Apache-2.0
 -stevedore>=1.1.0  # Apache-2.0
 +oslo.config>=1.9.3,<1.10.0  # Apache-2.0
 +oslo.context>=0.2.0,<0.3.0 # Apache-2.0
 +oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
 +stevedore>=1.3.0,<1.4.0  # Apache-2.0
 @@ -21 +22 @@ kombu>=2.5.0
 -oslo.middleware>=0.3.0  # Apache-2.0
 +oslo.middleware>=1.0.0,<1.1.0  # Apache-2.0
 diff --git a/requirements.txt b/requirements.txt
 index 3b49a53..ec5fef6 100644
 --- a/requirements.txt
 +++ b/requirements.txt
 @@ -7,5 +7,6 @@ pbr>=0.6,!=0.7,<1.0
 

Re: [openstack-dev] [all] Kilo stable branches for other libraries

2015-04-09 Thread Doug Hellmann
Excerpts from Dean Troyer's message of 2015-04-08 09:42:31 -0500:
 On Wed, Apr 8, 2015 at 8:55 AM, Thierry Carrez thie...@openstack.org
 wrote:
 
  The question is, how should we proceed there ? This is new procedure, so
  I'm a bit unclear on the best way forward and would like to pick our
  collective brain. Should we just push requirements cap for all OpenStack
  libs and create stable branches from the last tagged release everywhere
  ? What about other libraries ? Should we push a cap there too ? Should
  we just ignore the whole thing for the Kilo release for all non-Oslo stuff
  ?
 
 
 Provided that represents the code being used for testing at this point, and
 I believe it does, this seems like a sensible default action.  Next cycle
 we can make a bit more noise about when this default action will occur,
 probably pick one of the other existing dates late in the cycle such as RC
 or string freeze or whatever. (Maybe that already happened and I can't
 remember?)

I had hoped to have the spec approved in time to cut releases around
the time Oslo did (1 week before feature freeze for applications,
to allow us to merge the requirements cap before applications
generate their RC1). At this point, I agree that we should go with
the most recently tagged versions where possible. It sounds like
we have a couple of libs that need releases, and we should evaluate
those on a case-by-case basis, defaulting to not updating the stable
requirements unless absolutely necessary.

 
 All other non-Oslo libs in the OpenStack world do not seem to be
  directly consumed by projects that have stable branches, and are
  therefore likely to not maintain stable branches. Please report any
  glaring omission there.
 
 
 OSC is not used by any of the integrated release projects but due to its
 dependencies on the other client libs and use in DevStack I would like to
 follow the same process for it here.  The current 1.0.3 release is the one
 that should be used for stable.

Based on what's in the requirements list now, I think that means capping
with:

python-openstackclient>=1.0.0,<1.1.0

Doug

 
 dt
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Kilo stable branches for other libraries

2015-04-09 Thread Morgan Fainberg
Keystonemiddleware is pending a minor fix to sync g-r in a sane way to
match the rest of kilo (what we have for keystone et al).

However, we are blocked because there are no stable Juno and Icehouse
branches. I'd like to release python-keystoneclient with the
requirements update for Kilo.

So keystonemiddleware would receive one more release before the cap.


On Thursday, April 9, 2015, Thierry Carrez thie...@openstack.org wrote:

 Doug Hellmann wrote:
  Excerpts from Dean Troyer's message of 2015-04-08 09:42:31 -0500:
  On Wed, Apr 8, 2015 at 8:55 AM, Thierry Carrez thie...@openstack.org
  wrote:
 
  The question is, how should we proceed there ? This is new procedure,
 so
  I'm a bit unclear on the best way forward and would like to pick our
  collective brain. Should we just push requirements cap for all
 OpenStack
  libs and create stable branches from the last tagged release everywhere
  ? What about other libraries ? Should we push a cap there too ? Should
  we just ignore the whole thing for the Kilo release for all non-Oslo
 stuff
  ?
 
  Provided that represents the code being used for testing at this point,
 and
  I believe it does, this seems like a sensible default action.  Next
 cycle
  we can make a bit more noise about when this default action will occur,
  probably pick one of the other existing dates late in the cycle such as
 RC
  or string freeze or whatever. (Maybe that already happened and I can't
  remember?)
 
  I had hoped to have the spec approved in time to cut releases around
  the time Oslo did (1 week before feature freeze for applications,
  to allow us to merge the requirements cap before applications
  generate their RC1). At this point, I agree that we should go with
  the most recently tagged versions where possible. It sounds like
  we have a couple of libs that need releases, and we should evaluate
  those on a case-by-case basis, defaulting to not updating the stable
  requirements unless absolutely necessary.

 OK, here is a plan, let me know if it makes sense.

 If necessary:
 Cinder releases python-cinderclient 1.1.2
 Designate releases python-designateclient 1.1.2
 Horizon releases django_openstack_auth 1.2.0
 Ironic releases python-ironicclient 0.5.1

 Then we cap in requirements stable/kilo branch (once it's cut, when all
 RC1s are done):

 python-barbicanclient >=3.0.1 <3.1.0
 python-ceilometerclient >=1.0.13 <1.1.0
 python-cinderclient >=1.1.0 <1.2.0
 python-designateclient >=1.0.0 <1.2.0
 python-heatclient >=0.3.0 <0.5.0
 python-glanceclient >=0.15.0 <0.18.0
 python-ironicclient >=0.2.1 <0.6.0
 python-keystoneclient >=1.1.0 <1.4.0
 python-neutronclient >=2.3.11 <2.4.0
 python-novaclient >=2.22.0 <2.24.0
 python-saharaclient >=0.8.0 <0.9.0
 python-swiftclient >=2.2.0 <2.5.0
 python-troveclient >=1.0.7 <1.1.0
 glance_store >=0.3.0 <0.5.0
 keystonemiddleware >=1.5.0 <1.6.0
 pycadf >=0.8.0 <0.9.0
 django_openstack_auth>=1.1.7,!=1.1.8 <1.3.0

 As discussed we'll add openstackclient while we are at it:

 python-openstackclient>=1.0.0,<1.1.0

 That should trickle down to multiple syncs in multiple projects, which
 we'd merge in a RC2. Next time we'll do it all the same time Oslo did
 it, to avoid creating unnecessary respins (live and learn).

 Anything I missed ?

 Bonus question: will the openstack proposal bot actually propose
 stable/kilo g-r changes to proposed/kilo branches ?

 --
 Thierry Carrez (ttx)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Kilo stable branches for other libraries

2015-04-09 Thread Douglas Mendizabal
The Barbican Team also has a plan to release a new version of barbican client 
for Kilo.  The planned version is 3.1.0 [1], and it will include features 
landed during the FFE.

Thanks,
-Douglas Mendizabal

[1] https://launchpad.net/python-barbicanclient/+milestone/3.1.0

 On Apr 9, 2015, at 11:23 AM, Akihiro Motoki amot...@gmail.com wrote:
 
 Neutron team has a plan to release a new version of neutronclient for Kilo.
 We held the new release until all granted FFE patches landed,
 and now we are almost ready to go (waiting for one patch in the gate).
 
 The planned new version is 2.4.0. This is because neutronclient has used
 2.3.x versions for a long time (including Kilo) and we would like to have
 room for bug fixing for the Juno release.
 So we would like to propose the following for Kilo:
 
  python-neutronclient >=2.4.0 <2.5.0
 
 I am on the same page with Kyle.
 I hope this plan is acceptable.
 
 Thanks,
 Akihiro
 
 
 2015-04-10 0:09 GMT+09:00 Thierry Carrez thie...@openstack.org:
 Doug Hellmann wrote:
 Excerpts from Dean Troyer's message of 2015-04-08 09:42:31 -0500:
 On Wed, Apr 8, 2015 at 8:55 AM, Thierry Carrez thie...@openstack.org
 wrote:
 
 The question is, how should we proceed there ? This is new procedure, so
 I'm a bit unclear on the best way forward and would like to pick our
 collective brain. Should we just push requirements cap for all OpenStack
 libs and create stable branches from the last tagged release everywhere
 ? What about other libraries ? Should we push a cap there too ? Should
 we just ignore the whole thing for the Kilo release for all non-Oslo stuff
 ?
 
 Provided that represents the code being used for testing at this point, and
 I believe it does, this seems like a sensible default action.  Next cycle
 we can make a bit more noise about when this default action will occur,
 probably pick one of the other existing dates late in the cycle such as RC
 or string freeze or whatever. (Maybe that already happened and I can't
 remember?)
 
 I had hoped to have the spec approved in time to cut releases around
 the time Oslo did (1 week before feature freeze for applications,
 to allow us to merge the requirements cap before applications
 generate their RC1). At this point, I agree that we should go with
 the most recently tagged versions where possible. It sounds like
 we have a couple of libs that need releases, and we should evaluate
 those on a case-by-case basis, defaulting to not updating the stable
 requirements unless absolutely necessary.
 
 OK, here is a plan, let me know if it makes sense.
 
 If necessary:
 Cinder releases python-cinderclient 1.1.2
 Designate releases python-designateclient 1.1.2
 Horizon releases django_openstack_auth 1.2.0
 Ironic releases python-ironicclient 0.5.1
 
 Then we cap in requirements stable/kilo branch (once it's cut, when all
 RC1s are done):
 
 python-barbicanclient >=3.0.1 <3.1.0
 python-ceilometerclient >=1.0.13 <1.1.0
 python-cinderclient >=1.1.0 <1.2.0
 python-designateclient >=1.0.0 <1.2.0
 python-heatclient >=0.3.0 <0.5.0
 python-glanceclient >=0.15.0 <0.18.0
 python-ironicclient >=0.2.1 <0.6.0
 python-keystoneclient >=1.1.0 <1.4.0
 python-neutronclient >=2.3.11 <2.4.0
 python-novaclient >=2.22.0 <2.24.0
 python-saharaclient >=0.8.0 <0.9.0
 python-swiftclient >=2.2.0 <2.5.0
 python-troveclient >=1.0.7 <1.1.0
 glance_store >=0.3.0 <0.5.0
 keystonemiddleware >=1.5.0 <1.6.0
 pycadf >=0.8.0 <0.9.0
 django_openstack_auth>=1.1.7,!=1.1.8 <1.3.0
 
 As discussed we'll add openstackclient while we are at it:
 
 python-openstackclient>=1.0.0,<1.1.0
 
That should trickle down to multiple syncs in multiple projects, which
we'd merge in an RC2. Next time we'll do it all at the same time Oslo does
it, to avoid creating unnecessary respins (live and learn).
 
 Anything I missed ?
 
 Bonus question: will the openstack proposal bot actually propose
 stable/kilo g-r changes to proposed/kilo branches ?
 
 --
 Thierry Carrez (ttx)
 
 __
 OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 --
Akihiro Motoki amot...@gmail.com
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org 
 mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova][ec2-api] Need advice about running Tempest against stackforge/ec2-api

2015-04-09 Thread Feodor Tersin
Hi.

As you can see adjusted Tempest (https://review.openstack.org/#/c/171222/)
runs well against both Nova EC2 and ec2api (
https://review.openstack.org/#/c/172059).


On Tue, Apr 7, 2015 at 5:50 PM, Feodor Tersin fter...@cloudscaling.com
wrote:

 Hi Sean


 On Mon, Apr 6, 2015 at 7:34 PM, Sean Dague s...@dague.net wrote:

 On 04/06/2015 12:13 PM, Andrey M. Pavlov wrote:
  Hi,
 
  We've got a couple of problems running original Tempest EC2 API test
 against new standalone stackforge/ec2-api project and
  I wanted to ask for some advice about how to deal with it.
 
  Tempest now is running against our ec2-api after this review was closed
 -
  https://review.openstack.org/#/c/170258/
 
  And now we face two problems (that can also be found in tempest logs of
 this review -
  https://review.openstack.org/#/c/170668/)
  For now I switched tempest gating job to non-voting until these
 problems are resolved in the following review -
  https://review.openstack.org/#/c/170646/
 
  Problems are:
  1)
 tempest.thirdparty.boto.test_ec2_network.EC2NetworkTest.test_disassociate_not_associated_floating_ip
  this test tries to allocate address and disassociate it without
 association.
  Amazon allows to do it and does not throw error. But EC2 implementation
 in Nova throws error.
  We have the same test in our own test suite against stackforge/ec2-api
 (but it's not merged yet) and I checked it against Amazon.
  I suggest removing this test from tempest as incompatible with Amazon.
  It could also be skipped, but to me that makes no sense.

 This seems fine as a removal.
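For illustration only, the behavioral difference could be sketched with hypothetical stand-in classes (these are not Tempest, boto, or Nova code; the class names are invented):

```python
# Hypothetical stand-ins for the two behaviors of DisassociateAddress on a
# floating IP that was never associated: Amazon treats it as a no-op and
# reports success, while Nova's EC2 layer raises an error.

class AmazonEC2Stub:
    def disassociate_address(self, public_ip):
        return True  # no-op: Amazon reports success even if not associated

class NovaEC2Stub:
    def disassociate_address(self, public_ip):
        # Nova's EC2 implementation rejects the call instead
        raise ValueError("Floating IP %s is not associated" % public_ip)

print(AmazonEC2Stub().disassociate_address("203.0.113.10"))  # True
try:
    NovaEC2Stub().disassociate_address("203.0.113.10")
except ValueError as exc:
    print(exc)
```

A single Tempest test asserting one of these outcomes cannot pass against both implementations, hence the suggestion to remove it.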

  2)
 tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes
  This test registers three images by their manifests, run instance with
 image/kernel/ramdisk parameters,
  and ssh into this instance to check something.
  This is not the only test that runs instance with such parameters but
 this is the only one
  that ssh-s into such an instance.
  This instance runs but the test can't ssh into it, and it fails because
 the instance doesn't have a ramdisk and kernel.
  It runs supplied with the image property only. The VM comes up
 semi-functional and the instance can't boot up as a result.
  Problem is in the ec2-api/nova communication. Nova public API doesn't
 support kernel and ramdisk parameters during instance creation.
 
  Next I'll file a bug to ec2-api with this description.

 This seems problematic, because I think what you are saying is that the
 stackforge EC2 API can't start a working guest. This is the only one of
 the ec2 tests that actually validates the guest is running correctly IIRC.

 Is there an equivalent test that exists that you think would be better?
 I'm also not sure I understand where the breakdown is here in missing
 functionality.


 I suggest fixing the test to fit both Nova EC2 and ec2api restrictions.
 Ec2api ignores ari/aki parameters for the RunInstances operation, but supports
 registration of an ami image linked to ari and aki ones.
 Nova EC2 ignores the links in image registrations, but supports ari/aki
 parameters for the RunInstances operation.
 So we could set these parameters for both operations to pass this test
 against both Nova EC2 and ec2api.

 I've proposed a change for this: https://review.openstack.org/#/c/171222/



  In the long run we should discuss adding this feature to public API but
 for now we'd like to put Tempest
  in our project back to voting state.
  We've got several options about what to do for this and we need some
 help to pick one (or several):
  1) skip this test in tempest and switch tempest back to voting state in
 our project.
  The problem is that this test is still also employed against nova's EC2
 so it'll get skipped there as well.
  2) Leave it as non-voting until extension is added to nova.
  Great solution but it'll take way too long I understand.
  3) add special condition to skipping the test so that it's skipped only
 when employed against stackforge/ec2-api,
  while still working against nova if it's possible at all and not too
 much hassle.
 
  Kind regards,
  Andrey.
 
 
 


 --
 Sean Dague
 http://dague.net






Re: [openstack-dev] [oslo.messaging][zeromq] Some backports to stable/kilo

2015-04-09 Thread Mehdi Abaakouk




Hi,

All of these patches are only bug fixes, so that's good for me.

So if others agreed, I can release 1.8.2 with these changes once they 
are landed.

We don't have any other changes pending in kilo branch for now.

On 2015-04-09 16:12, Li Ma wrote:

Hi oslo all,

Currently devstack master relies on the 1.8.1 release due to the
requirements freeze (>=1.8.0,<1.9.0); however, the ZeroMQ driver is only
able to run on the 1.9.0 release. The result is that you cannot deploy the
ZeroMQ driver using devstack master now, due to an incompatibility between
oslo.messaging 1.8.1 and the devstack master source.

So I try to backport 4 recent reviews [1-4] to stable/kilo to make
sure it is working. I'll appreciate allowing these backports and make
them into 1.8.2.

[1] https://review.openstack.org/172038
[2] https://review.openstack.org/172061
[3] https://review.openstack.org/172062
[4] https://review.openstack.org/172063

Best regards,


---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht






Re: [openstack-dev] [Neutron] - Joining the team - interested in a Debian Developer and experienced Python and Network programmer?

2015-04-09 Thread Kyle Mestery
On Thu, Apr 9, 2015 at 2:13 AM, Matt Grant m...@mattgrant.net.nz wrote:

 Hi!

 I am just wondering what the story is about joining the neutron team.
 Could you tell me if you are looking for new contributors?

 We're always looking for someone new to participate! Thanks for reaching
out!


 Previously I have programmed OSPFv2 in Zebra/Quagga, and worked as a
 router developer for Allied Telesyn.  I also have extensive Python
 programming experience, having worked on the DNS Management System.

 Sounds like you have extensive experience programming network elements. :)


 I have been experimenting with IPv6 since 2008 on my own home network,
 and I am currently installing a Juno OpenStack cluster to learn how
 things tick.

 Great, this will give you an overview of things.


 Have you guys ever figured out how to do a hybrid L3 North/South Neutron
 router that propagates tenant routes and networks into OSPF/BGP via a
 routing daemon, and uses floating MAC addresses/costed flow rules via
 OVS to fail over to a hot standby router? There are practical use cases
 for such a thing in smaller deployments.

 BGP integration with L3 is something we'll look at again for Liberty. Carl
Baldwin leads the L3 work in Neutron, and would be a good person to sync
with on this work item. I suspect he may be looking for people to help
integrate the BGP work in Liberty, this may be a good place for you to jump
in.

I have a single stand-alone example working by turning off
 neutron-l3-agent network name space support, and importing the connected
 interface and static routes into Bird and Birdv6. The AMQP connection
 back to the neutron-server is via the upstream interface and is secured
 via transport mode IPSEC (just easier than bothering with https/SSL).
 Bird looks easier to run from neutron as they are single process than a
 multi process Quagga implementation.  Incidentally, I am running this in
 an LXC container.
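(For the curious, here is a minimal sketch of what such a Bird configuration fragment might look like. This is a hypothetical illustration of the setup described above, not the actual config; the router id and interface name patterns are invented.)

```
# Hypothetical bird.conf fragment: learn kernel/connected routes and
# advertise them into OSPF area 0 via the upstream interface.
router id 192.0.2.1;

protocol kernel {
    learn;              # pick up routes installed by the l3 agent
    export all;
}

protocol device { }

protocol direct {
    interface "qg-*";   # connected tenant-facing interfaces (name assumed)
}

protocol ospf {
    export all;         # propagate tenant routes north
    area 0 {
        interface "eth0" { cost 10; };
    };
}
```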

 Nice!


 Could some one please point me in the right direction.  I would love to
 be in Vancouver :-)

 If you're not already on #openstack-neutron on Freenode, jump in there.
Plenty of helpful people abound. Since you're in New Zealand, I would
suggest reaching out to Akihiro Motoki (amotoki) on IRC, as he's in Japan
and closer to your timezone.

Thanks!
Kyle

Best Regards,

 --
 Matt Grant,  Debian and Linux Systems Administration and Consulting
 Mobile: 021 0267 0578
 Email: m...@mattgrant.net.nz





Re: [openstack-dev] [stable] [Cinder] FFE for Clear migration_status from a destination volume if migration fails

2015-04-09 Thread Mitsuhiro Tanino
Hi Jay,

Thank you for your cooperation.
This fix was merged successfully today.

Regards,
Mitsuhiro Tanino mitsuhiro.tan...@hds.com
HITACHI DATA SYSTEMS


 -Original Message-
 From: Jay S. Bryant [mailto:jsbry...@electronicjungle.net]
 Sent: Tuesday, April 07, 2015 11:08 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [stable] [Cinder] FFE for Clear 
 migration_status from a
 destination volume if migration fails
 
 Mitsuhiro,
 
 I had already put a +2 on this, so I am agreeable to an FFE.
 
 Mike or John, what are your thoughts?
 
 Jay
 
 
 On 04/06/2015 06:27 PM, Mitsuhiro Tanino wrote:
  Hello,
 
  I would like to get a FFE for patch 
  https://review.openstack.org/#/c/161328/.
 
  This patch fixes a volume migration problem where proper cleanup steps
 are not executed if the volume migration fails. This change only affects
 cleanup steps and does not change normal volume migration steps.
 
  Regards,
  Mitsuhiro Tanino mitsuhiro.tan...@hds.com HITACHI DATA SYSTEMS
 
 



Re: [openstack-dev] Neutron and ACLs

2015-04-09 Thread Juan Antonio Osorio
Hey,

As a matter of fact, in Barbican we are also in need of proper ACLs, and
there is currently ongoing work on implementing them:

http://specs.openstack.org/openstack/barbican-specs/specs/kilo/add-creator-only-option.html

On Wed, Apr 8, 2015 at 7:58 PM, Rich Wellner r...@objenv.com wrote:

  Yeah, sounds like a plan.

 FWIW, our target implementation will be Arista switches.

 rw2


 On 4/8/15 11:52 AM, Kevin Benton wrote:

 My plan is to repropose that for Liberty. I will re upload it to the spec
 repo in the next couple of weeks. When I do that it would be great to get
 your feedback. Perhaps we can divide up the work or you can expand the
 model to things other than subnets.
 On Apr 8, 2015 9:43 AM, Rich Wellner r...@objenv.com wrote:

 On 4/8/15 11:17 AM, Kevin Benton wrote:

 What do you mean by ACLs? Is it anything similar to the following?
 https://review.openstack.org/#/c/132661/

 Yes, our goals are very closely aligned with yours. And the rst doc as
 well as the messages on that thread fill in a lot of gaps for me. Thanks.

 What's your plan going forward?

 rw2












-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com

All truly great thoughts are conceived by walking.
- F.N.


Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-09 Thread Neil Jerram

Hi Joe,

Many thanks for your reply!

On 09/04/15 03:34, joehuang wrote:

Hi, Neil,

 In theory, Neutron is like a broadcast domain: for example, enforcement of DVR and security
groups has to touch each host where a VM of the given project resides. Even when using an SDN controller,
touching each such host is inevitable. If there are plenty of physical hosts, for example
10k, inside one Neutron, it's very hard to overcome the broadcast storm issue under concurrent
operation; that's the bottleneck for the scalability of Neutron.


I think I understand that in general terms - but can you be more 
specific about the broadcast storm?  Is there one particular message 
exchange that involves broadcasting?  Is it only from the server to 
agents, or are there 'broadcasts' in other directions as well?


(I presume you are talking about control plane messages here, i.e. 
between Neutron components.  Is that right?  Obviously there can also be 
broadcast storm problems in the data plane - but I don't think that's 
what you are talking about here.)
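To make the concern concrete, here's a rough back-of-envelope model (my own sketch, not Neutron code) of why fanout load multiplies, and why a layered/cascading design helps at the top level:

```python
# Toy model of control-plane fanout: each concurrent update must be cast to
# every host carrying an affected port; a cascading layer instead casts once
# per cascaded Neutron, which then fans out locally.

def flat_fanout(concurrent_updates, affected_hosts):
    """RPC casts a single Neutron server emits in a flat deployment."""
    return concurrent_updates * affected_hosts

def cascaded_top_fanout(concurrent_updates, cascaded_neutrons):
    """Casts the top-level (cascading) Neutron emits; local fanout is
    delegated to each cascaded Neutron."""
    return concurrent_updates * cascaded_neutrons

print(flat_fanout(100, 10000))       # one burst: 1000000 casts from one server
print(cascaded_top_fanout(100, 10))  # same burst: 1000 casts at the top layer
```

Obviously the real message patterns are more subtle than a multiplication, which is exactly what I'm hoping to pin down.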



We need a layered architecture in Neutron to solve the broadcast domain bottleneck of
scalability. The test report for OpenStack cascading shows that through the layered architecture of
Neutron cascading, Neutron can support up to a million ports and 100k
physical hosts. You can find the report here:
http://www.slideshare.net/JoeHuang7/test-report-for-open-stack-cascading-solution-to-support-1-million-v-ms-in-100-data-centers


Many thanks, I will take a look at this.


Neutron cascading also brings an extra benefit: one cascading Neutron can have
many cascaded Neutrons, and different cascaded Neutrons can leverage different SDN
controllers; maybe one is ODL, the other one is OpenContrail.

            ------ Cascading Neutron ------
             /                        \
  -- cascaded Neutron --      -- cascaded Neutron --
            |                          |
        -- ODL --              -- OpenContrail --


And furthermore, if using Neutron cascading in multiple data centers, the DCI 
controller (Data center inter-connection controller) can also be used under 
cascading Neutron, to provide NaaS ( network as a service ) across data centers.

            ---------- Cascading Neutron ----------
             /                |                 \
  -- cascaded Neutron --  -- DCI controller --  -- cascaded Neutron --
            |                 |                       |
        -- ODL --             |               -- OpenContrail --
            |                 |                       |
  --(Data center 1)--  --(DCI networking)--   --(Data center 2)--

Is it possible for us to discuss this in OpenStack Vancouver summit?


Most certainly, yes.  I will be there from mid Monday afternoon through 
to end Friday.  But it will be my first summit, so I have no idea yet as 
to how I might run into you - please can you suggest!



Best Regards
Chaoyi Huang ( Joe Huang )


Regards,
Neil



Re: [openstack-dev] [infra] request to disable xenserver CI account

2015-04-09 Thread Matt Riedemann



On 4/9/2015 4:27 PM, Jeremy Stanley wrote:

On 2015-04-09 16:13:13 -0500 (-0500), Matt Riedemann wrote:

The XenServer/XenProject third party CI job has been voting -1 on
nova changes for over 24 hours without a response from the
maintainers so I'd like to request that we disable for now while
it's being worked since it's a voting job and causing noise at
kind of a hairy point in the release.


According to Gerrit, you personally (as a member of the nova-release
group) have access to remove them from the nova-ci group to stop
them being able to -1/+1 nova changes. You should be able to do it
via https://review.openstack.org/#/admin/groups/511,members but let
me know if that's not working for some reason.



Great, done, thanks.

https://review.openstack.org/#/admin/groups/511,members

--

Thanks,

Matt Riedemann




[openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-09 Thread Sean M. Collins
On Wed, Apr 01, 2015 at 01:37:28AM EDT, Dr. Jens Rosenboom wrote:
 FWIW, I think I made some progress in getting [1] to work, though if someone
 could jump in and make a proper patch from my hack, that would be great.
 
 [1] https://review.openstack.org/168423

Hi,

Just wanted to write a quick status update and publicly thank Dr.
Rosenboom. We are making excellent progress, and a majority of tests at the gate
are passing for Linuxbridge as the default in DevStack. 

-- 
Sean M. Collins



[openstack-dev] [Ironic] Weekly subteam status report

2015-04-09 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted. (I apologize for
sending this out late this week.)

Drivers
==

IPA (jroll/JayF/JoshNang)
--

IPA now has documentation. (jayf)

Change merged containing documentation, changes to project-config
to perform a docs build.

Change landing now to fix docs build
(https://review.openstack.org/#/c/170897/), which, when it lands,
should publish IPA docs to docs.openstack.org for the first time.

Once docs build is confirmed fixed, we can land
https://review.openstack.org/#/c/170259/ to prevent any
WARN/ERRORs from being introduced into IPA docs build.


iRMC (naohirot)
-
https://pypi.python.org/pypi/python-scciclient/0.1.0 has been released
for Kilo.



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard


Re: [openstack-dev] [neutron] Neutron scaling datapoints?

2015-04-09 Thread Neil Jerram

Hi Mike,

Many thanks for your reply!

On 08/04/15 17:56, Mike Spreitzer wrote:

Are you looking at scaling the numbers of tenants, Neutron routers, and
tenant networks as you scale hosts and guests?  I think this is a
plausible way to grow.  The compartmentalizations that comes with
growing those things may make a difference in results.


Are you thinking of control plane or data plane limits?  In my email I 
was thinking of control plane points, such as


- how many compute host agents can communicate with the Neutron server

- how many Neutron server instances or threads are needed

- whether there are any limits associated with the Neutron DB (unlikely 
I guess).


Does the use of tenant networks and routers affect those points, in your 
experience?  That would be less obvious to me than simply how many 
compute hosts or Neutron servers there are.


On the data plane side - if that was more what you meant - I can 
certainly see the limits there and how they are alleviated by using 
tenant networks and routers, in the L2 model.  FWIW, my project Calico 
[1] tries to avoid those by not providing a L2 domain at all - which can 
make sense for workloads that only require or provide IP services - and 
instead routing data through the fabric.


To answer your question, then, no, I wasn't thinking of scaling tenant 
networks and routers, per your suggestion, because Calico doesn't do 
things that way (or alternatively because Calico already routes 
everywhere), and because I didn't think that would be relevant to the 
control plane scaling that I had in mind.  But I may be missing 
something, so please do say if so.


Many thanks,
Neil


[1] http://www.projectcalico.org/



Re: [openstack-dev] [heat] autoscaling and load balancers

2015-04-09 Thread Zane Bitter

On 08/04/15 21:51, Miguel Grinberg wrote:

Hi Angus,

Regarding this:

  As Zane suggested, you should think of autoscaling as being in a
different service.

It's not that I can't see your point of view. I can imagine an
autoscaling service. I agree with you guys that if we had that, then
none of this would be Heat's concern.

When I think of this separate autoscaling service that can be used
without Heat, I imagine that it provides a mechanism to update
interested entities, such as (but not exclusively) load balancers when
there is a scaling event, or actually any event of interest.


Yeah, and the hooks mechanism that I described in my previous email is 
the one that we have talked about using for this.



So if we had autoscaling as a service running outside of Heat, we would
not be talking about this problem.


Or maybe we would ;) Nothing about making autoscaling a separate service 
gives us this for free. It needs to be implemented whether or not 
autoscaling becomes a separate service.



Heat would not refresh the ASG attributes
and it would not know when the ASG resizes, but it wouldn't need to,
because this imaginary service would have some way to talk to the load
balancer or anybody else that wants to do stuff based on its state. I
would probably not even create my load balancer as a heat resource in
that situation, I would not need to.

For such a service we would have a lightweight wrapper Heat resource
that would take some inputs and invoke the service public APIs. This
resource would not take a heat sub-resource or nested stack as the
scaled entity, since that service runs outside of Heat and manages its
own pool of scaled entities.


No, it would. That's one of our explicit design goals.


The wrapper resource would probably not
expose the pool size, or any properties of the scaled entities, because
none of this would be in heat's domain.


I imagine we probably still would. The plan is to eventually have a 
seamless transition from the existing ASG resource to a separate 
service. (This is why we don't want to reintroduce the old hacks: the 
native ASG's only purpose in life is to establish a clean break from the 
hackery.)


But yeah, you would still not be able to use it in the way that you're 
currently trying to, at _least_ until phase 2 of Convergence is available.



Even if you had that
information, it would not be of much use within a stack. The ASG becomes
sort of a black box, like other openstack native resources. This would
be nice because it moves the notification problem to a dedicated service
where it can be best addressed.

So I do get what you are saying. But looking at things that way does not
help solve my problem.


Yes, I acknowledge that your problem is not solved, and that sucks 
because it really is an important problem. However, there is no quick fix.



LBAAS isn't widely deployed, and the


Right, life would be a lot easier if it were because the autoscaling 
group could just call the Neutron API directly. Providing a completely 
generic, user-configurable notification mechanism is a lot more 
difficult, which is one reason why it hasn't been done yet.



OS::Heat::ASG lacks a good notification mechanism, so I'm still left
with the same choices, which are to either build my own notifications
(using a custom resource or maybe polling the API from the lb instance),
or else not use Heat's ASG and instead build my own autoscaler.

I know you guys don't like the AWS autoscaling mechanism, I agree that
it isn't a well thought out design, but I can't ignore its one good
quality: it works.


I wouldn't say it wasn't well thought out; it was simply the best we 
could do in the absence of an LBaaS API, which the Amazon autoscaling 
design is predicated upon. Everyone was aware that it was a hack, even 
at the time. The problem is that the hacks and hacks on top of hacks are 
actually an obstacle to implementing real, long-term solutions.



Anyway, I ended up writing yet another long email. :)

I'll let you guys know if I come up with any other ideas that don't
disagree with the original design you envisioned for the ASG. Thanks for
spending the time discussing this, it was a really useful discussion.


Agreed, discussion is always good. We have ideas for long-term fixes for 
all of the problems you've raised, but we could really use help on 
implementing them. Tell your friends ;)


cheers,
Zane.



Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-09 Thread Howard, Victor
I prefer Timeslot B, thanks for coordinating. I would be interested in helping 
out in any way with the design session; let me know!

From: Sandhya Dasu (sadasu) sad...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 7, 2015 12:19 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

Hi Miguel,
Both time slots work for me. Thanks for rekindling this effort.

Thanks,
Sandhya

From: Miguel Ángel Ajo majop...@redhat.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 7, 2015 1:45 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

On Tuesday, 7 de April de 2015 at 3:14, Kyle Mestery wrote:
On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando sorla...@nicira.com wrote:


On 7 April 2015 at 00:33, Armando M. arma...@gmail.com wrote:

On 6 April 2015 at 08:56, Miguel Ángel Ajo majop...@redhat.com wrote:
I’d like to co-organize a QoS weekly meeting with Sean M. Collins,

In the last few years, the interest for QoS support has increased, Sean has 
been leading
this effort [1] and we believe we should get into a consensus about how to 
model an extension
to let vendor plugins implement QoS capabilities on network ports and tenant 
networks, and
how to extend agents, and the reference implementation  others [2]

As you surely know, so far every attempt to achieve a consensus has failed in a 
pretty miserable way.
This mostly because QoS can be interpreted in a lot of different ways, both 
from the conceptual and practical perspective.
Yes, I’m fully aware of it, it was also a new feature, so it was out of scope 
for Kilo.
It is important in my opinion to clearly define the goals first. For instance a 
simple extensions for bandwidth limiting could be a reasonable target for the 
Liberty release.
I quite agree here, but IMHO, as you said, it’s quite an open field (limiting, 
guaranteeing, marking, traffic shaping...); we should do our best to define 
a model that allows us to build that up in the future without huge changes. 
On the API side, I guess microversioning is going to help with the API 
evolution.

Also, at some point, we should/could need to involve the nova folks, for 
example, to define
port flavors that can be associated to nova
instance flavors, providing them
1) different types of network port speeds/guarantees/priorities,
2) being able to schedule instance/ports in coordination to be able to met 
specified guarantees.
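To make the "start simple" idea concrete, here is a purely hypothetical sketch of what a minimal bandwidth-limiting request body could look like. No such API exists yet; every field name below is invented for illustration and would need to survive the spec discussion:

```json
{
  "qos_policy": {
    "name": "gold-port",
    "description": "bandwidth limiting only, as a first Liberty target",
    "rules": [
      {
        "type": "bandwidth_limit",
        "max_kbps": 10000,
        "max_burst_kbps": 1000
      }
    ]
  }
}
```

A rule-list shape like this is one way to leave room for later rule types (marking, guarantees, ECN) without reworking the policy resource itself.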

Yes, complexity can skyrocket fast,
Moving things such as ECN into future works is the right thing to do in my 
opinion. Attempting to define a flexible framework that can deal with advanced 
QoS policies specification is a laudable effort, but I am a bit skeptical about 
its feasibility.

++, I think focusing on perhaps bandwidth limiting may make a lot of sense
Yes, I believe we should look into the future , but at the same pick our very 
first feature (or a
very simple set of them) for L, stick to it, and try to make a design that can 
be extended.



As per discussion we’ve had during the last few months [3], I believe we 
should start simple, but
prepare a model allowing future extendibility, to allow for example specific 
traffic rules (per port,
per IP, etc..), congestion notification support [4], …

Simple in my mind is even more extreme than what you're proposing here... I'd 
start with bare APIs for specifying bandwidth limiting, and then phase them out 
once this framework is in place.
Also note that this kind of design bears some overlap with the flavor framework 
which is probably going to be another goal for Liberty.

Indeed, and the flavor framework is something I'm hoping we can land by 
Liberty-1 (yes, I just said Liberty-1).
Yes, it’s something I looked at; I must admit I wasn’t able to see how the two 
work together (it doesn’t mean they don’t play well, most probably I was just 
not able to see it :) ).

I didn’t want to distract attention from the Kilo cycle focus by asking 
questions, so it should be a good thing to talk about during the first 
meetings.

Who are the flavor fathers/mothers? ;)


Morever, consider using common tools such as the specs repo to share and 
discuss design documents.

Also a good idea.
Yes, that is the plan now; we didn’t use it before to avoid creating 
unnecessary noise during this cycle.



It’s the first time I’m trying to organize an 

[openstack-dev] [Neutron] [Ceilometer] Kilo RC1 available

2015-04-09 Thread Thierry Carrez
Hello everyone,

It's Neutron and Ceilometer's turn to reach the release candidate stage.
Their RC1 tarballs, as well as lists of last-minute features and fixed
bugs since kilo-3, are available at:

https://launchpad.net/neutron/kilo/kilo-rc1
https://launchpad.net/ceilometer/kilo/kilo-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as the 2015.1.0
final version on April 30. You are therefore strongly encouraged to test
and validate these tarballs !

Alternatively, you can directly test the proposed/kilo branches at:
https://github.com/openstack/neutron/tree/proposed/kilo
https://github.com/openstack/neutron-fwaas/tree/proposed/kilo
https://github.com/openstack/neutron-lbaas/tree/proposed/kilo
https://github.com/openstack/neutron-vpnaas/tree/proposed/kilo
https://github.com/openstack/ceilometer/tree/proposed/kilo

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/neutron/+filebug
or
https://bugs.launchpad.net/ceilometer/+filebug

and tag it *kilo-rc-potential* to bring it to the release crew's attention.

Note that the master branches of Neutron and Ceilometer are now open
for Liberty development, and feature freeze restrictions no longer apply
there !

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Openstack-dev] resource quotas limit per stacks within a project

2015-04-09 Thread Daniel Comnea
Thanks for your reply Kris.

I'd love to, but we're forced down this path by the deployment of an in-house
app we built (in the same space as Murano, offering a service catalogue for
various services).

It must be a different path to cross the bridge given the circumstances.

Dani

On Wed, Apr 8, 2015 at 3:54 PM, Kris G. Lindgren klindg...@godaddy.com
wrote:

  Why wouldn't you separate your dev/test/production via tenants as well?
 That’s what we encourage our users to do.  This would let you create
 flavors that give dev/test less resources under exhaustion conditions and
 production more resources.  You could even pin dev/test to specific
 hypervisors/areas of the cloud and let production have the rest via those
 flavors.
  

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.

   From: Daniel Comnea comnea.d...@gmail.com
 Date: Wednesday, April 8, 2015 at 3:32 AM
 To: Daniel Comnea comnea.d...@gmail.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, 
 openstack-operat...@lists.openstack.org 
 openstack-operat...@lists.openstack.org
 Subject: Re: [Openstack-operators] [Openstack-dev] resource quotas limit
 per stacks within a project

+ operators

  Hard to believe nobody is facing these problems; even in small shops you
 end up with multiple stacks as part of the same tenant/project.

  Thanks,
  Dani

 On Wed, Apr 1, 2015 at 8:10 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

   Any ideas/ thoughts please?

  In VMware world is basically the same feature provided by the resource
 pool.


  Thanks,
  Dani

 On Tue, Mar 31, 2015 at 10:37 PM, Daniel Comnea comnea.d...@gmail.com
 wrote:

   Hi all,

  I'm trying to understand what options i have for the below use case...

  Having multiple stacks (with various numbers of instances) deployed within 1
 Openstack project (tenant), how can I guarantee that there will be no
 race for the project's resources?

  E.g - say i have few stacks like

  stack 1 = production
  stack 2 = development
  stack 3 = integration

  i don't want to be in a situation where stack 3 (because of a need to
 run some heavy tests) will use all of the resources for a short while while
 production will suffer from it.

  Any ideas?

  Thanks,
  Dani

  P.S - i'm aware of the heavy work being put into improving the quotas
 or the CPU pinning however that is at the project level
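The use case above amounts to per-stack sub-quotas carved out of one project quota, which neither Nova nor Heat offers today. A hypothetical sketch of the bookkeeping involved (all names and semantics are assumptions for illustration):

```python
# Hypothetical per-stack sub-quotas within one project quota: each stack
# gets a fixed share of the project's instance quota, so a noisy stack
# (e.g. integration running heavy tests) cannot starve production.


class ProjectQuota(object):
    def __init__(self, total_instances):
        self.total = total_instances
        self.stack_limits = {}   # stack name -> instance limit
        self.stack_usage = {}    # stack name -> current usage

    def set_stack_limit(self, stack, limit):
        # The sum of all per-stack limits may not exceed the project quota.
        others = sum(self.stack_limits.values()) - self.stack_limits.get(stack, 0)
        if others + limit > self.total:
            raise ValueError("stack limits exceed project quota")
        self.stack_limits[stack] = limit
        self.stack_usage.setdefault(stack, 0)

    def reserve(self, stack, count=1):
        used = self.stack_usage[stack]
        if used + count > self.stack_limits[stack]:
            raise RuntimeError("stack %s over its sub-quota" % stack)
        self.stack_usage[stack] = used + count


quota = ProjectQuota(total_instances=20)
quota.set_stack_limit("production", 12)
quota.set_stack_limit("integration", 4)
quota.reserve("integration", 4)  # fine: exactly at its limit
```

With this in place, integration hitting its sub-quota raises an error instead of consuming capacity that production needs, which is the guarantee asked for above.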






Re: [openstack-dev] [all] Kilo stable branches for other libraries

2015-04-09 Thread Akihiro Motoki
Neutron team has a plan to release a new version of neutronclient for Kilo.
We held the new release until all granted FFE patches landed,
and now we are almost ready to go (waiting for one patch in the gate).

The planned new version is 2.4.0. This is because neutronclient has used 2.3.x
versions for a long time (including Kilo) and we would like to leave room for
bug fixes for the Juno release.
So we would like to propose the following for Kilo:

  python-neutronclient >=2.4.0,<2.5.0

I am on the same page as Kyle.
I hope this plan is acceptable.

Thanks,
Akihiro


2015-04-10 0:09 GMT+09:00 Thierry Carrez thie...@openstack.org:
 Doug Hellmann wrote:
 Excerpts from Dean Troyer's message of 2015-04-08 09:42:31 -0500:
 On Wed, Apr 8, 2015 at 8:55 AM, Thierry Carrez thie...@openstack.org
 wrote:

 The question is, how should we proceed there ? This is new procedure, so
 I'm a bit unclear on the best way forward and would like to pick our
 collective brain. Should we just push requirements cap for all OpenStack
 libs and create stable branches from the last tagged release everywhere
 ? What about other libraries ? Should we push a cap there too ? Should
 we just ignore the whole thing for the Kilo release for all non-Oslo stuff
 ?

 Provided that represents the code being used for testing at this point, and
 I believe it does, this seems like a sensible default action.  Next cycle
 we can make a bit more noise about when this default action will occur,
 probably pick one of the other existing dates late in the cycle such as RC
 or string freeze or whatever. (Maybe that already happened and I can't
 remember?)

 I had hoped to have the spec approved in time to cut releases around
 the time Oslo did (1 week before feature freeze for applications,
 to allow us to merge the requirements cap before applications
 generate their RC1). At this point, I agree that we should go with
 the most recently tagged versions where possible. It sounds like
 we have a couple of libs that need releases, and we should evaluate
 those on a case-by-case basis, defaulting to not updating the stable
 requirements unless absolutely necessary.

 OK, here is a plan, let me know if it makes sense.

 If necessary:
 Cinder releases python-cinderclient 1.1.2
 Designate releases python-designateclient 1.1.2
 Horizon releases django_openstack_auth 1.2.0
 Ironic releases python-ironicclient 0.5.1

 Then we cap in requirements stable/kilo branch (once it's cut, when all
 RC1s are done):

 python-barbicanclient >=3.0.1,<3.1.0
 python-ceilometerclient >=1.0.13,<1.1.0
 python-cinderclient >=1.1.0,<1.2.0
 python-designateclient >=1.0.0,<1.2.0
 python-heatclient >=0.3.0,<0.5.0
 python-glanceclient >=0.15.0,<0.18.0
 python-ironicclient >=0.2.1,<0.6.0
 python-keystoneclient >=1.1.0,<1.4.0
 python-neutronclient >=2.3.11,<2.4.0
 python-novaclient >=2.22.0,<2.24.0
 python-saharaclient >=0.8.0,<0.9.0
 python-swiftclient >=2.2.0,<2.5.0
 python-troveclient >=1.0.7,<1.1.0
 glance_store >=0.3.0,<0.5.0
 keystonemiddleware >=1.5.0,<1.6.0
 pycadf >=0.8.0,<0.9.0
 django_openstack_auth>=1.1.7,!=1.1.8,<1.3.0

 As discussed we'll add openstackclient while we are at it:

 python-openstackclient>=1.0.0,<1.1.0

 That should trickle down to multiple syncs in multiple projects, which
 we'd merge in a RC2. Next time we'll do it all the same time Oslo did
 it, to avoid creating unnecessary respins (live and learn).

 Anything I missed ?

 Bonus question: will the openstack proposal bot actually propose
 stable/kilo g-r changes to proposed/kilo branches ?
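As a quick way to sanity-check whether a given client release falls inside a cap line like the ones above, here is a minimal stand-in for pip's specifier matching. It only supports the >=, < and != clauses used in this plan and plain numeric versions; the real matching is done by pip/pkg_resources:

```python
# Minimal version-specifier check for cap lines like
# "python-neutronclient>=2.3.11,<2.4.0". Stdlib only; a simplified
# stand-in for pip's PEP 440 specifier matching.


def _ver(s):
    """Parse 'X.Y.Z' into a comparable tuple of ints."""
    return tuple(int(p) for p in s.split("."))


def satisfies(version, spec):
    """spec is a comma-separated list of >=, < and != clauses."""
    for clause in spec.split(","):
        clause = clause.strip()
        if clause.startswith(">="):
            ok = _ver(version) >= _ver(clause[2:])
        elif clause.startswith("!="):
            ok = _ver(version) != _ver(clause[2:])
        elif clause.startswith("<"):
            ok = _ver(version) < _ver(clause[1:])
        else:
            raise ValueError("unsupported clause: %r" % clause)
        if not ok:
            return False
    return True


print(satisfies("2.3.11", ">=2.3.11,<2.4.0"))  # True
print(satisfies("2.4.0", ">=2.3.11,<2.4.0"))   # False
```

So under the proposed caps, the last Kilo-era neutronclient releases stay allowed while the new 2.4.0 discussed in this thread is excluded from stable/kilo.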

 --
 Thierry Carrez (ttx)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Akihiro Motoki amot...@gmail.com



[openstack-dev] [infra] request to disable xenserver CI account

2015-04-09 Thread Matt Riedemann
The XenServer/XenProject third party CI job has been voting -1 on nova 
changes for over 24 hours without a response from the maintainers, so I'd 
like to request that we disable it for now while it's being worked on, since 
it's a voting job and causing noise at kind of a hairy point in the release.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] cinder[stable/juno] Fix the eqlx driver to retry on ssh timeout(Backport)

2015-04-09 Thread Ihar Hrachyshka

I am fine with the change itself, but please split into proper pieces.
Squashing patches is the last resort.

On 04/06/2015 11:33 PM, rajini_...@dell.com wrote:
 *Dell - Internal Use - Confidential *
 
 Hi
 
 Can we have a freeze exception on this CR, please?
 
 https://review.openstack.org/154123
 
 Closes-Bug: #1412940 https://launchpad.net/bugs/1412940 and
 #1417772
 
 
 
 
 
 When the ssh session is timing out, the driver should make attempts
 to retry based on the value in eqlx_cli_max_retries. Instead it was
 raising the exception and bailing out on a single attempt.
 
 
 
 Thanks
 
 
 
 *Rajini Ram*
 
 *IRC: rajinir*
 
 
 
 
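The fix being backported boils down to a retry loop around the SSH command instead of bailing out on the first timeout. A generic sketch of that pattern, in the spirit of the driver's eqlx_cli_max_retries option (this is illustrative, not the actual Cinder code):

```python
# Generic retry-on-timeout loop: instead of raising on the first
# timeout, retry up to a configured maximum (like eqlx_cli_max_retries
# in the eqlx driver). Sketch only; assumes max_retries >= 1.


class SSHTimeout(Exception):
    pass


def run_with_retries(func, max_retries):
    last_exc = None
    for attempt in range(1, max_retries + 1):
        try:
            return func()
        except SSHTimeout as exc:
            last_exc = exc
            # The real driver would log and possibly back off here.
    raise last_exc


calls = {"n": 0}


def flaky_command():
    """Simulated CLI call that times out twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise SSHTimeout("session timed out")
    return "ok"


print(run_with_retries(flaky_command, max_retries=5))  # ok
```

If every attempt times out, the last timeout is re-raised, so callers still see the failure once the retry budget is exhausted.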



Re: [openstack-dev] [heat] suggestion for lock/protect stack blueprint

2015-04-09 Thread KOFFMAN, Noa (Noa)
Hey everyone,


Regarding the lock-stack blueprint, following Steve and Pavlo's
suggestions, I created the following blueprint in heat-specs.

This is the launchpad link:

https://blueprints.launchpad.net/heat/+spec/lock-stack

on Wed, Apr 8, 2015 at 4:54 PM Steve Hardy wrote:
We might consider making this a stack action, e.g. like suspend/resume -
actions are intended for stack-wide operations which affect the stack state
but not its definition, so it seems like potentially a good fit.


on Wed, Apr 8, 2015 at 4:59 PM Pavlo Shchelokovskyy wrote:
would you kindly propose this blueprint as a spec in heat-specs project on 
review.openstack.org? It is way easier to discuss specs in a Gerrit review 
format than in ML.


I would appreciate any comments, suggestions, and reviews.

Thanks

Noa Koffman

Sent from my Android phone using Symantec TouchDown (www.symantec.com)

-Original Message-
From: Pavlo Shchelokovskyy [pshchelokovs...@mirantis.com]
Received: Wednesday, 08 Apr 2015, 16:59
To: OpenStack Development Mailing List (not for usage questions) 
[openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [heat] suggestion for lock/protect stack blueprint

Hi Noa,

would you kindly propose this blueprint as a spec in the heat-specs project on 
review.openstack.org? It is way easier to discuss specs in a Gerrit review 
format than in the ML. If you need help with submitting a spec for review, 
come to our IRC channel (#heat at freenode.net), we'll gladly help you with 
that.

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Wed, Apr 8, 2015 at 3:43 PM, KOFFMAN, Noa (Noa) 
noa.koff...@alcatel-lucent.com wrote:
Hey,

I would like to suggest a blueprint to allow locking/protecting a
stack. Similar to: nova server lock or glance-image --is-protected
flag.
Once a stack is locked, the only operation allowed on the stack is
unlock - heat engine should reject any stack operations and ignore
signals that modify the stack (such as scaling).

The lock operation should have a lock_resources flag (default = True):
When True: perform heat lock and enable lock/protect for each stack
resource that supports it (nova server, glance image,...).
when False: perform heat lock - which would lock the stack and all
nested stacks (actions on resources will not be effected).

Use-cases:
1. we received several requests from application vendors, to allow
maintenance mode for the application. When in maintenance no topology
changes are permitted. For example a maintenance mode is required for
a clustered DB app that needs a manual reboot of one of its servers -
when the server reboots all the other servers are redistributing the
data among themselves which causes high CPU levels which in turn might
cause an undesired scale out (which will cause another CPU spike and so
on...).
2. some cloud-admins have a configuration stack that initializes the
cloud (Creating networks, flavors, images, ...) and these resources
should always exist while the cloud exists. Locking these configuration
stacks, will prevent someone from accidently deleting/modifying the
stack or its resources.

This feature might even raise in significance, once convergence phase 2
is in place, and many other automatic actions are performed by heat.
The ability to manually perform admin actions on the stack with no
interruptions is important.

Any thoughts/comments/suggestions are welcome.

Thanks
Noa Koffman.
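The proposed semantics (while locked, only "unlock" is allowed; modifying signals such as scaling are ignored) can be modeled very simply. A toy sketch for discussion only; the real change would of course live in heat-engine, and these names are hypothetical:

```python
# Toy model of the proposed lock-stack semantics: while a stack is
# locked, operations that modify it are rejected and scaling signals
# are ignored; only unlock is accepted. Illustrative, not Heat code.


class StackLockedError(Exception):
    pass


class Stack(object):
    def __init__(self, name):
        self.name = name
        self.locked = False

    def lock(self):
        self.locked = True

    def unlock(self):
        # The only operation always allowed on a locked stack.
        self.locked = False

    def update(self, template):
        if self.locked:
            raise StackLockedError("stack %s is locked" % self.name)
        return "updated"

    def signal(self, data):
        # Per the proposal, signals that would modify the stack
        # (e.g. scaling) are silently ignored while locked.
        if self.locked:
            return "ignored"
        return "handled"


stack = Stack("db-cluster")
stack.lock()
print(stack.signal({"scale": "out"}))  # ignored
```

This matches use-case 1 above: during the clustered DB's manual maintenance, the CPU spike can fire scaling signals, but a locked stack simply ignores them until the operator unlocks it.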






Re: [openstack-dev] [Nova] Things to tackle in Liberty

2015-04-09 Thread Russell Bryant
On 04/08/2015 05:07 PM, Michael Still wrote:
 There are still a few idle cores, particularly people who have done
 fewer than ten reviews in the last 90 days. We should drop those people
 from core and thank them for their work in the past noting once again
 that this is a natural part of the Open Source process -- those people
 are off working on other problems now and that’s cool.

I fit this description.  :-)

I started a bit of a networking kick in Kilo and I see that continuing
for a while.  I started helping with Neutron a bit, and now I've started
helping build OVN and its OpenStack integration [1].

So, I'd like to go ahead and drop myself from nova*-core.  However,
anyone should feel free to ping me about any specific things that I
could provide input on based on previous involvement.  Thanks, everyone!

 nova-net
 ===
 
 OMG, this is still a thing. We need to actually work out what we’re
 doing here, and then do it. The path isn’t particularly clear to me
 any more, I thought I understood what we needed to do in Kilo, but it
 turns out that operators don’t feel that plan meets their needs.
 Somehow we need to get this work done. This is an obvious candidate
 for a summit session, if we can come up with a concrete proposal to
 discuss.

I totally agree with your sentiment here.  I'm very interested in the
future of networking for OpenStack with an open source backend.  :-)

[1]
http://blog.russellbryant.net/2015/04/08/ovn-and-openstack-integration-development-update/

-- 
Russell Bryant



Re: [openstack-dev] [barbican] Utilizing the KMIP plugin

2015-04-09 Thread Christopher N Solis
Hey John.
Thanks for letting me know about the error. But I think my configuration is
not seeing the kmip_plugin selection.
In my barbican-api.conf file in /etc/barbican I have set
enabled_secretstore_plugins = kmip_plugin

However, I don't think it is creating a KMIPSecretStore instance.
I edited the code in kmip_secret_store.py and put a breakpoint at the very
beginning of the init function.
When I make a barbican request to put a secret in there, it did not stop at
the breakpoint at all.
I put another breakpoint in the store_crypto.py file inside the init
function for the StoreCryptoAdapterPlugin and I
was able to enter the code at that breakpoint.

So even though in my barbican-api.conf file I specified kmip_plugin it
seems to be using the store_crypto plugin instead.

Is there something that might cause this to happen?
I also want to note that my code has the most up to date pull from the
community code.

Here's what my /etc/barbican/barbican-api.conf file has in it:

# = Secret Store Plugin ===
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = kmip_plugin
...
...
...
# == KMIP plugin =
[kmip_plugin]
username = '**'
password = '**'
host = 10.0.2.15
port = 5696
keyfile = '/etc/barbican/rootCA.key'
certfile = '/etc/barbican/rootCA.pem'
ca_certs = '/etc/barbican/rootCA.pem'
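The behavior John describes below (the kmip_plugin crashing on load and stevedore silently skipping it, so another plugin serves the request) can be illustrated with a tiny stand-in for stevedore's named-extension loading. All class names here are mock-ups for illustration, not Barbican's actual plugin code:

```python
# Minimal stand-in for stevedore-style named plugin loading: a plugin
# whose constructor raises (e.g. KMIP certfiles missing) is logged and
# skipped, which is why Barbican can quietly fall back to store_crypto.


class StoreCryptoAdapterPlugin(object):
    name = "store_crypto"


class KMIPSecretStore(object):
    name = "kmip_plugin"

    def __init__(self, certfile_exists=False):
        # Mimics the real plugin failing when required files are absent.
        if not certfile_exists:
            raise IOError("KMIP certfile not found")


def load_plugins(enabled, available):
    loaded = {}
    for name, factory in available.items():
        if name not in enabled:
            continue
        try:
            loaded[name] = factory()
        except Exception:
            # stevedore logs the exception and drops the plugin; the
            # caller never sees the failed extension.
            pass
    return loaded


available = {
    "kmip_plugin": KMIPSecretStore,
    "store_crypto": StoreCryptoAdapterPlugin,
}
plugins = load_plugins(["kmip_plugin", "store_crypto"], available)
print(sorted(plugins))  # ['store_crypto']
```

This is consistent with Christopher's observation: the KMIPSecretStore __init__ breakpoint is never reached (the constructor raised earlier, or the plugin never loaded), while the StoreCryptoAdapterPlugin one is.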


Regards,
Christopher Solis




From:   John Wood john.w...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   04/08/2015 03:16 PM
Subject:Re: [openstack-dev] [barbican] Utilizing the KMIP plugin



Hello Christopher,

My local configuration is indeed seeing the kmip_plugin selection, but when
stevedore tries to load the KMIP plugin it crashes because required files
are missing in my local environment (see
https://github.com/openstack/barbican/blob/master/barbican/plugin/kmip_secret_store.py#L131
) for example.

Stevedore logs the exception but then doesn’t load this module, so when
Barbican asks for an available plugin it doesn’t see it and crashes as you
see. So the root exception from stevedore isn’t showing up in my logs for
some reason, and probably not in yours as well. We’ll try to put up a CR to
at least expose this exception in logs. In the mean time, make sure the
KMIP values checked via that link above are configured on your machine.

Sorry for the inconvenience,
John


From: Christopher N Solis cnso...@us.ibm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, April 8, 2015 at 11:27 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin



Hey John.
I do have the barbican-api.conf file located in the /etc/barbican folder.
But that does not seem to be the one that barbican
reads from. It seems to be reading from the barbican-api.conf file located
in my home directory.
Either way, both have the exact same configurations.

I also checked the setup.cfg file and it does have the line for
kmip_plugin .

Regards,

  CHRIS SOLIS


From: John Wood john.w...@rackspace.com
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Date: 04/07/2015 10:39 AM
Subject: Re: [openstack-dev] [barbican] Utilizing the KMIP plugin





Hello Christopher,

Just checking, but is that barbican-api.conf file located in your local
system’s /etc/barbican folder? If not that is the preferred place for local
development. Modifying the copy that is in your local git repository will
have no effect.

Also, please double check that your local git repository’s setup.cfg has a
line like this in there (at/around #35):

kmip_plugin = barbican.plugin.kmip_secret_store:KMIPSecretStore

Thanks,
John




From: Christopher N Solis cnso...@us.ibm.com
Reply-To: openstack-dev@lists.openstack.org 
openstack-dev@lists.openstack.org
Date: Monday, April 6, 2015 at 10:25 AM
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject: [openstack-dev] [barbican] Utilizing the KMIP plugin


Hello!

Sorry to Kaitlin Farr for not responding directly to your e-mail.
My openstack settings were misconfigured and I was not receiving e-mail
from the dev mailing list.
Thanks for looking into the issue.

I double checked the permissions at the bottom of the kmip_plugin part in
the barbican-api.conf file
and they are set to 400.

I would also like to note that I do not think the code ever actually
entered the __init__ function
of KMIPSecretStore. I put a breakpoint in the __init__ function but 

Re: [openstack-dev] [Nova] Things to tackle in Liberty

2015-04-09 Thread Sylvain Bauza



On 08/04/2015 23:07, Michael Still wrote:

I just wanted to send a note about John running in the PTL election for Nova.

I want to make it clear that I think having more than one candidate is
a good thing -- its a healthy part of a functional democracy, and it
also means regardless of the outcome we have at least one succession
planning option should a PTL need to step down at some point in the
future.

That said, I think there are a few things we need to do in Liberty,
regardless of who is PTL. I started this as a Google doc to share with
John if he won so that we didn’t drop the ball, but then I realised
that nothing here is secret. So, here is my brain dump of things we
need to do in Liberty, in no particular order:

nova-coresec reboot


The nova-coresec team has been struggling recently to keep up with
their workload. We need to drop people off this team who haven’t had
time recently to work on security bugs, and we need to find new people
to volunteer for this team, noting that the team is kept deliberately
small because of embargoed security vulnerabilities. If I am not
re-elected as PTL, I will probably volunteer for this team.

priorities and specs
===

I think the current spec process is starting to work well for us, and
that priorities was a success. We should continue with specs, but with
an attempt to analyse why so many approved specs don’t land (we have
had about 50% of our approved specs not land in Juno and Kilo). Is
that as simple as code review bandwidth? Or is the problem more
complicated than that? We just don’t know until someone goes digging.


As a reviewer, I think it's sometimes hard to figure out which specs to 
look at first, as we have more than 100 changes. For Kilo, I tried to 
query Gerrit using keywords, but that didn't work well: I missed some 
important specs that I only discovered once they were merged.


On the other hand, some specs can be missed even when there is a consensus. 
Could we maybe triage those specs like we do in Launchpad? I don't think 
amending the commit messages is good; I'm thinking more of a dynamic 
etherpad that we can use for finding those.


Now, as a proposer who had 4 specs approved by Kilo but only 2 of them 
landed (all of them part of a priority), I don't even think I can give a 
rule for that. I had a spec which was approved very early in Kilo but 
took most of my engineering effort for the cycle; I had one spec which 
was approved very late, after a high number of iterations, but whose 
implementation was written and merged in less than one week (!); and two 
specs which were quite rational but failed at the implementation stage, 
mainly because some corner cases were not identified at the spec stage.


Based on that experience, I would be tempted to say that we underestimate 
how long it takes to produce a good spec that covers the design issues 
and the implementation details. If a spec is really easy to approve and 
straightforward to implement, I would question whether it is even worth 
submitting a spec for it.
I think the Kilo initiative to reduce the number of specs, by easing what 
can be merged with only a blueprint, moves in the right direction. We 
maybe need to further clarify that a spec is really a design document: 
not only a declaration of intent, but a very technical document which 
presents the steps and the change quite precisely.




Priorities worked well. We need to start talking about what should be
a priority in Liberty now, and the first step is to decide as a team
what we think the big problems we’re trying to solve in Liberty are.


++


nova-core


I think there are a couple of things to be done here.

There are still a few idle cores, particularly people who have done
fewer than ten reviews in the last 90 days. We should drop those people
from core and thank them for their work in the past noting once again
that this is a natural part of the Open Source process -- those people
are off working on other problems now and that’s cool.

We also need to come up with a way to grow more cores. Passive
approaches like asking existing cores to keep an eye out for talent
they trust haven’t worked, so I think it's time to actively start
mentoring core candidates.

I am not convinced that just adding cores will solve our review
bandwidth problems though. We have these conversations about why
people’s reviews sit around without a lot of data to back them up, and
I feel like we often jump to conclusions that feel intuitive but that
aren’t supported by the data.
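Getting that data is mostly a matter of counting review events per person over a window. A small sketch with made-up events, not tied to any real Gerrit query or to actual reviewer names:

```python
# Sketch: flag cores with fewer than a threshold of reviews in a
# trailing window. The events list is fabricated for illustration;
# real data would come from a Gerrit query.

from datetime import date, timedelta

events = [
    ("alice", date(2015, 4, 1)),
    ("alice", date(2015, 3, 20)),
    ("bob", date(2015, 1, 2)),   # outside the 90-day window
]


def idle_cores(events, cores, today, window_days=90, threshold=10):
    cutoff = today - timedelta(days=window_days)
    counts = {core: 0 for core in cores}
    for reviewer, day in events:
        if reviewer in counts and day >= cutoff:
            counts[reviewer] += 1
    return sorted(c for c, n in counts.items() if n < threshold)


print(idle_cores(events, ["alice", "bob"], today=date(2015, 4, 9)))
# ['alice', 'bob']  -- both under ten reviews in the window
```

The same counting, with a larger event set and different thresholds, would also give the harder data the paragraph above asks for about review bandwidth.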

nova-net
===

OMG, this is still a thing. We need to actually work out what we’re
doing here, and then do it. The path isn’t particularly clear to me
any more, I thought I understood what we needed to do in Kilo, but it
turns out that operators don’t feel that plan meets their needs.
Somehow we need to get this work done. This is an obvious candidate
for a summit 

Re: [openstack-dev] [release] oslo.messaging 1.9.0

2015-04-09 Thread Li Ma
https://review.openstack.org/172045

On Thu, Apr 9, 2015 at 9:21 PM, Li Ma skywalker.n...@gmail.com wrote:
 Hi Doug,

 In the global requirements.txt, oslo.messaging version is still >=
 1.8.0 but < 1.9.0. As a result, some bugs fixed in 1.9.0 are still
 there when I deploy with devstack master branch.

 I submitted a review for the update.

 On Wed, Mar 25, 2015 at 10:22 PM, Doug Hellmann d...@doughellmann.com wrote:
 We are content to announce the release of:

 oslo.messaging 1.9.0: Oslo Messaging API

 This is the first release of the library for the Liberty development cycle.

 For more details, please see the git log history below and:

 http://launchpad.net/oslo.messaging/+milestone/1.9.0

 Please report issues through launchpad:

 http://bugs.launchpad.net/oslo.messaging

 Changes in oslo.messaging 1.8.0..1.9.0
 --

 8da14f6 Use the oslo_utils stop watch in decaying timer
 ec1fb8c Updated from global requirements
 84c0d3a Remove 'UNIQUE_ID is %s' logging
 9f13794 rabbit: fix ipv6 support
 3f967ef Create a unique transport for each server in the functional tests
 23dfb6e Publish tracebacks only on debug level
 53fde06 Add pluggability for matchmakers
 b92ea91 Make option [DEFAULT]amqp_durable_queues work
 cc618a4 Reconnect on connection lost in heartbeat thread
 f00ec93 Imported Translations from Transifex
 0dff20b cleanup connection pool return
 2d1a019 rabbit: Improves logging
 0ec536b fix up verb tense in log message
 b9e134d rabbit: heartbeat implementation
 72a9984 Fix changing keys during iteration in matchmaker heartbeat
 cf365fe Minor improvement
 5f875c0 ZeroMQ deployment guide
 410d8f0 Fix a couple typos to make it easier to read.
 3aa565b Tiny problem with notify-server in simulator
 0f87f5c Fix coverage report generation
 3be95ad Add support for multiple namespaces in Targets
 513ce80 tools: add simulator script
 0124756 Deprecates the localcontext API
 ce7d5e8 Update to oslo.context
 eaa362b Remove obsolete cross tests script
 1958f6e Fix the bug redis do not delete the expired keys
 9f457b4 Properly distinguish between server index zero and no server
 0006448 Adjust tests for the new namespace

 Diffstat (except docs and test files)
 -

 .coveragerc|   7 +
 openstack-common.conf  |   6 +-
 .../locale/de/LC_MESSAGES/oslo.messaging.po|  48 ++-
 .../locale/en_GB/LC_MESSAGES/oslo.messaging.po |  48 ++-
 .../locale/fr/LC_MESSAGES/oslo.messaging.po|  40 ++-
 oslo.messaging/locale/oslo.messaging.pot   |  50 ++-
 oslo_messaging/_drivers/amqp.py|  55 +++-
 oslo_messaging/_drivers/amqpdriver.py  |  15 +-
 oslo_messaging/_drivers/common.py  |  20 +-
 oslo_messaging/_drivers/impl_qpid.py   |   4 +-
 oslo_messaging/_drivers/impl_rabbit.py | 357 
 ++---
 oslo_messaging/_drivers/impl_zmq.py|  32 +-
 oslo_messaging/_drivers/matchmaker.py  |   2 +-
 oslo_messaging/_drivers/matchmaker_redis.py|   7 +-
 oslo_messaging/localcontext.py |  16 +
 oslo_messaging/notify/dispatcher.py|   4 +-
 oslo_messaging/notify/middleware.py|   2 +-
 oslo_messaging/openstack/common/_i18n.py   |  45 +++
 oslo_messaging/openstack/common/versionutils.py| 253 +++
 oslo_messaging/rpc/dispatcher.py   |   6 +-
 oslo_messaging/target.py   |   9 +-
 requirements-py3.txt   |  13 +-
 requirements.txt   |  15 +-
 setup.cfg  |   6 +
 test-requirements-py3.txt  |   4 +-
 test-requirements.txt  |   4 +-
 tools/simulator.py | 207 
 tox.ini|   3 +-
 43 files changed, 1673 insertions(+), 512 deletions(-)


 Requirements updates
 

 diff --git a/requirements-py3.txt b/requirements-py3.txt
 index 05cb050..4ec18c6 100644
 --- a/requirements-py3.txt
 +++ b/requirements-py3.txt
 @@ -5,5 +5,6 @@
 -oslo.config>=1.9.0  # Apache-2.0
 -oslo.serialization>=1.2.0   # Apache-2.0
 -oslo.utils>=1.2.0   # Apache-2.0
 -oslo.i18n>=1.3.0  # Apache-2.0
 -stevedore>=1.1.0  # Apache-2.0
 +oslo.config>=1.9.3,<1.10.0  # Apache-2.0
 +oslo.context>=0.2.0,<0.3.0 # Apache-2.0
 +oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
 +stevedore>=1.3.0,<1.4.0  # Apache-2.0
 @@ -21 +22 @@ kombu>=2.5.0
 -oslo.middleware>=0.3.0  # Apache-2.0
 +oslo.middleware>=1.0.0,<1.1.0  # Apache-2.0
 diff --git a/requirements.txt b/requirements.txt
 index 

Re: [openstack-dev] [release] oslo.messaging 1.9.0

2015-04-09 Thread Doug Hellmann
Excerpts from Li Ma's message of 2015-04-09 21:21:40 +0800:
 Hi Doug,
 
 In the global requirements.txt, oslo.messaging version is still
 >=1.8.0 but <1.9.0. As a result, some bugs fixed in 1.9.0 are still
 there when I deploy with devstack master branch.
 
 I submitted a review for the update.

At this point we have frozen the requirements for kilo (still master for
most of the applications, I think). So rather than updating that
requirement, we need to back-port the appropriate fixes to the
stable/kilo branch of oslo.messaging. I'm sure the messaging team would
appreciate your help submitting any of those cherry-picked fixes. Mehdi
put together a list of candidates in [1].

Doug

[1] https://etherpad.openstack.org/p/oslo-messaging-kilo-potential-backports
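
As a side note, the cap semantics in question (>=1.8.0 but <1.9.0) are easy to
sanity-check. A toy stdlib sketch follows; this is not how pip actually parses
specifiers (it ignores pre-release and epoch rules), it only illustrates why a
capped stable branch cannot pick up 1.9.0 and fixes must be back-ported instead:

```python
def parse(version):
    """Turn 'X.Y.Z' into a comparable tuple of ints (toy parser)."""
    return tuple(int(part) for part in version.split("."))

def in_cap(version, lower, upper):
    """True when version satisfies >=lower,<upper (the g-r cap style)."""
    return parse(lower) <= parse(version) < parse(upper)

# 1.8.x releases stay installable under the kilo cap...
print(in_cap("1.8.2", "1.8.0", "1.9.0"))  # True
# ...but 1.9.0 itself is excluded, hence the back-port route.
print(in_cap("1.9.0", "1.8.0", "1.9.0"))  # False
```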

 
 On Wed, Mar 25, 2015 at 10:22 PM, Doug Hellmann d...@doughellmann.com wrote:
  We are content to announce the release of:
 
  oslo.messaging 1.9.0: Oslo Messaging API
 
  This is the first release of the library for the Liberty development cycle.
 
  For more details, please see the git log history below and:
 
  http://launchpad.net/oslo.messaging/+milestone/1.9.0
 
  Please report issues through launchpad:
 
  http://bugs.launchpad.net/oslo.messaging
 
  Changes in oslo.messaging 1.8.0..1.9.0
  --
 
  8da14f6 Use the oslo_utils stop watch in decaying timer
  ec1fb8c Updated from global requirements
  84c0d3a Remove 'UNIQUE_ID is %s' logging
  9f13794 rabbit: fix ipv6 support
  3f967ef Create a unique transport for each server in the functional tests
  23dfb6e Publish tracebacks only on debug level
  53fde06 Add pluggability for matchmakers
  b92ea91 Make option [DEFAULT]amqp_durable_queues work
  cc618a4 Reconnect on connection lost in heartbeat thread
  f00ec93 Imported Translations from Transifex
  0dff20b cleanup connection pool return
  2d1a019 rabbit: Improves logging
  0ec536b fix up verb tense in log message
  b9e134d rabbit: heartbeat implementation
  72a9984 Fix changing keys during iteration in matchmaker heartbeat
  cf365fe Minor improvement
  5f875c0 ZeroMQ deployment guide
  410d8f0 Fix a couple typos to make it easier to read.
  3aa565b Tiny problem with notify-server in simulator
  0f87f5c Fix coverage report generation
  3be95ad Add support for multiple namespaces in Targets
  513ce80 tools: add simulator script
  0124756 Deprecates the localcontext API
  ce7d5e8 Update to oslo.context
  eaa362b Remove obsolete cross tests script
  1958f6e Fix the bug redis do not delete the expired keys
  9f457b4 Properly distinguish between server index zero and no server
  0006448 Adjust tests for the new namespace
 
  Diffstat (except docs and test files)
  -
 
  .coveragerc|   7 +
  openstack-common.conf  |   6 +-
  .../locale/de/LC_MESSAGES/oslo.messaging.po|  48 ++-
  .../locale/en_GB/LC_MESSAGES/oslo.messaging.po |  48 ++-
  .../locale/fr/LC_MESSAGES/oslo.messaging.po|  40 ++-
  oslo.messaging/locale/oslo.messaging.pot   |  50 ++-
  oslo_messaging/_drivers/amqp.py|  55 +++-
  oslo_messaging/_drivers/amqpdriver.py  |  15 +-
  oslo_messaging/_drivers/common.py  |  20 +-
  oslo_messaging/_drivers/impl_qpid.py   |   4 +-
  oslo_messaging/_drivers/impl_rabbit.py | 357 ++---
  oslo_messaging/_drivers/impl_zmq.py|  32 +-
  oslo_messaging/_drivers/matchmaker.py  |   2 +-
  oslo_messaging/_drivers/matchmaker_redis.py|   7 +-
  oslo_messaging/localcontext.py |  16 +
  oslo_messaging/notify/dispatcher.py|   4 +-
  oslo_messaging/notify/middleware.py|   2 +-
  oslo_messaging/openstack/common/_i18n.py   |  45 +++
  oslo_messaging/openstack/common/versionutils.py| 253 +++
  oslo_messaging/rpc/dispatcher.py   |   6 +-
  oslo_messaging/target.py   |   9 +-
  requirements-py3.txt   |  13 +-
  requirements.txt   |  15 +-
  setup.cfg  |   6 +
  test-requirements-py3.txt  |   4 +-
  test-requirements.txt  |   4 +-
  tools/simulator.py | 207 
  tox.ini|   3 +-
  43 files changed, 1673 insertions(+), 512 deletions(-)
 
 
  Requirements updates
  
 
  diff --git a/requirements-py3.txt b/requirements-py3.txt
  index 05cb050..4ec18c6 100644
  --- a/requirements-py3.txt
  +++ b/requirements-py3.txt
  @@ -5,5 +5,6 @@
  -oslo.config>=1.9.0  # Apache-2.0
  -oslo.serialization>=1.2.0   # Apache-2.0
  -oslo.utils>=1.2.0   # Apache-2.0
  -oslo.i18n>=1.3.0  # Apache-2.0
  

Re: [openstack-dev] [release] oslo.messaging 1.9.0

2015-04-09 Thread Li Ma
OK. I didn't notice because requirements project doesn't have stable/kilo yet.
Thanks for explanation.

On Thu, Apr 9, 2015 at 9:37 PM, Doug Hellmann d...@doughellmann.com wrote:
 Excerpts from Li Ma's message of 2015-04-09 21:21:40 +0800:
 Hi Doug,

 In the global requirements.txt, oslo.messaging version is still
 >=1.8.0 but <1.9.0. As a result, some bugs fixed in 1.9.0 are still
 there when I deploy with devstack master branch.

 I submitted a review for the update.

 At this point we have frozen the requirements for kilo (still master for
 most of the applications, I think). So rather than updating that
 requirement, we need to back-port the appropriate fixes to the
 stable/kilo branch of oslo.messaging. I'm sure the messaging team would
 appreciate your help submitting any of those cherry-picked fixes. Mehdi
 put together a list of candidates in [1].

 Doug

 [1] https://etherpad.openstack.org/p/oslo-messaging-kilo-potential-backports


 On Wed, Mar 25, 2015 at 10:22 PM, Doug Hellmann d...@doughellmann.com 
 wrote:
  We are content to announce the release of:
 
  oslo.messaging 1.9.0: Oslo Messaging API
 
  This is the first release of the library for the Liberty development cycle.
 
  For more details, please see the git log history below and:
 
  http://launchpad.net/oslo.messaging/+milestone/1.9.0
 
  Please report issues through launchpad:
 
  http://bugs.launchpad.net/oslo.messaging
 

Re: [openstack-dev] [Nova] Things to tackle in Liberty

2015-04-09 Thread Neil Jerram

On 08/04/15 22:07, Michael Still wrote:


priorities and specs
===

I think the current spec process is starting to work well for us, and
that priorities was a success. We should continue with specs, but with
an attempt to analyse why so many approved specs don’t land [...]



nova-net
===

OMG, this is still a thing. We need to actually work out what we’re
doing here, and then do it. [...]



conclusion


I make no claim that my list is exhaustive. What else do you think we
should be tackling in Liberty?


Something kind of related to two of the strands above, from the point of 
view of someone who had an approved networking-related Nova spec that 
failed to land for Kilo...


Basically, should Nova try harder to get out of the networking business? 
 Currently the situation is that OpenStack networking experimentation 
is mostly in Neutron (as I assume it should be) but also often requires 
changes to the VIF type code in Nova.  Should we try to close off that 
situation, I would guess through some structural solution that puts all 
the required code changes firmly into Neutron's domain?


I don't want to prejudge what the solution might be.  My point here is 
to suggest discussing and deciding whether this could be a worthwhile 
priority.  If it sounds of interest, I could add something to the 
etherpad for Nova design session ideas.


(I appreciate that the nova-net question is way bigger and more 
practically important overall than my specific point about the VIF type 
code.  However it is possible that the VIF type code contributes to a 
continuing lack of clarity about where networking function lies in 
OpenStack.)


Regards,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] request to disable xenserver CI account

2015-04-09 Thread Jeremy Stanley
On 2015-04-09 16:13:13 -0500 (-0500), Matt Riedemann wrote:
 The XenServer/XenProject third party CI job has been voting -1 on
 nova changes for over 24 hours without a response from the
 maintainers so I'd like to request that we disable for now while
 it's being worked since it's a voting job and causing noise at
 kind of a hairy point in the release.

According to Gerrit, you personally (as a member of the nova-release
group) have access to remove them from the nova-ci group to stop
them being able to -1/+1 nova changes. You should be able to do it
via https://review.openstack.org/#/admin/groups/511,members but let
me know if that's not working for some reason.
-- 
Jeremy Stanley



Re: [openstack-dev] [all] how to send messages (and events) to our users

2015-04-09 Thread Min Pae
I would agree that the reason Monasca had to roll its own notification
system is likely that one wasn't available, but whether that should be a
separate, stand-alone project or an integrated service (part of
something like Monasca) is, to some extent, debatable.

No argument on the fact that this is a cross-cutting concern, and as
previous posts said it would be great to have a common mechanism for
publishing notifications to users.

Whether the notification system/service is separated or integrated, what
would be the best method to use?  Angus started the thread asking whether
such a service should be something that pushes to other existing endpoints
that the user supplies (syslog), or a publicly available message queue
(zaqar), and there was a suggestion to use something like AMQP as well.

I assert that a public/user-facing notification system should be web
centric/native for the reason I cited before: one of the consumers of such
a notification system will very likely be web browsers (perhaps even
horizon itself).  If there’s agreement on this point (which I guess there
isn’t, or nobody has really chimed in on it yet), then the next thing would
be to identify the best protocol/transport for communicating the
notifications.  I threw out Atom as a potential method, but by no means am
I advocating Atom, just that it be a web centric protocol.

Also, I’m of the mind that there should be a
multi-tiered approach to notification/event messaging, where there’s
probably an internal message bus (be it rabbitmq, kafka, activemq, or what
have you) that all services publish to for consumption by other services,
and a consumer of said internal message bus that then publishes the events
publicly to users in a web native protocol.  BTW, I don’t mean open access
public, I mean public facing public.  There should still be access controls
on consuming the public facing notifications.
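
As an illustration of that multi-tiered flow (internal bus, then a sanitising
consumer, then a public per-tenant feed), here is a toy sketch. Every field
name and the OPERATOR_ONLY list are made-up assumptions, not any project's
actual notification schema:

```python
# Fields that must never leak to the user-facing feed (assumed list).
OPERATOR_ONLY = {"host", "ip_address", "config", "hypervisor"}

def sanitize(notification):
    """Strip operator-only payload fields from an internal notification."""
    payload = {k: v for k, v in notification["payload"].items()
               if k not in OPERATOR_ONLY}
    return {"event_type": notification["event_type"],
            "tenant_id": notification["tenant_id"],
            "payload": payload}

# Stand-in for the public, per-tenant feed (Zaqar, Atom, websockets...).
public_feeds = {}

def publish(notification):
    """Consume from the internal bus, publish the sanitized copy publicly."""
    event = sanitize(notification)
    public_feeds.setdefault(event["tenant_id"], []).append(event)

publish({"event_type": "compute.instance.create.end",
         "tenant_id": "t1",
         "payload": {"instance_id": "abc", "host": "compute-3",
                     "ip_address": "10.0.0.5", "state": "active"}})
print(public_feeds["t1"][0]["payload"])  # {'instance_id': 'abc', 'state': 'active'}
```

Access control on reading `public_feeds` (keystone-scoped, per the point above)
is deliberately left out of the sketch.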

- Min

On Wed, Apr 8, 2015, 5:46 PM Halterman, Jonathan jonathan.halter...@hp.com
wrote:

 The ability to send general purpose notifications is clearly a
 cross-cutting concern. The absence of an AWS SNS like service in OpenStack
 is the reason that services like Monasca had to roll their own
 notifications. This has been a gaping hole in the OpenStack portfolio for a
 while, and I think the right way to think of a solution is as a new
 service built around a pub/sub notification API (again, see SNS) as opposed
 to something which merely exposes OpenStack’s internal messaging
 infrastructure in some way (that would be inappropriate).

 Cheers,
 Jonathan

 From: Vipul Sabhaya vip...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Wednesday, April 8, 2015 at 5:18 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] [all] how to send messages (and events) to
 our users

 On Wed, Apr 8, 2015 at 4:45 PM, Min Pae sputni...@gmail.com wrote:



  an under-the-cloud service? - That is not what I am after here.

 I think the thread went off on a tangent and this point got lost.  A
 user facing notification system absolutely should be a web centric
 protocol, as I imagine one of the big consumers of such a system will be
  monitoring dashboards, which are trending more and more toward rich
  client-side “Single Page Applications”.  AMQP would not work well in such cases.



 So is the yagi + atom hopper solution something we can point end-users
 to?
 Is it per-tenant etc...


 While I haven’t seen it yet, if that solution provides a means to expose
 the atom events to end users, it seems like a promising start.  The thing
 that’s required, though, is authentication/authorization that’s tied in to
 keystone, so that notification regarding a tenant’s resource is available
 only to that tenant.


 Sandy, do you have a write up somewhere on how to set this up so I can
 experiment a bit?

 Maybe this needs to be a part of Cue?


 Sorry, Cue’s goal is to provision Message Queue/Broker services and
 manage them, just like Trove provisions and manages databases.  Cue would
 be ideally used to stand up and scale the RabbitMQ cluster providing
 messaging for an application backend, but it does not provide messaging
 itself (that would be Zaqar).



 Agree — I don’t think a multi-tenant notification service (which we seem
 to be after here) is the goal of Cue.

  That said, Monasca https://wiki.openstack.org/wiki/Monasca seems to have
  implemented the collection, aggregation, and notification of these events.
  What may be missing in Monasca is a mechanism for the tenant to consume
 these events via something other than AMQP.



 - Min

 

Re: [openstack-dev] [all] Kilo stable branches for other libraries

2015-04-09 Thread Morgan Fainberg
I am also looking at a still-pending python-keystoneclient release.
This is being added to the ML topic based on the IRC conversation we just
had.

On Thu, Apr 9, 2015 at 8:20 AM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Keystonemiddleware is pending a minor fix to sync g-r in a sane way to
 match the rest of kilo (what we have for keystone et al).

 However, we are blocked because there are no stable juno and icehouse
 branches. I'd like to release python-keystoneclient with the
 requirements update for kilo.

 So keystonemiddleware would receive one more release before the cap.


 On Thursday, April 9, 2015, Thierry Carrez thie...@openstack.org wrote:

 Doug Hellmann wrote:
  Excerpts from Dean Troyer's message of 2015-04-08 09:42:31 -0500:
  On Wed, Apr 8, 2015 at 8:55 AM, Thierry Carrez thie...@openstack.org
  wrote:
 
  The question is, how should we proceed there ? This is new procedure,
 so
  I'm a bit unclear on the best way forward and would like to pick our
  collective brain. Should we just push requirements cap for all
 OpenStack
  libs and create stable branches from the last tagged release
 everywhere
  ? What about other libraries ? Should we push a cap there too ? Should
  we just ignore the whole thing for the Kilo release for all non-Oslo
 stuff
  ?
 
  Provided that represents the code being used for testing at this
 point, and
  I believe it does, this seems like a sensible default action.  Next
 cycle
  we can make a bit more noise about when this default action will occur,
  probably pick one of the other existing dates late in the cycle such
 as RC
  or string freeze or whatever. (Maybe that already happened and I can't
  remember?)
 
  I had hoped to have the spec approved in time to cut releases around
  the time Oslo did (1 week before feature freeze for applications,
  to allow us to merge the requirements cap before applications
  generate their RC1). At this point, I agree that we should go with
  the most recently tagged versions where possible. It sounds like
  we have a couple of libs that need releases, and we should evaluate
  those on a case-by-case basis, defaulting to not updating the stable
  requirements unless absolutely necessary.

 OK, here is a plan, let me know if it makes sense.

 If necessary:
 Cinder releases python-cinderclient 1.1.2
 Designate releases python-designateclient 1.1.2
 Horizon releases django_openstack_auth 1.2.0
 Ironic releases python-ironicclient 0.5.1

 Then we cap in requirements stable/kilo branch (once it's cut, when all
 RC1s are done):

 python-barbicanclient>=3.0.1,<3.1.0
 python-ceilometerclient>=1.0.13,<1.1.0
 python-cinderclient>=1.1.0,<1.2.0
 python-designateclient>=1.0.0,<1.2.0
 python-heatclient>=0.3.0,<0.5.0
 python-glanceclient>=0.15.0,<0.18.0
 python-ironicclient>=0.2.1,<0.6.0
 python-keystoneclient>=1.1.0,<1.4.0
 python-neutronclient>=2.3.11,<2.4.0
 python-novaclient>=2.22.0,<2.24.0
 python-saharaclient>=0.8.0,<0.9.0
 python-swiftclient>=2.2.0,<2.5.0
 python-troveclient>=1.0.7,<1.1.0
 glance_store>=0.3.0,<0.5.0
 keystonemiddleware>=1.5.0,<1.6.0
 pycadf>=0.8.0,<0.9.0
 django_openstack_auth>=1.1.7,!=1.1.8,<1.3.0

 As discussed we'll add openstackclient while we are at it:

 python-openstackclient>=1.0.0,<1.1.0

 That should trickle down to multiple syncs in multiple projects, which
 we'd merge in a RC2. Next time we'll do it all the same time Oslo did
 it, to avoid creating unnecessary respins (live and learn).

 Anything I missed ?

 Bonus question: will the openstack proposal bot actually propose
 stable/kilo g-r changes to proposed/kilo branches?

 --
 Thierry Carrez (ttx)





Re: [openstack-dev] [nova] [stringfreeze] request for SFE for live-migration on system z (review 166130)

2015-04-09 Thread Markus Zoeller
Without the mentioned qemu change the live migration is still fully
functional with volume backed instances just like on x86 hosts, with
the following caveat: The safety check whether the target host CPU is
compatible with the source host CPU is not performed. Therefore, it is
a responsibility of the OpenStack admin triggering the migration to
make sure they are compatible. An example of such an incompatibility
is where the target machine is an older generation than the source
machine, and the guest is using features of the newer machine that are
not available on the older machine.

For OpenStack with System z we think this behavior is acceptable, because
we assume that the typical OpenStack with System z cloud will be a private
one where the OpenStack admin is the same person as the System z admin,
or at least in close contact. These admins know their environment very
well. So we think that this caveat would not prevent any customer from
using live migration on System z.

As you may have noticed I changed the logging from warning to debug.
I'm a bit conflicted between these two possibilities. What's your 
opinion on that?

Regards,
Markus Zoeller (markus_z)

Michael Still mi...@stillhq.com wrote on 04/08/2015 12:33:40 AM:

 From: Michael Still mi...@stillhq.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 04/08/2015 12:39 AM
 Subject: Re: [openstack-dev] [nova] [stringfreeze] request for SFE for
 live-migration on system z (review 166130)
 
 How many users are likely to use live migration without the changes to
 qemu you mention? How functional would the live migration be?
 
 Thanks,
 Michael
 
 On Wed, Apr 8, 2015 at 2:45 AM, Markus Zoeller mzoel...@de.ibm.com 
wrote:
  I'd like to request a string freeze exception for this review:
 
  https://review.openstack.org/#/c/166130/
 
  Justification:
   This fix would enable the live-migration on the system z platform. This
   platform (arch=s390x) doesn't currently support the CPU comparison but
   is working on patch sets in qemu to enable this [1]. With the patch set
   above a customer would at least have the possibility to use the
   live-migration feature with Kilo on system z.
 
  [1] http://lists.gnu.org/archive/html/qemu-devel/2014-05/msg04296.html
 
 
  
 
 
 
 -- 
 Rackspace Australia
 
 
 




Re: [openstack-dev] [all] how to send messages (and events) to our users

2015-04-09 Thread Zane Bitter

On 06/04/15 22:55, Angus Salkeld wrote:

Hi all

For quite some time we (Heat team) have wanted to be able to send
messages to our
users (by user I do not mean the Operator, but the User that is
interacting with the client).

What do I mean by user messages, and how do they differ from our
current log messages
and notifications?
- Our current logs are for the operator and have information that the
user should not have
   (ip addresses, hostnames, configuration options, other tenant info etc..)
- Our notifications (that Ceilometer uses) *could* be used, but I am not
sure if it quite fits.
   (they seem a bit heavy weight for a log message and aimed at higher
level events)

These messages could be (based on Heat's use case):

- Specific user oriented log messages (distinct from our normal operator
logs)
- Deprecation messages (if they are using old resource
properties/template features)
- Progress and resource state changes (an application doesn't want to
poll an api for a state change)
- Automated actions (autoscaling events, time based actions)
- Potentially integrated server logs (from in guest agents)

I wanted to raise this to [all] as it would be great to have a general
solution that
all projects can make use of.

What do we have now:
- The user can not get any kind of log message from services. The
closest thing
   ATM is the notifications in Ceilometer, but I have the feeling that
these have a different aim.
- nova console log
- Heat has a DB event table for users (we have long wanted to get rid
of this)

What do other clouds provide:
- https://devcenter.heroku.com/articles/logging
- https://cloud.google.com/logging/docs/
- https://aws.amazon.com/blogs/aws/cloudwatch-log-service/
- http://aws.amazon.com/cloudtrail/
(other examples...)

What are some options we could investigate:
1. remote syslog
 The user provides a rsyslog server IP/port and we send their
messages to that.
 [pros] simple, and the user could also send their server's log
messages to the same
   rsyslog - great visibility into what is going on.

   There are great tools like loggly/logstash/papertrailapp
that source logs from remote syslog
   It leaves the user in control of what tools they get to use.

 [cons] Would we become a spam agent (just sending traffic to an
IP/Port) - I guess that's how remote syslog
works. I am not sure if this is an issue or not?

   This might be a lesser solution for the use case of an
application doesn't want to poll an api for a state change

   I am not sure how we would integrate this with horizon.

2. Zaqar
 We send the messages to a queue in Zaqar.
 [pros] multi tenant OpenStack project for messaging!

 [cons] I don't think Zaqar is installed in most installations (tho'
please correct me here if this
is wrong). I know Mirantis does not currently support


I think you're correct for now, but I also think that the ability to 
send messages to the user side is a valuable enough feature to convince 
many cloud operators to deploy it. Everybody wins in that case.



Zaqar, so that would be a problem for me.

   There is not the level of external tooling like in option
1 (logstash and friends)


IMO whatever solution we choose is going to end up requiring the same 
semantics as Zaqar: durable storage, timeouts of stale messages, 
arbitrary scale-out, multi-tenancy with Keystone auth, pub-sub, and so 
on. That leaves us with 3 options:


1) Use Zaqar

2) Write a ground-up replacement for Zaqar. I hope we can agree that 
this is insane. Particularly if our reason for not using Zaqar is that 
it isn't widely deployed enough yet.


3) Write or make use of something much simpler than Zaqar that 
implements only the exact subset of Zaqar's semantics that we need. 
However, IMHO that is very likely to turn out to be substantially all of 
them, and once again this is unlikely to solve the problem of wide 
deployment before Zaqar.


So, in conclusion, +1 Zaqar.
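
To put some weight behind "substantially all of them": even a toy store of
user notifications immediately needs two of those semantics, message TTL and
per-tenant isolation. This throwaway sketch is an illustration only, not the
Zaqar API; every name in it is made up:

```python
import time

class ToyQueue:
    """Per-tenant message store with TTL expiry: two of the Zaqar
    semantics listed above, reimplemented minimally on purpose."""

    def __init__(self):
        self._messages = {}  # tenant_id -> list of (expiry, body)

    def post(self, tenant_id, body, ttl=300):
        """Store a message that expires `ttl` seconds from now."""
        self._messages.setdefault(tenant_id, []).append(
            (time.time() + ttl, body))

    def claim(self, tenant_id, now=None):
        """Return live messages for one tenant, dropping expired ones."""
        now = time.time() if now is None else now
        live = [(exp, body)
                for exp, body in self._messages.get(tenant_id, [])
                if exp > now]
        self._messages[tenant_id] = live
        return [body for _, body in live]

q = ToyQueue()
q.post("tenant-a", {"msg": "stack CREATE_COMPLETE"}, ttl=60)
q.post("tenant-a", {"msg": "stale event"}, ttl=-1)   # already expired
print(q.claim("tenant-a"))  # [{'msg': 'stack CREATE_COMPLETE'}]
print(q.claim("tenant-b"))  # []
```

Add durable storage, scale-out, Keystone auth and pub-sub on top and the
"simpler replacement" has quietly become Zaqar again.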

However, there are some other interesting questions posed by your email.

- Do we need a separate service to condition the output to a different 
protocol? IMHO no - or, rather, not beyond the ones already proposed for 
Zaqar (long polling, WebSockets, SNS-style notifications). Even if there 
was some better protocol (in principle I'm not opposed to Atom, for 
example), I think we'd benefit more by just adding support for it in 
Zaqar - if it works for this use case then it will work for others that 
users commonly have.


- What should be feeding inputs into the queue? Some service that 
consumes the existing oslo messaging notifications and sanitises them 
for the user? Or would every service publish its user notifications 
directly to the queue? I think this might vary on a case-by-case basis. 
For things like log messages, warnings and the like, I can see that 
having a single place to configure it would be valuable. For 

Re: [openstack-dev] [all] Design Summit - Cross-project track - Session suggestions

2015-04-09 Thread Geoff Arnold
Thanks Thierry. I really like the Google Docs form to submit new proposals. Now 
if someone could please fix the form so that the Topic fields in new entries 
are word-wrapped correctly…   ;-)

Geoff

 On Apr 9, 2015, at 7:33 AM, Thierry Carrez thie...@openstack.org wrote:
 
 Hi everyone,
 
 If you have ideas for nice cross-project topics to discuss at the Design
 Summit in Vancouver, now is the time to propose them.
 
 Given the number of people involved, the etherpad last time ended up as
 a pile of unusable junk that took a while for the track lead to process,
 so for this time we opted for an open Google form with results on a
 Google spreadsheet where anyone can post comments.
 
 Here are the suggestions already posted:
 https://docs.google.com/spreadsheets/d/1vCTZBJKCMZ2xBhglnuK3ciKo3E8UMFo5S5lmIAYMCSE/edit?usp=sharing
 (You should be able to post comments there.)
 
 Here is the form to suggest new ideas:
 http://goo.gl/forms/S69HM6XEeb
 
 We expect to process those starting the week of April 27, so it would be
 great to submit your suggestions before EOD April 26.
 
 Regards,
 
 -- 
 Thierry Carrez (ttx)
 




Re: [openstack-dev] [libvirt] [nova] The risk of hanging when shutdown instance.

2015-04-09 Thread Daniel P. Berrange
On Tue, Mar 31, 2015 at 11:37:04AM +0800, zhang bo wrote:
 On 2015/3/31 4:36, Eric Blake wrote:
 
  On 03/30/2015 06:08 AM, Michal Privoznik wrote:
  On 30.03.2015 11:28, zhang bo wrote:
  On 2015/3/28 18:06, Rui Chen wrote:
 
  snip/
 
The API virDomainShutdown's description is out of date, it's not 
  correct.
In fact, virDomainShutdown would block or not, depending on its mode. 
  If it's in mode *agent*, then it would block until qemu finds that 
  the guest actually went down.
  Otherwise, if it's in mode *acpi*, then it would return immediately.
Thus, maybe further more work need to be done in Openstack.
 
What's your opinions, Michal and Daniel (from libvirt.org), and Chris 
  (from openstack.org) :)
 
 
 
  Yep, the documentation could be better in that respect. I've proposed a
  patch on the libvirt upstream list:
 
  https://www.redhat.com/archives/libvir-list/2015-March/msg01533.html
  
  I don't think a doc patch is right.  If you don't pass any flags, then
  it is up to the hypervisor which method it will attempt (agent or ACPI).
   Yes, explicitly requesting an agent as the only method to attempt might
  be justifiable as a reason to block, but the overall API contract is to
  NOT block indefinitely.  I think that rather than a doc patch, we need
  to fix the underlying bug, and guarantee that we return after a finite
  time even when the agent is involved.
  
 
 So, may we get to a final decision? :) Shall we time out in 
 virDomainShutdown() or leave it to OpenStack?
 The 2 solutions I can see are:
 1) timeout in virDomainShutdown() and virDomainReboot(). in libvirt.
 2) spawn a new thread to monitor the guest's status, if it's not shutoff 
 after dom.shutdown() for a while,
call dom.destroy() to force shut it down.  in openstack.

We should probably do both.
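For reference, option 2 amounts to a watchdog loop like the sketch below. This is only an illustration: the libvirt calls are stubbed out as injected callables, and graceful_shutdown and its parameter names are invented here, not Nova or libvirt API.

```python
import time

def graceful_shutdown(shutdown, is_shut_off, destroy,
                      timeout=60.0, poll_interval=1.0,
                      clock=time.monotonic, sleep=time.sleep):
    """Ask a guest to stop, then force it off if it doesn't comply.

    shutdown/is_shut_off/destroy stand in for dom.shutdown(), a state
    check (e.g. dom.info()[0] == VIR_DOMAIN_SHUTOFF) and dom.destroy().
    Returns True if the guest stopped gracefully, False if destroyed.
    """
    shutdown()                      # may return immediately (ACPI mode)
    deadline = clock() + timeout
    while clock() < deadline:
        if is_shut_off():
            return True             # graceful shutdown observed
        sleep(poll_interval)
    destroy()                       # hard power-off after the grace period
    return False
```

In Nova this would presumably run in a spawned greenthread per instance, which is exactly the bookkeeping that option 1 would push down into libvirt instead.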

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] Kilo stable branches for other libraries

2015-04-09 Thread Kyle Mestery
On Thu, Apr 9, 2015 at 11:23 AM, Akihiro Motoki amot...@gmail.com wrote:

 The Neutron team plans to release a new version of neutronclient for Kilo.
 We held the release until all granted FFE patches landed,
 and now we are almost ready to go. (waiting on one patch in the gate)

 The planned new version is 2.4.0. It is because neutronclient has used
 2.3.x versions for a long time (including Kilo) and we would like to have
 room for bug fixing for the Juno release.
 So we would like to propose the following for Kilo:

   python-neutronclient>=2.4.0,<2.5.0

 I am on the same page with Kyle.
 I hope this plan is acceptable.

 Can we request a requirements FFE for the following patch [1]? This will
set Liberty up to use the 2.5.x series for python-neutronclient, per what
Akihiro and I have planned. The Juno patch should hopefully merge soon [2],
which caps Juno to something appropriate as well.

Thanks
Kyle

[1] https://review.openstack.org/#/c/172149/
[2] https://review.openstack.org/#/c/172150/


 Thanks,
 Akihiro


 2015-04-10 0:09 GMT+09:00 Thierry Carrez thie...@openstack.org:
  Doug Hellmann wrote:
  Excerpts from Dean Troyer's message of 2015-04-08 09:42:31 -0500:
  On Wed, Apr 8, 2015 at 8:55 AM, Thierry Carrez thie...@openstack.org
  wrote:
 
   The question is, how should we proceed there? This is a new procedure,
 so
  I'm a bit unclear on the best way forward and would like to pick our
  collective brain. Should we just push requirements cap for all
 OpenStack
  libs and create stable branches from the last tagged release
 everywhere
  ? What about other libraries ? Should we push a cap there too ? Should
  we just ignore the whole thing for the Kilo release for all non-Oslo
 stuff
  ?
 
  Provided that represents the code being used for testing at this
 point, and
  I believe it does, this seems like a sensible default action.  Next
 cycle
  we can make a bit more noise about when this default action will occur,
  probably pick one of the other existing dates late in the cycle such
 as RC
  or string freeze or whatever. (Maybe that already happened and I can't
  remember?)
 
  I had hoped to have the spec approved in time to cut releases around
  the time Oslo did (1 week before feature freeze for applications,
  to allow us to merge the requirements cap before applications
  generate their RC1). At this point, I agree that we should go with
  the most recently tagged versions where possible. It sounds like
  we have a couple of libs that need releases, and we should evaluate
  those on a case-by-case basis, defaulting to not updating the stable
  requirements unless absolutely necessary.
 
  OK, here is a plan, let me know if it makes sense.
 
  If necessary:
  Cinder releases python-cinderclient 1.1.2
  Designate releases python-designateclient 1.1.2
  Horizon releases django_openstack_auth 1.2.0
  Ironic releases python-ironicclient 0.5.1
 
  Then we cap in requirements stable/kilo branch (once it's cut, when all
  RC1s are done):
 
  python-barbicanclient>=3.0.1,<3.1.0
  python-ceilometerclient>=1.0.13,<1.1.0
  python-cinderclient>=1.1.0,<1.2.0
  python-designateclient>=1.0.0,<1.2.0
  python-heatclient>=0.3.0,<0.5.0
  python-glanceclient>=0.15.0,<0.18.0
  python-ironicclient>=0.2.1,<0.6.0
  python-keystoneclient>=1.1.0,<1.4.0
  python-neutronclient>=2.3.11,<2.4.0
  python-novaclient>=2.22.0,<2.24.0
  python-saharaclient>=0.8.0,<0.9.0
  python-swiftclient>=2.2.0,<2.5.0
  python-troveclient>=1.0.7,<1.1.0
  glance_store>=0.3.0,<0.5.0
  keystonemiddleware>=1.5.0,<1.6.0
  pycadf>=0.8.0,<0.9.0
  django_openstack_auth>=1.1.7,!=1.1.8,<1.3.0
 
  As discussed we'll add openstackclient while we are at it:
 
  python-openstackclient>=1.0.0,<1.1.0
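(Aside: a cap line of that shape just pins lower <= version < upper. The hand-rolled illustration below shows how the bounds behave; real tools use pkg_resources/PEP 440 semantics, and this toy handles only plain numeric release versions.)

```python
def _key(version):
    # "2.3.11" -> (2, 3, 11): numeric comparison avoids the classic
    # string-compare bug where "2.3.9" > "2.3.11".
    return tuple(int(part) for part in version.split("."))

def satisfies(version, lower, upper):
    """True iff lower <= version < upper, the shape of a stable-branch cap."""
    return _key(lower) <= _key(version) < _key(upper)

# The proposed Kilo cap keeps 2.3.x neutronclient releases, rejects 2.4.0:
print(satisfies("2.3.11", "2.3.11", "2.4.0"))  # True
print(satisfies("2.3.9", "2.3.11", "2.4.0"))   # False (below the floor)
print(satisfies("2.4.0", "2.3.11", "2.4.0"))   # False (hits the ceiling)
```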
 
  That should trickle down to multiple syncs in multiple projects, which
  we'd merge in a RC2. Next time we'll do it all the same time Oslo did
  it, to avoid creating unnecessary respins (live and learn).
 
  Anything I missed ?
 
  Bonus question: will the openstack proposal bot actually propose
  stable/kilo g-r changes to proposed/kilo branches ?
 
  --
  Thierry Carrez (ttx)
 
 



 --
 Akihiro Motoki amot...@gmail.com




Re: [openstack-dev] [Openstack-operators] [Neutron] The specs process, effective operators feedback and product management

2015-04-09 Thread Kyle Mestery
On Thu, Apr 9, 2015 at 9:52 AM, Assaf Muller amul...@redhat.com wrote:

 The Neutron specs process was introduced during the Juno timecycle. At the
 time it
 was mostly a bureaucratic bottleneck (The ability to say no) to ease the
 pain of cores
 and manage workloads throughout a cycle. Perhaps this is a somewhat naive
 outlook,
 but I see other positives, such as more upfront design (Some is better
 than none),
 less high level talk during the implementation review process and more
 focus on the details,
 and 'free' documentation for every major change to the project (Some would
 say this
 is kind of a big deal; What better way to write documentation than to
 force the developers
 to do it in order for their features to get merged).

 Right. Keep in mind that for Liberty we're making changes to this process.
For instance, I've already indicated specs which were approved for Kilo but
failed were moved to kilo-backlog. To get them into Liberty, you just
propose a patch which moves the patch in the liberty directory. We already
have a bunch that have taken this path. I hope we can merge the patches for
these specs in Liberty-1.


 That being said, you can only get a feature merged if you propose a spec,
 and the only
 people largely proposing specs are developers. This ingrains the open
 source culture of
 developer focused evolution, that, while empowering and great for
 developers, is bad
 for product managers, users (That are sometimes under-presented, as is the
 case I'm trying
 to make) and generally causes a lack of a cohesive vision. Like it or not,
 the specs process
 and the driver's team approval process form a sort of product management,
 deciding what
 features will ultimately go in to Neutron and in what time frame.

 We haven't done anything to limit reviews of specs by these other users,
and in fact, I would love for more users to review these specs.


 We shouldn't ignore the fact that we clearly have people and product
 managers pulling the strings
 in the background, often deciding where developers will spend their time
 and what specs to propose,
 for the purpose of this discussion. I argue that managers often don't have
 the tools to understand
 what is important to the project, only to their own customers. The Neutron
 drivers team, on the other hand,
 don't have a clear incentive (Or I suspect the will) to spend enormous
 amounts of time doing 'product management',
 as being a driver is essentially your third or fourth job by this point,
 and are the same people
 solving gate issues, merging code, triaging bugs and so on. I'd like to
 avoid to go in to a discussion of what's
 wrong with the current specs process as I'm sure people have heard me
 complain about this in
 #openstack-neutron plenty of times before. Instead, I'd like to suggest a
 system that would perhaps
 get us to implement specs that are currently not being proposed, and give
 an additional form of
 input that would make sure that the development community is spending its
 time in the right places.

 While these are valid points, the fact that a spec merges isn't an
indication that the code will merge. We have plenty of examples of that in
the past two releases. Thus, there are issues beyond the specs process
which may prevent your code from merging for an approved spec. That said, I
admire your guile in proposing some changes. :)


 While 'super users' have been given more exposure, and operators summits
 give operators
 an additional tool to provide feedback, from a developer's point of view,
 the input is
 non-empirical and scattered. I also have a hunch that operators still feel
 their voice is not being heard.

 Agreed.


 I propose an upvote/downvote system (Think Reddit), where everyone
 (Operators especially) would upload
 paragraph long explanations of what they think is missing in Neutron. The
 proposals have to be actionable
 (So 'Neutron sucks', while of great humorous value, isn't something I can
 do anything about),
 and I suspect the downvote system will help self-regulate that anyway. The
 proposals are not specs, but are
 like product RFEs, so for example there would not be a 'testing' section,
 as these proposals will not
 replace the specs process anyway but augment it as an additional form of
 input. Proposals can range
 from new features (Role based access control for Neutron resources,
 dynamic routing,
 Neutron availability zones, QoS, ...) to quality of life improvements
 (Missing logs, too many
 DEBUG level logs, poor troubleshooting areas with an explanation of what
 could be improved, ...)
 to long standing bugs, Nova network parity issues, and whatever else may
 be irking the operators community.
 The proposals would have to be moderated (Closing duplicates, low quality
 submissions and implemented proposals
 for example) and if that is a concern then I volunteer to do so.
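(For what it's worth, the ranking mechanics of such a board are trivial; the sketch below, with invented fields, is just to make the up/down idea concrete. The hard part is the moderation just described.)

```python
def rank(proposals):
    """Order proposals Reddit-style: net score first, newer first on ties.

    Each proposal is a plain dict with invented fields; this is only to
    illustrate the mechanics, not a design for real tooling.
    """
    return sorted(proposals,
                  key=lambda p: (p["up"] - p["down"], -p["age_days"]),
                  reverse=True)

backlog = [
    {"title": "QoS API", "up": 42, "down": 3, "age_days": 10},
    {"title": "Fewer DEBUG logs", "up": 17, "down": 1, "age_days": 2},
    {"title": "Neutron sucks", "up": 2, "down": 30, "age_days": 5},
]
print([p["title"] for p in rank(backlog)])
# -> ['QoS API', 'Fewer DEBUG logs', 'Neutron sucks']
```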

 Anytime you introduce a voting system you provide incentive to game the
system. I am not in favor of a voting system 

Re: [openstack-dev] [Neutron] - Joining the team - interested in a Debian Developer and experienced Python and Network programmer?

2015-04-09 Thread Mathieu Rohon
Hi Matt,

Jaume did awesome work proposing and implementing a framework for
announcing public IPs with a BGP speaker [1].
Unfortunately, the spec hasn't been merged in kilo. Hope it will be
resubmitted in L.
Your proposal seems to be a mix of Jaume's proposal and the HA router design?

We also play with a BGP speaker (BagPipe[3], derived from ExaBGP, written
in python) for IPVPN attachment [2].

[1]https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
[2]https://launchpad.net/bgpvpn
[3]https://github.com/Orange-OpenSource/bagpipe-bgp

On Thu, Apr 9, 2015 at 3:54 PM, Kyle Mestery mest...@mestery.com wrote:

 On Thu, Apr 9, 2015 at 2:13 AM, Matt Grant m...@mattgrant.net.nz wrote:

 Hi!

 I am just wondering what the story is about joining the neutron team.
 Could you tell me if you are looking for new contributors?

 We're always looking for someone new to participate! Thanks for reaching
 out!


 Previously I have programmed OSPFv2 in Zebra/Quagga, and worked as a
 router developer for Allied Telesyn.  I also have extensive Python
 programming experience, having worked on the DNS Management System.

 Sounds like you have extensive experience programming network elements. :)


 I have been experimenting with IPv6 since 2008 on my own home network,
  and I am currently installing a Juno OpenStack cluster to learn how
  things tick.

 Great, this will give you an overview of things.


 Have you guys ever figured out how to do a hybrid L3 North/South Neutron
 router that propagates tenant routes and networks into OSPF/BGP via a
 routing daemon, and uses floating MAC addresses/costed flow rules via
 OVS to fail over to a hot standby router? There are practical use cases
 for such a thing in smaller deployments.

 BGP integration with L3 is something we'll look at again for Liberty.
 Carl Baldwin leads the L3 work in Neutron, and would be a good person to
 sync with on this work item. I suspect he may be looking for people to help
 integrate the BGP work in Liberty, this may be a good place for you to jump
 in.

  I have a single stand-alone example working by turning off
  neutron-l3-agent network namespace support, and importing the connected
  interface and static routes into Bird and Birdv6. The AMQP connection
  back to the neutron-server is via the upstream interface and is secured
  via transport-mode IPsec (just easier than bothering with https/SSL).
  Bird looks easier to run from neutron than a multi-process Quagga
  implementation, as it is a single process. Incidentally, I am running
  this in an LXC container.

 Nice!


  Could someone please point me in the right direction.  I would love to
 be in Vancouver :-)

 If you're not already on #openstack-neutron on Freenode, jump in there.
 Plenty of helpful people abound. Since you're in New Zealand, I would
 suggest reaching out to Akihiro Motoki (amotoki) on IRC, as he's in Japan
 and closer to your timezone.

 Thanks!
 Kyle

 Best Regards,

 --
 Matt Grant,  Debian and Linux Systems Administration and Consulting
 Mobile: 021 0267 0578
 Email: m...@mattgrant.net.nz









[openstack-dev] [Neutron] - Joining the team - interested in a Debian Developer and experienced Python and Network programmer?

2015-04-09 Thread Matt Grant
Hi!

I am just wondering what the story is about joining the neutron team.
Could you tell me if you are looking for new contributors?

Previously I have programmed OSPFv2 in Zebra/Quagga, and worked as a
router developer for Allied Telesyn.  I also have extensive Python
programming experience, having worked on the DNS Management System.

I have been experimenting with IPv6 since 2008 on my own home network,
and I am currently installing a Juno OpenStack cluster to learn how
things tick.

Have you guys ever figured out how to do a hybrid L3 North/South Neutron
router that propagates tenant routes and networks into OSPF/BGP via a
routing daemon, and uses floating MAC addresses/costed flow rules via
OVS to fail over to a hot standby router? There are practical use cases
for such a thing in smaller deployments.

I have a single stand-alone example working by turning off
neutron-l3-agent network namespace support, and importing the connected
interface and static routes into Bird and Birdv6. The AMQP connection
back to the neutron-server is via the upstream interface and is secured
via transport-mode IPsec (just easier than bothering with https/SSL).
Bird looks easier to run from neutron than a multi-process Quagga
implementation, as it is a single process. Incidentally, I am running
this in an LXC container.
  
Could someone please point me in the right direction.  I would love to
be in Vancouver :-)

Best Regards,

-- 
Matt Grant,  Debian and Linux Systems Administration and Consulting
Mobile: 021 0267 0578
Email: m...@mattgrant.net.nz




Re: [openstack-dev] [all] how to send messages (and events) to our users

2015-04-09 Thread Sandy Walsh
From: Angus Salkeld asalk...@mirantis.com
Sent: Wednesday, April 8, 2015 8:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] how to send messages (and events) to our 
users


I also want to point out that what I'd actually rather see is that all
of the services provide functionality like this. Users would be served
by having an event stream from Nova telling them when their instances
are active, deleted, stopped, started, error, etc.

Also, I really liked Sandy's suggestion to use the notifications on the
backend, and then funnel them into something that the user can consume.
The project they have, yagi, for putting them into atom feeds is pretty
interesting. If we could give people a simple API that says subscribe
to Nova/Cinder/Heat/etc. notifications for instance X, and put them
in an atom feed, that seems like something that would make sense as
an under-the-cloud service that would be relatively low cost and would
ultimately reduce load on API servers.

An under-the-cloud service? That is not what I am after here.


Yeah, we're using this as an under cloud service. Our notifications are only
consumed internally, so it's not a multi-tenant/SaaS solution.


What I am really after is a general OpenStack solution for how end users can
consume service notifications (and replace heat event-list).


Right now there is ceilometer event-list, but as some Ceilometer devs have 
said,
they don't want to store every notification that comes in.

So is the yagi + atom hopper solution something we can point end-users to?
Is it per-tenant etc...

However, there is a team within Rax working on this SaaS offering:
Peter Kazmir and Joe Savak. I'll let them respond with their lessons on
AtomHopper, etc.

Sandy, do you have a write up somewhere on how to set this up so I can 
experiment a bit?

Yagi: https://github.com/rackerlabs/yagi
AtomHopper: http://atomhopper.org/  (java warning)

The StackTach.v3 sandbox is DevStack-for-Notifications. It simulates
notifications (no openstack deploy needed) and it has Yagi set up to
consume them. There's also Vagrant scripts to get you going.

http://www.stacktach.com/install.html
https://github.com/stackforge/stacktach-sandbox

and some, slightly older, screencasts on the Sandbox here:
http://www.stacktach.com/screencasts.html?

We're in the #stacktach channel, by all means ping us if you run into problems.
Or if a Hangout works better for you, just scream :)
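For anyone who wants to poke at the notifications-to-Atom idea without standing up Yagi/AtomHopper, here is a rough stdlib-only sketch of rendering one notification as an Atom entry. The event fields and values are invented for illustration; Yagi's real format will differ.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

def notification_to_atom_entry(event):
    """Render one notification dict as an Atom <entry> element.

    ``event`` is a plain dict such as the fake one below; a real consumer
    would pull these fields out of an oslo.messaging notification.
    """
    ET.register_namespace("", ATOM_NS)
    entry = ET.Element("{%s}entry" % ATOM_NS)
    ET.SubElement(entry, "{%s}id" % ATOM_NS).text = event["message_id"]
    ET.SubElement(entry, "{%s}title" % ATOM_NS).text = event["event_type"]
    ET.SubElement(entry, "{%s}updated" % ATOM_NS).text = event["timestamp"]
    content = ET.SubElement(entry, "{%s}content" % ATOM_NS, type="text")
    content.text = "instance %s is now %s" % (
        event["payload"]["instance_id"], event["payload"]["state"])
    return entry

# A fake compute.instance.update notification, trimmed to the essentials.
fake = {
    "message_id": "urn:uuid:1db8a5c0-5b6d-4e0e-9f0b-000000000001",
    "event_type": "compute.instance.update",
    "timestamp": "2015-04-09T12:00:00Z",
    "payload": {"instance_id": "abc123", "state": "active"},
}
print(ET.tostring(notification_to_atom_entry(fake), encoding="unicode"))
```

A per-tenant feed would then just be a filtered list of such entries wrapped in an Atom feed element.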






Re: [openstack-dev] [Nova] PTL Candidacy

2015-04-09 Thread John Garbutt
On 8 April 2015 at 21:19, Anita Kuno ante...@anteaya.info wrote:
 On 04/08/2015 02:25 PM, John Garbutt wrote:
 Hi,

 I am johnthetubaguy on IRC.

 I would like to run for the OpenStack Compute (Nova) PTL position.

 I currently work as a Principal Engineer at Rackspace, focusing on
 software development for the Rackspace public cloud.

 Background
 ==

 I started working with Nova in late 2010, working on a private cloud
 style packaging of XenServer and OpenStack at Citrix. Later in 2010,
 my efforts moved towards helping maintain the upstream XenServer
 support. In early 2013 I moved to Rackspace to work on their public
 cloud.

 Over the last few releases, I have been helping with some of the
 release management, running some nova meetings, blueprint/specs
 management and in various other Nova relating activities.

 I would like to build on this experience and help continue Nova’s evolution.

 Code Contributions
 ==

  It's no secret that many contributors are finding it harder and harder
 to get their code merged into Nova.

 We need to ensure we maintain (ideally increase) code quality and
  consistency, but we also need to scale out our processes. It's a hard
 problem, but I am sure we can do better.

 I support the idea of moving to a kind of “tick-tock” release for
 Nova. Adopting this would mean Liberty has more room for new
 ‘features’, and the M release will have a similar focus on stability
 to Kilo.

 During Kilo, the focus on fixing bugs and working on fixing up some of
 the technical debt we have accrued. That of course, meant there were
 many features we were unable to merge, because we were focusing more
 on other things.

 There are some really promising ideas, and we need to start trying out
  some of these solutions very soon. I think a key part of why it's hard
 to expand nova-core is because it currently means too much to be
 dropped from nova-core. We need that group to be more fluid.

 Process
 ===

 Not all process is good, but some can be helpful to communication
 between such a large community.

 We are now better at agreeing priorities for a release, and following
  through on that. We are better at reviewing, agreeing and documenting
 plans for features in specs. We are now making better use of dev ref
 to capture longer term work streams, and their aims.

 More importantly, we relaxed a lot of the nova-spec process for
 blueprints that don’t need that level of overhead.

 When we focus our review effort, such as with the trivial patch list,
 we have seen great results. I think we need to expand the groups of
 reviews that need immediate attention to include reviews that a sub
 group feels is now “ready”. As trust builds between the central team
 and the sub group, we can look at how much that can evolve to a more
  formal federated system. But I hope
 better ideas will come along that we can consider and look at
 adopting.

  The key thing: let's continue this evolution, so we can scale out the
 community, keep the quality high, but while keeping everyone
 productive.

 Users and Operators
 ===

 The API v2.1 effort is a great start on the road towards better
 interoperability. This is a key step towards ensuring the compute API
 looks and feels the same regardless of what Nova deployment you are
 pointing at.

 I feel we need to be more upfront about what is known to work, and
 what is unknown. We started down this path for Hypervisor drivers, I
 feel we need to revive this effort, and look at other combinations:
 https://wiki.openstack.org/wiki/HypervisorSupportMatrix#Driver_Testing_Status

 We can look at defining how well tested particular combinations are,
  using a similar methodology to DefCore. But the important thing is
 having open information on what is known to work.

 We are getting clear feedback from our users about some areas of the
 code that need attention. We need to continue to be responsive to
 those requests, and look at ways to improve that process.

 Conclusion
 ==

 This email has got too long and writing is not my strong point. But
 for those who don’t know me, I hope it gives you a good idea about
 where I stand on some of the key issues facing the Nova community.

 Thanks for reading.

 johnthetubaguy


 In the interest of fairness in the getting to know the PTL candidate,
 a question was posed by a community member to one of the Nova PTL
 candidates who posted their nomination to the mailing list previously.
 Would you be willing to address the same issue, which I believe is Nova
 code review workflow, for the benefit of the electorate?

Thanks for asking about this.

I covered some of that in the Code 

Re: [openstack-dev] [stable] [nova] FFE for libvirt: proper monitoring of live migration progress

2015-04-09 Thread Daniel P. Berrange
On Tue, Apr 07, 2015 at 12:57:40PM +0200, Ihar Hrachyshka wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 +344 new lines of code for an exception patch? No way. Let's take time
 to consider the patch outside the very next release.

Agreed, I think it needs a little more time to soak in master and get
some real world exposure before we push it to stable. Ideally some
operators would give us some positive feedback that it is working
correctly for them.
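For context, the monitoring approach boils down to classifying the migration from observed domain states rather than from migrateToURI's return value. A toy sketch follows; the state names are simplified stand-ins for libvirt's VIR_DOMAIN_* constants, and this is not the actual patch logic.

```python
def migration_outcome(snapshots):
    """Classify a live migration from (source_state, dest_state) snapshots.

    Illustrative only: real code would poll dom.info()/jobinfo on both
    hosts; "absent" here means the domain no longer exists on that host.
    """
    last = None
    for src, dst in snapshots:
        last = (src, dst)
        if src in ("shutoff", "absent") and dst == "running":
            return "completed"   # guest now lives on the destination
        if src == "running" and dst in ("shutoff", "absent"):
            return "failed"      # destination side gone; source kept running
    return "in-progress at %r" % (last,)
```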

 
 On 04/04/2015 01:31 AM, Billy Olsen wrote:
  Hello,
  
  I would like to get a FFE for patch 
  https://review.openstack.org/#/c/162113/ which fixes an important
  bug (https://bugs.launchpad.net/nova/+bug/1414065) in the handling
  of the VM state during live migration.
  
  This patch fixes the libvirt monitoring of a live migration
  process, which is often needed for the use case of applying
  maintenance to a hypervisor. This patch changes the behavior of the
  live migration code from relying on the success of the migrateToURI
  call to actively monitoring the state of the libvirt domains to
  determine the status of the live migration.
  
  Regards,
  
  -- Billy Olsen

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [neutron] New version of python-neutronclient release for Kilo: 2.4.0

2015-04-09 Thread Kyle Mestery
On Thu, Apr 9, 2015 at 4:00 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 4/9/2015 3:14 PM, Kyle Mestery wrote:

 The Neutron team is proud to announce the release of the latest version
 of python-neutronclient. This release includes the following bug fixes
 and improvements:

 aa1215a Merge Fix one remaining E125 error and remove it from ignore
 list
 cdfcf3c Fix one remaining E125 error and remove it from ignore list
 b978f90 Add Neutron subnetpool API
 d6cfd34 Revert Remove unused AlreadyAttachedClient
 5b46457 Merge Fix E265 block comment should start with '# '
 d32298a Merge Remove author tag
 da804ef Merge Update hacking to 0.10
 8aa2e35 Merge Make secgroup rules more readable in security-group-show
 a20160b Merge Support fwaasrouterinsertion extension
 ddbdf6f Merge Allow passing None for subnetpool
 5c4717c Merge Add Neutron subnet-create with subnetpool
 c242441 Allow passing None for subnetpool
 6e10447 Add Neutron subnet-create with subnetpool
 af3fcb7 Adding VLAN Transparency support to neutronclient
 052b9da 'neutron port-create' missing help info for --binding:vnic-type
 6588c42 Support fwaasrouterinsertion extension
 ee929fd Merge Prefer argparse mutual exclusion
 f3e80b8 Prefer argparse mutual exclusion
 9c6c7c0 Merge Add HA router state to l3-agent-list-hosting-router
 e73f304 Add HA router state to l3-agent-list-hosting-router
 07334cb Make secgroup rules more readable in security-group-show
 639a458 Merge Updated from global requirements
 631e551 Fix E265 block comment should start with '# '
 ed46ba9 Remove author tag
 e2ca291 Update hacking to 0.10
 9b5d397 Merge security-group-rule-list: show all info of rules briefly
 b56c6de Merge Show rules in handy format in security-group-list
 c6bcc05 Merge Fix failures when calling list operations using Python
 binding
 0c9cd0d Updated from global requirements
 5f0f280 Fix failures when calling list operations using Python binding
 c892724 Merge Add commands from extensions to available commands
 9f4dafe Merge Updates pool session persistence options
 ce93e46 Merge Added client calls for the lbaas v2 agent scheduler
 c6c788d Merge Updating lbaas cli for TLS
 4e98615 Updates pool session persistence options
 a3d46c4 Merge Change Creates to Create in help text
 4829e25 security-group-rule-list: show all info of rules briefly
 5a6e608 Show rules in handy format in security-group-list
 0eb43b8 Add commands from extensions to available commands
 6e48413 Updating lbaas cli for TLS
 942d821 Merge Remove unused AlreadyAttachedClient
 a4a5087 Copy functional tests from tempest cli
 dd934ce Merge exec permission to port_test_hook.sh
 30b198e Remove unused AlreadyAttachedClient
 a403265 Merge Reinstate Max URI length checking to V2_0 Client
 0e9d1e5 exec permission to port_test_hook.sh
 4b6ed76 Reinstate Max URI length checking to V2_0 Client
 014d4e7 Add post_test_hook for functional tests
 9b3b253 First pass at tempest-lib based functional testing
 09e27d0 Merge Add OS_TEST_PATH to testr
 7fcb315 Merge Ignore order of query parameters when compared in
 MyUrlComparator
 ca52c27 Add OS_TEST_PATH to testr
 aa0042e Merge Fixed pool and health monitor create bugs
 45774d3 Merge Honor allow_names in *-update command
 17f0ca3 Ignore order of query parameters when compared in MyUrlComparator
 aa0c39f Fixed pool and health monitor create bugs
 6ca9a00 Added client calls for the lbaas v2 agent scheduler
 c964a12 Merge Client command extension support
 e615388 Merge Fix lbaas-loadbalancer-create with no --name
 c61b1cd Merge Make some auth error messages more verbose
 779b02e Client command extension support
 e5e815c Fix lbaas-loadbalancer-create with no --name
 7b8c224 Honor allow_names in *-update command
 b9a7d52 Updated from global requirements
 62a8a5b Make some auth error messages more verbose
 8903cce Change Creates to Create in help text

 For more details on the release, please see the LP page and the detailed
 git log history.

 https://launchpad.net/python-neutronclient/2.4/2.4.0

 Please report any bugs in LP.

 Thanks!
 Kyle


 


 And the gate has exploded on kilo-rc1:

 http://goo.gl/dnfSPC

 Proposed: https://review.openstack.org/#/c/172150/

 The proposed patch is in the merge queue now that fixes this. Hopefully we
can prevent the plague from spreading too much and this merges in < 30
minutes.

Thanks,
Kyle


 --

 Thanks,

 Matt Riedemann




[openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-09 Thread Victor Stinner
Hi,

During the last OpenStack Summit at Paris, we discussed how we can port 
OpenStack to Python 3, because eventlet was not compatible with Python 3. There 
are multiple approaches: port eventlet to Python 3, replace eventlet with 
asyncio, replace eventlet with threads, etc. We decided to not take a decision 
and instead investigate all options.

I fixed 4 issues with monkey-patching in Python 3 (importlib, os.open(), 
threading.RLock, threading.Thread). Good news: the just released eventlet 
0.17.3 includes these fixes and it is now fully compatible with Python 3! For 
example, the Oslo Messaging test suite now passes with this eventlet version! 
Currently, eventlet is disabled in Oslo Messaging on Python 3 (eventlet tests 
are skipped).

I just sent a patch for requirements and Oslo Messaging to bump to eventlet 
0.17.3, but it will have to wait until master moves on to Liberty.

   https://review.openstack.org/#/c/172132/
   https://review.openstack.org/#/c/172135/

It becomes possible to port more projects depending on eventlet to Python 3!

Liberty cycle will be a good opportunity to port more OpenStack components to 
Python 3. Most OpenStack clients and Common Libraries are *already* Python 3 
compatible, see the wiki page:

   https://wiki.openstack.org/wiki/Python3

--

To replace eventlet, I wrote a spec to replace it with asyncio:

   https://review.openstack.org/#/c/153298/

Joshua Harlow wrote a spec to replace eventlet with threads:

   https://review.openstack.org/#/c/156711/
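For readers unfamiliar with the threads option, the core idea can be sketched with nothing but the standard library. This is only an illustration of the general approach, not code from the spec; handle_request is a made-up stand-in for a blocking RPC or database call:

```python
# Minimal sketch of the "replace eventlet with threads" idea: blocking,
# I/O-style work runs on an OS thread pool, and no monkey-patching of the
# standard library is required.
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Pretend this blocks on I/O; real code would call out over the network.
    return "handled %d" % request_id

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() dispatches each request to a worker thread and preserves order.
    results = list(pool.map(handle_request, range(5)))

print(results)
```

The trade-off against eventlet is real OS threads (and the GIL) instead of cooperative green threads, which is exactly what the spec debates.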

But then he wrote a single spec, "Replace eventlet + monkey-patching with ??", 
which covers threads and asyncio:

   https://review.openstack.org/#/c/164035/

Victor



[openstack-dev] [Nova] RC1 state of play

2015-04-09 Thread Michael Still
There are a few bugs still outstanding for nova's RC1. Here's a quick
summary. For each of these we need to either merge the fix, or bump
the bug from being release blocking.

-

https://bugs.launchpad.net/nova/+bug/1427351
cells: hypervisor API extension can't find compute_node services

This still has review https://review.openstack.org/#/c/160506/
outstanding, but a related review has landed. Do we need to land the
outstanding review as well?

-

https://bugs.launchpad.net/nova/+bug/1430239
Hyper-V: *DataRoot paths are not set for instances

This one has https://review.openstack.org/#/c/162999 proposed as a
fix, which has one +2. Does anyone want to review a Hyper-V driver
change?

-

https://bugs.launchpad.net/nova/+bug/1431291
Scheduler Failures are no longer logged with enough detail for a site
admin to do problem determination

Two reviews outstanding here --
https://review.openstack.org/#/c/170421/ and its dependent (and WIP)
https://review.openstack.org/#/c/170472/ -- these seem to be not
really ready. What's the plan here?

-

https://bugs.launchpad.net/nova/+bug/1313573
nova backup fails to backup an instance with attached volume
(libvirt, LVM backed)

For this we've merged a change which raises an exception if you try to
do this, so I think this is no longer release critical? It's still a
valid bug though so this shouldn't be closed.

-

https://bugs.launchpad.net/nova/+bug/1438238
Several concurent scheduling requests for CPU pinning may fail due to
racy host_state handling

The fix is https://review.openstack.org/#/c/169245/, which needs more reviews.




Michael

-- 
Rackspace Australia



Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible

2015-04-09 Thread Joe Gordon
On Thu, Apr 9, 2015 at 9:25 AM, Victor Stinner vstin...@redhat.com wrote:

 Hi,

 During the last OpenStack Summit at Paris, we discussed how we can port
 OpenStack to Python 3, because eventlet was not compatible with Python 3.
 There are multiple approaches: port eventlet to Python 3, replace eventlet
 with asyncio, replace eventlet with threads, etc. We decided to not take a
 decision and instead investigate all options.

 I fixed 4 issues with monkey-patching in Python 3 (importlib, os.open(),
 threading.RLock, threading.Thread). Good news: the just released eventlet
 0.17.3 includes these fixes and it is now fully compatible with Python 3!
 For example, the Oslo Messaging test suite now passes with this eventlet
 version! Currently, eventlet is disabled in Oslo Messaging on Python 3
 (eventlet tests are skipped).

 I just sent a patch for requirements and Oslo Messaging to bump to
 eventlet 0.17.3, but it will have to wait until master moves on to
 Liberty.

https://review.openstack.org/#/c/172132/
https://review.openstack.org/#/c/172135/

 It becomes possible to port more projects depending on eventlet to Python
 3!


Awesome!



 Liberty cycle will be a good opportunity to port more OpenStack components
 to Python 3. Most OpenStack clients and Common Libraries are *already*
 Python 3 compatible, see the wiki page:

https://wiki.openstack.org/wiki/Python3


https://wiki.openstack.org/wiki/Python3#Dependencies appears to be fairly
out of date. For example, hacking works under Python 3.4, as does
oslo.messaging as per this email, etc.

Also what is the status of all the dependencies in
https://github.com/openstack/nova/blob/master/requirements.txt and more
generally
https://github.com/openstack/requirements/blob/master/global-requirements.txt

It would be nice to get a better sense of what the remaining libraries to
port over are before the summit so we can start planning how to do the
python34 migration.
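As a rough, unofficial sketch of how one might take stock: pull the package names out of a requirements file and feed them to a Python 3 compatibility checker such as caniusepython3. The regex and the sample lines below are assumptions for illustration, not the requirements project's actual parser:

```python
import re

def parse_requirement_names(lines):
    """Extract bare package names from requirements-style lines."""
    names = []
    for line in lines:
        line = line.split('#', 1)[0].strip()  # drop trailing comments
        if not line:
            continue
        # Take everything up to the first version specifier or marker.
        match = re.match(r'^[A-Za-z0-9._-]+', line)
        if match:
            names.append(match.group(0))
    return names

# Hypothetical sample lines standing in for global-requirements.txt.
sample = [
    "pbr>=0.6,!=0.7,<1.0",
    "eventlet>=0.17.3  # MIT",
    "",
    "# comment only",
    "oslo.messaging>=1.8.0",
]
print(parse_requirement_names(sample))
```

The resulting names could then be checked against PyPI's Python 3 trove classifiers to build the porting list discussed above.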



 --

 To replace eventlet, I wrote a spec to replace it with asyncio:

https://review.openstack.org/#/c/153298/

 Joshua Harlow wrote a spec to replace eventlet with threads:

https://review.openstack.org/#/c/156711/

 But then he wrote a single spec Replace eventlet + monkey-patching with
 ?? which covers threads and asyncio:

https://review.openstack.org/#/c/164035/

 Victor



Re: [openstack-dev] Advice on a Neutron ACL kludge

2015-04-09 Thread Rich Wellner


On 4/9/15 8:08 AM, Neil Jerram wrote:

I think that people often mean different things by ACLs, so can you be
more precise?

Yeah, you're absolutely right.

What we are trying to do is really simple. We run an HPC facility and 
some of our workload needs port mapping and some needs public IP 
routing. Currently we use static iptables rules to manage this, but 
obviously that means a human is in the loop when things need to change. 
We are trying to get to a point where our switches are reconfigured on 
the fly when VMs are provisioned.


rw2



Re: [openstack-dev] [Nova] Regarding deleting snapshot when instance is OFF

2015-04-09 Thread Kashyap Chamarthy
On Wed, Apr 08, 2015 at 11:31:40PM +0530, Deepak Shetty wrote:
 Hi,
 Cinder w/ GlusterFS backend is hitting the below error as part of
 test_volume_boot_pattern tempest testcase

[Meta comment: since the main components triggering these errors are
Cinder with GlusterFS, adding a Cinder tag would be useful to get the
right folks' attention.]

 (at the end of testcase when it deletes the snap)
 
 File /usr/local/lib/python2.7/dist-packages/libvirt.py, line 792, in blockRebase
 2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver if ret == -1:
 raise libvirtError ('virDomainBlockRebase() failed', dom=self)
 2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver
 libvirtError: Requested operation is not valid: domain is not running
 2015-04-08 07:22:44.376 32701 TRACE nova.virt.libvirt.driver

You'll likely see more details in libvirt's log why virDomainBlockRebase
fails. If you hit this failure on any of the recent Gate runs, then the
libvirt debug logs (now enabled by default) might give some clue.

Also, it would be useful if you can reproduce this issue outside of
Tempest (and its timing issues). Even better, if you can reproduce this
failure w/ just plain Cinder (or even w/o Cinder) to isolate the issue.

 More details in the LP bug [1]

The details in the bug do not provide a reproducer. As always, providing
a crystal-clear reproducer (e.g. a script, a sequence of `virsh`/libvirt
API calls, or the exact Nova/Cinder commands) leading to the failure will
let people look at the bug much more quickly, instead of leaving the
burden of proof on the bug triagers.

 In looking closely at the testcase, it waits for the Instance to turn
 OFF post which the cleanup starts which tried to delete the snap, but
 since the cinder volume is attached state (in-use) it lets nova take
 control of the snap del operation, and nova fails as it cannot do
 blockRebase as domain is offline.


blockRebase (in short, it populates a disk image with data from its
backing image chain, and can act on different flags you provide to it)
cannot operate on an offline image (nor on a persistent libvirt domain,
but Nova deals with that by temporarily undefining the domain and later
redefining it). So, first you might want to figure out why the guest is
offline before the blockRebase call is invoked, to get an understanding
of your questions below.

 Questions:
 
 1) Is this a valid scenario being tested ? Some say yes, I am not
 sure, since the test makes sure that instance is OFF before snap is
 deleted and this doesn't work for fs-backed drivers as they use hyp
 assisted snap which needs domain to be active.
 
 
 2) If this is valid scenario, then it means libvirt.py in nova should
 be modified NOT to raise error, but continue with the snap delete (as
 if volume was not attached) and take care of the dom xml (so that
 domain is still bootable post snap deletion), is this the way to go ?
 
 
 Appreciate suggestions/comments


-- 
/kashyap



Re: [openstack-dev] [Nova] [Cinder] [Tempest] Regarding deleting snapshot when instance is OFF

2015-04-09 Thread Eric Blake
On 04/08/2015 11:22 PM, Deepak Shetty wrote:
 + [Cinder] and [Tempest] in the $subject since this affects them too
 
 On Thu, Apr 9, 2015 at 4:22 AM, Eric Blake ebl...@redhat.com wrote:
 
 On 04/08/2015 12:01 PM, Deepak Shetty wrote:

 Questions:

 1) Is this a valid scenario being tested ? Some say yes, I am not sure,
 since the test makes sure that instance is OFF before snap is deleted and
 this doesn't work for fs-backed drivers as they use hyp assisted snap
 which
 needs domain to be active.

 Logically, it should be possible to delete snapshots when a domain is
 off (qemu-img can do it, but libvirt has not yet been taught how to
 manage it, in part because qemu-img is not as friendly as qemu in having
 a re-connectible Unix socket monitor for tracking long-running progress).

 
 Is there a bug/feature already opened for this ?

Libvirt has this bug: https://bugzilla.redhat.com/show_bug.cgi?id=987719
which tracks generic ability of libvirt to delete snapshots; ideally,
the code to manage snapshots will work for both online and persistent
offline guests, but it may result in splitting the work into multiple bugs.

 I didn't understand much
 on what you
 mean by re-connectible unix socket :)... are you hinting that qemu-img
 doesn't have
 ability to attach to a qemu / VM process for long time over unix socket ?

For online guest control, libvirt normally creates a Unix socket, then
starts qemu with its -qmp monitor pointing to that socket.  That way, if
libvirtd goes away and then restarts, it can reconnect as a client to
the existing socket file, and qemu never has to know that the person on
the other end changed.  With that QMP monitor, libvirt can query qemu's
current state at will, get event notifications when long-running jobs
have finished, and issue commands to terminate long-running jobs early,
even if it is a different libvirtd issuing a later command than the one
that started the command.

qemu-img, on the other hand, only has the -p option or SIGUSR1 signal
for outputting progress to stderr on a long-running operation (not the
most machine-parseable), but is not otherwise controllable.  It does not
have a management connection through a Unix socket.  I guess in thinking
about it a bit more, a Unix socket is not essential; as long as the old
libvirtd starts qemu-img in a manner that tracks its pid and collects
stderr reliably, then restarting libvirtd can send SIGUSR1 to the pid
and track the changes to stderr to estimate how far along things are.
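The pid-plus-SIGUSR1 pattern described above can be sketched in a few lines of Python. A small Python child stands in for qemu-img here (qemu-img itself is not assumed to be installed), but the mechanics are the same: keep the pid, send SIGUSR1, and collect the progress report from stderr:

```python
import os
import signal
import subprocess
import sys
import textwrap
import time

# Hypothetical child that mimics qemu-img's interface: on SIGUSR1 it
# reports progress on stderr while a long-running job is in flight.
child_src = textwrap.dedent("""
    import signal, sys, time
    def report(signum, frame):
        sys.stderr.write("progress: 50%\\n")
        sys.stderr.flush()
    signal.signal(signal.SIGUSR1, report)
    time.sleep(2)  # stand-in for the long-running block operation
""")

proc = subprocess.Popen([sys.executable, "-c", child_src],
                        stderr=subprocess.PIPE)
time.sleep(0.5)                    # give the child time to set its handler
os.kill(proc.pid, signal.SIGUSR1)  # a restarted daemon could do the same,
                                   # provided it had persisted proc.pid
_, err = proc.communicate()
print(err.decode().strip())
```

This is POSIX-only and, as Eric notes, far less convenient than a reconnectible QMP socket: the stderr text must be parsed, and there is no way to query state on demand.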

Also, the idea has been proposed that qemu-img is not necessary; libvirt
could use qemu -M none to create a dummy machine with no CPUs and JUST
disk images, and then use the qemu QMP monitor as usual to perform block
operations on those disks by reusing the code it already has working for
online guests.  But even this approach needs coding into libvirt.

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: [openstack-dev] [release] oslo.messaging 1.9.0

2015-04-09 Thread Doug Hellmann
Excerpts from Li Ma's message of 2015-04-09 21:53:03 +0800:
 OK. I didn't notice because requirements project doesn't have stable/kilo yet.
 Thanks for explanation.
 

No problem, this is a somewhat new process. The stable branch for
the requirements repository should be created pretty soon when all
of the release candidates for the application projects are ready.

Doug

 On Thu, Apr 9, 2015 at 9:37 PM, Doug Hellmann d...@doughellmann.com wrote:
  Excerpts from Li Ma's message of 2015-04-09 21:21:40 +0800:
  Hi Doug,
 
  In the global requirements.txt, oslo.messaging version is still =
  1.8.0 but  1.9.0. As a result, some bugs fixed in 1.9.0 are still
  there when I deploy with devstack master branch.
 
  I submitted a review for the update.
 
  At this point we have frozen the requirements for kilo (still master for
  most of the applications, I think). So rather than updating that
  requirement, we need to back-port the appropriate fixes to the
  stable/kilo branch of oslo.messaging. I'm sure the messaging team would
  appreciate your help submitting any of those cherry-picked fixes. Mehdi
  put together a list of candidates in [1].
 
  Doug
 
  [1] https://etherpad.openstack.org/p/oslo-messaging-kilo-potential-backports
 
 
  On Wed, Mar 25, 2015 at 10:22 PM, Doug Hellmann d...@doughellmann.com 
  wrote:
   We are content to announce the release of:
  
   oslo.messaging 1.9.0: Oslo Messaging API
  
   This is the first release of the library for the Liberty development 
   cycle.
  
   For more details, please see the git log history below and:
  
   http://launchpad.net/oslo.messaging/+milestone/1.9.0
  
   Please report issues through launchpad:
  
   http://bugs.launchpad.net/oslo.messaging
  
   Changes in oslo.messaging 1.8.0..1.9.0
   --
  
   8da14f6 Use the oslo_utils stop watch in decaying timer
   ec1fb8c Updated from global requirements
   84c0d3a Remove 'UNIQUE_ID is %s' logging
   9f13794 rabbit: fix ipv6 support
   3f967ef Create a unique transport for each server in the functional tests
   23dfb6e Publish tracebacks only on debug level
   53fde06 Add pluggability for matchmakers
   b92ea91 Make option [DEFAULT]amqp_durable_queues work
   cc618a4 Reconnect on connection lost in heartbeat thread
   f00ec93 Imported Translations from Transifex
   0dff20b cleanup connection pool return
   2d1a019 rabbit: Improves logging
   0ec536b fix up verb tense in log message
   b9e134d rabbit: heartbeat implementation
   72a9984 Fix changing keys during iteration in matchmaker heartbeat
   cf365fe Minor improvement
   5f875c0 ZeroMQ deployment guide
   410d8f0 Fix a couple typos to make it easier to read.
   3aa565b Tiny problem with notify-server in simulator
   0f87f5c Fix coverage report generation
   3be95ad Add support for multiple namespaces in Targets
   513ce80 tools: add simulator script
   0124756 Deprecates the localcontext API
   ce7d5e8 Update to oslo.context
   eaa362b Remove obsolete cross tests script
   1958f6e Fix the bug redis do not delete the expired keys
   9f457b4 Properly distinguish between server index zero and no server
   0006448 Adjust tests for the new namespace
  
   Diffstat (except docs and test files)
   -
  
   .coveragerc|   7 +
   openstack-common.conf  |   6 +-
   .../locale/de/LC_MESSAGES/oslo.messaging.po|  48 ++-
   .../locale/en_GB/LC_MESSAGES/oslo.messaging.po |  48 ++-
   .../locale/fr/LC_MESSAGES/oslo.messaging.po|  40 ++-
   oslo.messaging/locale/oslo.messaging.pot   |  50 ++-
   oslo_messaging/_drivers/amqp.py|  55 +++-
   oslo_messaging/_drivers/amqpdriver.py  |  15 +-
   oslo_messaging/_drivers/common.py  |  20 +-
   oslo_messaging/_drivers/impl_qpid.py   |   4 +-
    oslo_messaging/_drivers/impl_rabbit.py | 357 ++---
   oslo_messaging/_drivers/impl_zmq.py|  32 +-
   oslo_messaging/_drivers/matchmaker.py  |   2 +-
   oslo_messaging/_drivers/matchmaker_redis.py|   7 +-
   oslo_messaging/localcontext.py |  16 +
   oslo_messaging/notify/dispatcher.py|   4 +-
   oslo_messaging/notify/middleware.py|   2 +-
   oslo_messaging/openstack/common/_i18n.py   |  45 +++
   oslo_messaging/openstack/common/versionutils.py| 253 +++
   oslo_messaging/rpc/dispatcher.py   |   6 +-
   oslo_messaging/target.py   |   9 +-
   requirements-py3.txt   |  13 +-
   requirements.txt   |  15 +-
   setup.cfg  |   6 +
   test-requirements-py3.txt  |   4 +-
   test-requirements.txt  |   4 +-
   tools/simulator.py   

[openstack-dev] [Security] Meeting agenda

2015-04-09 Thread Clark, Robert Graham
Reminder to all, our meeting is today at 1700 UTC on Freenode 
#openstack-meeting-alt

The agenda can be found here: 
https://wiki.openstack.org/wiki/Meetings/OpenStackSecurity#Agenda_for_next_meeting

* Roll Call
* Reminder that the agenda exists
* Update on project status
* security.openstack.org
** Potential as a home for OSSN
** Potential as a home for Developer Security Guidelines
** Security Blog
* OpenStack Security Guide
* Previous Action: nkinder to review sec guide identity section (tmcpeak, 17:23:33)
* Previous Action: shelleea007 to review sec guide network section
* Update
* Open Bugs Requiring Review
* OpenStack Summit
** Developer space / fish bowls
** Entitlement organisation
* OSSN YAML Update
* Elections
** Changes to election terms
** Doing things the OpenStack way

-Rob



Re: [openstack-dev] [neutron] New version of python-neutronclient release for Kilo: 2.4.0

2015-04-09 Thread Matt Riedemann



On 4/9/2015 3:14 PM, Kyle Mestery wrote:

The Neutron team is proud to announce the release of the latest version
of python-neutronclient. This release includes the following bug fixes
and improvements:

aa1215a Merge Fix one remaining E125 error and remove it from ignore list
cdfcf3c Fix one remaining E125 error and remove it from ignore list
b978f90 Add Neutron subnetpool API
d6cfd34 Revert Remove unused AlreadyAttachedClient
5b46457 Merge Fix E265 block comment should start with '# '
d32298a Merge Remove author tag
da804ef Merge Update hacking to 0.10
8aa2e35 Merge Make secgroup rules more readable in security-group-show
a20160b Merge Support fwaasrouterinsertion extension
ddbdf6f Merge Allow passing None for subnetpool
5c4717c Merge Add Neutron subnet-create with subnetpool
c242441 Allow passing None for subnetpool
6e10447 Add Neutron subnet-create with subnetpool
af3fcb7 Adding VLAN Transparency support to neutronclient
052b9da 'neutron port-create' missing help info for --binding:vnic-type
6588c42 Support fwaasrouterinsertion extension
ee929fd Merge Prefer argparse mutual exclusion
f3e80b8 Prefer argparse mutual exclusion
9c6c7c0 Merge Add HA router state to l3-agent-list-hosting-router
e73f304 Add HA router state to l3-agent-list-hosting-router
07334cb Make secgroup rules more readable in security-group-show
639a458 Merge Updated from global requirements
631e551 Fix E265 block comment should start with '# '
ed46ba9 Remove author tag
e2ca291 Update hacking to 0.10
9b5d397 Merge security-group-rule-list: show all info of rules briefly
b56c6de Merge Show rules in handy format in security-group-list
c6bcc05 Merge Fix failures when calling list operations using Python
binding
0c9cd0d Updated from global requirements
5f0f280 Fix failures when calling list operations using Python binding
c892724 Merge Add commands from extensions to available commands
9f4dafe Merge Updates pool session persistence options
ce93e46 Merge Added client calls for the lbaas v2 agent scheduler
c6c788d Merge Updating lbaas cli for TLS
4e98615 Updates pool session persistence options
a3d46c4 Merge Change Creates to Create in help text
4829e25 security-group-rule-list: show all info of rules briefly
5a6e608 Show rules in handy format in security-group-list
0eb43b8 Add commands from extensions to available commands
6e48413 Updating lbaas cli for TLS
942d821 Merge Remove unused AlreadyAttachedClient
a4a5087 Copy functional tests from tempest cli
dd934ce Merge exec permission to port_test_hook.sh
30b198e Remove unused AlreadyAttachedClient
a403265 Merge Reinstate Max URI length checking to V2_0 Client
0e9d1e5 exec permission to port_test_hook.sh
4b6ed76 Reinstate Max URI length checking to V2_0 Client
014d4e7 Add post_test_hook for functional tests
9b3b253 First pass at tempest-lib based functional testing
09e27d0 Merge Add OS_TEST_PATH to testr
7fcb315 Merge Ignore order of query parameters when compared in
MyUrlComparator
ca52c27 Add OS_TEST_PATH to testr
aa0042e Merge Fixed pool and health monitor create bugs
45774d3 Merge Honor allow_names in *-update command
17f0ca3 Ignore order of query parameters when compared in MyUrlComparator
aa0c39f Fixed pool and health monitor create bugs
6ca9a00 Added client calls for the lbaas v2 agent scheduler
c964a12 Merge Client command extension support
e615388 Merge Fix lbaas-loadbalancer-create with no --name
c61b1cd Merge Make some auth error messages more verbose
779b02e Client command extension support
e5e815c Fix lbaas-loadbalancer-create with no --name
7b8c224 Honor allow_names in *-update command
b9a7d52 Updated from global requirements
62a8a5b Make some auth error messages more verbose
8903cce Change Creates to Create in help text

For more details on the release, please see the LP page and the detailed
git log history.

https://launchpad.net/python-neutronclient/2.4/2.4.0

Please report any bugs in LP.

Thanks!
Kyle





And the gate has exploded on kilo-rc1:

http://goo.gl/dnfSPC

Proposed: https://review.openstack.org/#/c/172150/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [puppet][tc] Adding the Puppet modules to OpenStack

2015-04-09 Thread Ben Nemec

Just adding the tc tag so this doesn't get overlooked by the relevant
people.

On 04/09/2015 10:33 AM, Emilien Macchi wrote:
 It has been quite some time now that Puppet OpenStack contributors
 have wanted to be part of the big tent so we would become an
 official project.
 
 We talked about that over our last IRC meetings and decided to
 elect a PTL so we would fit OpenStack requirements.
 
 Today, we officially ask to the OpenStack TC to consider our
 candidacy to be an official project:
 https://review.openstack.org/#/c/172112/
 
 Please let us know any feedback in the review,
 
 
 
 




Re: [openstack-dev] Help needed

2015-04-09 Thread Abhishek Shrivastava
*CAUSE:*

This error happens because of the new pip 6.1.1 release. In that release
they removed the attribute *url* and use *link* in its place, so the code
in the *openstack/requirements* project that was using *url* also needs
to change. I think you don't have the updated *openstack/requirements*
project.
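For illustration, the shape of the fix that update.py needs is roughly the following. The classes here are made-up stand-ins for pip's InstallRequirement, not pip's real API; the point is probing `.link` before touching `.url` so a missing link no longer raises AttributeError:

```python
# Stand-ins for pip's objects: old pip exposed `req.url` directly, while
# pip >= 6.1 exposes `req.link.url` (and `link` may be None).
class FakeLink(object):
    def __init__(self, url):
        self.url = url

class FakeInstallRequirement(object):
    def __init__(self, link=None):
        self.link = link

def requirement_url(req):
    """Return the requirement's URL under old and new pip, or None."""
    link = getattr(req, 'link', None)
    if link is not None:
        return link.url
    return getattr(req, 'url', None)   # fall back to the pre-6.1 attribute

print(requirement_url(FakeInstallRequirement(FakeLink('http://example.com/pkg'))))
print(requirement_url(FakeInstallRequirement()))  # no link: None, no crash
```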

*SOLUTION:*

You can use the following steps to solve this issue:

1. Run ./unstack.sh
2. Go to /opt/stack/ and delete the requirements folder.
3. Go to the devstack folder and run stack.sh again.

On Thu, Apr 9, 2015 at 3:46 PM, Deepika Agrawal deepika...@gmail.com
wrote:

 This is the full log :-
  python update.py /opt/stack/keystone
 Traceback (most recent call last):
   File update.py, line 274, in module
 main(options, args)
   File update.py, line 259, in main
 _copy_requires(options.suffix, options.softupdate, args[0])
   File update.py, line 219, in _copy_requires
 source_reqs = _parse_reqs('global-requirements.txt')
   File update.py, line 140, in _parse_reqs
 reqs[_parse_pip(pip)] = pip
   File update.py, line 101, in _parse_pip
 elif install_require.url:
   File /usr/local/lib/python2.7/dist-packages/pip/req/req_install.py,
 line 128, in url
 return self.link.url
 AttributeError: 'NoneType' object has no attribute 'url'

 On Thu, Apr 9, 2015 at 3:03 PM, Abhishek Shrivastava 
 abhis...@cloudbyte.com wrote:

 Can you give the full log.


 On Thu, Apr 9, 2015 at 2:57 PM, Deepika Agrawal deepika...@gmail.com
 wrote:

 hi guys!
 I am getting AttributeError: 'NoneType' object has no attribute 'url' when
 running python update.py /opt/stack/keystone as part of stack.sh.
  Please help!
 --
 Deepika Agrawal







 --


 *Thanks  Regards,*
 *Abhishek*
 *Cloudbyte Inc. http://www.cloudbyte.com*





 --
 Deepika Agrawal






-- 


*Thanks  Regards,*
*Abhishek*
*Cloudbyte Inc. http://www.cloudbyte.com*


Re: [openstack-dev] [neutron][lbaas][barbican] default certificate manager

2015-04-09 Thread Brandon Logan
Hi Ihar,
So that decision was indeed made hastily, but I still think it was the right 
one.  Let me break down the reasons:

1) To use the local cert manager, the API would have to accept the raw secrets 
(certificate, private key, etc).  We'd have to store that some place, but it 
would have been explicitly documented that the local cert manager was an 
insecure option and should not be used in a production environment.  So that's 
not a huge deal, but still a factor.  Without these fields, the local cert 
manager is useless because a user can't store anything.  

2) If #1 was allowed then the listener would have to accept those fields along 
with a tls_container_id.  That in itself can be confusing, but it could be 
overcome with documentation.  

3) If barbican was in use then it would be expected that the neutron-lbaas API 
would accept the raw secrets, and then it's up to the system to store those 
secrets in barbican.  Who should those secrets be owned by?
a) If we make them owned by the user then you run into the issue of 
them re-using the secrets in some other system.  What happens when the user 
deletes the listener that the secrets were originally created for?
b) If we make them owned by the system then a user can't reuse the same 
secrets, which is a big reason to use barbican.

4) Time.  The options above could definitely have been done, but along with not 
being clear as to which is the best option (if there is one), there wasn't much 
time to actually implement them. 

So given all of that, I think defaulting to barbican was the lesser of many 
evils.  LBaaS v2 is marked as experimental in the docs so that gives us some 
leeway to make some backwards incompatible changes, though the options above 
wouldn't be backwards incompatible.  It's still a signal to users/operators 
that it's experimental.

Thanks,
Brandon  



From: Ihar Hrachyshka ihrac...@redhat.com
Sent: Thursday, April 9, 2015 10:29 AM
To: openstack-dev
Subject: [openstack-dev] [neutron][lbaas][barbican] default certificate manager


Hi lbaas folks,

I've realized recently that the default certificate manager for lbaas
advanced service is now barbican based. Does it mean that to make
default configuration working as is, users will need to deploy
barbican service? If that's really the case, the default choice seems
to be unfortunate. I think it would be better not to rely on external
service in default setup, using local certificate manager.

Is there a particular reason behind the default choice?

Thanks,
/Ihar



Re: [openstack-dev] Help needed

2015-04-09 Thread Abhishek Shrivastava
Can you give the full log.


On Thu, Apr 9, 2015 at 2:57 PM, Deepika Agrawal deepika...@gmail.com
wrote:

 hi guys!
 I am getting AttributeError: 'NoneType' object has no attribute 'url' when
 running python update.py /opt/stack/keystone as part of stack.sh.
  Please help!
 --
 Deepika Agrawal






-- 


*Thanks  Regards,*
*Abhishek*
*Cloudbyte Inc. http://www.cloudbyte.com*


[openstack-dev] [all] Design Summit - Cross-project track - Session suggestions

2015-04-09 Thread Thierry Carrez
Hi everyone,

If you have ideas for nice cross-project topics to discuss at the Design
Summit in Vancouver, now is the time to propose them.

Given the number of people involved, the etherpad last time ended up as
a pile of unusable junk that took a while for the track lead to process,
so for this time we opted for an open Google form with results on a
Google spreadsheet where anyone can post comments.

Here are the suggestions already posted:
https://docs.google.com/spreadsheets/d/1vCTZBJKCMZ2xBhglnuK3ciKo3E8UMFo5S5lmIAYMCSE/edit?usp=sharing
(You should be able to post comments there.)

Here is the form to suggest new ideas:
http://goo.gl/forms/S69HM6XEeb

We expect to process those starting the week of April 27, so it would be
great to submit your suggestions before EOD April 26.

Regards,

-- 
Thierry Carrez (ttx)
