Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
Thomas Goirand z...@debian.org writes:

Hi Thomas,

On 11/15/2014 05:34 PM, Martin Geisler wrote: I'm sorry if I came across as being hostile towards packagers and distros. I've been running Debian for 15 years, and that is because of the work the Debian developers put into making the system work well together as a whole. When it comes to installing software, I only use apt to touch paths outside my home directory. That is to ensure that the integrity of the system isn't compromised. That means that software not yet packaged for Debian has a low chance of being installed by me. However, the chances of me installing it improve significantly if I can install it with pip or npm, simply because this allows me to do a local installation in a home directory -- I know then that I can easily remove the software later.

Sorry to say it this way, and it's not about you in particular,

You're quite right, it's not about me! I'm not about to deploy OpenStack anytime soon, so you don't have to sell the packaging solution to me :) My main goal in this discussion was to bring some web development knowledge to the table. It's clear to me that you have a very strong background in Debian packaging -- and (I'm guessing here) not a very strong background in web development.

What we care about is finding a system that will satisfy both worlds: distributions and fast-moving upstream development. It is looking like NPM has the best features and that it would be a winner against Bower and Grunt.

As Richard said, npm and bower are not competitors. You use npm to install bower, and you use bower to download Angular, jQuery, Bootstrap and other static files. These are the static files that you will want to include when you finally deploy the web app to your server. Before using Bower, people would simply download Angular from the project's homepage and check it into version control. Bower is not doing much, but using it avoids this bad practice.
There is often a kind of compilation step between bower downloading a dependency and the deployment on the webserver: minification and compression of the JavaScript and CSS. Concatenating and minifying the files serves to reduce the number of HTTP requests -- which can make an app much faster.

Finally, you use Grunt/Gulp to execute other tools during development. These tools could be a local web server, or a unit test runner. Grunt is only a convenience tool here -- think of it as a kind of Makefile that tells you how to launch various tasks.

Again, I'm just trying to bring information to light and let you know the tools of the trade -- how you and OpenStack as a whole decide to use and package them is not my concern.

-- Martin Geisler http://google.com/+MartinGeisler

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
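To illustrate the compilation step just described, here is a minimal Gruntfile.js sketch. The task names, file paths, and plugin choices are assumptions for illustration, not Horizon's actual configuration; it relies on the grunt-contrib-concat and grunt-contrib-uglify plugins being installed via npm.

```javascript
// Illustrative Gruntfile.js: concatenate the bower-downloaded and local
// JavaScript into one file, then minify the result for deployment.
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: {
        src: ['bower_components/angular/angular.js',
              'app/js/**/*.js'],
        dest: 'dist/app.js'        // single bundle -> one HTTP request
      }
    },
    uglify: {
      dist: {
        src: 'dist/app.js',        // minify the concatenated bundle
        dest: 'dist/app.min.js'
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('build', ['concat', 'uglify']);
};
```

Running `grunt build` would then produce the minified bundle that actually ships to the webserver.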
[openstack-dev] Ceilometer memory.usage can not get info from libvirt
Hi all,

2014-11-17 16:04:01.563 5162 INFO ceilometer.agent [-] Polling pollster memory.usage in the context of meter_source
2014-11-17 16:04:01.564 5162 DEBUG ceilometer.compute.pollsters.memory [-] Checking memory usage for instance 7e53172c-f05f-4fda-9855-af6775c1f4a8 get_samples /opt/stack/ceilometer/ceilometer/compute/pollsters/memory.py:31
2014-11-17 16:04:01.573 5162 WARNING ceilometer.compute.virt.libvirt.inspector [-] Failed to inspect memory usage of instance-0002, can not get info from libvirt
2014-11-17 16:04:01.574 5162 ERROR ceilometer.compute.pollsters.memory [-] Could not get Memory Usage for 7e53172c-f05f-4fda-9855-af6775c1f4a8: 'NoneType' object has no attribute 'usage'
2014-11-17 16:04:01.574 5162 TRACE ceilometer.compute.pollsters.memory Traceback (most recent call last):
2014-11-17 16:04:01.574 5162 TRACE ceilometer.compute.pollsters.memory   File /opt/stack/ceilometer/ceilometer/compute/pollsters/memory.py, line 37, in get_samples
2014-11-17 16:04:01.574 5162 TRACE ceilometer.compute.pollsters.memory     'usage': memory_info.usage}))
2014-11-17 16:04:01.574 5162 TRACE ceilometer.compute.pollsters.memory AttributeError: 'NoneType' object has no attribute 'usage'

When
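The AttributeError in the traceback comes from `get_samples()` dereferencing `memory_info.usage` when the libvirt inspector returned no data. A hedged sketch of the defensive check that avoids the crash (the class and function names here are illustrative, not ceilometer's actual code):

```python
class NoDataError(Exception):
    """Raised when the hypervisor reports no memory statistics."""


def get_memory_usage(inspector, instance_name):
    """Return memory usage, raising a clear error instead of crashing on None."""
    memory_info = inspector.inspect_memory_usage(instance_name)
    if memory_info is None:
        # libvirt returned nothing -- e.g. the guest lacks a balloon driver
        raise NoDataError("no memory stats for %s" % instance_name)
    return memory_info.usage
```

With a check like this the pollster can log a warning and skip the sample instead of emitting the TRACE above.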
Re: [openstack-dev] [Nova] v2 or v3 for new api
Thank you for the clarification. Yes, I know about the blueprint/specification; I already submitted them and the spec is currently under review :) I noticed there are several steps one always has to do to enable a v3 API and make it work and pass the tests. It would be awesome to have a guideline or something similar that explains these steps, but I didn't find anything in the wiki or documentation. In particular I noticed I had to modify the file nova/nova.egg-info/entry_points.txt to make my v3 API load, but this file seems not to be under version control; is this file modified only after the changes are merged?

On 11/16/14 23:55, Christopher Yeoh wrote: On Thu, Nov 13, 2014 at 12:14 AM, Pasquale Porreca pasquale.porr...@dektech.com.au wrote: Hello, I am working on an API for a new feature in nova, but I am wondering what is the correct way to add a new extension: should it be supported by v2, v3 or both?

You now need to have at least a v2.1 (formerly known as v3) extension. V2 support if you want, but I think once v2.1 is fully merged and tested (which may not be that far away at all) we should freeze v2 and rely just on v2.1 for new features. Otherwise the interaction between v2.1 being exactly equivalent to v2 plus having microversion support for v2.1 will get a bit confusing. As the other Chris mentioned, the first step however is to get a nova-spec submitted which needs to fully describe the API additions that you want to make.
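Regarding the entry_points.txt question above: that file is generated by setuptools/pbr from setup.cfg, which is why it is not under version control; extensions are declared in setup.cfg and the egg-info is regenerated on install. A hedged sketch of what such a declaration looks like (the extension name and module path here are made up for illustration; the entry-point group name follows nova's v3 plugin loading of that era):

```ini
# setup.cfg fragment -- illustrative only. Running `python setup.py develop`
# (or installing the package) regenerates nova.egg-info/entry_points.txt
# from this section, so editing the egg-info by hand is never needed.
[entry_points]
nova.api.v3.extensions =
    my_feature = nova.api.openstack.compute.plugins.v3.my_feature:MyFeature
```

This is why the generated file only changes for everyone else after the setup.cfg change is merged and installed.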
Regards, Chris

BR
--
Pasquale Porreca
DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)
Mobile +39 3394823805
Skype paskporr
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
On 30/10/14 13:13, Matthias Runge wrote: Hi, tl;dr: how to proceed in separating horizon and openstack_dashboard. Options so far:
- horizon_lib / openstack_horizon
- horizon_lib / horizon_dashboard
- horizon_lib / horizon
- horizon / openstack_dashboard
Did I miss something? If not, I'll create a poll. Matthias
Re: [openstack-dev] [neutron] L2 gateway as a service
Hi

On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote: Last Friday I recall we had two discussions around this topic. One in the morning, which I think led Maruti to push [1]. The way I understood [1] was that it is an attempt at unifying [2] and [3], by choosing the API approach of one and the architectural approach of the other.

[1] https://review.openstack.org/#/c/134179/
[2] https://review.openstack.org/#/c/100278/
[3] https://review.openstack.org/#/c/93613/

Then there was another discussion in the afternoon, but I am not 100% sure of the outcome.

Me neither; that's why I'd like Ian, who led this discussion, to sum up the outcome from his point of view.

All this churn makes me believe that we probably just need to stop pretending we can achieve any sort of consensus on the approach and let the different alternatives develop independently, assuming they can all develop independently, and then let natural evolution take its course :)

I tend to agree, but I think that one of the reasons why we are looking for a consensus is that API evolutions proposed through a Neutron spec are rejected by the core devs, because they rely on external components (SDN controller, proprietary hardware...) or they are not a high priority for the Neutron core devs. By finding a consensus, we show that several players are interested in such an API, and it helps to convince the core devs that this use case, and its API, is missing in Neutron. Now, if there is room to easily propose new APIs in Neutron, it makes sense to let new APIs appear and evolve, and then let natural evolution take its course, as you said. To me, this is in the scope of the advanced services project.

Ultimately the biggest debate is on what the API model needs to be for these abstractions. We can judge which one is the best API of all, but sometimes this ends up being a religious fight.
A good API for me might not be a good API for you, even though I strongly believe that a good API is one that:
- is hard to use incorrectly
- is clear to understand
- does one thing, and one thing well

So far I have not been convinced why we'd need to cram more than one abstraction into one single API, as that violates the above-mentioned principles. Ultimately I like the L2 GW API proposed by [1] and [2] because it's in line with those principles. I'd rather start from there and iterate.

My 2c, Armando

On 14 November 2014 08:47, Salvatore Orlando sorla...@nicira.com wrote: Thanks guys. I think you've answered my initial question. Probably not in the way I was hoping it to be answered, but it's ok. So now we have potentially 4 different blueprints describing more or less overlapping use cases that we need to reconcile into one? If the above is correct, then I suggest we go back to the use cases and make an effort to abstract a bit from thinking about how those use cases should be implemented. Salvatore

On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote: Hello all, Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of its use cases is exactly the L2 gateway. These proposals could probably be inserted into a more generic work for moving existing datacenter L2 resources to Neutron. Cheers,

On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi, As far as I understood last Friday afternoon's discussions during the design summit, this use case is in the scope of another umbrella spec which would define external connectivity for neutron networks. Details of that connectivity would be defined through a service plugin API. Ian, do you plan to define such an umbrella spec? Or at least, could you sum up the agreement of the design summit discussion on the ML?
I see at least 3 specs which would be under such an umbrella spec:
https://review.openstack.org/#/c/93329/ (BGPVPN)
https://review.openstack.org/#/c/101043/ (Inter-DC connectivity with VPN)
https://review.openstack.org/#/c/134179/ (l2 gw aas)

On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando sorla...@nicira.com wrote: Thanks Maruti, I have some comments and questions which I've posted on gerrit. There are two things I would like to discuss on the mailing list concerning this effort.
1) Is this spec replacing https://review.openstack.org/#/c/100278 and https://review.openstack.org/#/c/93613 ? I hope so, otherwise this just adds even more complexity.
2) It sounds like you should be able to implement this service plugin in either a feature branch or a repository distinct from neutron. Can you confirm that?
Salvatore

On 13 November 2014 13:26, Kamat, Maruti Haridas maruti.ka...@hp.com wrote: Hi Friends, As discussed during the summit, I have uploaded the spec for review at https://review.openstack.org/#/c/134179/ Thanks, Maruti
[openstack-dev] [Cinder][DR]replication/CG support in driver
Hi all,

We want to add replication/CG-related support in the Huawei driver; would this also be bound to the Dec. 19th deadline? There is confusion about whether the deadline is for new drivers or for all the proposals for Kilo. As I remember from the design summit meetup, it was set for the new drivers coming up in K. Thanks!

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard Patent / IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OpenDaylight, OpenCompute aficionado
Re: [openstack-dev] [neutron] L2 gateway as a service
On Nov 17, 2014, at 9:13 AM, Mathieu Rohon mathieu.ro...@gmail.com wrote: [...] Now, if there is room to easily propose new APIs in Neutron, it makes sense to let new APIs appear and evolve, and then let natural evolution take its course, as you said. To me, this is in the scope of the advanced services project.

I think we need to be careful of the natural tendency to view the new project as a place to put everything that is moving too slowly in neutron.
Certainly advanced services is one of the most obvious use cases of this functionality, but that doesn't mean that the notion of an SDN trunk port should live anywhere but neutron, IMO.

Thanks,
doug
Re: [openstack-dev] Ceilometer memory.usage can not get info from libvirt
As described in the document http://docs.openstack.org/developer/ceilometer/measurements.html#measurements :

"Note: To enable the libvirt memory.usage support, you need libvirt version 1.1.1+ and qemu version 1.5+, and you need to prepare a suitable balloon driver in the image -- particularly for Windows guests; most modern Linuxes have it built in. The memory.usage meters can't be fetched without the image balloon driver."

:)

E-mail: raodingy...@chinacloud.com.cn

From: Du Jun [mailto:dj199...@gmail.com]
Sent: 2014-11-17 16:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Ceilometer memory.usage can not get info from libvirt

Hi all, [...]
[openstack-dev] [nova] mapping of hypervisor support matrix to driver functions - agenda?
On Fri Nov 14 16:32:59 UTC 2014 Daniel P. Berrange wrote: One of the items that came out of the design summit is to produce a formal document to detail the so-called capabilities of virtualization drivers, to replace what's currently in the wiki page you quote, and ultimately provide a much higher level of detail. Mikal and myself had volunteered to do this, and expect that it will finally take the form of a structured document living in the Nova git repository docs directory. Your mapping might prove to be useful input. Perhaps just create a new wiki page and upload it in whatever format you currently have it in, and we'll work forwards from there.

To the best of my knowledge: https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DriverAPI

Regards, Markus Zoeller
IRC: markus_z
Launchpad: mzoeller
Re: [openstack-dev] [neutron] L2 gateway as a service
I think this thread is about the L2 gateway service... how's that related to the notion of a trunk port? I know that the spec under review adds a component which is tantamount to an L2 gateway, but while the functionality is similar, the use case, and therefore the API exposed, are rather different. Am I missing something here?

Salvatore

On 17 November 2014 10:40, Doug Wiegley do...@a10networks.com wrote: [...]
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
This is in testtools 1.4.0, but I can't upload it to PyPI at the moment -- it's returning 500 errors. :( -Rob

On 17 November 2014 18:41, Nikhil Manchanda nik...@manchanda.me wrote: Thanks Robert! Looks like it failed the Travis CI job due to an intermittent connectivity issue, and I don't have the rights to kick off the job again. I would appreciate it if you could kick it off again when you get a chance. Cheers, Nikhil

On Sun, Nov 16, 2014 at 6:44 PM, Robert Collins robe...@robertcollins.net wrote: On 17 November 2014 11:29, Alan Pevec ape...@gmail.com wrote: 2014-11-15 23:06 GMT+01:00 Robert Collins robe...@robertcollins.net: We did find a further issue, which was due to the use of setUpClass in tempest (a thing that testtools has never supported per se -- it has always been a happy accident that it worked). I've hopefully fixed that in 1.3.0 and we're babysitting tempest now to see.

Trove stable/juno py26 unit tests (py27 works) are failing with testtools 1.3.0: http://logs.openstack.org/periodic-stable/periodic-trove-python26-juno/fcf4db2/testr_results.html.gz

  File /home/jenkins/workspace/periodic-trove-python26-juno/trove/tests/unittests/mgmt/test_models.py, line 60, in setUpClass
    super(MockMgmtInstanceTest, cls).setUpClass()
AttributeError: 'super' object has no attribute 'setUpClass'

The pip freeze diff since the last good report is:
-testtools==1.1.0
+testtools==1.3.0
+unittest2==0.8.0

Any ideas?

https://github.com/testing-cabal/testtools/pull/125 will fix that, and I'll cut 1.4.0 with that in it once I get a peer review. -Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud
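The Trove failure above is the classic cooperative-setUpClass pitfall: `super(...).setUpClass()` only works when every base class in the MRO actually provides a `setUpClass` (plain unittest on Python 2.6 did not, which is why py26 broke while py27 passed). A small self-contained illustration of the cooperative pattern:

```python
import unittest


class BaseTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Cooperate with the chain: unittest.TestCase.setUpClass exists
        # on Python 2.7+ and in unittest2, so this super() call is safe
        # there -- but it raised AttributeError on stock Python 2.6.
        super(BaseTest, cls).setUpClass()
        cls.resource = "expensive fixture"


class ChildTest(BaseTest):
    @classmethod
    def setUpClass(cls):
        super(ChildTest, cls).setUpClass()   # runs BaseTest.setUpClass too
        cls.extra = cls.resource + " + extra"

    def test_fixture(self):
        self.assertEqual("expensive fixture + extra", self.extra)
```

On platforms lacking `TestCase.setUpClass`, depending on unittest2 (as the pip freeze diff shows) supplies the missing base-class hook.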
[openstack-dev] Zero MQ remove central broker. Architecture change.
Hi, all!

We want to discuss the opportunity of implementing a peer-to-peer messaging model in oslo.messaging for the ZeroMQ driver. The current architecture uses an uncharacteristic single-broker model; in this way we are ignoring the key 0MQ ideas. Let's describe our point in quotes from the ZeroMQ documentation:

- ZeroMQ has the core technical goals of simplicity and scalability, the core social goal of gathering together the best and brightest minds in distributed computing to build real, lasting solutions, and the political goal of breaking the old hegemony of centralization, as represented by most existing messaging systems prior to ZeroMQ.
- The ZeroMQ Message Transport Protocol (ZMTP) is a transport layer protocol for exchanging messages between two peers over a connected transport layer such as TCP.
- The two peers agree on the version and security mechanism of the connection by sending each other data and either continuing the discussion, or closing the connection.
- The two peers handshake the security mechanism by exchanging zero or more commands. If the security handshake is successful, the peers continue the discussion, otherwise one or both peers closes the connection.
- Each peer then sends the other metadata about the connection as a final command. The peers may check the metadata and each peer decides either to continue, or to close the connection.
- Each peer is then able to send the other messages. Either peer may at any moment close the connection.

From the current code docstring: ZmqBaseReactor(ConsumerBase): "A consumer class implementing a centralized casting broker (PULL-PUSH)." This approach is pretty unusual for ZeroMQ.

Fortunately we have some early prototype work around this problem. These changes could introduce a performance improvement, but to prove it we need to implement all the new features, at least at WIP status. So we need to be sure that the community wouldn't reject such improvements.

Regards, Ilya, Oleksii.
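To make the proposal concrete, here is a minimal sketch (assuming pyzmq is installed) of the brokerless pattern the quotes describe: two peers exchanging a message directly over a PUSH/PULL socket pair, with no central broker relaying in between. The socket types and addresses are illustrative only, not the driver's actual topology.

```python
import zmq

# Direct peer-to-peer PUSH/PULL: the receiving peer binds, the sending
# peer connects straight to it -- no central broker process in between.
ctx = zmq.Context()

pull = ctx.socket(zmq.PULL)                        # receiver binds
port = pull.bind_to_random_port("tcp://127.0.0.1")

push = ctx.socket(zmq.PUSH)                        # sender connects directly
push.connect("tcp://127.0.0.1:%d" % port)

push.send(b"rpc-call")                             # flows peer to peer
msg = pull.recv()

push.close()
pull.close()
ctx.term()
```

Contrast this with the current ZmqBaseReactor design, where every message is first PUSHed to a central receiver and then re-PUSHed to its destination, doubling hops and concentrating load on one node.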
Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon
On 17/11/14 09:53, Martin Geisler wrote: [...]

Thank you for your explanations. The way I see it, we would need:
- Bower in the development environment,
- Grunt both in the development environment and packaged (to run tests, etc.),
- The Bower configuration file in two copies, one for global-requirements and one for Horizon's local requirements, plus a gate job that makes sure no new library or version gets included in Horizon's copy before getting into global-requirements,
- A tool, probably a script, that would help package the Bower packages into DEB/RPM packages. I suspect the Debian/Fedora packagers already have a semi-automatic solution for that.
- A script that would generate a file with all the paths to those packaged libraries, to be included in Horizon's settings.py

What do you think?
--
Radomir Dopieralski
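For concreteness, the "Bower configuration file" mentioned above is a small bower.json; a minimal sketch (the package names and version ranges here are illustrative, not Horizon's actual pins):

```json
{
  "name": "horizon",
  "dependencies": {
    "angular": "1.2.x",
    "jquery": "1.11.x",
    "bootstrap": "3.2.x"
  }
}
```

Running `bower install` against such a file drops the static libraries under bower_components/, which is the directory a packaging script would translate into DEB/RPM file lists, and which the settings.py path-generation script would enumerate.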
Re: [openstack-dev] [cinder] [nova] Consistency groups?
Hello Xing,

> Do you have a libvirt volume driver on the Nova side for DRBD?

No, we don't. We'd just use the existing DRBD 9 kernel module to provide the local block devices.

> Regarding getting consistency group information to the Nova nodes, can
> you help me understand the steps you need to go through?
> 1. Create a consistency group
> 2. Create a volume and add the volume to the group
>    (repeat the above step until all volumes are created and added to the group)
> 3. Attach a volume in the group
> 4. Create a snapshot of the consistency group

The question I'm asking right now isn't about snapshots.

> Do you set up the volume on the Nova side at step 3? We currently don't
> have a group-level API that sets up all volumes in a group. Is it
> possible for you to detect whether a volume is in a group or not when
> attaching one volume, and set up all volumes in the same group?

Well, our Cinder driver passes some information to the Nova nodes; within that information block we can pass the consistency group (which will be the DRBD resource name) as well, to detect that case.

> Otherwise, it sounds like we need to add a group-level API for this
> purpose.

Perhaps just adding a "volume is in consistency group X" data item would be enough, too?

Sorry about being so vague; I'm just not familiar enough with all the interdependencies from Cinder to Nova.

Regards, Phil
--
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
: DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
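A purely hypothetical sketch of the idea Phil describes: the Cinder driver already hands Nova a connection-info block at attach time, and the consistency-group (DRBD resource) name could ride along inside it. None of the keys below are an existing Cinder/Nova contract -- they only illustrate the shape of the data.

```python
def initialize_connection(volume, connector):
    """Hypothetical DRBD attach info -- the keys are illustrative only."""
    return {
        "driver_volume_type": "drbd",
        "data": {
            # device path derived from the DRBD resource name
            "device": "/dev/drbd/by-res/%s/0" % volume["name"],
            # hypothetical extra field so Nova can see CG membership
            # without a new group-level API
            "consistency_group": volume.get("consistencygroup_id"),
        },
    }
```

With a field like this, the Nova side could notice that the attached volume belongs to a group and set up its siblings, which is the "volume is in consistency group X data item" Phil suggests.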
Re: [openstack-dev] [Ironic] A mascot for Ironic
On Sun, Nov 16, 2014 at 2:27 PM, Assaf Muller amul...@redhat.com wrote: - Original Message - Hi Ironickers, I was thinking this weekend: all the cool projects have a mascot, so I thought that we could have one for Ironic too. The idea of what the mascot would be was easy, because the RAX guys put "bear metal" in their presentation[1] and that totally rocks! So I drew a bear. It also needed an instrument; at first I thought about a guitar, but drums are actually my favorite instrument, so I drew a pair of drumsticks instead. The drawing part wasn't that hard; the problem was to digitize it. So I scanned the thing and went to YouTube to watch some tutorials about GIMP and Inkscape to learn how to vectorize it. Magic, it worked! Attached to the email there's the original drawing, the vectorized version without colors, and the final version (with colors). Of course, I know some people have better skills than I do, so I also attached the Inkscape file of the final version in case people want to tweak it :) So, what do you guys think about making this little drummer bear the mascot of the Ironic project? Totally awesome. Ahh, he also needs a name. So please send some suggestions and we can vote on the best name for him. Metal is kind of obvious. What would it take for you to give Neutron a mascot as well? We need all the PR we can get :) Hah, I'm not a designer or anything. But, I don't know, if you guys help with the idea I can try to come up with something :) [1] http://www.youtube.com/watch?v=2Oi2T2pSGDU#t=90 Lucas ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ironic] A mascot for Ironic
I'm adding all the name suggestions to the Ironic WhiteBoard etherpad[1] (at the bottom). Feel free to include more names there and then we can call a vote. [1] https://etherpad.openstack.org/p/IronicWhiteBoard Lucas On Mon, Nov 17, 2014 at 10:56 AM, Lucas Alvares Gomes lucasago...@gmail.com wrote: [...] ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ironic] A mascot for Ironic
Bear Metal +1 On 17 Nov 2014, at 12:00, Lucas Alvares Gomes lucasago...@gmail.com wrote: [...] ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
On 11/15/2014 02:51 PM, Robert Collins wrote: It probably needs to be backed out of stable/icehouse. The issue is that we were installing unittest2 via distro packages *and* testtools' new dependency on unittest2 did not express a minimum version. We're just about to issue 1.2.1, which will have such a minimum version. And for the record, this was released on Saturday, not Friday :). Damn you and your living in the future! :) Honestly, though, the requirements change was !=1.2.0, so it does the right thing of just blocking the one version that depended on the new unittest2 without being explicit about it. I made it very narrow for a reason, as I assumed it would get fixed in the next released version, and we'd be rolling again. It looks like 1.3.0 did just that, which is great. Thanks for looking at this and getting a fix out, Robert. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] L2 gateway as a service
On Mon, Nov 17, 2014 at 10:13:50AM +0100, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi Hi. On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote: Last Friday I recall we had two discussions around this topic. One in the morning, which I think led Maruti to push [1]. The way I understood [1] was that it is an attempt at unifying [2] and [3], by choosing the API approach of one and the architectural approach of the other. [1] https://review.openstack.org/#/c/134179/ [2] https://review.openstack.org/#/c/100278/ [3] https://review.openstack.org/#/c/93613/ Then there was another discussion in the afternoon, but I am not 100% sure of the outcome. Me neither; that's why I'd like Ian, who led this discussion, to sum up the outcome from his point of view. I, Isaku, talked with Maruti and have agreed to pursue [1] with a hope to unify [2], and we will see the outcome in the Kilo cycle. I'm willing to review/help [1] (and corresponding patches). thanks, All this churn makes me believe that we probably just need to stop pretending we can achieve any sort of consensus on the approach and let the different alternatives develop independently, assuming they can all develop independently, and then let natural evolution take its course :) I tend to agree, but I think that one of the reasons why we are looking for a consensus is that API evolutions proposed through Neutron specs are rejected by core devs, because they rely on external components (SDN controllers, proprietary hardware...) or they are not a high priority for the Neutron core devs. By finding a consensus, we show that several players are interested in such an API, and it helps to convince core devs that this use case, and its API, is missing in Neutron. Now, if there is room to easily propose new APIs in Neutron, it makes sense to let new APIs appear and evolve, and then "let natural evolution take its course", as you said. To me, this is in the scope of the advanced services project.
Ultimately the biggest debate is on what the API model needs to be for these abstractions. We can judge which one is the best API of all, but sometimes this ends up being a religious fight. A good API for me might not be a good API for you, even though I strongly believe that a good API is one that: - is hard to use incorrectly - is clear to understand - does one thing, and one thing well So far I have been unable to be convinced why we'd need to cram more than one abstraction into one single API, as it does violate the above-mentioned principles. Ultimately I like the L2 GW API proposed by [1] and [2] because it's in line with those principles. I'd rather start from there and iterate. My 2c, Armando On 14 November 2014 08:47, Salvatore Orlando sorla...@nicira.com wrote: Thanks guys. I think you've answered my initial question. Probably not in the way I was hoping it to be answered, but it's ok. So now we have potentially 4 different blueprints describing more or less overlapping use cases that we need to reconcile into one? If the above is correct, then I suggest we go back to the use cases and make an effort to abstract a bit from thinking about how those use cases should be implemented. Salvatore On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote: Hello all, Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of its use cases is exactly the L2 gateway. These proposals could probably be inserted into a more generic work for moving existing datacenter L2 resources to Neutron. Cheers, On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi, As far as I understood last Friday afternoon's discussions during the design summit, this use case is in the scope of another umbrella spec which would define external connectivity for Neutron networks. Details of that connectivity would be defined through a service plugin API. Ian, do you plan to define such an umbrella spec?
or at least, could you sum up the agreement of the design summit discussion in the ML? I see at least 3 specs which would be under such an umbrella spec : https://review.openstack.org/#/c/93329/ (BGPVPN) https://review.openstack.org/#/c/101043/ (Inter DC connectivity with VPN) https://review.openstack.org/#/c/134179/ (l2 gw aas) On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando sorla...@nicira.com wrote: Thanks Maruti, I have some comments and questions which I've posted on gerrit. There are two things I would like to discuss on the mailing list concerning this effort. 1) Is this spec replacing https://review.openstack.org/#/c/100278 and https://review.openstack.org/#/c/93613 - I hope so, otherwise this just adds even more complexity. 2) It sounds like you should be able to implement this service plugin in either a feature branch or a repository
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
On 11/16/2014 06:11 PM, Robert Collins wrote: On 17 November 2014 11:29, Alan Pevec ape...@gmail.com wrote: 2014-11-15 23:06 GMT+01:00 Robert Collins robe...@robertcollins.net: We did find a further issue, which was due to the use of setUpClass in tempest (a thing that testtools has never supported per se - it's always been a happy accident that it worked). I've hopefully fixed that in 1.3.0 and we're babysitting tempest now to see. Trove stable/juno py26 (py27 works) unit tests are failing with testtools 1.3.0 http://logs.openstack.org/periodic-stable/periodic-trove-python26-juno/fcf4db2/testr_results.html.gz ... File /home/jenkins/workspace/periodic-trove-python26-juno/trove/tests/unittests/mgmt/test_models.py, line 60, in setUpClass super(MockMgmtInstanceTest, cls).setUpClass() AttributeError: 'super' object has no attribute 'setUpClass' pip freeze diff since last good report is: -testtools==1.1.0 +testtools==1.3.0 +unittest2==0.8.0 Any ideas? The use of unittest2 in the plumbing means we're now calling setUpClass on 2.6, which we were not doing before. However, there is no implementation of setUpClass in the testtools base class, which leads to the error you are seeing. We can fix that by subclassing unittest2.TestCase in testtools' TestCase. I'll put a patch together today. -Rob We don't support 2.6 any more in OpenStack. If we decide to pin testtools on stable/*, we could just let this be. Fixing it is also fine. But I wouldn't mind just moving on here and letting the 2.6 bits die with vigor. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
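The AttributeError in the traceback above can be reproduced with plain classes standing in for testtools (an assumption for illustration; the real code subclasses testtools.TestCase):

```python
# Minimal reproduction of the failure mode: the old testtools.TestCase
# defined no setUpClass, so a cooperative super().setUpClass() call falls
# off the end of the MRO and raises AttributeError -- exactly the error in
# the Trove traceback. LegacyTestToolsCase is a stand-in, not real testtools.
class LegacyTestToolsCase(object):  # stand-in: no setUpClass defined here
    pass

class MockMgmtInstanceTest(LegacyTestToolsCase):
    @classmethod
    def setUpClass(cls):
        # Neither LegacyTestToolsCase nor object provides setUpClass,
        # so this raises AttributeError.
        super(MockMgmtInstanceTest, cls).setUpClass()

try:
    MockMgmtInstanceTest.setUpClass()
    failed = False
except AttributeError:
    failed = True
```

Rob's proposed fix, subclassing unittest2.TestCase (whose base does define setUpClass), gives the super() chain somewhere to land.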
Re: [openstack-dev] [infra][devstack] CI failed The plugin token_endpoint could not be found
There needs to be a lot more context than that provided. As seen here - https://review.openstack.org/#/c/134379/ this seems to be working fine upstream. -Sean On 11/16/2014 09:16 PM, Wan, Sam wrote: Hi Sean, Seems once I unset ' DEVSTACK_PROJECT_FROM_GIT=python-keystoneclient,python-openstackclient', devstack will fail with ' ERROR: openstack The plugin token_endpoint could not be found'. How should I overcome this issue then? Thanks and regards Sam -Original Message- From: Sean Dague [mailto:s...@dague.net] Sent: Saturday, November 15, 2014 12:28 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [infra][devstack] CI failed The plugin token_endpoint could not be found On 11/14/2014 09:09 AM, Jeremy Stanley wrote: On 2014-11-14 00:34:14 -0500 (-0500), Wan, Sam wrote: Seems we need to use python-keystoneclient and python-openstackclient from git.openstack.org because those on pip don’t work. That's a bug we're (collectively) trying to prevent in the future. Services, even under development, should not depend on features only available in unreleased versions of libraries. But in latest update of stack.sh, it’s to use pip by default [...] And this is intentional, implemented specifically so that we can keep it from happening again. Patrick actually got to the bottom of a bug we had in devstack around this, we merged the fixes this morning. As Jeremy said, installing from pypi released versions is intentional. If something wants to use features in a library, the library needs to cut a release. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] v2 or v3 for new api
On Mon, Nov 17, 2014 at 7:30 PM, Pasquale Porreca pasquale.porr...@dektech.com.au wrote: Thank you for the clarification; yes, I know about the blueprint/specification, I already submitted them and the spec is currently under review :) I noticed there are several steps one always has to do to enable a v3 API and make it work and pass the tests. It would be awesome to have a guideline or something similar that explains these steps, but I didn't find anything in the wiki or documentation. Yes, sorry, documentation has been on our todo list for too long. Could I get you to submit a bug report about the lack of developer documentation for API plugins? It might hurry us up :-) In the meantime, off the top of my head, you'll need to create or modify the following files in a typical plugin: setup.cfg - add an entry in at least the nova.api.v3.extensions section etc/nova/policy.json - an entry for the permissions for your plugin, perhaps one per API method for maximum flexibility. It will also need a discoverable entry (lots of examples in this file) nova/tests/unit/fake_policy.json (similar to policy.json) nova/api/openstack/compute/plugins/v3/your_plugin.py - please make the alias name something like os-scheduler-hints rather than OS-SCH-HNTS. No skimping on vowels. Probably the easiest way at this stage, without more doco, is to look for a plugin in that directory that does the sort of thing you want to do. nova/tests/unit/nova/api/openstack/compute/contrib/test_your_plugin.py - we have been combining the v2 and v2.1 (v3) unit tests to share as much as possible, so please do the same here for new tests, as the v3 directory will eventually be removed.
There are quite a few examples now in that directory of sharing unit tests between v2.1 and v2, but with a new extension the customisation between the two should be pretty minimal (just a bit of inheritance to call the right controller) nova/tests/unit/integrated/v3/test_your_plugin.py nova/tests/unit/integrated/test_api_samples.py Sorry, the API samples tests are not unified yet, so you'll need to create two. All of the v2 API sample tests are in one directory, whilst the v2.1 ones are separated into different files by plugin. There's some rather old documentation on how to generate the API samples themselves (hint: directories aren't made automatically) here: https://blueprints.launchpad.net/nova/+spec/nova-api-samples Personally I wouldn't bother with any XML support if you do decide to support v2, as it's deprecated anyway. Hope this helps. Feel free to add me as a reviewer for the API parts of your changesets. Regards, Chris In particular I noticed I had to modify the file nova/nova.egg-info/entry_points.txt to make my v3 API load, but this file seems not to be under versioning; is this file modified only after the changes are merged? On 11/16/14 23:55, Christopher Yeoh wrote: On Thu, Nov 13, 2014 at 12:14 AM, Pasquale Porreca pasquale.porr...@dektech.com.au wrote: Hello, I am working on an API for a new feature in Nova, but I am wondering what is the correct way to add a new extension: should it be supported by v2, v3 or both? You need now to have at least a v2.1 (formerly known as v3) extension. V2 support if you want, but I think once v2.1 is fully merged and tested (which may not be that far away at all) we should freeze v2 and rely just on v2.1 for new features. Otherwise the interaction between v2.1 being exactly equivalent to v2, plus having microversion support for v2.1, will get a bit confusing. As the other Chris mentioned, the first step however is to get a nova-spec submitted which fully describes the API additions that you want to make.
Regards, Chris BR -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
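The plugin shape Chris walks through above can be sketched in plain Python. This is a non-authoritative sketch: it does not import or subclass the real nova.api.openstack classes (which an actual plugin must), and names like YourPluginController are hypothetical.

```python
# Hedged sketch of a v2.1-style plugin's moving parts, with plain classes
# standing in for Nova's real base classes. Illustration only.
ALIAS = "os-your-plugin"  # spelled out, per the "no skimping on vowels" rule

class YourPluginController(object):
    """Controller whose methods map to REST actions (index -> GET list)."""

    def index(self, req):
        # A real controller would consult the request context here and check
        # a policy rule named after the alias, as set up in policy.json.
        return {"your_plugin": []}

# setup.cfg would then carry an entry point, roughly (path hypothetical):
#   [nova.api.v3.extensions]
#   your_plugin = nova.api.openstack.compute.plugins.v3.your_plugin:YourPlugin

controller = YourPluginController()
```

The entry-point registration in setup.cfg is what regenerates nova.egg-info/entry_points.txt that Pasquale mentions: the egg-info file is build output, which is why it is not under version control.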
Re: [openstack-dev] Solving the client libs stable support dilemma
On 11/16/2014 11:23 AM, Doug Hellmann wrote: On Nov 16, 2014, at 9:54 AM, Jeremy Stanley fu...@yuggoth.org wrote: On 2014-11-16 09:06:02 -0500 (-0500), Doug Hellmann wrote: So we would pin the client libraries used by the servers and installed globally, but then install the more recent client libraries in a virtualenv and test using those versions? That's what I was thinking anyway, yes. I like that. Honestly I don't, but it sucks less than the other solutions which sprang to mind. Hopefully someone will come along with a more elegant suggestion... in the meantime I don't see any obvious reasons why it wouldn't work. Really, it's a much more accurate test of what we want. We have, as an artifact of our test configuration, to install everything on a single box. But what we're trying to test is that a user can install the new clients and talk to an old cloud. We don't expect deployers of old clouds to install new clients -- at least we shouldn't, and by pinning the requirements we can make that clear. Using the virtualenv for the new clients gives us separation between the "user" and "cloud" parts of the test configuration that we don't have now. Anyway, if we're prepared to go along with this, I think it's safe for us to stop using alpha version numbers for Oslo libraries as a matter of course. We may still opt to do it in cases where we aren't sure of a new API or feature, but we won't have to do it for every release. Doug I think this idea sounds good on the surface, though making sure you are in or out of the venv at the right times will make a working system a little interesting. I actually think you might find it simpler to invert this: create one global venv for the servers and specify the venv before launching a service, then install all the clients into system-level space, so that running `nova list` doesn't require being inside the venv. This should have the same results, but be less confusing for people poking at devstacks manually.
-Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
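The "clients in their own environment" layout discussed above can be sketched with the stdlib venv module. A minimal sketch under stated assumptions: the real jobs would use virtualenv/pip rather than this module, and the directory names are arbitrary.

```python
# Sketch of the separation Doug describes: servers import pinned system-wide
# libraries, while the "user side" clients get a throwaway virtualenv with
# its own interpreter. The directory layout here is made up for the demo.
import os
import tempfile
import venv

client_env = os.path.join(tempfile.mkdtemp(prefix="demo-"), "new-clients")
venv.create(client_env, with_pip=False)  # with_pip=False keeps the demo fast

# A test job would then run e.g. <client_env>/bin/<client> against the old
# cloud, while the services keep using the pinned global packages.
marker = os.path.join(client_env, "pyvenv.cfg")
```

Sean's inversion simply swaps which side lives in the venv: the services run from it, and the clients stay global, so interactive `nova list` needs no activation step.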
[openstack-dev] [Trove] Clustering API next steps
Good day, Stackers/Trovers. During the Paris Design Summit Clustering session [1] we, as a community, agreed on the need to change the existing clustering API. By changing the API I mean deprecating the existing clustering action because of its close connection to the MongoDB datastore. There was no disagreement or concern about deprecating the existing "add_shard" action in favor of something more generic. But here comes the question of API compatibility. To ensure that we wouldn't break it, we would need to add another action (that would eventually substitute the "add_shard" action) and maintain the "add_shard" action for N releases (IIRC one more release would be enough). Based on the suggestions given during the Clustering session, I've made a spec [2] that reflects all the needed changes in the Clustering framework. I'd like to collect all suggestions/concerns about the spec, and I'd like to discuss it during the next BP meeting [3]. [1] Session etherpad https://etherpad.openstack.org/p/kilo-summit-trove-clusters [2] Spec proposal https://review.openstack.org/#/c/134583/ [3] BP review schedule https://wiki.openstack.org/wiki/Meetings/TroveBPMeeting#Nov._24_Meeting Kind regards, Denis M. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [mistral] Team meeting - 11/17/2014
This is a reminder about our team meeting scheduled for today, 16.00 UTC. Agenda: Review action items Paris Summit results Current status (progress, issues, roadblocks, further plans) Release 0.2 progress Open discussion (can also be seen at https://wiki.openstack.org/wiki/Meetings/MistralAgenda, as well as the meeting archive) Renat Akhmerov @ Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] L2 gateway as a service
Sorry, it's early; I was being imprecise and using "trunk" to mean methods to connect x to a Neutron L2 segment. Doug On Nov 17, 2014, at 10:35 AM, Salvatore Orlando sorla...@nicira.com wrote: I think this thread is about the L2 gateway service... how's that related to the notion of trunk port? I know that the spec under review adds a component which is tantamount to an L2 gateway, but while the functionality is similar, the use case, and therefore the API exposed, are rather different. Am I missing something here? Salvatore On 17 November 2014 10:40, Doug Wiegley do...@a10networks.com wrote: On Nov 17, 2014, at 9:13 AM, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote: Last Friday I recall we had two discussions around this topic. One in the morning, which I think led Maruti to push [1]. The way I understood [1] was that it is an attempt at unifying [2] and [3], by choosing the API approach of one and the architectural approach of the other. [1] https://review.openstack.org/#/c/134179/ [2] https://review.openstack.org/#/c/100278/ [3] https://review.openstack.org/#/c/93613/ Then there was another discussion in the afternoon, but I am not 100% sure of the outcome. Me neither; that's why I'd like Ian, who led this discussion, to sum up the outcome from his point of view. All this churn makes me believe that we probably just need to stop pretending we can achieve any sort of consensus on the approach and let the different alternatives develop independently, assuming they can all develop independently, and then let natural evolution take its course :) I tend to agree, but I think that one of the reasons why we are looking for a consensus is that API evolutions proposed through Neutron specs are rejected by core devs, because they rely on external components (SDN controllers, proprietary hardware...)
or they are not a high priority for the Neutron core devs. By finding a consensus, we show that several players are interested in such an API, and it helps to convince core devs that this use case, and its API, is missing in Neutron. Now, if there is room to easily propose new APIs in Neutron, it makes sense to let new APIs appear and evolve, and then "let natural evolution take its course", as you said. To me, this is in the scope of the advanced services project. I think we need to be careful of the natural tendency to view the new project as a place to put everything that is moving too slowly in Neutron. Certainly advanced services are one of the most obvious use cases of this functionality, but that doesn't mean that the notion of an SDN trunk port should live anywhere but Neutron, IMO. Thanks, doug Ultimately the biggest debate is on what the API model needs to be for these abstractions. We can judge which one is the best API of all, but sometimes this ends up being a religious fight. A good API for me might not be a good API for you, even though I strongly believe that a good API is one that: - is hard to use incorrectly - is clear to understand - does one thing, and one thing well So far I have been unable to be convinced why we'd need to cram more than one abstraction into one single API, as it does violate the above-mentioned principles. Ultimately I like the L2 GW API proposed by [1] and [2] because it's in line with those principles. I'd rather start from there and iterate. My 2c, Armando On 14 November 2014 08:47, Salvatore Orlando sorla...@nicira.com wrote: Thanks guys. I think you've answered my initial question. Probably not in the way I was hoping it to be answered, but it's ok. So now we have potentially 4 different blueprints describing more or less overlapping use cases that we need to reconcile into one?
If the above is correct, then I suggest we go back to the use cases and make an effort to abstract a bit from thinking about how those use cases should be implemented. Salvatore On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote: Hello all, Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of its use cases is exactly the L2 gateway. These proposals could probably be inserted into a more generic work for moving existing datacenter L2 resources to Neutron. Cheers, On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi, As far as I understood last Friday afternoon's discussions during the design summit, this use case is in the scope of another umbrella spec which would define external connectivity for Neutron networks. Details of that connectivity would be defined through a service plugin API. Ian, do you plan to define such an umbrella spec? or at least, could you sum up the agreement of
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
On 11/17/2014 06:09 AM, Sean Dague wrote: On 11/16/2014 06:11 PM, Robert Collins wrote: On 17 November 2014 11:29, Alan Pevec ape...@gmail.com wrote: 2014-11-15 23:06 GMT+01:00 Robert Collins robe...@robertcollins.net: We did find a further issue, which was due to the use of setUpClass in tempest (a thing that testtools has never supported per se - its always been a happy accident that it worked). I've hopefully fixed that in 1.3.0 and we're babysitting tempest now to see. Trove stable/juno py26 (py27 works) unit tests are failing with testtools 1.3.0 http://logs.openstack.org/periodic-stable/periodic-trove-python26-juno/fcf4db2/testr_results.html.gz ... File /home/jenkins/workspace/periodic-trove-python26-juno/trove/tests/unittests/mgmt/test_models.py, line 60, in setUpClass super(MockMgmtInstanceTest, cls).setUpClass() AttributeError: 'super' object has no attribute 'setUpClass' pip freeze diff since last good report is: -testtools==1.1.0 +testtools==1.3.0 +unittest2==0.8.0 Any ideas? The use of unittest2 in the plumbing means we're now calling setUpClass on 2.6 which we were not doing before. However there is no implementation of setUpClass in the testtools base class, which leads to the error you are seeing. We can fix that by subclassing unittest2.TestCase in testtools' TestCase. I'll put a patch together today. -Rob We don't support 2.6 any more in OpenStack. If we decide to pin testtools on stable/*, we could just let this be. We still support 2.6 on the python clients and oslo libraries - but indeed not for trove itself with master. Fixing it is also fine. But I wouldn't mind just moving on here and letting the 2.6 bits die with vigor. Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [barbican] Secret store API validation
Hello Barbican folks, Recently I was experimenting with the KMIPSecretStore and observed the following behaviour. Issuing the API call:

curl -X POST -H 'content-type:application/json' -H 'X-Project-Id:12345' -d '{"payload": "my-secret-here", "payload_content_type": "text/plain", "algorithm": "aes", "bit_length": 256}' http://localhost:9311/v1/secrets

worked to store a secret in the backend HSM, but upon retrieving the secret I was presented with "mysecrethere" instead of the expected value "my-secret-here". This corruption of the secret occurs because internally it is assumed to be encoded as base64, and the base64 decoder drops invalid bytes - in this case the "-" characters. For more discussion please see the comments on this review: https://review.openstack.org/#/c/133725/

It seems we need to add some validation to the process, so I would like to get a discussion going on what we should be validating and where in the pipeline it might fit best. I'm happy to code up a patch to make this happen but want to get some input and a consensus on things first. -- Tim Kelsey Cloud Security Engineer HP Helion
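The round trip Tim describes can be reproduced with nothing but the standard library - a sketch of the failure mode, not Barbican code:

```python
import base64
import binascii

original = "my-secret-here"

# With the default validate=False, b64decode silently discards any
# character outside the base64 alphabet -- here, the two "-" signs --
# and then decodes what is left ("mysecrethere").
decoded = base64.b64decode(original)
roundtrip = base64.b64encode(decoded).decode()
print(roundtrip)  # -> mysecrethere: the "-" characters are lost

# One possible validation point: refuse input that is not strict base64,
# so the problem surfaces at store time instead of corrupting the secret.
try:
    base64.b64decode(original, validate=True)
    strict_ok = True
except binascii.Error:
    strict_ok = False
print(strict_ok)  # -> False: strict decoding rejects the payload
```

Whether validation belongs at the API layer or inside the secret store is exactly the open question in the review.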
Re: [openstack-dev] [Fuel] Default templates for Sahara feature in 6.0
I'm fine with getting this patch in... though a few things should be fixed first, in my opinion:

1. I don't see a header in the blueprint saying who is responsible for what. See the example in this blueprint: https://blueprints.launchpad.net/fuel/+spec/send-anon-usage (Feature Lead, QA, etc.)
2. There is no fuel-spec associated. If this is a very minor thing to address, which doesn't affect anything beyond it, then we'd probably better track it as a bug.
3. Currently, it's not very clear to me from the description what this is for. Can you please provide more information, ideally the use cases and acceptance criteria for the proposed functionality?

We have the FF deadline not just because features can affect other features' stability, but also because we need time to ensure they are of production quality by themselves. Slipping this to the end of the release naturally increases the risks to quality. Thanks,

On Sat, Nov 15, 2014 at 3:51 AM, Dmitry Mescheryakov dmescherya...@mirantis.com wrote: Oops, the last line should be read as "On the other side, it is a nice UX feature we really want to have in 6.0". Dmitry

2014-11-15 3:50 GMT+03:00 Dmitry Mescheryakov dmescherya...@mirantis.com: Dmitry, Let's review the CR from the point of view of danger to the current deployment process: in essence it is 43 lines of change in a puppet module. The module calls a shell script which always returns 0. So whatever happens inside, the deployment will not fail. The only changes (non-GET requests) the script makes, it makes to Sahara: it tries to upload cluster and node-group templates. That is not a dangerous operation for Sahara - in the worst case the templates will just not be created, and that is all. It will not affect Sahara's correctness in any way. On the other side, it is a nice UX feature we really want to have in 5.1.1. Thanks, Dmitry

2014-11-15 3:04 GMT+03:00 Dmitry Borodaenko dborodae...@mirantis.com: +286 lines a week after Feature Freeze - IMHO it's too late to make an exception for this one.

On Wed, Nov 12, 2014 at 7:37 AM, Dmitry Mescheryakov dmescherya...@mirantis.com wrote: Hello fuelers, I would like to request that you merge CR [1], which implements blueprint [2]. It is a nice UX feature we really would like to have in 6.0. On the other side, the implementation is really small: it is a small piece of puppet which runs a shell script. The script always exits with 0, so the change should not be dangerous. Other files in the change are used in the shell script only. Please consider reviewing and merging this even though we've already reached FF. Thanks, Dmitry [1] https://review.openstack.org/#/c/132196/ [2] https://blueprints.launchpad.net/mos/+spec/sahara-create-default-templates -- Dmitry Borodaenko -- Mike Scherbakov #mihgen
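The "script always exits with 0" safety argument can be sketched in a few lines of shell - this is an illustrative pattern, not the actual CR, and `upload_default_templates` is a hypothetical helper standing in for the Sahara API calls:

```shell
#!/bin/sh
# Illustrative sketch: run the template upload so that a failure inside
# it can never fail the surrounding deployment run.

upload_default_templates() {
    # hypothetical helper that would POST cluster/node-group templates
    # to Sahara; return 1 here to simulate a failure and show the guard
    return 1
}

if upload_default_templates; then
    status=uploaded
else
    status=skipped
fi
# Either way we report success to the caller (puppet), so the worst
# case is simply that the default templates are not created.
echo "default templates: $status"
```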
Re: [openstack-dev] [Fuel] Separate code freeze for repos
I believe that we need to do this, and I agree with Vitaly. Basically, when we are getting a low number of review requests, it's easy enough to do backports to the stable branch. So the criteria should be based on this, and I believe they can be even softer than what Vitaly suggests. I suggest the following:

If no more than 3 new High/Critical priority bugs appeared in the past day, and no more than 10 High/Critical appeared over the past 3 days - then the stable branch can be created.

HCF criteria remain the same; we will just have the stable branch earlier. It might be a bit of a headache for our DevOps team: it means that
- a 6.1 ISO should appear immediately after the first stable branch is created (we need an ISO and the full set of tests working on master)
- a 6.0 ISO has to be built from the master branches of some repos, but stable/6.0 of others. Likely it means either switching to stable/6.0 in fuel-main and hacking config.mk, or something else. DevOps team, what do you think?

On Fri, Nov 14, 2014 at 5:24 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: There is a proposal to consider a repo as stable if there are no high/critical bugs and there were no new bugs with this priority for the last 3 days. I'm ok with it.

2014-11-14 17:16 GMT+03:00 Igor Kalnitsky ikalnit...@mirantis.com: Guys, The idea of separate unfreezing is cool in itself, but we have to define some rules for how to decide that fuel-web is stable. I mean, fuel-web contains different projects, so when Fuel UI is stable, fuel_upgrade or Nailgun may not be. - Igor

On Fri, Nov 14, 2014 at 3:52 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Evgeniy, That means that the stable branch can be created for some repos earlier. For example, the fuel-web repo seems not to have critical issues for now, and I'd like the master branch of that repo to be opened for merging various stuff which shouldn't go into 6.0, without waiting until all other repos stabilize.

2014-11-14 16:42 GMT+03:00 Evgeniy L e...@mirantis.com: Hi, "There was an idea to make a separate code freeze for repos" - could you please clarify what you mean? I think we should have a way to merge patches for the next release even if it's code freeze for the current one. Thanks,

On Tue, Nov 11, 2014 at 2:16 PM, Vitaly Kramskikh vkramsk...@mirantis.com wrote: Folks, There was an idea to make a separate code freeze for repos, but we decided not to do it. Do we plan to try it this time? It is really painful to maintain a multi-level tree of dependent review requests and wait for a few weeks until we can merge new stuff into master. -- Vitaly Kramskikh, Software Engineer, Mirantis, Inc. -- Mike Scherbakov #mihgen
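Mike's soft criteria are easy to state as a predicate - an illustrative encoding of the rule proposed above, not anything from Fuel's tooling:

```python
def can_cut_stable_branch(new_high_crit_past_day, new_high_crit_past_3_days):
    """Return True when the proposed soft criteria are met: no more than
    3 new High/Critical bugs in the past day, and no more than 10 over
    the past 3 days."""
    return new_high_crit_past_day <= 3 and new_high_crit_past_3_days <= 10

print(can_cut_stable_branch(2, 8))   # quiet enough to branch
print(can_cut_stable_branch(4, 8))   # too many new bugs in one day
print(can_cut_stable_branch(2, 11))  # too many over three days
```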
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
We don't support 2.6 any more in OpenStack. If we decide to pin testtools on stable/*, we could just let this be. We still support 2.6 on the python clients and oslo libraries - but indeed not for trove itself with master.

What Andreas said; also, testtools claims it gives you "the very latest in unit testing technology in a way that will work with Python 2.6, 2.7, 3.1 and 3.2", so it should be fixed, OpenStack or not. Cheers, Alan
Re: [openstack-dev] [Cinder][DR]replication/CG support in driver
The K1 deadline is for new drivers, not extended functionality on existing drivers; however, work submitted after K2 is likely to struggle to get reviewer time as we work on the details of our structural code changes - early submission is therefore strongly advised.

On 17 November 2014 11:19, Zhipeng Huang zhipengh...@gmail.com wrote: Hi all, We want to add replication/CG related support in the huawei driver - would this also be bound to the Dec 19th deadline? There is confusion over whether the deadline is for new drivers or for all proposals for Kilo. As I remember from the design summit meetup, it was set for the new drivers coming up in K. Thanks! -- Zhipeng (Howard) Huang Standard Engineer IT Standard Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhip...@huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipe...@uci.edu Office: Calit2 Building Room 2402 OpenStack, OpenDaylight, OpenCompute aficionado -- Duncan Thomas
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
Le 11/11/2014 09:30, Jiri Tomasek a écrit : From what was discussed at the contributors meetup, keeping the names 'horizon' for the lib (framework) and 'openstack_dashboard' for the dashboard seemed most convenient. And I happen to agree with that.

+1 We also discussed the fact that we could keep the names of the modules, but rename only the packages: pip install horizon_lib - installs the horizon module; pip install horizon - installs openstack_dashboard. This would allow using the new names for the packages without breaking existing code that depends on the libraries and imports them. -- Yves-Gwenaël Bourhis
[openstack-dev] [Fuel] CentOS falls into interactive mode: Unsupported hardware
Hi all, I was skimming through a nicely written blog post about the Fuel experience [1], and noticed "This hardware ... not supported by CentOS" [2] on one of the screenshots. It looks like CentOS goes into interactive mode and complains about unsupported hardware. Can we do anything about this? I can hardly imagine clicking OK for a 100-node deployment... It will be fixed by image-based provisioning of course, but the question is: can we do anything to fix it in the current release? [1] http://ehaselwanter.com/en/blog/2014/10/15/deploying-openstack-with-mirantis-fuel-5-1/ [2] http://ehaselwanter.com/images/article-images/mirantis-35-blog-780x.png -- Mike Scherbakov #mihgen
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
On 11/17/2014 07:26 AM, Alan Pevec wrote: We don't support 2.6 any more in OpenStack. If we decide to pin testtools on stable/*, we could just let this be. We still support 2.6 on the python clients and oslo libraries - but indeed not for trove itself with master. What Andreas said; also, testtools claims it gives you "the very latest in unit testing technology in a way that will work with Python 2.6, 2.7, 3.1 and 3.2", so it should be fixed, OpenStack or not.

Well, the python 2.6 support was only added for OpenStack. And I think it's fine to drop that burden now that we don't need it (as long as we pin appropriately). -Sean -- Sean Dague http://dague.net
Re: [openstack-dev] [Cinder][DR]replication/CG support in driver
Hi Duncan, thanks for clearing that up :)

On Mon, Nov 17, 2014 at 8:26 PM, Duncan Thomas duncan.tho...@gmail.com wrote: The K1 deadline is for new drivers, not extended functionality on existing drivers; however, work submitted after K2 is likely to struggle to get reviewer time as we work on the details of our structural code changes - early submission is therefore strongly advised. [...]

-- Zhipeng (Howard) Huang Standard Engineer IT Standard Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhip...@huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipe...@uci.edu Office: Calit2 Building Room 2402 OpenStack, OpenDaylight, OpenCompute aficionado
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
On 17/11/14 13:28, Yves-Gwenaël Bourhis wrote: Le 11/11/2014 09:30, Jiri Tomasek a écrit : From what was discussed at the contributors meetup, keeping the names 'horizon' for the lib (framework) and 'openstack_dashboard' for the dashboard seemed most convenient. And I happen to agree with that. +1 We also discussed the fact that we could keep the names of the modules, but rename only the packages: pip install horizon_lib - installs the horizon module; pip install horizon - installs openstack_dashboard. This would allow using the new names for the packages without breaking existing code that depends on the libraries and imports them.

There is already a horizon on pypi [1]; IMHO this will only lead to more confusion. Matthias [1] https://pypi.python.org/pypi/horizon/2012.2
[openstack-dev] Removing Nova V2 API xml support
Sean, Joe, Dean, all, Here's the Nova change to disable the V2 XML support: https://review.openstack.org/#/c/134332/ To keep all the jobs happy, we'll need changes in tempest, devstack, and devstack-gate as well: Tempest: https://review.openstack.org/#/c/134924/ Devstack: https://review.openstack.org/#/c/134685/ Devstack-gate: https://review.openstack.org/#/c/134714/ Please see if I am on the right track. thanks, dims -- Davanum Srinivas :: https://twitter.com/dims
[openstack-dev] [Designate] New meeting time survey
Hi All, We discussed moving the weekly meeting recently - I have put a survey up so we can choose the right time. https://www.surveymonkey.com/r/FJRBSVF If you could please fill this in, it would be great! Graham
Re: [openstack-dev] [swift] LTFS integration with OpenStack Swift for scenario like - Data Archival as a Service .
On 14.11.14 20:43, Tim Bell wrote: It would need to be tiered (i.e. migrate whole collections rather than files), and a local catalog would be needed to map containers to tapes. Timeouts would be an issue since we are often waiting hours for a recall (to ensure that multiple recalls for the same tape are grouped). It is not an unsolvable problem, but it is not just a 'use LTFS' answer.

There were some ad-hoc discussions during the last summit about using Swift (API) to access data that is stored on tape. At the same time we talked about possible data migrations from one storage policy to another, and this might be an option to think about. Something like this:

1. Data is stored in a container with a Storage Policy (SP) that defines a time-based data migration to some other place.
2. After some time, data is migrated to tape, and only some stubs (zero-byte objects) are left on disk.
3. If a client requests such an object, the client gets an error stating that the object is temporarily not available (unfortunately there is no suitable HTTP response code for this yet).
4. At this time the object is scheduled to be restored from tape.
5. Finally the object is read from tape and stored on disk again, to be deleted from disk again after some time.

Using this approach, only smaller modifications inside Swift are required, for example to send a notification to an external consumer to migrate data back and forth, and to handle requests for empty stub files. The migration itself should be done by an external worker that works with existing solutions from tape vendors. Just an idea, but it might be worth investigating further (because more and more people seem to be interested in this, especially from the science community). Christian
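The five-step stub-object flow above can be sketched as a toy state machine - everything here (class, names, the 503 standing in for the missing "temporarily unavailable" status) is an illustrative assumption, not Swift code:

```python
class TapeTier:
    """Toy model of the disk/tape migration flow described above."""

    def __init__(self):
        self.on_disk = {}       # object name -> bytes
        self.on_tape = {}       # objects that have been migrated
        self.recall_queue = []  # names scheduled for restore

    def migrate(self, name):
        # Step 2: move the data to tape, leave a zero-byte stub on disk.
        self.on_tape[name] = self.on_disk[name]
        self.on_disk[name] = b""

    def get(self, name):
        # Steps 3-4: a GET on a stub schedules a recall and fails with a
        # "temporarily not available" error (503 used here for want of a
        # better status code, as the mail notes).
        data = self.on_disk.get(name)
        if data == b"" and name in self.on_tape:
            self.recall_queue.append(name)
            return 503, None
        return 200, data

    def restore(self, name):
        # Step 5: the external worker copies the object back from tape.
        self.on_disk[name] = self.on_tape[name]

tier = TapeTier()
tier.on_disk["report"] = b"payload"
tier.migrate("report")
code, _ = tier.get("report")      # stub hit: recall scheduled, error
tier.restore("report")
code2, data = tier.get("report")  # served from disk after restore
```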
Re: [openstack-dev] [Fuel] CentOS falls into interactive mode: Unsupported hardware
Hi Mike, I actually reported this to CentOS back in May: https://bugs.centos.org/view.php?id=7136 It's a bug/feature in Anaconda. It can be worked around quite easily by adding unsupported_hardware to the kernel params or to the kickstart file. I reported the bug because there's no commercial support for CentOS (only community support), so this error message has no real value in a non-commercial OS. Best Regards, Matthew Mosesohn

On Mon, Nov 17, 2014 at 4:30 PM, Mike Scherbakov mscherba...@mirantis.com wrote: Hi all, I was skimming through a nicely written blog post about the Fuel experience, and noticed "This hardware ... not supported by CentOS" on one of the screenshots. [...]
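For reference, the workaround Matthew describes is a single directive in the kickstart command section (or the equivalent boot parameter); this fragment is illustrative:

```
# ks.cfg fragment: suppress Anaconda's interactive
# "unsupported hardware" dialog on CentOS/RHEL
unsupported_hardware
```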
[openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps
Good day, Stackers. During the Paris Design Summit's oslo.messaging session, a good question was raised about maintaining the ZeroMQ driver upstream (see the section "dropping ZeroMQ support in oslo.messaging" at [1]). As always, the good ideas come afterwards, so I'd like to propose several improvements to the process of maintaining and developing the ZeroMQ driver upstream.

Contribution focus. As we can all see, there are plenty of patches trying to address particular problems in the ZeroMQ driver. A few of them try to add functional tests, which is definitely good, but - there's always a "but" - they are not "gate"-able. My proposal is to shift the contribution focus from oslo.messaging itself to the OpenStack/Infra project and DevStack (and subsequently devstack-gate). Why? I think the answer is pretty obvious: we have a driver that is not tested at all within DevStack and project integration. Such a refocusing would also be very useful as a source of use cases and, eventually, bugs. Here's a list of what we, as a team, should do first:
1. Ensure that DevStack can successfully:
   1. Install ZeroMQ.
   2. Configure each project to work with the zmq driver from oslo.messaging.
2. Ensure that we can successfully run a simple test plan for each project (like booting a VM, filling an object store container, spinning up a volume, etc.).

ZeroMQ driver maintainers community organization. During the design session the question was raised of who uses the zmq driver in production; I've seen folks from Canonical and a few other companies. So, here are my proposals for improving the process of maintaining this driver:
1. Following best practices for driver maintenance, we may need to set up a community sub-group. What would that give us, and the project? It's not entirely obvious yet, but I'd highlight a couple of points:
   1. continuous driver stability
   2. continuous community support (across all OpenStack projects using the same model: a driver should have a maintaining team, whether a company or a community sub-group)
2. As a sub-group we would need our own weekly meeting. A separate meeting would keep us, as a sub-group, focused on the zmq driver only (though it doesn't mean we should not participate in the regular meetings). Same question - what would it give us and the project? I'd say the one clear answer is: we would not disturb other folks who are not interested in this topic or in the zmq driver.

So, taking the words above into account, I'd like to get feedback from everyone. I'm open for discussion, and if needed I can commit myself to driving these activities in oslo.messaging. [1] https://etherpad.openstack.org/p/kilo-oslo-oslo.messaging Kind regards, Denis M.
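For step 1, the DevStack side could be as small as a local.conf fragment - the service names below follow the devstack of this era (lib/rpc_backend), so treat them as an assumption and verify against your checkout:

```
[[local|localrc]]
# swap the default RabbitMQ RPC backend for ZeroMQ
disable_service rabbit
enable_service zeromq
```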
[openstack-dev] [Zaqar][all] Getting rid of queues: Feedback needed
Greetings, I'd like to discuss in a wider space the change proposed here [0]. I'll explain briefly what the proposal is about for those not familiar with the project.

As of now, queues are a first-class resource. They need to exist in the backend in order to post messages. In v1.1 of the API queues became lazy resources, so they don't have to be created beforehand. Although I believe this is a good step forward, there's still an issue with the current implementation: queues are wasting space. Each queue needs to be created, and depending on the backend this is space being wasted, since no valuable information is stored there except for the queue's metadata, which is required only when using flavors.

At the Juno summit, we discussed the idea of dropping queues as a first-class resource and moving towards a topic-based messaging system. At that summit, the community feedback was that queue metadata is important and we shouldn't drop it. At the Kilo summit, this topic came up again and, supported by the need for a v2 of the API, we agreed that it may be a good time to do so. The metadata field will still be kept outside the queue (now called topic), and the topic will be part of the message instead. All previous operations will remain, but the API endpoints will have to change.

This is quite a big change and it'll break backwards compatibility. We could find a way to change how the service works under the hood without changing the API, but that would bring in lots of inconsistencies that we don't want. I had written a post before the Juno summit, which you can find here [1]. There are other motivations behind this change besides the one mentioned above. For example, moving to topics will emphasize the fact that this is a messaging service and that not all the guarantees that are implicit to queues are provided. The change also allows for future optimizations like message broadcasting without hacking awful things into the API - like comma-separated queues in the URL's path.

[0] https://review.openstack.org/#/c/134015/2/specs/kilo/migrate-to-topics.rst,cm
[1] http://blog.flaper87.com/post/people-dont-like-to-queue-up/

Thoughts? Feedback? Thanks in advance, Flavio -- @flaper87 Flavio Percoco
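To make the endpoint shift concrete, here is a purely hypothetical before/after of the request shape - the v2 path and field placement are illustrative guesses, not the spec under review:

```
# today (v1.1): the queue is a resource in the URL path
POST /v1.1/queues/my-queue/messages
{"messages": [{"ttl": 300, "body": {"event": "created"}}]}

# topic-based sketch: the topic moves into the message itself
POST /v2/messages
{"messages": [{"ttl": 300, "topic": "my-topic", "body": {"event": "created"}}]}
```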
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
Le 17/11/2014 14:19, Matthias Runge a écrit : There is already a horizon on pypi [1]; IMHO this will only lead to more confusion. Matthias [1] https://pypi.python.org/pypi/horizon/2012.2

Well, the current horizon on PyPI is the OpenStack Dashboard with horizon(_lib) included. If the future horizon on PyPI is openstack_dashboard alone, it would still pull horizon_lib as a dependency, so it would not break existing installs. So indeed the horizon package itself on PyPI would no longer contain horizon(_lib), but "pip install horizon" would pull everything due to the dependency horizon would have on horizon_lib. I find this the least confusing option, and the horizon package on PyPI would still be seen as the OpenStack Dashboard, as it is now. We would only add a horizon_lib package on PyPI. Therefore existing third-party requirements.txt files would not break, because they would pull horizon_lib along with horizon, and they would still import the proper module. Backwards compatibility (requirements and modules) is therefore fully preserved. -- Yves-Gwenaël Bourhis
Re: [openstack-dev] [Fuel] Default templates for Sahara feature in 6.0
Hi guys, I've created a spec for this patch: https://review.openstack.org/#/c/134937/ Please check it out.

2014-11-17 15:41 GMT+04:00 Mike Scherbakov mscherba...@mirantis.com: I'm fine with getting this patch in... though a few things should be fixed first, in my opinion: 1. I don't see a header in the blueprint saying who is responsible for what. 2. There is no fuel-spec associated. 3. Currently, it's not very clear to me from the description what this is for. [...]

-- Best Regards, Egorenko Denis
Re: [openstack-dev] Removing Nova V2 API xml support
On Mon, Nov 17, 2014 at 08:24:47AM -0500, Davanum Srinivas wrote: Sean, Joe, Dean, all, Here's the Nova change to disable the V2 XML support: https://review.openstack.org/#/c/134332/ [...] Please see if I am on the right track.

So this approach will work, but the direction that neutron took, and that keystone is now following for the same change, was basically the opposite. Instead of just overriding the default tempest value in master devstack to disable xml testing, the devstack stable branches are updated to ensure xml_api is True when running tempest from the stable branches, and then the default in tempest is switched to False. The advantage of this approach is that the default value for tempest running against any cloud will always work.

The patches which landed for neutron doing this: Devstack: https://review.openstack.org/#/c/130368/ https://review.openstack.org/#/c/130367/ Tempest: https://review.openstack.org/#/c/127667/ Neutron: https://review.openstack.org/#/c/128095/

Ping me on IRC and we can work through the process, because things need to land in a particular order for this approach to work. Basically: first the stable devstack changes need to land, enabling the testing on stable; then a +2 on the failing Nova patch signals the approach is approved; then we can land the tempest patch which switches the default, which will let the Nova change get through the gate. -Matt Treinish
Re: [openstack-dev] Re: Ceilometer memory.usage can not get info from libvirt
libvirt cannot inspect memory usage because some condition is not satisfied. But the AttributeError exception is ugly; Ceilometer should add a basic check for the return value, otherwise the unnecessary exception will bother cloud operators. I have reported a bug in Ceilometer for this issue, see https://bugs.launchpad.net/ceilometer/+bug/1393415

On Mon, Nov 17, 2014 at 5:51 PM, Rao Dingyuan raodingy...@chinacloud.com.cn wrote: As described in the document http://docs.openstack.org/developer/ceilometer/measurements.html#measurements : "Note: To enable the libvirt memory.usage support, you need libvirt version 1.1.1+ and qemu version 1.5+, and you need to prepare a suitable balloon driver in the image, particularly for Windows guests; most modern Linuxes have it built in. The memory.usage meters can't be fetched without the image balloon driver."

-- E_mail: raodingy...@chinacloud.com.cn
From: Du Jun [mailto:dj199...@gmail.com]
Sent: 2014-11-17 16:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Ceilometer memory.usage can not get info from libvirt

Hi all,

2014-11-17 16:04:01.563 5162 INFO ceilometer.agent [-] Polling pollster memory.usage in the context of meter_source
2014-11-17 16:04:01.564 5162 DEBUG ceilometer.compute.pollsters.memory [-] Checking memory usage for instance 7e53172c-f05f-4fda-9855-af6775c1f4a8 get_samples /opt/stack/ceilometer/ceilometer/compute/pollsters/memory.py:31
2014-11-17 16:04:01.573 5162 WARNING ceilometer.compute.virt.libvirt.inspector [-] Failed to inspect memory usage of instance-0002, can not get info from libvirt
2014-11-17 16:04:01.574 5162 ERROR ceilometer.compute.pollsters.memory [-] Could not get Memory Usage for 7e53172c-f05f-4fda-9855-af6775c1f4a8: 'NoneType' object has no attribute 'usage'
2014-11-17 16:04:01.574 5162 TRACE ceilometer.compute.pollsters.memory Traceback (most recent call last):
2014-11-17 16:04:01.574 5162 TRACE ceilometer.compute.pollsters.memory File /opt/stack/ceilometer/ceilometer/compute/pollsters/memory.py, line 37, in get_samples
2014-11-17 16:04:01.574 5162 TRACE ceilometer.compute.pollsters.memory 'usage': memory_info.usage}))
2014-11-17 16:04:01.574 5162 TRACE ceilometer.compute.pollsters.memory AttributeError: 'NoneType' object has no attribute 'usage'

When

-- blog: zqfan.github.com git: github.com/zqfan
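A minimal, hedged sketch of the guard the bug report asks for (the names `inspect_memory_usage`, `get_sample`, and the stats dict are illustrative, not the exact Ceilometer inspector API): return no sample when the inspector cannot produce one, instead of dereferencing None:

```python
class MemoryStats(object):
    """Tiny stand-in for the inspector's memory-usage result."""
    def __init__(self, usage):
        self.usage = usage  # MB used by the guest


def inspect_memory_usage(libvirt_stats):
    # The real inspector asks libvirt; without a balloon driver in the
    # guest image no 'usage' figure is reported, and None comes back.
    usage = libvirt_stats.get('usage')
    return MemoryStats(usage) if usage is not None else None


def get_sample(libvirt_stats):
    info = inspect_memory_usage(libvirt_stats)
    if info is None:
        # The missing check: skip the sample (and log a warning)
        # rather than raise AttributeError on None.usage.
        return None
    return {'memory.usage': info.usage}


print(get_sample({'usage': 512}))  # {'memory.usage': 512}
print(get_sample({}))              # None -- no AttributeError
```

This is the shape of fix the bug report suggests: the pollster degrades to "no sample" instead of filling operator logs with tracebacks.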
Re: [openstack-dev] [Fuel] CentOS falls into interactive mode: Unsupported hardware
Can we add it to our kickstart file?

On Mon, Nov 17, 2014 at 4:41 PM, Matthew Mosesohn mmoses...@mirantis.com wrote: Hi Mike, I actually reported this to CentOS back in May: https://bugs.centos.org/view.php?id=7136 It's a bug/feature in Anaconda. It can be worked around quite easily by adding unsupported_hardware to the kernel params or to the kickstart file. I reported the bug because there's no commercial support for CentOS (only community support), so this error message has no true value in a non-commercial OS. Best Regards, Matthew Mosesohn

On Mon, Nov 17, 2014 at 4:30 PM, Mike Scherbakov mscherba...@mirantis.com wrote: Hi all, I was skimming through a nicely written blog post about the Fuel experience [1], and noticed "This hardware ... not supported by CentOS" [2] on one of the screenshots. It looks like CentOS goes into interactive mode and complains about unsupported hardware. Can we do anything about this? I can hardly imagine clicking Ok for a 100-node deployment... It will be fixed by image-based provisioning of course, but the question is: can we do anything to fix it in the current release?

[1] http://ehaselwanter.com/en/blog/2014/10/15/deploying-openstack-with-mirantis-fuel-5-1/
[2] http://ehaselwanter.com/images/article-images/mirantis-35-blog-780x.png

-- Mike Scherbakov #mihgen
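For reference, a minimal illustration of the workaround Matthew describes. The `unsupported_hardware` option is real Anaconda syntax; the surrounding file layout and URL are illustrative:

```
# Anaconda kickstart fragment: suppress the interactive
# "Unsupported Hardware Detected" dialog during provisioning.
unsupported_hardware

# Alternatively, append the same flag to the kernel command line,
# e.g. alongside the existing ks= parameter:
#   ... ks=... unsupported_hardware
```

Either form lets an unattended install proceed on all nodes without the Ok click.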
Re: [openstack-dev] [Horizon] [Devstack]
Hello everyone, I have bumped into an article from one of the memcached creators where he discusses the use of memcached to store session data: http://dormando.livejournal.com/495593.html Maybe we need to take it into consideration; it may bring more problems than solutions for a future, scalable HA environment. Best regards,

On 24-10-2014 17:10, Gabriel Hurley wrote: SQLite doesn't introduce any additional dependencies; memcached requires installation of memcached (admittedly it's not hard on most distros, but it *is* yet another step) and in most cases the installation of another Python module to interface with it. Memcached might be a good choice for devstack, but it may or may not be the right thing to recommend for Horizon by default. - Gabriel

-----Original Message----- From: Yves-Gwenaël Bourhis [mailto:yves-gwenael.bour...@cloudwatt.com] Sent: Friday, October 24, 2014 7:06 AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Horizon] [Devstack]

On 24/10/2014 13:30, Chmouel Boudjnah wrote: On Fri, Oct 24, 2014 at 12:27 PM, Yves-Gwenaël Bourhis yves-gwenael.bour...@cloudwatt.com wrote: memcache can be distributed (so usable in HA) and has far better performance than DB sessions. Why not use memcache by default? I guess for the simple reason that if you restart your memcache you lose all the sessions?

Indeed, and for devstack that's an easy way to do a cleanup of old sessions :-) We are talking about devstack in this thread, where losing sessions after a memcache restart is not an issue and looks more like a very handy feature. For production it's another matter, and operators have the choice.
-- Yves-Gwenaël Bourhis

-- Thiago Paiva Brito Software Engineer Advanced OpenStack Brazil Team
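For concreteness, a hedged sketch of the memcached-backed session setup being weighed in this thread, as a Django settings fragment. The backend paths are Django's real cached-session API; the memcached host/port are illustrative:

```python
# Django settings fragment (e.g. Horizon's local_settings.py):
# store sessions in the cache, and point the cache at memcached.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',  # illustrative memcached endpoint
    }
}
```

Restarting memcached now drops every session: handy for devstack cleanup, a real concern for production, exactly as the thread notes.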
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
+1 for Yves.

On Mon, Nov 17, 2014 at 10:43 PM, Yves-Gwenaël Bourhis yves-gwenael.bour...@cloudwatt.com wrote: On 17/11/2014 14:19, Matthias Runge wrote: There is already a horizon on PyPI [1]. IMHO this will lead only to more confusion. Matthias [1] https://pypi.python.org/pypi/horizon/2012.2

Well, the current horizon on PyPI is the OpenStack Dashboard with horizon(_lib) included. If the future horizon on PyPI is openstack_dashboard alone, it would still pull horizon_lib as a dependency, so it would not break existing installations. So indeed the horizon package itself on PyPI would not have horizon(_lib) in it anymore, but pip install horizon would pull everything due to the dependency horizon will have on horizon_lib. I find this the least confusing option, and the horizon package on PyPI would still be seen as the OpenStack Dashboard, as it is now. We would only add a horizon_lib package on PyPI. Therefore existing third-party requirements.txt files would not break, because they would pull horizon_lib with horizon, and they would still import the proper module. Backwards compatibility (requirements and module) is therefore fully preserved. -- Yves-Gwenaël Bourhis

-- Akihiro Motoki amot...@gmail.com
Re: [openstack-dev] Zero MQ remove central broker. Architecture change.
On 11/17/2014 05:44 AM, Ilya Pekelny wrote: Hi, all! We want to discuss the opportunity of implementing the p2p messaging model in oslo.messaging for the ZeroMQ driver.

On a related note, have you looked into AMQP 1.0 at all? I have been hopeful about the development to support it because of these same reasons. The AMQP 1.0 driver is now merged. I'd really like to see some work around trying it out with the dispatch router [1]. It seems like using AMQP 1.0 plus a distributed network of dispatch routers could be a very scalable approach. We still need to actually try it out and do some scale and performance testing, though.

[1] http://qpid.apache.org/components/dispatch-router/

-- Russell Bryant
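As a hedged illustration only (the `amqp://` URL scheme is how oslo.messaging selects its AMQP 1.0 protocol driver; the host name and credentials are made up), pointing a service at a dispatch router instead of a broker is just a configuration change:

```
# Service config fragment: the amqp:// scheme selects the AMQP 1.0
# driver; the endpoint can be a dispatch router rather than a broker.
[DEFAULT]
transport_url = amqp://guest:guest@dispatch-router-1:5672/
```

Because the router holds no queues itself, a mesh of such routers is what makes the "distributed network" scaling argument above plausible, pending the scale testing Russell mentions.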
Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps
Hi Denis

On 17/11/14 07:43, Denis Makogon wrote: During the Paris Design Summit oslo.messaging session a good question was raised about maintaining the ZeroMQ driver upstream (see the section "dropping ZeroMQ support in oslo.messaging" at [1]). As we all know, good thoughts always come afterwards. I'd like to propose several improvements to the process of maintaining and developing the ZeroMQ driver upstream.

Contribution focus. As we can all see, there are quite a few patches trying to address certain problems related to the ZeroMQ driver. A few of them try to add functional tests, which is definitely good, but... there's always a 'but': they are not "gate"-able.

I'm not sure I understand your statement about them not being gate-able - the functional/unit tests currently proposed for the zmq driver run fine as part of the standard test suite execution - maybe the confusion is over what 'functional' actually means, but in my opinion until we have some level of testing of this driver, we can't effectively make changes and fix bugs.

My proposal for this topic is to change the contribution focus from oslo.messaging itself to the OpenStack/Infra project and DevStack (and subsequently devstack-gate too). I guess there would be the question "why?". I think the answer is pretty obvious: we have a driver that is not being tested at all within DevStack and project integration.

This was discussed in the oslo.messaging summit session, and re-enabling ZeroMQ support in devstack is definitely on my todo list, but I don't think that should block landing the currently proposed unit tests in oslo.messaging.

Also I'd say that such a focus re-orientation would be very useful as a source of use cases and bugs eventually. Here's a list of what we, as a team, should do first: 1. Ensure that DevStack can successfully: 1. Install ZeroMQ. 2. Configure each project to work with the zmq driver from oslo.messaging. 2.
Ensure that we can run a simple test plan successfully for each project (like booting a VM, filling an object store container, spinning up a volume, etc.).

A better objective would be to be able to run a full Tempest run as conducted with the RabbitMQ driver, IMHO.

ZeroMQ driver maintainers community organization. During the design session the question was raised of who uses the zmq driver in production. I've seen folks from Canonical and a few other companies. So, here are my proposals around improving the process of maintaining this driver: 1. With respect to best practices of driver maintenance, we might need to set up a community sub-group. What would it give to us and to the project subsequently? It's not entirely obvious, at least for now, but I'd try to highlight a couple of points: 1. continuous driver stability 2. continuous community support (across all OpenStack projects that use the same model: a driver should have a maintaining team, be it a company or a community sub-group) 2. As a sub-group we would need to have our own weekly meeting. A separate meeting would keep us, as a sub-group, pretty focused on the zmq driver only (but it doesn't mean that we should not participate in regular meetings). Same question: what would it give us and the project? I'd say that the only valid answer is: we'd not disturb other folks that are not actually interested in the given topic or in the zmq driver.

I'd prefer that we continue to discuss ZMQ in the broader oslo.messaging context; I'm keen that the OpenStack community understands that we want ZMQ to be a first tier driver like qpid and rmq, and I'm not convinced that pushing discussion out to a separate sub-group enforces that message...

So, in the end, taking into account the words above, I'd like to get feedback from all folks. I'm pretty open to discussion, and if needed I can commit myself to driving such activities in oslo.messaging.
-- James Page Ubuntu and Debian Developer james.p...@ubuntu.com jamesp...@debian.org
Re: [openstack-dev] Zero MQ remove central broker. Architecture change.
On Mon, Nov 17, 2014 at 4:26 PM, Russell Bryant rbry...@redhat.com wrote: On 11/17/2014 05:44 AM, Ilya Pekelny wrote: Hi, all! We want to discuss the opportunity of implementing the p2p messaging model in oslo.messaging for the ZeroMQ driver.

On a related note, have you looked into AMQP 1.0 at all? I have been hopeful about the development to support it because of these same reasons. The AMQP 1.0 driver is now merged. I'd really like to see some work around trying it out with the dispatch router [1]. It seems like using AMQP 1.0 plus a distributed network of dispatch routers could be a very scalable approach. We still need to actually try it out and do some scale and performance testing, though.

Russell, thanks for pointing it out. We'll definitely take a look at this. But the question of performance and integration/functional testing is still a tough one. We, as the oslo.messaging community, are trying to do our best on it.

[1] http://qpid.apache.org/components/dispatch-router/

-- Russell Bryant

Best regards, Denis M.
Re: [openstack-dev] [oslo] oslo.messaging outcome from the summit
Thanks, Josh, I'll subscribe to the issue to keep up to date.

On Nov 16, 2014, at 6:58 PM, Joshua Harlow harlo...@outlook.com wrote: I started the following issue on kombu's GitHub page (to see if there is any interest on their side in such an effort): https://github.com/celery/kombu/issues/430 It's about seeing if the kombu folks would be OK with an 'rpc' subfolder in their repository that can start to contain 'rpc'-like functionality that now exists in oslo.messaging (I don't see why they would be against this kind of idea, since it seems to make sense IMHO). Let's see what happens, -Josh

Doug Hellmann wrote: On Nov 13, 2014, at 7:02 PM, Joshua Harlow harlo...@yahoo-inc.com wrote: Don't forget my executor which isn't dependent on a larger set of changes for asyncio/trollius... https://review.openstack.org/#/c/70914/ The above will/should just 'work', although I'm unsure what the thread count should be by default (the number of green threads, which is set at around 200, shouldn't be the same number used in that executor, which uses real Python/system threads). The neat thing about that executor is that it can also replace the eventlet one, since when eventlet is monkey-patching the threading module (which it should be) it should behave just as the existing eventlet one does; which IMHO is pretty cool (and could be one way to completely remove the eventlet usage in oslo.messaging).

Good point, thanks for reminding me.

As for the kombu discussions, maybe it's time to jump on the #celery channel (where the kombu folks hang out) and start talking to them about how we can work better together to move some of our features into kombu (and also deprecate/remove some of the oslo.messaging features that are now in kombu). I believe https://launchpad.net/~asksol is the main guy there (and also the main maintainer of celery/kombu?).
It'd be nice to have these cross-community talks and at least come up with some kind of game plan; hopefully one that benefits both communities…

I would like that, but won't have time to do it myself this cycle. Maybe we can find another volunteer from the team? Doug

-Josh https://launchpad.net/~asksol

From: Doug Hellmann d...@doughellmann.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Sent: Wednesday, November 12, 2014 12:22 PM
Subject: [openstack-dev] [oslo] oslo.messaging outcome from the summit

The oslo.messaging session at the summit [1] resulted in some plans to evolve how oslo.messaging works, but probably not during this cycle. First, we talked about what to do about the various drivers like ZeroMQ and the new AMQP 1.0 driver. We decided that rather than moving those out of the main tree and packaging them separately, we would keep them all in the main repository to encourage the driver authors to help out with the core library (oslo.messaging is a critical component of OpenStack, and we've lost several of our core reviewers for the library to other priorities recently). There is a new set of contributors interested in maintaining the ZeroMQ driver, and they are going to work together to review each other's patches. We will re-evaluate keeping ZeroMQ at the end of Kilo, based on how things go this cycle.

We also talked about the fact that the new version of Kombu includes some of the features we have implemented in our own driver, like heartbeats and connection management. Kombu does not include the calling patterns (cast/call/notifications) that we have in oslo.messaging, but we may be able to remove some code from our driver and consolidate the qpid and rabbit driver code to let Kombu do more of the work for us.

Python 3 support is coming slowly.
There are a couple of patches up for review to provide a different sort of executor based on greenio and trollius. Adopting that would require some application-level changes to use co-routines, so it may not be an optimal solution even though it would get us off of eventlet. (During the Python 3 session later in the week we talked about the possibility of fixing eventlet’s monkey-patching to allow us to use the new eventlet under python 3.) We also talked about the way the oslo.messaging API uses URLs to get some settings and configuration options for others. I thought I remembered this being a conscious decision to pass connection-specific parameters in the URL, and “global” parameters via configuration settings. It sounds like that split may not have been implemented as cleanly as originally intended, though. We identified documenting URL parameters as an issue for removing the configuration object, as well as backwards-compatibility. I don’t think we agreed on any specific
Re: [openstack-dev] Solving the client libs stable support dilemma
On Nov 17, 2014, at 6:16 AM, Sean Dague s...@dague.net wrote: On 11/16/2014 11:23 AM, Doug Hellmann wrote: On Nov 16, 2014, at 9:54 AM, Jeremy Stanley fu...@yuggoth.org wrote: On 2014-11-16 09:06:02 -0500 (-0500), Doug Hellmann wrote: So we would pin the client libraries used by the servers and installed globally, but then install the more recent client libraries in a virtualenv and test using those versions?

That's what I was thinking anyway, yes. I like that.

Honestly I don't, but it sucks less than the other solutions which sprang to mind. Hopefully someone will come along with a more elegant suggestion... in the meantime I don't see any obvious reasons why it wouldn't work.

Really, it's a much more accurate test of what we want. We have, as an artifact of our test configuration, to install everything on a single box. But what we're trying to test is that a user can install the new clients and talk to an old cloud. We don't expect deployers of old clouds to install new clients — at least we shouldn't, and by pinning the requirements we can make that clear. Using the virtualenv for the new clients gives us separation between the "user" and "cloud" parts of the test configuration that we don't have now. Anyway, if we're prepared to go along with this I think it's safe for us to stop using alpha version numbers for Oslo libraries as a matter of course. We may still opt to do it in cases where we aren't sure of a new API or feature, but we won't have to do it for every release. Doug

I think this idea sounds good on the surface, though making sure you are in or out of the venv at the right time is going to be a little interesting in a working system. I actually think you might find it simpler to invert this: create one global venv for the servers, and specify the venv before launching a service. Install all the clients into system-level space; then running nova list doesn't require being inside the venv.
This should have the same results, but be less confusing for people poking at devstacks manually.

That makes sense. I'm a little worried that it's a bigger change to devstack vs. the job that's testing the clients, but I'll defer to you on what's actually easier since you're more familiar with the code. Either way, installing the servers and the clients into separate packaging spaces would allow us to pin the clients in the stable branches. Doug

-Sean

-- Sean Dague http://dague.net
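A hedged sketch of the separation under discussion, in devstack-ish shell. The paths and package name are illustrative; the real jobs would drive this from devstack/devstack-gate:

```shell
#!/bin/sh -e
# Servers and their pinned clients stay in the system site-packages;
# the latest clients under test get their own throwaway virtualenv.
VENV=/tmp/client-venv
python3 -m venv "$VENV"

# In the real job this would be followed by something like
#   "$VENV/bin/pip" install python-novaclient
# after which "$VENV/bin/nova" list exercises the new client against
# the "old" system-installed cloud.
"$VENV/bin/python" --version
```

Sean's inversion swaps the roles: the servers live in the venv and the clients stay global, so `nova list` works without activating anything.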
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
On Nov 17, 2014, at 7:46 AM, Sean Dague s...@dague.net wrote: On 11/17/2014 07:26 AM, Alan Pevec wrote: We don't support 2.6 any more in OpenStack. If we decide to pin testtools on stable/*, we could just let this be.

We still support 2.6 in the Python clients and Oslo libraries - but indeed not for Trove itself on master.

What Andreas said; also, testtools claims "testtools gives you the very latest in unit testing technology in a way that will work with Python 2.6, 2.7, 3.1 and 3.2", so it should be fixed, OpenStack or not.

Well, the Python 2.6 support was only added for OpenStack. And I think it's fine to drop that burden now that we don't need it (as long as we pin appropriately).

We do still need it for some projects, though. The master branches of the servers are off of 2.6, but the master branches of the Oslo libraries are still tested there to make backports easier (and possibly not require them at all, if a new version of a lib fixes an issue without breaking anything else). Oslo will continue testing libraries on 2.6 as long as the stable branches that need it are still supported. Doug
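A hedged illustration of what "pin appropriately" would look like in a stable branch's requirements file (the exact lower bound is illustrative; the cap below the breaking release is the point):

```
# stable branch test-requirements.txt: hold testtools below the
# release that dropped the Python 2.6 support we still rely on.
testtools>=0.9.34,<1.2.0
```

With the cap in place, master can move to the new testtools while stable jobs keep resolving to a 2.6-compatible version.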
Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps
On Mon, Nov 17, 2014 at 4:26 PM, James Page james.p...@ubuntu.com wrote: Hi Denis

On 17/11/14 07:43, Denis Makogon wrote: During the Paris Design Summit oslo.messaging session a good question was raised about maintaining the ZeroMQ driver upstream (see the section "dropping ZeroMQ support in oslo.messaging" at [1]). As we all know, good thoughts always come afterwards. I'd like to propose several improvements to the process of maintaining and developing the ZeroMQ driver upstream. Contribution focus. As we can all see, there are quite a few patches trying to address certain problems related to the ZeroMQ driver. A few of them try to add functional tests, which is definitely good, but... there's always a 'but': they are not "gate"-able.

I'm not sure I understand your statement about them not being gate-able - the functional/unit tests currently proposed for the zmq driver run fine as part of the standard test suite execution - maybe the confusion is over what 'functional' actually means, but in my opinion until we have some level of testing of this driver, we can't effectively make changes and fix bugs.

I do agree that there's confusion about what functional testing means. Another thing: what is the best solution? Unit tests are welcome, but they still remain unit tests (they use mocks, etc.). I'd try to define what 'functional testing' means to me. Functional testing in oslo.messaging means that we are using a real service for messaging (in this case - a deployed 0mq). So, the simple definition, in terms of OpenStack integration: we should be able to run the full Tempest test suite for OpenStack services that use oslo.messaging with the zmq driver enabled. Am I right or not?

My proposal for this topic is to change the contribution focus from oslo.messaging itself to the OpenStack/Infra project and DevStack (and subsequently devstack-gate too). I guess there would be the question "why?".
I think the answer is pretty obvious: we have a driver that is not being tested at all within DevStack and project integration.

This was discussed in the oslo.messaging summit session, and re-enabling ZeroMQ support in devstack is definitely on my todo list, but I don't think that should block landing the currently proposed unit tests in oslo.messaging.

For example, https://review.openstack.org/#/c/128233/ talks about adding functional and unit tests. I'm OK with the unit tests, but what about the functional tests? Which oslo.messaging gate job runs them?

Also I'd say that such a focus re-orientation would be very useful as a source of use cases and bugs eventually. Here's a list of what we, as a team, should do first: 1. Ensure that DevStack can successfully: 1. Install ZeroMQ. 2. Configure each project to work with the zmq driver from oslo.messaging. 2. Ensure that we can run a simple test plan successfully for each project (like booting a VM, filling an object store container, spinning up a volume, etc.).

A better objective would be to be able to run a full Tempest run as conducted with the RabbitMQ driver, IMHO.

I do agree with this too. But we should define a step-by-step plan for this type of testing. Since we want to see quick gate feedback, adding the full test suite would be an overhead, at least for now.

ZeroMQ driver maintainers community organization. During the design session the question was raised of who uses the zmq driver in production. I've seen folks from Canonical and a few other companies. So, here are my proposals around improving the process of maintaining this driver: 1. With respect to best practices of driver maintenance, we might need to set up a community sub-group. What would it give to us and to the project subsequently? It's not entirely obvious, at least for now, but I'd try to highlight a couple of points: 1. continuous driver stability 2.
continuous community support (across all OpenStack projects that use the same model: a driver should have a maintaining team, be it a company or a community sub-group) 2. As a sub-group we would need to have our own weekly meeting. A separate meeting would keep us, as a sub-group, pretty focused on the zmq driver only (but it doesn't mean that we should not participate in regular meetings). Same question: what would it give us and the project? I'd say that the only valid answer is: we'd not disturb other folks that are not actually interested in the given topic or in the zmq driver.

I'd prefer that we continue to discuss ZMQ in the broader oslo.messaging context; I'm keen that the OpenStack community understands that we want ZMQ to be a first tier driver like qpid and rmq, and I'm not convinced that pushing discussion out to a separate sub-group enforces that message...

The only thing that I'm worried about is that we could eventually eat all the meeting time. That's why I try to build out driver maintaining/contribution
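To make the unit-versus-functional distinction in this thread concrete, here is a hedged stdlib sketch (the `Driver`/`cast` names are illustrative, not the real zmq driver API): a unit test mocks the transport, so it passes with no ZeroMQ service running at all, which is exactly why it is gate-able yet proves nothing about a deployed 0mq; the functional/Tempest runs discussed above would exercise a real socket path instead:

```python
from unittest import mock


class Driver(object):
    """Toy stand-in for a messaging driver wrapping a socket."""
    def __init__(self, socket):
        self.socket = socket

    def cast(self, topic, msg):
        # Fire-and-forget send; no reply expected.
        self.socket.send((topic, msg))


# Unit test style: the transport is a mock, so no 0mq is needed.
sock = mock.Mock()
Driver(sock).cast('compute', 'ping')
sock.send.assert_called_once_with(('compute', 'ping'))
print('unit test passed without any running ZeroMQ service')
```

A functional run would replace `sock` with a real connected socket and a deployed receiver, which is the gap the devstack work aims to close.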
Re: [openstack-dev] Zero MQ remove central broker. Architecture change.
On Mon, Nov 17, 2014 at 5:44 AM, Ilya Pekelny ipeke...@mirantis.com wrote: We want to discuss the opportunity of implementing the p2p messaging model in oslo.messaging for the ZeroMQ driver. The current architecture uses an uncharacteristic single-broker model. In this way we are ignoring the key 0MQ ideas. Let's describe our message in quotes from the ZeroMQ documentation:

The oslo.messaging driver is not using a single broker. It is designed for a distributed broker model where each host runs a broker. I'm not sure where the confusion comes from that implies this is a single-broker model. All of the points you make around negotiation and security are new concepts introduced after the initial design and implementation of the ZeroMQ driver. It certainly makes sense to investigate what new features are available in ZeroMQ (such as CurveCP) and to see how they might be leveraged.

That said, quite a bit of trial-and-error and research went into deciding to use an opposing PUSH-PULL mechanism instead of REQ/REP. Most notably, it's much easier to make PUSH/PULL reliable than REQ/REP. From the current code docstring: ZmqBaseReactor(ConsumerBase): A consumer class implementing a centralized casting broker (PULL-PUSH).

This approach is pretty unusual for ZeroMQ. Fortunately we have a bit of raw development around the problem. These changes could introduce a performance improvement, but to prove it we need to implement all the new features, at least at WIP status. So I need to be sure that the community doesn't avoid such improvements.

Again, the design implemented expects a broker running per machine (the zmq-receiver process). Each machine might have multiple workers all pulling messages from queues. Initially, the driver was designed such that each topic was mapped to its own ip:port, but this was not friendly to having arbitrary consumers of the library and required a port mapping file to be distributed with the application.
Plus, it's valid to have multiple consumers of a topic on a given host, something that is only possible with a distributed broker. As I left the driver, long review queues prevented me from merging a pile of changes to improve performance and increase reliability. I believe the architecture is still sound, even if much of the code itself is bad. What this driver needs is major cleanup, refactoring, and better testing. Regards, Eric Windisch
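A hedged stdlib sketch of the per-host broker model Eric describes: `queue.Queue` stands in for the PUSH/PULL socket pair (with pyzmq the shape is the same using `zmq.PUSH` and `zmq.PULL` sockets), and all class and topic names here are illustrative, not the actual driver API:

```python
import queue
import threading


class HostBroker(object):
    """Toy per-host broker, one per machine (cf. zmq-receiver):
    producers push to a topic, local workers pull from it."""
    def __init__(self):
        self.topics = {}

    def register(self, topic):
        # Multiple local consumers of one topic share a queue --
        # the property the distributed-broker design enables.
        return self.topics.setdefault(topic, queue.Queue())

    def push(self, topic, msg):
        self.register(topic).put(msg)


broker = HostBroker()
results = []


def worker():
    # PULL side: block until the broker hands this worker a message.
    results.append(broker.register('compute').get())


threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
broker.push('compute', 'cast: reboot instance-0001')
broker.push('compute', 'cast: reboot instance-0002')
for t in threads:
    t.join()
print(sorted(results))
```

Two workers share one topic on one "host" and each receives exactly one message, which is the fan-out behavior the earlier topic-per-ip:port design could not offer.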
Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps
On Mon, Nov 17, 2014 at 8:43 AM, Denis Makogon dmako...@mirantis.com wrote: Good day, Stackers. During the Paris Design Summit oslo.messaging session a good question was raised about maintaining the ZeroMQ driver upstream (see the section "dropping ZeroMQ support in oslo.messaging" at [1]). As we all know, good thoughts always come afterwards. I'd like to propose several improvements to the process of maintaining and developing the ZeroMQ driver upstream.

I'm glad to see the community looking to revive this driver. What I think could be valuable, if there are enough developers, is a sub-team as is done with Nova. That doesn't mean to splinter the community, but to provide a focal point for interested developers to interact.

I agree with the idea that this should be tested via Tempest. It's easy enough to mask off the failing tests and enable more tests as either the driver itself improves, or support in consuming projects and/or oslo.messaging itself improves. I'd suggest that effort is better spent there than building new bespoke tests.

Thanks and good luck! :) Regards, Eric Windisch
Re: [openstack-dev] [Nova][Cells] Cells subgroup and meeting times
Since there have been no stated conflicts or issues with these times yet, we will go with Wednesdays, alternating between 1700 and 2200 UTC, in #openstack-meeting-3, which seems to be open at that time. This week the meeting will be at 1700. I'll see you all there! On 11/11/2014 04:04 PM, Andrew Laski wrote: We had a great discussion on cells at the summit which is captured at https://etherpad.openstack.org/p/kilo-nova-cells. One of the tasks we agreed upon there was to form a subgroup to co-ordinate this effort and report progress to the Nova meeting regularly. To that end I would like to find a meeting time, or more likely alternating times, that will work for interested parties. I am proposing Wednesday as the meeting day since it's more open than Tues/Thurs, so finding a meeting room at almost any time should be feasible. My opening bid is alternating between 1700 and 2200 UTC. That should provide options that aren't too early or too late for most people. Is this fine for everyone or should it be adjusted a bit? A meeting room will be picked once we're settled on times. And I'm not planning on a meeting this week since we haven't picked times yet and I haven't had time to put together specs yet and I would like to start with a discussion on those. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] L2 gateway as a service
What would be the best meeting to discuss these works and any others related? Maybe, collectively, a very flexible solution for all related use cases could be found. I also want to move forward with some work I developed a couple of months ago, part of methods to connect x to a neutron l2 segment, which aims at any x, be it a 10-year-old cheap home gateway wireless access point located on the other side of the planet, or a datacenter's hardware switch VLAN. Cheers, On 17 November 2014 12:29, Doug Wiegley do...@a10networks.com wrote: Sorry, it's early, I was being imprecise and using trunk to mean methods to connect x to a neutron l2 segment. Doug On Nov 17, 2014, at 10:35 AM, Salvatore Orlando sorla...@nicira.com wrote: I think this thread is about the L2 gateway service... how's that related to the notion of trunk port? I know that the spec under review adds a component which is tantamount to an L2 gateway, but while the functionality is similar, the use case, and therefore the API exposed, are rather different. Am I missing something here? Salvatore On 17 November 2014 10:40, Doug Wiegley do...@a10networks.com wrote: On Nov 17, 2014, at 9:13 AM, Mathieu Rohon mathieu.ro...@gmail.com wrote: Hi On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote: Last Friday I recall we had two discussions around this topic. One in the morning, which I think led Maruti to push [1]. The way I understood [1] was that it is an attempt at unifying [2] and [3], by choosing the API approach of one and the architectural approach of the other. [1] https://review.openstack.org/#/c/134179/ [2] https://review.openstack.org/#/c/100278/ [3] https://review.openstack.org/#/c/93613/ Then there was another discussion in the afternoon, but I am not 100% sure of the outcome. Me neither; that's why I'd like Ian, who led this discussion, to sum up the outcome from his point of view. 
All this churn makes me believe that we probably just need to stop pretending we can achieve any sort of consensus on the approach, let the different alternatives develop independently (assuming they can all develop independently), and then let natural evolution take its course :) I tend to agree, but I think that one of the reasons why we are looking for a consensus is that API evolutions proposed through neutron-specs are rejected by core devs, because they rely on external components (SDN controller, proprietary hardware...) or they are not a high priority for neutron core devs. By finding a consensus, we show that several players are interested in such an API, and it helps to convince core devs that this use case, and its API, is missing in neutron. Now, if there is room to easily propose new APIs in Neutron, it makes sense to let new APIs appear and evolve, and then let natural evolution take its course, as you said. To me, this is in the scope of the advanced services project. I think we need to be careful of the natural tendency to view the new project as a place to put everything that is moving too slowly in neutron. Certainly advanced services is one of the most obvious use cases of this functionality, but that doesn't mean that the notion of an SDN trunk port should live anywhere but neutron, IMO. Thanks, doug Ultimately the biggest debate is on what the API model needs to be for these abstractions. We can judge which one is the best API of all, but sometimes this ends up being a religious fight. A good API for me might not be a good API for you, even though I strongly believe that a good API is one that:
- is hard to use incorrectly
- is clear to understand
- does one thing, and one thing well
So far I have been unable to be convinced why we'd need to cram more than one abstraction into one single API, as doing so does violate the above-mentioned principles. Ultimately I like the L2 GW API proposed by [1] and [2] because it's in line with those principles. 
I'd rather start from there and iterate. My 2c, Armando On 14 November 2014 08:47, Salvatore Orlando sorla...@nicira.com wrote: Thanks guys. I think you've answered my initial question. Probably not in the way I was hoping it would be answered, but it's ok. So now we have potentially 4 different blueprints describing more or less overlapping use cases that we need to reconcile into one? If the above is correct, then I suggest we go back to the use cases and make an effort to abstract a bit from thinking about how those use cases should be implemented. Salvatore On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote: Hello all, Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of its use cases is exactly the L2 gateway. These proposals could probably be inserted into a more generic work for moving existing datacenter L2 resources to Neutron. Cheers,
[openstack-dev] oslo.db 1.1.0 released
Hello All! Oslo team is pleased to announce the new release of Oslo database handling library - oslo.db 1.1.0

List of changes:

$ git log --oneline --no-merges 1.0.2..master
1b0c2b1 Imported Translations from Transifex
9aa02f4 Updated from global requirements
766ff5e Activate pep8 check that _ is imported
f99e1b5 Assert exceptions based on API, not string messages
490f644 Updated from global requirements
8bb12c0 Updated from global requirements
4e19870 Reorganize DbTestCase to use provisioning completely
2a6dbcd Set utf8 encoding for mysql and postgresql
1b41056 ModelsMigrationsSync: Add check for foreign keys
8fb696e Updated from global requirements
ba4a881 Remove extraneous vim editor configuration comments
33011a5 Remove utils.drop_unique_constraint()
64f6062 Improve error reporting for backend import failures
01a54cc Ensure create_engine() retries the initial connection test
26ec2fc Imported Translations from Transifex
9129545 Use fixture from oslo.config instead of oslo-incubator
2285310 Move begin ping listener to a connect listener
7f9f4f1 Create a nested helper function that will work on py3.x
b42d8f1 Imported Translations from Transifex
4fa3350 Start adding a environment for py34/py33
b09ee9a Explicitly depend on six in requirements file
7a3e091 Unwrap DialectFunctionDispatcher from itself.
0928d73 Updated from global requirements
696f3c1 Use six.wraps instead of functools.wraps
8fac4c7 Update help string to use database
fc8eb62 Use __qualname__ if we can
6a664b9 Add description for test_models_sync function
8bc1fb7 Use the six provided iterator mix-in
436dfdc ModelsMigrationsSync:add correct server_default check for Enum
2075074 Add history/changelog to docs
c9e5fdf Add run_cross_tests.sh script

Thanks Andreas Jaeger, Ann Kamyshnikova, Christian Berendt, Davanum Srinivas, Doug Hellmann, Ihar Hrachyshka, James Carey, Joshua Harlow, Mike Bayer, Oleksii Chuprykov, Roman Podoliaka for contributing to this release. 
Please report issues to the bug tracker: https://bugs.launchpad.net/oslo.db ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] oslo.db 1.1.0 released
go team oslo!!! On Nov 17, 2014, at 10:36 AM, Victor Sergeyev vserge...@mirantis.com wrote: Hello All! Oslo team is pleased to announce the new release of Oslo database handling library - oslo.db 1.1.0 [...] Please report issues to the bug tracker: https://bugs.launchpad.net/oslo.db ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps
On Mon, Nov 17, 2014 at 5:12 PM, Eric Windisch e...@windisch.us wrote: On Mon, Nov 17, 2014 at 8:43 AM, Denis Makogon dmako...@mirantis.com wrote: Good day, Stackers. During the Paris Design Summit, a good question was raised in the oslo.messaging session about maintaining the ZeroMQ driver in upstream (see section “dropping ZeroMQ support in oslo.messaging” at [1]). As we all know, good thoughts always come afterwards. I’d like to propose several improvements to the process of maintaining and developing the ZeroMQ driver in upstream. I'm glad to see the community looking to revive this driver. What I think could be valuable if there are enough developers is a sub-team as is done with Nova. That doesn't mean to splinter the community, but to provide a focal point for interested developers to interact. Yes, that's what I've been trying to say: sub-grouping doesn't mean a completely new community. The reason I've proposed it is the need for the driver to be maintained by those who are interested in it. As already said, there aren't many of us who use (or are considering) the zmq driver. So, eventually, we're in the same boat - let's work together on making it better than it is now. I agree with the idea that this should be tested via Tempest. It's easy enough to mask off the failing tests and enable more tests as either the driver itself improves, or support in consuming projects and/or oslo.messaging itself improves. I'd suggest that effort is better spent there than building new bespoke tests. Thanks and good luck! :) Regards, Eric Windisch Best regards, Denis M. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [horizon] Changing Host marked for maintenance BP target milestone
Hi guys, I've started working on this BP: https://blueprints.launchpad.net/horizon/+spec/mark-host-down-for-maintenance One of the reviews from this BP has already been merged (in Juno). Another one has to be finalized. So I have a question: is it possible to change the milestone target for this BP feature to the Kilo release? - Bart ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.
Hi Steve, For SR-IOV testing we have a CI job running on a multi-node setup with a Cisco SR-IOV NIC doing API testing on Neutron patches. It is not reporting results up to Neutron yet and currently is used for internal testing only. We are working on the following: 1. Getting the full Tempest tests to pass on this testbed. 2. Writing new Tempest tests specifically for SR-IOV. 3. Running CI tests on Nova patches (in addition to Neutron patches). I would be the point of contact for this CI testbed. Thanks, Sandhya On 11/16/14 8:31 AM, Irena Berezovsky ire...@mellanox.com wrote: Hi Steve, Regarding SR-IOV testing, at Mellanox we have a CI job running on a bare metal node with a Mellanox SR-IOV NIC. This job is reporting on neutron patches. Currently API tests are executed. The contact person for the SR-IOV CI job is listed at driverlog: https://github.com/stackforge/driverlog/blob/master/etc/default_data.json#L1439 The following items are in progress: - SR-IOV functional testing - Reporting CI job on nova patches - Multi-node setup It is worth mentioning that we want to start collaboration on the SR-IOV testing effort as part of the pci pass-through subteam activity. Please join the weekly meeting if you want to collaborate or have some inputs: https://wiki.openstack.org/wiki/Meetings/Passthrough BR, Irena -Original Message- From: Steve Gordon [mailto:sgor...@redhat.com] Sent: Wednesday, November 12, 2014 9:11 PM To: itai mendelsohn; Adrian Hoban; Russell Bryant; Ian Wells (iawells); Irena Berezovsky; ba...@cisco.com Cc: Nikola Đipanov; Russell Bryant; OpenStack Development Mailing List (not for usage questions) Subject: [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra. Hi all, We had some discussions last week - particularly in the Nova NFV design session [1] - on the subject of ensuring that telecommunications and NFV-related functionality has adequate continuous integration testing. 
In particular the focus here is on functionality that can't easily be tested on the public clouds that back the gate, including: - NUMA (vCPU pinning, vCPU layout, vRAM layout, huge pages, I/O device locality) - SR-IOV with Intel, Cisco, and Mellanox devices (possibly others) In each case we need to confirm where we are at, and the plan going forward, with regards to having: 1) Hardware to run the CI on. 2) Tests that actively exercise the functionality (if not already in existence). 3) A point person for each setup to maintain it and report into the third-party meeting [2]. 4) Getting the jobs operational and reporting [3][4][5][6]. In the Nova session we discussed a goal of having the hardware by K-1 (Dec 18) and having it reporting at least periodically by K-2 (Feb 5). I'm not sure if similar discussions occurred on the Neutron side of the design summit. SR-IOV == Adrian and Irena mentioned they were already in the process of getting up to speed with third party CI for their respective SR-IOV configurations. Robert, are you attempting similar with regards to Cisco devices? What is the status of each of these efforts versus the four items I listed above, and what do you need assistance with? NUMA == We still need to identify some hardware to run third party CI for the NUMA-related work, and no doubt other things that will come up. It's expected that this will be an interim solution until OPNFV resources can be used (note cdub jokingly replied 1-2 years when asked for a rough estimate - I mention this because based on a later discussion some people took this as a serious estimate). Ian, did you have any luck kicking this off? Russell and I are also endeavouring to see what we can do on our side w.r.t. this short term approach - in particular if you find hardware we still need to find an owner to actually set it up and manage it as discussed. 
In theory to get started we need a physical multi-socket box and a virtual machine somewhere on the same network to handle job control etc. I believe the tests themselves can be run in VMs (just not those exposed by existing public clouds) assuming a recent Libvirt and an appropriately crafted Libvirt XML that ensures the VM gets a multi-socket topology etc. (we can assist with this). Thanks, Steve [1] https://etherpad.openstack.org/p/kilo-nova-nfv [2] https://wiki.openstack.org/wiki/Meetings/ThirdParty [3] http://ci.openstack.org/third_party.html [4] http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/ [5] http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/ [6] http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
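The "appropriately crafted Libvirt XML" Steve mentions can be sketched as follows: a minimal illustration of the domain-XML cpu element that gives a guest a multi-socket topology. Element and attribute names follow libvirt's domain XML format; the socket/core counts are arbitrary example values.

```python
import xml.etree.ElementTree as ET

# Illustrative only: the kind of <cpu> element a test VM's libvirt
# domain XML needs so the guest sees a multi-socket topology.  The
# element and attribute names follow libvirt's domain XML format; the
# socket/core counts here are arbitrary example values.
domain_cpu_xml = """
<cpu mode='host-passthrough'>
  <topology sockets='2' cores='4' threads='1'/>
</cpu>
"""

cpu = ET.fromstring(domain_cpu_xml)
topo = cpu.find('topology')
vcpus = (int(topo.get('sockets'))
         * int(topo.get('cores'))
         * int(topo.get('threads')))
print(vcpus)  # total vCPUs implied by the topology
```

The guest then sees two sockets of four cores each, which is what the NUMA scheduling tests need even when the host is itself a VM.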
[openstack-dev] [nova] RT/Scheduler summit summary and Kilo development plan
Good morning Stackers, At the summit in Paris, we put together a plan for work on the Nova resource tracker and scheduler in the Kilo timeframe. A large number of contributors across many companies are all working on this particular part of the Nova code base, so it's important that we keep coordinated and updated on the overall efforts. I'll work together with Don Dugger this cycle to make sure we make steady, measured progress. If you are involved in this effort, please do be sure to attend the weekly scheduler IRC meetings [1] (Tuesdays @ 1500UTC on #openstack-meeting). == Decisions from Summit == The following decisions were made at the summit session [2]: 1) The patch series for virt CPU pinning [3] and huge page support [4] shall not be approved until nova/virt/hardware.py is modified to use nova.objects as its serialization/domain object model. Jay is responsible for the conversion patches, and this patch series should be fully proposed by end of this week. 2) We agreed on the concepts introduced by the resource-objects blueprint [5], with a caveat that child object versioning be discussed in greater depth with Jay, Paul, and Dan Smith. 3) We agreed on all concepts and implementation from the 2 isolate-scheduler-db blueprints: aggregates [6] and instance groups [7] 4) We agreed on implementation and need for separating compute node object from the service object [8] 5) We agreed on concept and implementation for converting the request spec from a dict to a versioned object [9] as well as converting select_destinations() to use said object [10] 6) We agreed on the need for returning a proper object from the virt driver's get_available_resource() method [11] but AFAICR, we did not say that this object needed to use nova/objects because this is an interface internal to the virt layer and resource tracker, and the ComputeNode nova.object will handle the setting of resource-related fields properly. 
7) We agreed the unit tests for the resource tracker were, well, crappy, and are a real source of pain in making changes to the resource tracker itself. So, we resolved to fix them up in early Kilo-1. 8) We are not interested in adding any additional functionality to the scheduler outside already-agreed NUMA blueprint functionality in Kilo. The goal is to get the scheduler fully independent of the Nova database, and communicating with nova-conductor and nova-compute via fully versioned interfaces by the end of Kilo, so that a split of the scheduler can occur at the start of the L release cycle. == Action Items == 1) Jay to propose patches that objectify the domain objects in nova/virt/hardware.py by EOB November 21 2) Paul Murray, Jay, and Alexis Lee to work on refactoring of the unit tests around the resource tracker in early Kilo-1 3) Dan Smith, Paul Murray, and Jay to discuss the issues with child object versioning 4) Ed Leafe to work on separating the compute node from the service object in Kilo-1 5) Sylvain Bauza to work on the request spec and select_destinations() to use request spec blueprints to be completed for Kilo-2 6) Paul Murray, Sylvain Bauza to work on the isolate-scheduler-db aggregate and instance groups blueprints to be completed by Kilo-3 7) Jay to complete the resource-objects blueprint work by Kilo-2 8) Dan Berrange, Sahid, and Nikola Dipanov to work on completing the CPU pinning, huge page support, and get_available_resources() blueprints in Kilo-1 == Open Items == 1) We need to figure out who is working on the objectification of the PCI tracker stuff (Yunjong maybe or Robert Li?) 2) The child object version thing needs to be thoroughly vetted. Basically, the nova.objects.compute_node.ComputeNode object will have a series of sub objects for resources (NUMA, PCI, other stuff) and Paul Murray has some questions on how to handle the child object versioning properly. 
3) Need to coordinate with Steve Gordon, Adrian Hoban, and Ian Wells on NUMA hardware in an external testing lab that the NFV subteam is working on getting up and running [12]. We need functional tests (Tempest+Nova) written for all NUMA-related functionality in the RT and scheduler by end of Kilo-3, but have yet to divvy up the work to make this a reality. == Conclusion == Please everyone read the above thoroughly and respond if I have missed anything or left anyone out of the conversation. Really appreciate everyone coming together to get this work done over the next 4-5 months. Best, -jay [1] https://wiki.openstack.org/wiki/Meetings#Gantt_.28Scheduler.29_team_meeting [2] https://etherpad.openstack.org/p/kilo-nova-scheduler-rt [3] https://review.openstack.org/#/c/129606/ [4] https://review.openstack.org/#/c/129608/ [5] https://review.openstack.org/#/c/127609/ [6] https://review.openstack.org/#/c/89893/ [7] https://review.openstack.org/#/c/131553/ [8] https://review.openstack.org/#/c/126895/ [9]
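The child object versioning question flagged in the open items can be sketched abstractly. These are illustrative stand-in classes, not nova's actual object model:

```python
# Illustrative sketch of the child-object versioning question (these
# are not nova's real classes): a parent versioned object embeds a
# child versioned object, so a schema change in the child forces a
# decision about whether, and how, the parent version must track it.

class NUMATopologyObject:
    VERSION = '1.1'              # bumped when the child's fields change
    fields = ('cells',)

class ComputeNodeObject:
    VERSION = '1.0'
    fields = ('host', 'numa_topology')
    # A serializer backlevelling ComputeNodeObject for an older peer
    # needs to know which child version each parent version carried;
    # maintaining this mapping is the open issue.
    child_versions = {'numa_topology': NUMATopologyObject.VERSION}

print(ComputeNodeObject.child_versions['numa_topology'])
```

The design question is whether every child bump forces a parent bump (simple but noisy) or whether a per-parent-version mapping like `child_versions` is maintained instead.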
Re: [openstack-dev] [Nova] v2 or v3 for new api
Thank you very much Christopher On 11/17/14 12:15, Christopher Yeoh wrote: Yes, sorry documentation has been on our todo list for too long. Could I get you to submit a bug report about the lack of developer documentation for api plugins? It might hurry us up :-) I reported it as a bug and subscribed you to it. https://bugs.launchpad.net/nova/+bug/1393455 In the meantime, off the top of my head, you'll need to create or modify the following files in a typical plugin: setup.cfg - add an entry in at least the nova.api.v3.extensions section etc/nova/policy.json - an entry for the permissions for your plugin, perhaps one per api method for maximum flexibility. Also you will need a discoverable entry (lots of examples in this file) nova/tests/unit/fake_policy.json (similar to policy.json) I wish I had asked about this before; I had already found these files, but I confess it took quite a bit of time to guess I had to modify them (I actually didn't modify fake_policy yet, but my tests are still not completed). What about nova/nova.egg-info/entry_points.txt I mentioned earlier? nova/api/openstack/compute/plugins/v3/your_plugin.py - please make the alias name something like os-scheduler-hints rather than OS-SCH-HNTS. No skimping on vowels. Probably the easiest way at this stage without more doco is to look for a plugin in that directory that does the sort of thing you want to do. Following the path of other plugins, I created a module nova/api/openstack/compute/plugins/v3/node_uuid.py, where the class is NodeUuid(extensions.V3APIExtensionBase), the alias is os-node-uuid and the actual json parameter is node_uuid. I hope this is correct... nova/tests/unit/nova/api/openstack/compute/contrib/test_your_plugin.py - we have been combining the v2 and v2.1 (v3) unittests to share as much as possible, so please do the same here for new tests, as the v3 directory will eventually be removed. 
There are quite a few examples now in that directory of sharing unittests between v2.1 and v2, but with a new extension the customisation between the two should be pretty minimal (just a bit of inheritance to call the right controller) Very good to know. I put my test in nova/tests/unit/api/openstack/plugins/v3, but I was getting confused by the fact that only a few tests were in this folder while the tests in nova/tests/unit/api/openstack/compute/contrib/ covered both v2 and v2.1 cases. So I should move my test into the nova/tests/unit/api/openstack/compute/contrib/ folder, right? nova/tests/unit/integrated/v3/test_your_plugin.py nova/tests/unit/integrated/test_api_samples.py Sorry, the api samples tests are not unified yet, so you'll need to create two. All of the v2 api sample tests are in one directory, whilst the v2.1 ones are separated into different files by plugin. There's some rather old documentation on how to generate the api samples themselves (hint: directories aren't made automatically) here: https://blueprints.launchpad.net/nova/+spec/nova-api-samples Personally I wouldn't bother with any xml support if you do decide to support v2, as it's deprecated anyway. After reading your answer I understood I have to work more on this part :) Hope this helps. Feel free to add me as a reviewer for the api parts of your changesets. It helps a lot! I will add you for sure as soon as I upload my code. For now the specification still has to be approved, so I think I have to wait before uploading it, is that correct? This is the blueprint link anyway: https://blueprints.launchpad.net/nova/+spec/use-uuid-v1 Regards, Chris -- Pasquale Porreca DEK Technologies Via dei Castelli Romani, 22 00040 Pomezia (Roma) Mobile +39 3394823805 Skype paskporr ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
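Pulling Chris's file list together, a minimal sketch of the setup.cfg side of plugin registration. The section name is the one he lists above; the node_uuid plugin path is a hypothetical example, not a real nova extension, and it is parsed here with configparser only to show the entry-point shape:

```python
import configparser

# Illustrative only: the setup.cfg entry-point registration a new v2.1
# (v3) API plugin needs.  The nova.api.v3.extensions section name is
# the one mentioned above; the "node_uuid" plugin path is a
# hypothetical example, not a real nova extension.
setup_cfg = """
[entry_points]
nova.api.v3.extensions =
    node_uuid = nova.api.openstack.compute.plugins.v3.node_uuid:NodeUuid
"""

cfg = configparser.ConfigParser()
cfg.read_string(setup_cfg)
# The registered entry point maps a short name to module:Class
entry = cfg["entry_points"]["nova.api.v3.extensions"].strip()
print(entry)
```

The matching etc/nova/policy.json permission and discoverable entries would follow the pattern of the existing entries in that file, keyed on the plugin's os-node-uuid alias.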
Re: [openstack-dev] [nova] RT/Scheduler summit summary and Kilo development plan
On Mon, Nov 17, 2014 at 10:58:52AM -0500, Jay Pipes wrote: [...] 6) We agreed on the need for returning a proper object from the virt driver's get_available_resource() method [11] but AFAICR, we did not say that this object needed to use nova/objects because this is an interface internal to the virt layer and resource tracker, and the ComputeNode nova.object will handle the setting of resource-related fields properly. 
IIRC the consensus was that people didn't see the point in get_available_resource() using a different object model compared to the nova.objects model for the stuff used by the RT / scheduler. To that end I wrote up a spec that describes an idea for just using a single set of nova objects end-to-end. https://review.openstack.org/#/c/133728/ Presumably this would have to dovetail with your resource object models spec. https://review.openstack.org/#/c/127609/ Perhaps we should consider your spec as the place where we define what all the objects look like, and have my blueprint just focus on the actual conversion of the get_available_resource() method impls in the virt drivers? Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ironic] A mascot for Ironic
On Sun, Nov 16, 2014 at 01:14:13PM +, Lucas Alvares Gomes wrote: Hi Ironickers, I was thinking this weekend: all the cool projects do have a mascot, so I thought that we could have one for Ironic too. The idea about what the mascot would be was easy, because the RAX guys put “bear metal” in their presentation [1] and that totally rocks! So I drew a bear. It also needed an instrument; at first I thought about a guitar, but drums are actually my favorite instrument, so I drew a pair of drumsticks instead. The drawing thing wasn't that hard; the problem was to digitize it. So I scanned the thing and went to YouTube to watch some tutorials about GIMP and Inkscape to learn how to vectorize it. Magic, it worked! Attached to the email there's the original drawing, the vectorized version without colors, and the final version of it (with colors). Of course, I know some people have better skills than I do, so I also attached the Inkscape file of the final version in case people want to tweak it :) So, what do you guys think about making this little drummer bear the mascot of the Ironic project? Ahh, he also needs a name. So please send some suggestions and we can vote on the best name for him. [1] http://www.youtube.com/watch?v=2Oi2T2pSGDU#t=90 Lucas +1000, this is awesome. A cool variation would be to put a drum set behind the bear, made out of servers. :) // jim ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Process for program without lead
In the last two elections there was a program that was in the last hours of the nomination period before someone stepped up to lead. Currently there is no process for how to address leadership for a program should the nomination period expire without someone stepping forward. I would like to discuss this with the goal of having a process should this situation arise. By way of kicking things off, I would like to propose the following process: Should the nomination period expire and no PTL candidate has stepped forward for the program in question, the program will be identified to the TC by the election officials. The TC can appoint a leadership candidate by mutual agreement of the TC and the candidate in question. The appointed candidate has all the same responsibilities and obligations as a self-nominated, elected PTL. I welcome ideas and discussion on the above situation and proposed solution. Thank you, Anita. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
On 17/11/2014 14:43, Yves-Gwenaël Bourhis wrote: Well the current horizon on Pypi is The OpenStack Dashboard + horizon(_lib) included If the future horizon on pypi is openstack_dashboard alone, it would still pull horizon_lib as a dependency, so it would not brake the existing. So indeed the horizon package itself in Pypi would not have horizon(_lib) in it anymore, but the pip install horizon would pull everything due to the dependency horizon will have on horizon_lib. I find this the least confusing option, and the horizon package on Pypi would still be seen as The OpenStack Dashboard like it is now. We would only add a horizon_lib package on Pypi. Therefore existing third-party requirements.txt would not brake because they would pull horizon_lib with horizon, and they would still import the proper module. Every backwards compatibility (requirements and module) is therefore preserved. s/brake/break/g Sorry for the typo. -- Yves-Gwenaël Bourhis ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] oslo.messaging outcome from the summit
Flavio Percoco wrote: Still, I'd like us to learn from previous experiences and have a better plan for this driver (and future cases like this one). Hi, all! As one of the newly joined ZeroMQ maintainers, I have a growing plan for ZeroMQ refactoring and development. At the most abstract level, our plan is to remove the single broker and implement a peer-to-peer model in the messaging driver. There is now a blueprint with this goal: https://blueprints.launchpad.net/oslo.messaging/+spec/reduce-central-broker. I maintain a patch and a spec which I inherited from Aleksey Kornienko. For now this blueprint is the first step in the planning process. I believe we can split this big work into a set of specs and, if needed, several related blueprints. With these specs and BPs our plan should become obvious. I wrote a mail to the dev mailing list with a short overview of the coming work. Please feel free to discuss it all with me and correct me along this big road. On Mon, Nov 17, 2014 at 4:45 PM, Doug Hellmann d...@doughellmann.com wrote: Thanks, Josh, I’ll subscribe to the issue to keep up to date. On Nov 16, 2014, at 6:58 PM, Joshua Harlow harlo...@outlook.com wrote: I started the following issue on kombu's github page (to see if there is any interest on their side in such an effort): https://github.com/celery/kombu/issues/430 It's about seeing if the kombu folks would be OK with an 'rpc' subfolder in their repository that can start to contain the 'rpc'-like functionality that now exists in oslo.messaging (I don't see why they would be against this kind of idea, since it seems to make sense IMHO). Let's see what happens, -Josh Doug Hellmann wrote: On Nov 13, 2014, at 7:02 PM, Joshua Harlow harlo...@yahoo-inc.com wrote: Don't forget my executor which isn't dependent on a larger set of changes for asyncio/trollius...
https://review.openstack.org/#/c/70914/ The above will/should just 'work', although I'm unsure what the thread count should be by default (the number of green threads, which is set at around 200, shouldn't be the same number used in that executor, which uses real python/system threads). The neat thing about that executor is that it can also replace the eventlet one, since when eventlet is monkey patching the threading module (which it should be) it should behave just like the existing eventlet one; which IMHO is pretty cool (and could be one way to completely remove the eventlet usage in oslo.messaging). Good point, thanks for reminding me. As for the kombu discussions, maybe it's time to jump on the #celery channel (where the kombu folks hang out) and start talking to them about how we can work better together to move some of our features into kombu (and also deprecate/remove some of the oslo.messaging features that are now in kombu). I believe https://launchpad.net/~asksol is the main guy there (and also the main maintainer of celery/kombu?). It'd be nice to have these cross-community talks and at least come up with some kind of game plan; hopefully one that benefits both communities… I would like that, but won’t have time to do it myself this cycle. Maybe we can find another volunteer from the team? Doug -Josh https://launchpad.net/~asksol *From:* Doug Hellmann d...@doughellmann.com *To:* OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org *Sent:* Wednesday, November 12, 2014 12:22 PM *Subject:* [openstack-dev] [oslo] oslo.messaging outcome from the summit The oslo.messaging session at the summit [1] resulted in some plans to evolve how oslo.messaging works, but probably not during this cycle. First, we talked about what to do about the various drivers like ZeroMQ and the new AMQP 1.0 driver.
We decided that rather than moving those out of the main tree and packaging them separately, we would keep them all in the main repository to encourage the driver authors to help out with the core library (oslo.messaging is a critical component of OpenStack, and we’ve lost several of our core reviewers for the library to other priorities recently). There is a new set of contributors interested in maintaining the ZeroMQ driver, and they are going to work together to review each other’s patches. We will re-evaluate keeping ZeroMQ at the end of Kilo, based on how things go this cycle. We also talked about the fact that the new version of Kombu includes some of the features we have implemented in our own driver, like heartbeats and connection management. Kombu does not include the calling patterns (cast/call/notifications) that we have in oslo.messaging, but we may be able to remove some code from our driver and consolidate the qpid and rabbit
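Joshua's point above about an executor built on real OS threads can be illustrated with a minimal, hypothetical sketch (plain stdlib only; this is not the actual oslo.messaging executor API, and `dispatch` here is a stand-in name): work is submitted to a small pool of system threads, noting that the ~200 default used for green threads would be far too many real threads.

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(message):
    # Stand-in for an RPC endpoint method; a real executor would call
    # into the messaging library's dispatcher instead.
    return message.upper()

# A modest pool size: the ~200 default used for green threads would be
# far too many real OS threads.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(dispatch, m) for m in ("ping", "cast", "call")]
    replies = [f.result() for f in futures]

print(replies)  # ['PING', 'CAST', 'CALL']
```

If eventlet monkey-patches the threading module underneath, the same code transparently runs on green threads, which is the substitution property described above.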
Re: [openstack-dev] [nova] RT/Scheduler summit summary and Kilo development plan
On 17/11/2014 16:58, Jay Pipes wrote: Good morning Stackers, At the summit in Paris, we put together a plan for work on the Nova resource tracker and scheduler in the Kilo timeframe. A large number of contributors across many companies are all working on this particular part of the Nova code base, so it's important that we keep coordinated and updated on the overall efforts. I'll work together with Don Dugger this cycle to make sure we make steady, measured progress. If you are involved in this effort, please do be sure to attend the weekly scheduler IRC meetings [1] (Tuesdays @ 1500UTC on #openstack-meeting). == Decisions from Summit == The following decisions were made at the summit session [2]: 1) The patch series for virt CPU pinning [3] and huge page support [4] shall not be approved until nova/virt/hardware.py is modified to use nova.objects as its serialization/domain object model. Jay is responsible for the conversion patches, and this patch series should be fully proposed by end of this week. 2) We agreed on the concepts introduced by the resource-objects blueprint [5], with a caveat that child object versioning be discussed in greater depth with Jay, Paul, and Dan Smith. 3) We agreed on all concepts and implementation from the 2 isolate-scheduler-db blueprints: aggregates [6] and instance groups [7] Well, this is no longer needed to implement [7] as a previous merge fixed the problem by moving the instance group setup to the conductor layer. I was on PTO while this spec was created so I had no time to say it was not necessary, my bad.
4) We agreed on implementation and need for separating compute node object from the service object [8] 5) We agreed on concept and implementation for converting the request spec from a dict to a versioned object [9] as well as converting select_destinations() to use said object [10] 6) We agreed on the need for returning a proper object from the virt driver's get_available_resource() method [11] but AFAICR, we did not say that this object needed to use nova/objects because this is an interface internal to the virt layer and resource tracker, and the ComputeNode nova.object will handle the setting of resource-related fields properly. 7) We agreed the unit tests for the resource tracker were, well, crappy, and are a real source of pain in making changes to the resource tracker itself. So, we resolved to fix them up in early Kilo-1 8) We are not interested in adding any additional functionality to the scheduler outside already-agreed NUMA blueprint functionality in Kilo. The goal is to get the scheduler fully independent of the Nova database, and communicating with nova-conductor and nova-compute via fully versioned interfaces by the end of Kilo, so that a split of the scheduler can occur at the start of the L release cycle.
== Action Items == 1) Jay to propose patches that objectify the domain objects in nova/virt/hardware.py by EOB November 21 2) Paul Murray, Jay, and Alexis Lee to work on refactoring of the unit tests around the resource tracker in early Kilo-1 3) Dan Smith, Paul Murray, and Jay to discuss the issues with child object versioning 4) Ed Leafe to work on separating the compute node from the service object in Kilo-1 That's actually handled by myself; you can find both the spec and the implementation in the patch series, waiting for reviews [13] 5) Sylvain Bauza to work on the request spec and select_destinations() to use request spec blueprints to be completed for Kilo-2 6) Paul Murray, Sylvain Bauza to work on the isolate-scheduler-db aggregate and instance groups blueprints to be completed by Kilo-3 As said above, there is only one spec to validate, i.e. [6] 7) Jay to complete the resource-objects blueprint work by Kilo-2 8) Dan Berrange, Sahid, and Nikola Dipanov to work on completing the CPU pinning, huge page support, and get_available_resources() blueprints in Kilo-1 == Open Items == 1) We need to figure out who is working on the objectification of the PCI tracker stuff (Yunjong maybe or Robert Li?) 2) The child object version thing needs to be thoroughly vetted. Basically, the nova.objects.compute_node.ComputeNode object will have a series of sub objects for resources (NUMA, PCI, other stuff) and Paul Murray has some questions on how to handle the child object versioning properly. 3) Need to coordinate with Steve Gordon, Adrian Hoban, and Ian Wells on NUMA hardware in an external testing lab that the NFV subteam is working on getting up and running [12]. We need functional tests (Tempest+Nova) written for all NUMA-related functionality in the RT and scheduler by end of Kilo-3, but have yet to divvy up the work to make this a reality. == Conclusion == Please everyone read the above thoroughly and respond if I have missed anything or left anyone out of the conversation.
Really appreciate everyone coming together to get this work done over
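The child-object versioning question in the open items can be sketched very roughly, with no Nova or oslo dependencies (all class names here are illustrative stand-ins, not Nova's actual object model): each object carries its own VERSION, and a parent serializes its children recursively so that each child can be up/downgraded independently of its parent.

```python
class VersionedObject:
    """Minimal stand-in for the nova.objects-style pattern discussed above."""
    VERSION = '1.0'
    fields = {}

    def __init__(self, **kwargs):
        # Populate declared fields, falling back to per-field defaults.
        for name, default in self.fields.items():
            setattr(self, name, kwargs.get(name, default))

    def obj_to_primitive(self):
        data = {}
        for name in self.fields:
            value = getattr(self, name)
            # Child objects are serialized recursively, keeping their
            # own version tag alongside the parent's.
            if isinstance(value, VersionedObject):
                value = value.obj_to_primitive()
            data[name] = value
        return {'version': self.VERSION, 'data': data}

class NUMATopology(VersionedObject):
    VERSION = '1.0'
    fields = {'cells': ()}

class ComputeNode(VersionedObject):
    VERSION = '1.1'
    fields = {'host': None, 'numa_topology': None}

node = ComputeNode(host='compute1', numa_topology=NUMATopology(cells=(0, 1)))
primitive = node.obj_to_primitive()
```

The open question in the thread is precisely how the parent's version should (or should not) be bumped when a child's version changes; this sketch just shows why the two are separable.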
Re: [openstack-dev] Process for program without lead
On 17/11/14 16:18, Anita Kuno wrote: In the last two elections there was a program that was in the last hours of the nomination period before someone stepped up to lead. Currently there is no process for how to address leadership for a program should the nomination period expire without a someone stepping forward. I would like to discuss this with the goal of having a process should this situation arise. By way of kicking things off, I would like to propose the following process: Should the nomination period expire and no PTL candidate has stepped forward for the program in question, the program will be identified to the TC by the election officials. The TC can appoint a leadership candidate by mutual agreement of the TC and the candidate in question. The appointed candidate has all the same responsibilities and obligations as a self-nominated, elected PTL. I welcome ideas and discussion on the above situation and proposed solution. Thank you, Anita. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Would a by-election be an option - with the TC appointment being a last resort? I personally think as many options for a vote as possible is a good idea. Are there technical / administrative barriers to a by-election? Also, something to consider - would the TC nomination be required to be an ATC in that Program? I assume that the TC would try to find someone within the program, but if a program did not have anyone willing to be the PTL, should outside candidates be considered? I am not advocating yes or no on the second point, just putting it out for discussion. Graham ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] URLs
++ from me. On 11/11/2014 05:35 PM, Adam Young wrote: The recent recurrence of the "Why is everything on its own port" question triggered my desire to take this pattern and put it to rest. My suggestion, from a while ago, was to have a naming scheme that deconflicts putting all of the services onto a single server, on port 443. I've removed a lot of the cruft, but not added in entries for all the new *aaS services. https://wiki.openstack.org/wiki/URLs Please add in anything that should be part of OpenStack. Let's make this a reality, and remove the specific ports. If you are worried about debugging, look into rpdb. It is a valuable tool for debugging a mod_wsgi based application. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Process for program without lead
On 11/17/2014 11:35 AM, Hayes, Graham wrote: On 17/11/14 16:18, Anita Kuno wrote: In the last two elections there was a program that was in the last hours of the nomination period before someone stepped up to lead. Currently there is no process for how to address leadership for a program should the nomination period expire without a someone stepping forward. I would like to discuss this with the goal of having a process should this situation arise. By way of kicking things off, I would like to propose the following process: Should the nomination period expire and no PTL candidate has stepped forward for the program in question, the program will be identified to the TC by the election officials. The TC can appoint a leadership candidate by mutual agreement of the TC and the candidate in question. The appointed candidate has all the same responsibilities and obligations as a self-nominated, elected PTL. I welcome ideas and discussion on the above situation and proposed solution. Thank you, Anita. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Would a by-election be an option - with the TC appointment being a last resort? I personally think as many options for a vote as possible is a good idea. Is there technical / administrative barriers to a by-election? A by-election with whom? If no one came forward during the nomination period, why would there be an expectation that someone would come forward during a nomination period after the nomination period? Also, something to consider - would the TC nomination be required to be an ATC in that Program? I assume that the TC would try and find a someone within the program, but if a program did not have anyone willing to be the PTL, should outside candidates be considered? That is a good question. I have no proposal here. I am not advocating yes or no on the second point, just putting it out for discussion.
Graham Thanks for your thoughts Graham, Anita. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [all] Scale out bug-triage by making it easier for people to contribute
Greetings, Regardless of how big or small the bug backlog is for each project, I believe this is a common, annoying and difficult problem. At the oslo meeting today, we were talking about how to address our bug triage process and I proposed something that I've seen done in other communities (rust-language [0]) that I consider useful and a good option for OpenStack too. The process consists of a bot that sends an email to every *volunteer* with 10 bugs to review/triage for the week. Each volunteer follows the triage standards, applies tags and provides information on whether the bug is still valid or not. The volunteer doesn't have to fix the bug, just triage it. In openstack, we could have a job that does this and then have people from each team volunteer to help with triage. The benefits I see are: * Interested folks don't have to go through the list and filter the bugs they want to triage. The bot should be smart enough to pick the oldest, most critical, etc. * It's a totally opt-in process and volunteers can obviously ignore emails if they don't have time that week. * It helps scale out the triage process without poking people around and without having to do a call for volunteers every meeting/cycle/etc The above doesn't solve the problem completely but just like reviews, it'd be an optional, completely opt-in process that people can sign up for. Thoughts? Flavio [0] https://mail.mozilla.org/pipermail/rust-dev/2013-April/003668.html -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
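The "smart enough to pick the oldest, most critical" selection Flavio describes could look something like the sketch below (purely hypothetical code, not an existing bot; `assign_bugs` and the tuple layout are invented for illustration): order open bugs by priority, then age, and deal a fixed batch to each volunteer.

```python
def assign_bugs(bugs, volunteers, per_person=10):
    """bugs: list of (bug_id, priority, age_days) tuples.

    Higher-priority and older bugs are handed out first; each
    volunteer receives at most per_person bugs.
    """
    ordered = iter(sorted(bugs, key=lambda b: (-b[1], -b[2])))
    assignments = {v: [] for v in volunteers}
    for volunteer in volunteers:
        for _ in range(per_person):
            try:
                assignments[volunteer].append(next(ordered)[0])
            except StopIteration:
                return assignments  # ran out of bugs to hand out
    return assignments

batch = assign_bugs(
    [(101, 2, 10), (102, 1, 50), (103, 2, 5)],
    ['alice', 'bob'],
    per_person=2,
)
```

A real job would then render each volunteer's batch into the weekly email; the opt-in aspect is just the volunteer list.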
Re: [openstack-dev] URLs
Adam, I'm not sure why you've marked Swift URLs as having their own scheme. It's true that Swift doesn't have the concept of admin URLs, but in general if Swift were to assume some URL path prefix, I'm not sure why it wouldn't work (for some definition of work). Other issues might be the fact that you'd have the extra complexity of a broker layer for all the OpenStack components. I.e. instead of clients accessing Swift directly and the operator scaling that, the new scheme would require the operator to manage and scale the broker layer and also the Swift layer. For the record, Swift would need to be updated since it assumes it's the only service running on the domain at that port (Swift does a lot of path parsing). --John On Nov 11, 2014, at 2:35 PM, Adam Young ayo...@redhat.com wrote: The recent recurrence of the "Why is everything on its own port" question triggered my desire to take this pattern and put it to rest. My suggestion, from a while ago, was to have a naming scheme that deconflicts putting all of the services onto a single server, on port 443. I've removed a lot of the cruft, but not added in entries for all the new *aaS services. https://wiki.openstack.org/wiki/URLs Please add in anything that should be part of OpenStack. Let's make this a reality, and remove the specific ports. If you are worried about debugging, look into rpdb. It is a valuable tool for debugging a mod_wsgi based application. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
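The path-prefix assumption John mentions is easy to see with a tiny hypothetical WSGI sketch (illustrative only; Swift's real pipeline is more involved, and `strip_prefix`/`echo_app` are invented names): a broker layer hosting everything on port 443 would have to move a service prefix such as /object-store out of the path before the service sees it, which is exactly the kind of path handling Swift does not currently expect.

```python
def strip_prefix(app, prefix):
    """Move a URL prefix from PATH_INFO to SCRIPT_NAME before calling app."""
    def middleware(environ, start_response):
        path = environ.get('PATH_INFO', '')
        if path.startswith(prefix):
            environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + prefix
            environ['PATH_INFO'] = path[len(prefix):] or '/'
        return app(environ, start_response)
    return middleware

def echo_app(environ, start_response):
    # Stand-in service that just reports the path it was handed.
    return [environ['PATH_INFO'].encode()]

app = strip_prefix(echo_app, '/object-store')
environ = {'PATH_INFO': '/object-store/v1/AUTH_test/c/o'}
body = app(environ, None)
```

After the middleware runs, the wrapped service sees /v1/AUTH_test/c/o, the shape Swift's path parsing assumes today.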
[openstack-dev] [mistral] Team meeting minutes/log - 11/17/2014
Thanks for joining us today, Here are the links to the meeting minutes and full log: Minutes - http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-11-17-16.00.html Full log - http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-11-17-16.00.log.html The next meeting will be next Monday Nov 24 at 16.00 UTC. Renat Akhmerov @ Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Process for program without lead
On 11/17/2014 11:45 AM, Anita Kuno wrote: On 11/17/2014 11:35 AM, Hayes, Graham wrote: On 17/11/14 16:18, Anita Kuno wrote: In the last two elections there was a program that was in the last hours of the nomination period before someone stepped up to lead. Currently there is no process for how to address leadership for a program should the nomination period expire without a someone stepping forward. I would like to discuss this with the goal of having a process should this situation arise. By way of kicking things off, I would like to propose the following process: Should the nomination period expire and no PTL candidate has stepped forward for the program in question, the program will be identified to the TC by the election officials. The TC can appoint a leadership candidate by mutual agreement of the TC and the candidate in question. The appointed candidate has all the same responsibilities and obligations as a self-nominated, elected PTL. I welcome ideas and discussion on the above situation and proposed solution. Thank you, Anita. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Would a by-election be an option - with the TC appointment being a last resort? I personally think as many options for a vote as possible is a good idea. Is there technical / administrative barriers to a by-election? A by-election with whom? If noone came forward during the nomination period, why would there be an expectation that someone would come forward during a nomination period after the nomination period? Also, something to consider - would the TC nomination be required to be an ATC in that Program? I assume that the TC would try and find a someone within the program, but if a program did not have anyone willing to be the PTL, should outside candidates be considered? That is a good question. I have no proposal here. Let's not put any restrictions on who can be PTL in that case. 
This is an exceptional situation that the TC has to take care of. Choosing somebody from the program itself sounds like the first step, but I suggest letting the TC discuss the exact way forward in the case of PTL-orphaned programs, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
On 11/17/2014 06:43 AM, Yves-Gwenaël Bourhis wrote: On 17/11/2014 14:19, Matthias Runge wrote: There is already horizon on pypi[1] IMHO this will lead only to more confusion. Matthias [1] https://pypi.python.org/pypi/horizon/2012.2 Well the current horizon on Pypi is The OpenStack Dashboard + horizon(_lib) included If the future horizon on pypi is openstack_dashboard alone, it would still pull horizon_lib as a dependency, so it would not brake the existing. So indeed the horizon package itself in Pypi would not have horizon(_lib) in it anymore, but he pip install horizon would pull everything due to the dependency horizon will have with horizon_lib. I find this the least confusing issue and the horizon package on Pypi would still be seen as The OpenStack Dashboard like it is now. We would only add an horizon_lib package on Pypi. Therefore existing third-party requirements.txt would not brake because they would pull horizon_lib with horizon. and they would still import the proper module. Every backwards compatibility (requirements and module) is therefore preserved. +1 to this solution -- Jason E. Rist Senior Software Engineer OpenStack Management UI Red Hat, Inc. openuc: +1.972.707.6408 mobile: +1.720.256.3933 Freenode: jrist github/identi.ca: knowncitizen ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Keystone] Weekly meeting to resume tomorrow Nov. 18 at 1800 UTC
This is just a friendly reminder that the Keystone weekly meeting will resume this week at the normal time. I hope everyone has had a good summit (and potentially break post summit). Welcome back and see everyone tomorrow! Cheers, Morgan Fainberg ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] Stale patches
good job :-) 2014-11-14 18:34 GMT+08:00 Miguel Ángel Ajo majop...@redhat.com: Thanks for cleaning up the house!, Best regards, Miguel Ángel Ajo On Friday, 14 November 2014 at 00:46, Salvatore Orlando wrote: There are a lot of neutron patches which, for different reasons, have not been updated in a while. In order to ensure reviewers focus on active patches, I have set a few patches (about 75) as 'abandoned'. No patch with an update in the past month, either patchset or review, has been abandoned. Moreover, only a part of the patches not updated for over a month have been abandoned. I took extra care in identifying which ones could safely be abandoned, and which ones were instead still valuable; nevertheless, if you find out I abandoned a change you're actively working on, please restore it. If you are the owner of one of these patches, you can use the 'restore change' button in gerrit to resurrect the change. If you're not the owner and wish to resume work on these patches, either contact any member of the neutron-core team in IRC or push a new patch. Salvatore ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
On 11/17/14, 10:00 AM, Jason Rist jr...@redhat.com wrote: On 11/17/2014 06:43 AM, Yves-Gwenaël Bourhis wrote: On 17/11/2014 14:19, Matthias Runge wrote: There is already horizon on pypi[1] IMHO this will lead only to more confusion. Matthias [1] https://pypi.python.org/pypi/horizon/2012.2 Well the current horizon on Pypi is The OpenStack Dashboard + horizon(_lib) included If the future horizon on pypi is openstack_dashboard alone, it would still pull horizon_lib as a dependency, so it would not brake the existing. So indeed the horizon package itself in Pypi would not have horizon(_lib) in it anymore, but he pip install horizon would pull everything due to the dependency horizon will have with horizon_lib. I find this the least confusing issue and the horizon package on Pypi would still be seen as The OpenStack Dashboard like it is now. We would only add an horizon_lib package on Pypi. Therefore existing third-party requirements.txt would not brake because they would pull horizon_lib with horizon. and they would still import the proper module. Every backwards compatibility (requirements and module) is therefore preserved. +1 to this solution +1 from me as well. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [FUEL]Re-thinking Fuel Client
Hi folks! I’ve had several internal discussions with Łukasz Oleś and Igor Kalnitsky and decided that the existing Fuel Client has to be redesigned. The implementation of the client we have at the moment does not seem to be compliant with most of the use cases people have in production and cannot be used as a library-wrapper for FUEL’s API. We’ve come up with a draft of our plan for redesigning the Fuel Client, which you can see here: https://etherpad.openstack.org/p/fuelclient-redesign Everyone is welcome to add their notes and suggestions based on their needs and use cases. The next step is to create a detailed spec and put it up for everyone’s review. - romcheg ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [horizon] Changing Host marked for maintenance BP target milestone
The first review was actually the implementation for a separate blueprint. https://blueprints.launchpad.net/horizon/+spec/evacuate-host The content for this blueprint should follow the horizon blueprint process for Kilo. See: https://blueprints.launchpad.net/horizon/+spec/template Once the blueprint contains the appropriate information, I'd be happy to consider it for Kilo. David On Mon, Nov 17, 2014 at 8:46 AM, Fic, Bartosz bartosz@intel.com wrote: Hi guys, I've started working on this BP: https://blueprints.launchpad.net/horizon/+spec/mark-host-down-for-maintenance One of the reviews from this BP has already been merged (in Juno). Another one has to be finalized. So I have a question: is it possible to change the milestone target for this BP feature to the Kilo release? - Bart ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Process for program without lead
On 17/11/14 16:46, Anita Kuno wrote: On 11/17/2014 11:35 AM, Hayes, Graham wrote: On 17/11/14 16:18, Anita Kuno wrote: In the last two elections there was a program that was in the last hours of the nomination period before someone stepped up to lead. Currently there is no process for how to address leadership for a program should the nomination period expire without a someone stepping forward. I would like to discuss this with the goal of having a process should this situation arise. By way of kicking things off, I would like to propose the following process: Should the nomination period expire and no PTL candidate has stepped forward for the program in question, the program will be identified to the TC by the election officials. The TC can appoint a leadership candidate by mutual agreement of the TC and the candidate in question. The appointed candidate has all the same responsibilities and obligations as a self-nominated, elected PTL. I welcome ideas and discussion on the above situation and proposed solution. Thank you, Anita. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Would a by-election be an option - with the TC appointment being a last resort? I personally think as many options for a vote as possible is a good idea. Is there technical / administrative barriers to a by-election? A by-election with whom? If noone came forward during the nomination period, why would there be an expectation that someone would come forward during a nomination period after the nomination period? Quite often people will come forward in a vacuum - people who thought they were not right for the job, or felt that someone else would suit the role better can come forward in a by-election. (I only have anecdotal evidence for this, but it is first hand, based on other voluntary, self organising groups I have been part of, and run elections for over the years). 
I would suggest that when nominations close with no candidates, they re-open immediately for one week, at which point, if there are still no candidates, it goes to the TC. If the TC / election officials want to do extra outreach / promotion in that week, that might avoid the need for an appointment.

Also, something to consider - would the TC nominee be required to be an ATC in that program? I assume that the TC would try to find someone within the program, but if a program did not have anyone willing to be the PTL, should outside candidates be considered?

That is a good question. I have no proposal here. I am not advocating yes or no on the second point, just putting it out for discussion. Graham

Thanks for your thoughts Graham, Anita.
Re: [openstack-dev] Removing Nova V2 API xml support
Matt, Thanks for the help/reviews! Looks like the sequence seems to be:
DevStack / Icehouse - https://review.openstack.org/#/c/134972/
DevStack / Juno - https://review.openstack.org/#/c/134975/
Tempest - https://review.openstack.org/#/c/134924/
Tempest - https://review.openstack.org/#/c/134985/
Nova - https://review.openstack.org/#/c/134332/
Right? thanks, dims

On Mon, Nov 17, 2014 at 8:48 AM, Matthew Treinish mtrein...@kortar.org wrote:
On Mon, Nov 17, 2014 at 08:24:47AM -0500, Davanum Srinivas wrote:
Sean, Joe, Dean, all, Here's the Nova change to disable the V2 XML support: https://review.openstack.org/#/c/134332/ To keep all the jobs happy, we'll need changes in tempest, devstack, and devstack-gate as well:
Tempest: https://review.openstack.org/#/c/134924/
Devstack: https://review.openstack.org/#/c/134685/
Devstack-gate: https://review.openstack.org/#/c/134714/
Please see if I am on the right track.

So this approach will work, but the direction that neutron took, and that keystone is in the process of undertaking for doing the same, was basically the opposite. Instead of just overriding the default tempest value on master devstack to disable xml testing, the devstack stable branches are updated to ensure xml_api is True when running tempest from the stable branches, and then the default in tempest is switched to False. The advantage of this approach is that the default value for tempest running against any cloud will always work. The patches which landed for neutron doing this:
Devstack: https://review.openstack.org/#/c/130368/ https://review.openstack.org/#/c/130367/
Tempest: https://review.openstack.org/#/c/127667/
Neutron: https://review.openstack.org/#/c/128095/
Ping me on IRC and we can work through the process, because things need to land in a particular order to make this approach work.
But, basically the approach is: first the stable devstack changes need to land, which enable the testing on stable; followed by a +2 on the failing Nova patch saying the approach is approved; and then we can land the tempest patch which switches the default, which will let the Nova change get through the gate. -Matt Treinish

-- Davanum Srinivas :: https://twitter.com/dims
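The "flip the default" step above boils down to pinning a single tempest option on the stable branches before the default changes. A hypothetical tempest.conf fragment to illustrate the idea (the section and option names here are illustrative, not necessarily the exact ones in the tempest tree):

```
# Hypothetical fragment written into tempest.conf by stable-branch
# devstack: keep XML API testing enabled on stable, so that the
# tempest default can later be switched to False without breaking
# the stable jobs.
[compute-feature-enabled]
xml_api_v2 = True
```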
Re: [openstack-dev] Process for program without lead
On Mon, 2014-11-17 at 17:25 +0000, Hayes, Graham wrote:
Quite often people will come forward in a vacuum - people who thought they were not right for the job, or felt that someone else would suit the role better, can come forward in a by-election. (I only have anecdotal evidence for this, but it is first hand, based on other voluntary, self-organising groups I have been part of, and run elections for, over the years.) I would suggest that when nominations close with no candidates, they re-open immediately for one week, at which point, if there are still no candidates, it goes to the TC.

While I think the point is valid, an alternative process would be for the election coordinator(s) to point out the lack of candidates and issue a reminder about the procedure a certain amount of time before the end of the nomination period. Say, if no candidates have been put forward with 3 days left in the nomination period, then the election coordinator would send out the appropriate reminder email. I think this would have the same effect as the one-week re-open period without delaying the election process. -- Kevin L. Mitchell kevin.mitch...@rackspace.com Rackspace
Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps
On 2014-11-17 15:26, James Page wrote:
This was discussed in the oslo.messaging summit session, and re-enabling zeromq support in devstack is definitely on my todo list, but I don't think that should block landing of the currently proposed unit tests on oslo.messaging.

I would like to see these tests landed too, even if we need to install redis or whatever and start them manually. This will help a lot to review the zmq stuff and ensure fixed things are not broken again.

--- Mehdi Abaakouk mail: sil...@sileht.net irc: sileht
Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard
On 11/17/2014 06:43 AM, Yves-Gwenaël Bourhis wrote:
On 17/11/2014 14:19, Matthias Runge wrote:
There is already horizon on pypi [1]. IMHO this will lead only to more confusion. Matthias [1] https://pypi.python.org/pypi/horizon/2012.2

Well, the current horizon on PyPI is The OpenStack Dashboard with horizon(_lib) included. If the future horizon on PyPI is openstack_dashboard alone, it would still pull horizon_lib as a dependency, so it would not break existing installations. So indeed the horizon package itself on PyPI would not have horizon(_lib) in it anymore, but "pip install horizon" would pull everything due to the dependency horizon would have on horizon_lib. I find this the least confusing option, and the horizon package on PyPI would still be seen as The OpenStack Dashboard, as it is now. We would only add a horizon_lib package on PyPI. Therefore existing third-party requirements.txt files would not break, because they would pull horizon_lib with horizon, and they would still import the proper module. Every kind of backwards compatibility (requirements and module) is therefore preserved.

+1 on this proposal as well

On Mon, Nov 17, 2014 at 6:00 PM, Jason Rist jr...@redhat.com wrote:
[Yves-Gwenaël's proposal quoted in full above]

+1 to this solution -- Jason E. Rist Senior Software Engineer OpenStack Management UI Red Hat, Inc. openuc: +1.972.707.6408 mobile: +1.720.256.3933 Freenode: jrist github/identi.ca: knowncitizen
[openstack-dev] [oslo.messaging] status of rabbitmq heartbeat support
Hi, Many people want heartbeat support in the rabbit driver of oslo.messaging (https://launchpad.net/bugs/856764). We have different approaches to add this feature:

- Putting all the heartbeat logic into the rabbitmq driver (https://review.openstack.org/#/c/126330/). The patch uses a python thread to do the work, and doesn't care about which oslo.messaging executor is used. But this patch is also the only already-written patch that adds heartbeat support for all kinds of connections and that works.

- Like we quickly discussed at the summit, we can make the oslo.messaging executor responsible for triggering the heartbeat method of the driver (https://review.openstack.org/#/c/132979/, https://review.openstack.org/#/c/134542/). At first glance this sounds good, but the executor is only used for the server side of oslo.messaging, so this doesn't solve the issue for the client side.

Or just another thought:

- Moving the executor setup/start/stop from the MessageHandlingServer object to the Transport object (note: 1 transport instance == 1 driver instance). The 'transport' becomes responsible for registering 'tasks' with the 'executor'; the tasks will be 'polling and dispatch' (one for each rpc/notification server created, like we have now) plus a background task for the driver (the heartbeat in this case). So when we set up a new transport, it will automatically schedule the work to do on the underlying driver, without needing to know whether this is a client-side or server-side thing. This would help a driver do background tasks within oslo.messaging (I know the AMQP 1.0 driver has the same kind of need, currently handled with a python thread inside the driver too). This looks like bigger work.

So I think if we want a quick resolution of the heartbeat issue, we need to land the first solution when it's ready (https://review.openstack.org/#/c/126330/). Otherwise any thoughts/comments are welcome.
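A minimal sketch of the first approach - the heartbeat running in its own thread inside the driver, independent of whichever executor is in use. All class and method names here are illustrative, not the actual oslo.messaging driver API:

```python
# Sketch only: heartbeat logic owned by the connection itself,
# driven by a dedicated thread (hypothetical names throughout).
import threading
import time


class HeartbeatConnection:
    """A connection that pings its broker on a fixed interval."""

    def __init__(self, interval=0.05):
        self.interval = interval
        self.beats = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Runs independently of any executor, which is why this
        # approach also covers the client side of oslo.messaging.
        while not self._stop.wait(self.interval):
            self.beats += 1  # stands in for sending a heartbeat frame

    def start(self):
        self._thread.start()

    def close(self):
        self._stop.set()
        self._thread.join()


conn = HeartbeatConnection()
conn.start()
time.sleep(0.3)
conn.close()
print(conn.beats > 0)
```

The trade-off debated in the thread is exactly this: the thread keeps beating regardless of what the executor does, at the cost of one extra thread per connection.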
Regards, -- Mehdi Abaakouk mail: sil...@sileht.net irc: sileht
Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps
On 17/11/14 09:01, Denis Makogon wrote:
I'm not sure I understand your statement about them not being gate-able - the functional/unit tests currently proposed for the zmq driver run fine as part of the standard test suite execution. Maybe the confusion is over what 'functional' actually means, but in my opinion, until we have some level of testing of this driver, we can't effectively make changes and fix bugs.

I do agree that there's confusion over what functional testing means. Another thing: what is the best solution? Unit tests are welcome, but they still remain unit tests (they use mocks, etc.). I'd try to define what 'functional testing' means for me. Functional testing in oslo.messaging means that we use a real service for messaging (in this case - a deployed 0mq). So, the simple definition, in terms of OpenStack integration: we should be able to run the full Tempest test suite for OpenStack services that use oslo.messaging with the zmq driver enabled. Am I right or not?

0mq is just a set of messaging semantics on top of tcp/ipc sockets, so it's possible to test the entire tcp/ipc messaging flow standalone, i.e. without involving any openstack services. That's what the current test proposal includes - unit tests which mock out most things, and base functional tests that validate the tcp/ipc messaging flows via the zmq-receiver proxy. These are things that *just work* under a tox environment and don't require any special setup. This will be supplemented with good devstack support and a full tempest gate, but let's start from the bottom up please! The work that's been done so far allows us to move forward with the bug fixing and re-factoring that's backed up on having a base level of unit/functional tests.
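To make the distinction concrete, here is a toy illustration (plain stdlib sockets, not the actual zmq test suite) of a "functional" test in James's sense: it exercises a real tcp round trip, yet involves no OpenStack service and needs no special setup:

```python
# Toy "functional" messaging test: a real tcp flow, standalone.
import socket
import threading


def echo_server(sock):
    """Accept one connection and echo the payload back upper-cased."""
    conn, _ = sock.accept()
    data = conn.recv(1024)
    conn.sendall(data.upper())
    conn.close()


server = socket.socket()
server.bind(("127.0.0.1", 0))  # any free port; nothing to pre-install
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply)  # the server's upper-cased echo
```

Nothing is mocked, so the tcp plumbing is genuinely exercised - which is the point of the base functional tests - while a Tempest run against a deployed cloud remains a separate, later layer.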
-- James Page Ubuntu and Debian Developer james.p...@ubuntu.com jamesp...@debian.org
Re: [openstack-dev] [neutron] L2 gateway as a service
On 17 November 2014 01:13, Mathieu Rohon mathieu.ro...@gmail.com wrote:
Hi, On Fri, Nov 14, 2014 at 6:26 PM, Armando M. arma...@gmail.com wrote:
Last Friday I recall we had two discussions around this topic. One in the morning, which I think led Maruti to push [1]. The way I understood [1] was that it is an attempt at unifying [2] and [3], by choosing the API approach of one and the architectural approach of the other.
[1] https://review.openstack.org/#/c/134179/
[2] https://review.openstack.org/#/c/100278/
[3] https://review.openstack.org/#/c/93613/
Then there was another discussion in the afternoon, but I am not 100% sure of the outcome.

Me neither; that's why I'd like Ian, who led this discussion, to sum up the outcome from his point of view.

All this churn makes me believe that we probably just need to stop pretending we can achieve any sort of consensus on the approach and let the different alternatives develop independently, assuming they can all develop independently, and then let natural evolution take its course :)

I tend to agree, but I think that one of the reasons why we are looking for a consensus is that API evolutions proposed through neutron-specs are rejected by core devs, because they rely on external components (sdn controller, proprietary hardware...) or are not a high priority for the neutron core devs.

I am not sure I agree with this statement. I am not aware of any proposal here being dependent on external components as you suggested, but even if it were, an API can be implemented in multiple ways, just like the (core) Neutron API can be implemented using a fully open source solution or an external party like an SDN controller.

By finding a consensus, we show that several players are interested in such an API, and it helps to convince the core devs that this use case, and its API, is missing in neutron.

Right, but it seems we are struggling to find this consensus.
In this particular instance, where we are trying to address the use case of an L2 gateway (i.e. allowing Neutron logical networks to be extended with physical ones), it seems that everyone has a different opinion as to what abstraction we should adopt in order to express and configure the L2 gateway entity, and at the same time I see no convergence in sight. Now, if the specific L2 gateway case were to be considered part of the core Neutron API, then such a consensus would be mandatory IMO; but if it isn't, is there any value in striving for that consensus at all costs? Perhaps not, and we can have multiple attempts experiment and innovate independently. So far, all my data points seem to imply that such an abstraction need not be part of the core API.

Now, if there is room to easily propose new APIs in Neutron, it makes sense to let new APIs appear and evolve, and then "let natural evolution take its course", as you said. To me, this is in the scope of the advanced services project.

Advanced Services may be a misnomer, but an incubation feature, sure why not? Ultimately the biggest debate is on what the API model needs to be for these abstractions. We can judge which one is the best API of all, but sometimes this ends up being a religious fight. A good API for me might not be a good API for you, even though I strongly believe that a good API is one that:
- is hard to use incorrectly
- is clear to understand
- does one thing, and one thing well
So far I have been unable to be convinced why we'd need to cram more than one abstraction into one single API, as doing so violates the above-mentioned principles. Ultimately I like the L2 GW API proposed by 1 and 2 because it's in line with those principles. I'd rather start from there and iterate. My 2c, Armando

On 14 November 2014 08:47, Salvatore Orlando sorla...@nicira.com wrote:
Thanks guys. I think you've answered my initial question. Probably not in the way I was hoping it to be answered, but it's ok.
So now we have potentially 4 different blueprints describing more or less overlapping use cases that we need to reconcile into one? If the above is correct, then I suggest we go back to the use cases and make an effort to abstract a bit from thinking about how those use cases should be implemented. Salvatore

On 14 November 2014 15:42, Igor Cardoso igordc...@gmail.com wrote:
Hello all, Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of its use cases is exactly the L2 gateway. These proposals could probably be inserted into a more generic work for moving existing datacenter L2 resources to Neutron. Cheers,

On 14 November 2014 15:28, Mathieu Rohon mathieu.ro...@gmail.com wrote:
Hi, As far as I understood last Friday afternoon's discussions during the design summit, this use case is in the scope of another umbrella spec which would define external connectivity for neutron networks. Details of those
Re: [openstack-dev] [Fuel] CentOS falls into interactive mode: Unsupported hardware
On Mon, Nov 17, 2014 at 4:41 PM, Matthew Mosesohn mmoses...@mirantis.com wrote:
I actually reported this to CentOS back in May: https://bugs.centos.org/view.php?id=7136 It's a bug/feature in Anaconda. It can be worked around quite easily by adding unsupported_hardware to the kernel params or to the kickstart file. I reported the bug because there's no support for CentOS (except from the community), so this error message has no true value in a non-commercial OS.

On Mon, Nov 17, 2014 at 4:30 PM, Mike Scherbakov mscherba...@mirantis.com wrote:
Hi all, I was skimming through a nicely written blogpost about the Fuel experience [1], and noticed "This hardware ... not supported by CentOS" [2] on one of the screenshots. Looks like CentOS goes into interactive mode and complains about unsupported hardware.

This was resolved for the Fuel Master: https://bugs.launchpad.net/fuel/+bug/1322502 It would appear that it wasn't resolved for the deployment images, though. There's a doc bug: https://bugs.launchpad.net/fuel/+bug/1359494 It would be best to get it fixed up for any and all deployments in a Fuel build.
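For reference, the kickstart workaround Matthew mentions is a single directive. A hypothetical minimal fragment (the surrounding lines will differ per Fuel build; only the unsupported_hardware line matters here):

```
# Hypothetical kickstart fragment: the unsupported_hardware
# directive suppresses Anaconda's interactive warning dialog.
install
unsupported_hardware
reboot
```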
Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps
On Monday, 17 November 2014, Mehdi Abaakouk wrote:
Le 2014-11-17 15:26, James Page a écrit :
This was discussed in the oslo.messaging summit session, and re-enabling zeromq support in devstack is definitely on my todo list, but I don't think that should block landing of the currently proposed unit tests on oslo.messaging.
I would like to see these tests landed too, even if we need to install redis or whatever and start them manually. This will help a lot to review the zmq stuff and ensure fixed things are not broken again.

I do agree that we need to find a way to prevent blocking of zmq development. But I don't think that such a testing approach will eventually lead us to failure. Why not just focus on setting up a testing environment that can be used for gating? Just as another example, we could consider getting at least a 3rd-party CI for the zmq driver until we have an infra gating environment.

--- Mehdi Abaakouk mail: sil...@sileht.net irc: sileht

Kind regards, Denis
M.
[openstack-dev] [stable] [glance] glance_store scheduled release 0.1.10
Hi all, Following last week's (mentioned) corrections to get Glance and the glance_store library to a stable state, we've got a few changes [0, 1, 2] merged. The library is set for its next release in a couple of hours or so. Just wanted to give a heads-up and see if there were any concerns.
[0] https://review.openstack.org/#/c/131528/
[1] https://review.openstack.org/#/c/131838/
[2] https://review.openstack.org/#/c/130200/
Thanks, -Nikhil
Re: [openstack-dev] [all] testtools 1.2.0 release breaks the world
On 18 November 2014 01:46, Sean Dague s...@dague.net wrote:
On 11/17/2014 07:26 AM, Alan Pevec wrote:
We don't support 2.6 any more in OpenStack. If we decide to pin testtools on stable/*, we could just let this be.
We still support 2.6 in the python clients and oslo libraries - but indeed not for trove itself on master. What Andreas said; also, testtools claims "testtools gives you the very latest in unit testing technology in a way that will work with Python 2.6, 2.7, 3.1 and 3.2", so it should be fixed, OpenStack or not.
Well, the python 2.6 support was only added for OpenStack. And I think it's fine to drop that burden now that we don't need it (as long as we pin appropriately).

Huh? No :) - testtools had Python 2.6 support long before OpenStack existed :) - testtools has kept 2.6 support because a) it's easy and b) there are still groups (like, but not limited to, OpenStack) that care about it. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud
Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps
On Monday, 17 November 2014, James Page wrote:
On 17/11/14 09:01, Denis Makogon wrote:
[quoted discussion of unit vs. functional testing elided]
0mq is just a set of messaging semantics on top of tcp/ipc sockets, so it's possible to test the entire tcp/ipc messaging flow standalone, i.e. without involving any openstack services. That's what the current test proposal includes - unit tests which mock out most things, and base functional tests that validate the tcp/ipc messaging flows via the zmq-receiver proxy. These are things that *just work* under a tox environment and don't require any special setup.

Hm, I see what you've been trying to say. But unfortunately it breaks the whole idea of TDD. Why can't we just spend some time on getting non-voting gates? Ok, let me describe what would satisfy all of us: let's write up docs that describe how to set up (manually) an environment to test incoming patches. Anyway, this topic is not about disagreement; it's about building a team relationship.
I'd like to discuss next steps on developing the zmq driver. Kind regards, Denis M.

This will be supplemented with good devstack support and a full tempest gate, but let's start from the bottom up please! The work that's been done so far allows us to move forward with the bug fixing and re-factoring that's backed up on having a base level of unit/functional tests.

-- James Page Ubuntu and Debian Developer james.p...@ubuntu.com jamesp...@debian.org