Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems
On 12 August 2014 07:26, Amit Das <amit@cloudbyte.com> wrote:
> I would like some guidance in this regard, in the form of some links, wiki
> pages, etc. I am currently gathering the driver cert test results (i.e.
> tempest tests from devstack in our environment); a CI setup would be my
> next step.

This should get you started: http://ci.openstack.org/third_party.html

Then Jay Pipes' excellent two-part series will help you with the details of getting it done:
http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/
http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2/

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] Continuing on Calling driver interface on every API request
Hi,

The right layer for this validation is the Neutron REST layer. Since the current validation engine in this layer can only do attribute-level validation (e.g. make sure timeout is an int and timeout > 5) but can't do entity-level validation (e.g. timeout < delay), you can find entity-level validation code in the LBaaS plugin layer and in the DB layer. As far as I understand, the REST engine of Neutron is about to be replaced (I hope before the Z version :) ) and I hope the new engine will be able to run entity-level validations.

Avishay

From: Samuel Bercovici
Sent: Monday, August 11, 2014 4:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on Calling driver interface on every API request

Hi,

Validations such as timeout < delay should be performed at the API level before the request reaches the driver. For a configuration tree (lb, listeners, pools, etc.), there should be one provider. Having the provider defined in multiple places does not make sense.

-Sam.

From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com]
Sent: Monday, August 11, 2014 2:43 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [Neutron][LBaaS] Continuing on Calling driver interface on every API request

Hi:

Continuing from last week's LBaaS meeting...

Currently an entity cannot be sent to the driver unless it is linked to a loadbalancer, because the loadbalancer is the root object and driver information is only available with the loadbalancer. The request to the driver is therefore delayed, which makes error propagation tricky. Let's say a monitor was configured with timeout > delay; there would be no error at that point. Only when a listener is configured will there be a monitor creation/deployment error like "timeout configured greater than delay". Unless the error is very clearly crafted, the user won't be able to understand it.

I am half-heartedly OK with the current approach. But I would prefer Brandon's solution - make provider an attribute in each of the entities - to get rid of this problem.

What do others think?

Thanks,
Vijay V.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
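To illustrate the distinction the thread keeps coming back to, here is a small, self-contained sketch (function names and the "> 5" threshold are illustrative, not Neutron's actual validation API): attribute-level checks look at one field in isolation, while entity-level checks relate fields to each other, such as timeout vs. delay on a health monitor.

```python
def validate_attributes(monitor):
    """Attribute-level: each field is checked in isolation."""
    errors = []
    if not isinstance(monitor.get("timeout"), int):
        errors.append("timeout must be an integer")
    elif monitor["timeout"] <= 5:
        errors.append("timeout must be > 5")
    return errors


def validate_entity(monitor):
    """Entity-level: fields are checked against each other (timeout < delay)."""
    errors = validate_attributes(monitor)
    if not errors and monitor["timeout"] >= monitor["delay"]:
        errors.append("timeout must be less than delay")
    return errors


print(validate_entity({"timeout": 10, "delay": 30}))  # []
print(validate_entity({"timeout": 60, "delay": 30}))  # entity-level error
```

The point of contention is simply where the second function runs: today it lives in the plugin/DB layers (or even the driver), while the thread argues it belongs at the API layer.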
Re: [openstack-dev] [nova] fair standards for all hypervisor drivers
On Mon, 2014-08-11 at 15:25 -0700, Joe Gordon wrote:
> On Sun, Aug 10, 2014 at 11:59 PM, Mark McLoughlin <mar...@redhat.com> wrote:
> > On Fri, 2014-08-08 at 09:06 -0400, Russell Bryant wrote:
> > > On 08/07/2014 08:06 PM, Michael Still wrote:
> > > > It seems to me that the tension here is that there are groups who
> > > > would really like to use features in newer libvirts that we don't
> > > > CI on in the gate. Is it naive to think that a possible solution
> > > > here is to do the following:
> > > > - revert the libvirt version_cap flag
> > >
> > > I don't feel strongly either way on this. It seemed useful at the
> > > time for being able to decouple upgrading libvirt and enabling
> > > features that come with that.
> >
> > Right, I suggested the flag as a more deliberate way of avoiding the
> > issue that was previously seen in the gate with live snapshots. I
> > still think it's a pretty elegant and useful little feature, and don't
> > think we need to use it as a proxy battle over testing requirements
> > for new libvirt features.
>
> Mark, I am not sure if I follow. The gate issue with live snapshots has
> been worked around by turning it off [0], so presumably this patch is
> forward facing. I fail to see how this patch is needed to help the gate
> in the future.

On the live snapshot issue specifically, we disabled it by requiring 1.3.0 for the feature. With the version cap set to 1.2.2, we won't automatically enable this code path again if we update to 1.3.0. No question that's a bit of a mess, though.

The point was a more general one - we learned from the live snapshot issue that having a libvirt upgrade immediately enable new code paths was a bad idea. The patch is a simple, elegant way of avoiding that.

> Wouldn't it just delay the issues until we change the version_cap?

Yes, that's the idea. Rather than having to scramble when the new devstack-gate image shows up, we'd be able to work on any issues in the context of a patch series to bump the version_cap.

> The issue I see with the libvirt version_cap [1] is best captured in its
> commit message: "The end user can override the limit if they wish to
> opt-in to use of untested features via the 'version_cap' setting in the
> 'libvirt' group." This goes against the very direction nova has been
> moving in for some time now. We have been moving away from merging
> untested (re: no integration testing) features. This patch changes the
> very direction the project is going in over testing without so much as a
> discussion. While I think it may be time that we revisited this
> discussion, the discussion needs to happen before any patches are merged.

You put it well - some apparently see us moving towards a zero-tolerance policy of not having any code which isn't functionally tested in the gate. That obviously is not the case right now. The sentiment is great, but any zero-tolerance policy is dangerous. I'm very much in favor of discussing this further. We should have some principles and goals around this, but rather than argue this in the abstract we should be open to discussing the tradeoffs involved with individual patches.

> I am less concerned about the contents of this patch, and more concerned
> with how such a big de facto change in nova policy (we accept untested
> code sometimes) happened without any discussion or consensus. In your
> comment on the revert [2], you say the 'whether not-CI-tested features
> should be allowed to be merged' debate is 'clearly unresolved.' How did
> you get to that conclusion? This was never brought up in the mid-cycles
> as an unresolved topic to be discussed. In our specs template we say "Is
> this untestable in gate given current limitations (specific hardware /
> software configurations available)? If so, are there mitigation plans
> (3rd party testing, gate enhancements, etc)" [3]. We have been blocking
> untested features for some time now.

Asking "is this tested" in a spec template makes a tonne of sense. Requiring some thought to be put into mitigation where a feature is untestable in the gate makes sense. Requiring that the code is tested where possible makes sense. It's a zero-tolerance "get your code functionally tested or GTFO" policy that I'm concerned about.

> I am further perplexed by what Daniel Berrange, the patch author, meant
> when he commented [2] "Regardless of the outcome of the testing
> discussion we believe this is a useful feature to have." Who is 'we'?
> Because I don't see how that can be nova-core or even nova-specs-core,
> especially considering how many members of those groups are +2 on the
> revert. So if 'we' is neither of those groups then who is 'we'?

That's for Dan to answer, but I think you're either nitpicking or have a very serious concern. If nitpicking, Dan could just be using the Royal 'We' :) Or he could just mean
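For readers following along: the contested knob amounts to a single configuration option. A sketch of what it looks like in nova.conf, per the commit message quoted above (the opt-in value is illustrative; 1.2.2 is the gate-tested version mentioned in the thread):

```ini
[libvirt]
# Default cap matches the version exercised by the gate; new libvirt
# code paths above this version stay disabled even if the host libvirt
# is newer.
version_cap = 1.2.2

# Opting in to untested features would mean raising the cap, e.g.:
# version_cap = 1.3.0
```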
Re: [openstack-dev] [Nova] PCI support
Hi Gary,

Mellanox has already established CI support on Mellanox SR-IOV NICs, as one of the jobs of the Mellanox External Testing CI (Check-MLNX-Neutron-ML2-Sriov-driver, http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver). It is not voting yet, but will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) <ba...@cisco.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it in our CI testbed. I guess that mlnx is doing the same for their MD as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton <gkot...@vmware.com> wrote:
> Hi,
> At the moment all of the drivers are required to have CI support. Are
> there any plans regarding the PCI support? I understand that this is
> something that requires specific hardware. Are there any plans to add
> this?
> Thanks
> Gary

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] Use cases with regards to VIP and routers
In the context of Octavia and Neutron LBaaS.

Susanne

On Mon, Aug 11, 2014 at 5:44 PM, Stephen Balukoff <sbaluk...@bluebox.net> wrote:
> Susanne,
> Are you asking in the context of Load Balancer services in general, or in
> terms of the Neutron LBaaS project or the Octavia project?
> Stephen
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807

On Mon, Aug 11, 2014 at 9:04 AM, Doug Wiegley <do...@a10networks.com> wrote:
> Hi Susanne,
> While there are a few operators involved with LBaaS that would have good
> input, you might want to also ask this on the non-dev mailing list, for a
> larger sample size.
> Thanks,
> doug

On 8/11/14, 3:05 AM, Susanne Balle <sleipnir...@gmail.com> wrote:
> Gang,
> I was asked the following questions around our Neutron LBaaS use cases:
> 1. Will there be a scenario where the "VIP" port will be in a different
> node from all the member "VMs" in a pool?
> 2. Also, how likely is it for the LBaaS-configured subnet to not have a
> "router" and just use the "extra_routes" option?
> 3. Is there a valid use case where customers will be using the
> "extra_routes" with subnets instead of the "routers"? (It would be great
> if you have some use case picture for this.)
> Feel free to chime in here and I'll summarize the answers.
> Regards
> Susanne

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] Is network ordering of vNICs guaranteed?
This bug was true in grizzly and older (and was reintroduced in icehouse for a few days, but was fixed before nova icehouse shipped).

Aaron

On Mon, Aug 11, 2014 at 7:10 AM, CARVER, PAUL <pc2...@att.com> wrote:
> Armando M. [mailto:arma...@gmail.com] wrote:
> > On 9 August 2014 10:16, Jay Pipes <jaypi...@gmail.com> wrote:
> > > Paul, does this friend of a friend have a reproducible test script
> > > for this? We would also need to know the OpenStack release where this
> > > issue manifests itself.
> >
> > A number of bugs have been raised in the past around this type of
> > issue, and the last fix I recall is this one:
> > https://bugs.launchpad.net/nova/+bug/1300325
> > It's possible that this might have regressed, though.
>
> The reason I called it "friend of a friend" is because I think the info
> has filtered through a series of people and is not firsthand observation.
> I'll ask them to track back to who actually observed the behavior, how
> long ago, and with what version. It could be a regression, or it could
> just be old info that people have continued to assume is true without
> realizing it was considered a bug all along and has been fixed. Thanks!
> The moment I first heard it, my first reaction was that it was almost
> certainly a bug and had probably already been fixed.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Ceilometer] Add a pollster plugin
Hi Folks,

Are there any best practices or good ways to debug whether a new pollster plugin works for Ceilometer?

I'd like to add a new pollster plugin into Ceilometer by:
- adding a new item under the entry-point group ceilometer.poll.central in the setup.cfg file
- adding the implementation code, inheriting from plugin.CentralPollster.

But when I sudo python setup.py install and restart the Ceilometer-related services in devstack, NO new meter is displayed by ceilometer meter-list, while I expect that a new meter matching the item defined in setup.cfg should appear. Are there any other source/config files I need to modify or add?

Thanks in advance,
Gary

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
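Since the question is about a meter not appearing, a registration sketch may help (the package, module, and meter names below are hypothetical; only the entry-point group name ceilometer.poll.central comes from the message above). Each entry maps a pollster name to module:Class:

```ini
[entry_points]
ceilometer.poll.central =
    my.new.meter = my_package.pollster:MyPollster
```

Two things commonly worth checking in this situation: the package must be re-installed after editing setup.cfg so the entry points are re-registered, and the pipeline configuration (pipeline.yaml) must match the new meter name - a pollster whose samples are filtered out by the pipeline will not show up in ceilometer meter-list.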
[openstack-dev] [nova] 9 days until feature proposal freeze
Hi,

This is just a friendly reminder that we are now 9 days away from the feature proposal freeze for nova.

If you think your blueprint isn't going to make it in time, then now would be a good time to let me know so that we can defer it until Kilo. That will free up reviewer time for other blueprints.

Some people have more than one blueprint still under development... Perhaps they could defer some of those to Kilo?

Thanks,
Michael

--
Rackspace Australia

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container
Hi,

Are you aware of the docker container resource type (DockerInc::Docker::Container) in the Heat contrib directory? I see a 'CMD' property, which is a list of commands to run after the container is spawned. Is that what you want?

Regards,
Qiming

On Tue, Aug 12, 2014 at 02:27:39PM +0800, Jay Lau wrote:
> Hi,
> I'm now doing some investigation into docker + HEAT integration and came
> up with one question I'd like your help with.
> What is the best way for a docker container to run some user data once
> the docker container is provisioned? I think there are two ways: using
> cloud-init or the CMD section in the Dockerfile, right? Just wondering,
> does anyone have some experience with cloud-init for a docker container?
> Is the configuration the same as with a VM?
> --
> Thanks,
> Jay

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward
Hi Paul,

Below are some other useful GBP reference pages:
https://wiki.opendaylight.org/view/Project_Proposals:Group_Based_Policy_Plugin
http://www.cisco.com/en/US/prod/collateral/netmgtsw/ps13004/ps13460/white-paper-c11-729906_ns1261_Networking_Solutions_White_Paper.html

I think the root cause of this long argument is that the GBP core model was not designed natively for Neutron, and it was introduced into Neutron so radically, without careful tailoring and adaptation. Maybe the GBP team also didn't want to do so; their intention is to maintain a unified model across all kinds of platforms including Neutron, OpenDaylight, ACI/OpFlex, etc. However, redundancy and duplication exist between EP/EPG/BD/RD and Port/Network/Subnet. So mapping is used between these objects, and I think this is why so many voices request moving GBP out and on top of Neutron.

Will GBP simply be an *addition*? It absolutely COULD be, but objectively speaking, its core model also allows it to BE-ABLE-TO take over Neutron core resources (see the wiki above). The GBP mapping spec suggested a nova --nic extension to handle EP/EPG resources directly, so all of the original Neutron core resources can be shadowed away from the user interface: GBP becomes the new OpenStack network API :-) No one can say "deprecate Neutron core" here and now, but shall we leave Neutron core just as *traditional/legacy*?

Personally I prefer not to throw NW-Policy out of Neutron, but with the prerequisite that its core model should be reviewed and tailored. A new lightweight model carefully designed natively for Neutron is needed, not a direct copy of a whole bunch of monolithic core resources from another existing system. Here is the very basic suggestion: because the core value of GBP is a policy template with contracts, throw away the EP/EPG/L2P/L3P model (not just renaming them again and again), and APPLY the policy template to existing Neutron core resources, rather than reinventing similar concepts in GBP and then doing the mapping.

On Mon, Aug 11, 2014 at 9:12 PM, CARVER, PAUL <pc2...@att.com> wrote:
> loy wolfe [mailto:loywo...@gmail.com] wrote:
> > Then since Network/Subnet/Port will never be treated just as LEGACY
> > COMPATIBLE roles, there is no need to extend the Nova-Neutron interface
> > to follow the GBP resource. Anyway, an optional service plugin inside
> > Neutron shouldn't have any impact on the Nova side.
>
> This gets to the root of why I was getting confused about Jay and others
> having Nova-related concerns. I was/am assuming that GBP is simply an
> *additional* mechanism for manipulating Neutron, not a deprecation of any
> part of the existing Neutron API. I think Jay's concern, and the reason
> why he keeps mentioning Nova as the biggest and most important consumer
> of Neutron's API, stems from an assumption that Nova would need to change
> to use the GBP API.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container
Thanks Qiming. ;-)

Yes, this is one solution for running user data when using a docker container in HEAT. I see that the properties include almost all of the parameters used in docker run.

Do you know if a docker container supports cloud-init in an image? My understanding is NOT, as I did not see userdata in the docker properties.

2014-08-12 16:21 GMT+08:00 Qiming Teng <teng...@linux.vnet.ibm.com>:
> Hi,
> Are you aware of the docker container resource type
> (DockerInc::Docker::Container) in the Heat contrib directory? I see a
> 'CMD' property, which is a list of commands to run after the container is
> spawned. Is that what you want?
> Regards,
> Qiming

--
Thanks,
Jay

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container
Don't have an answer to this. You may try it though.

Regards,
Qiming

On Tue, Aug 12, 2014 at 04:52:58PM +0800, Jay Lau wrote:
> Thanks Qiming. ;-)
> Yes, this is one solution for running user data when using a docker
> container in HEAT. I see that the properties include almost all of the
> parameters used in docker run.
> Do you know if a docker container supports cloud-init in an image? My
> understanding is NOT, as I did not see userdata in the docker properties.
> --
> Thanks,
> Jay

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Gantt project
This is to make sure that everyone knows about the Gantt project and to make sure that no one has a strong aversion to what we are doing.

The basic goal is to split the scheduler out of Nova and create a separate project that, ultimately, can be used by other OpenStack projects that have a need for scheduling services. Note that we have no intention of forcing people to use Gantt, but it seems silly to have a scheduler inside Nova, another scheduler inside Cinder, another scheduler inside Neutron, and so forth. This is clearly predicated on the idea that we can create a common, flexible scheduler that can meet everyone's needs but, as I said, there is no rule that any project has to use Gantt; if we don't meet your needs you are free to roll your own scheduler.

We will start out by just splitting the scheduler code out of Nova into a separate project that will initially only be used by Nova. This will be followed by enhancements, like a common API, that can then be utilized by other projects. We are cleaning up the internal interfaces in the Juno release with the expectation that early in the Kilo cycle we will be able to do the split and create a Gantt project that is completely compatible with the current Nova scheduler.

Hopefully our initial goal (a separate project that is completely compatible with the Nova scheduler) is not too controversial, but feel free to reply with any concerns you may have.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [ceilometer] tox -epy26 failed because of insufficient test environment
Hi,

I got an error message when Jenkins executed "tox -epy26" for the following fix:
https://review.openstack.org/#/c/112771/

I think the reason for the error message is that mongod isn't installed in the test environment. (It works in my test env.) Do you have any idea how to solve this?

- setup-test-env.sh
16 export PATH=${PATH:+$PATH:}/sbin:/usr/sbin
17 if ! which mongod >/dev/null 2>&1
18 then
19     echo "Could not find mongod command" 1>&2
20     exit 1
21 fi

- console.log
2014-08-12 07:25:03.329 | + tox -epy26
2014-08-12 07:25:03.542 | py26 create: /home/jenkins/workspace/gate-ceilometer-python26/.tox/py26
2014-08-12 07:25:05.255 | py26 installdeps: -r/home/jenkins/workspace/gate-ceilometer-python26/requirements.txt, -r/home/jenkins/workspace/gate-ceilometer-python26/test-requirements.txt
2014-08-12 07:28:01.581 | py26 develop-inst: /home/jenkins/workspace/gate-ceilometer-python26
2014-08-12 07:28:07.861 | py26 runtests: commands[0] | bash -x /home/jenkins/workspace/gate-ceilometer-python26/setup-test-env.sh python setup.py testr --slowest --testr-args=
2014-08-12 07:28:07.864 | + set -e
2014-08-12 07:28:07.865 | ++ mktemp -d CEILO-MONGODB-XXXXX
2014-08-12 07:28:07.866 | + MONGO_DATA=CEILO-MONGODB-t6f5p
2014-08-12 07:28:07.866 | + MONGO_PORT=29000
2014-08-12 07:28:07.866 | + trap clean_exit EXIT
2014-08-12 07:28:07.867 | + mkfifo CEILO-MONGODB-t6f5p/out
2014-08-12 07:28:07.868 | + export PATH=/home/jenkins/workspace/gate-ceilometer-python26/.tox/py26/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin
2014-08-12 07:28:07.869 | + PATH=/home/jenkins/workspace/gate-ceilometer-python26/.tox/py26/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin
2014-08-12 07:28:07.869 | + which mongod
2014-08-12 07:28:07.870 | + echo 'Could not find mongod command'
2014-08-12 07:28:07.870 | Could not find mongod command
2014-08-12 07:28:07.871 | + exit 1
2014-08-12 07:28:07.871 | + clean_exit
2014-08-12 07:28:07.872 | + local error_code=1
2014-08-12 07:28:07.872 | + rm -rf CEILO-MONGODB-t6f5p
2014-08-12 07:28:07.873 | ++ jobs -p
2014-08-12 07:28:07.873 | + kill
2014-08-12 07:28:07.874 | kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
2014-08-12 07:28:07.875 | ERROR: InvocationError: '/bin/bash -x /home/jenkins/workspace/gate-ceilometer-python26/setup-test-env.sh python setup.py testr --slowest --testr-args='

Best Regards,
Hisashi Osanai

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
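To check whether your own environment would pass the same gate, the failing check can be reproduced locally with a small self-contained sketch (the function name is illustrative; the message format mirrors setup-test-env.sh):

```shell
# Fail fast, with a message on stderr, when a required binary is absent -
# the same pattern setup-test-env.sh uses for mongod.
check_cmd() {
    if ! command -v "$1" >/dev/null 2>&1; then
        echo "Could not find $1 command" 1>&2
        return 1
    fi
}

check_cmd sh && echo "sh is available"
check_cmd mongod || echo "mongod missing - MongoDB-backed tests will fail"
```

If mongod is missing, installing it (e.g. via your distribution's package manager) or getting it onto the Jenkins slave image is what resolves the tox run.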
Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container
I don't have the environment set up now, but from reviewing the code, I think the logic should be as follows:

1) When using the nova docker driver, we can use cloud-init and/or CMD in the docker image to run post-install scripts:

  myapp:
    Type: OS::Nova::Server
    Properties:
      flavor: m1.small
      image: my-app:latest    # docker image
      user-data:

2) When using the heat docker driver, we can only use CMD in the docker image or the heat template to run post-install scripts:

  wordpress:
    type: DockerInc::Docker::Container
    depends_on: [database]
    properties:
      image: wordpress
      links:
        db: mysql
      port_bindings:
        80/tcp: [{HostPort: 80}]
      docker_endpoint:
        str_replace:
          template: http://host:2345/
          params:
            host: {get_attr: [docker_host, networks, private, 0]}
      cmd: /bin/bash

2014-08-12 17:11 GMT+08:00 Qiming Teng <teng...@linux.vnet.ibm.com>:
> Don't have an answer to this. You may try it though.
> Regards,
> Qiming

--
Thanks,
Jay

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
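For completeness, a minimal sketch of the "CMD in the Dockerfile" option discussed in this thread (the base image and script name are hypothetical). Note a key difference from cloud-init: the CMD runs on every container start, not once on first boot:

```
FROM ubuntu:14.04
COPY post-install.sh /post-install.sh
RUN chmod +x /post-install.sh
CMD ["/post-install.sh"]
```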
Re: [openstack-dev] [nova] fair standards for all hypervisor drivers
On Mon, Aug 11, 2014 at 03:25:39PM -0700, Joe Gordon wrote: I am not sure if I follow. The gate issue with live snapshots has been worked around by turning it off [0], so presumably this patch is forward facing. I fail to see how this patch is needed to help the gate in the future. Wouldn't it just delay the issues until we change the version_cap? Consider that we have a feature already in tree that is not currently tested by the gate. Now we update libvirt in the gate and so tempest suddenly starts exercising the feature. Now if there is a bug, every single review submitted to the gate is potentially going to crash and burn causing major pain for everyone trying to get tests to pass. With this version cap, you can update libvirt in the gate in knowledge that we won't turn on new previously untested feature patches, so you have lower risk of causing gate instability. Once the gate is updated to new libvirt, we submit a patch to update version cap. If there is a bug in the new features enabled it only affects that one patch under review instead of killing the entire CI system for anyone. Only once we have passing tests for the new version cap value and that is merged would the gate as a whole be impact. Of course sometimes the bugs are very non-deterministic and rare so things might still sneak through, but at least some portion of bugs will be detected this way and help the gate reliability during updates of libvirt. The issue I see with the libvirt version_cap [1] is best captured in its commit message: The end user can override the limit if they wish to opt-in to use of untested features via the 'version_cap' setting in the 'libvirt' group. This goes against the very direction nova has been moving in for some time now. We have been moving away from merging untested (re: no integration testing) features. This patch changes the very direction the project is going in over testing without so much as a discussion. 
While I think it may be time that we revisited this discussion, the discussion needs to happen before any patches are merged. Like it or not we have a number of features in Nova that we don't have test coverage for, due to a variety of reasons, some short term, some long term, some permanently unavoidable. One of the reasons is due to the gate having too old libvirt for a feature. As mentioned elsewhere people are looking at addressing that, by trying to figure out how to do a gate job with newer libvirt. Blocking feature development during Juno until the gate issues are addressed is not going to help the work to get new gate jobs, but will discourage our contributors and further the (somewhat valid) impression that we're not a very welcoming project to work with. The version cap setting is *not* encouraging us to add more features that lack testing. It is about recognising that we're *already* accepting such features and so taking steps to ensure our end users don't exercise the untested code paths unless they explicitly choose to. This ensures that what the user tests out of the box actually meets our Tier 1 status. I am less concerned about the contents of this patch, and more concerned with how such a big de facto change in nova policy (we accept untested code sometimes) without any discussion or consensus. In your comment on the revert [2], you say the 'whether not-CI-tested features should be allowed to be merged' debate is 'clearly unresolved.' How did you get to that conclusion? This was never brought up in the mid-cycles as a unresolved topic to be discussed. In our specs template we say Is this untestable in gate given current limitations (specific hardware / software configurations available)? If so, are there mitigation plans (3rd party testing, gate enhancements, etc) [3]. We have been blocking untested features for some time now. That last lines are nonsense. We have never unconditionally blocked untested features nor do I recommend that we do so. 
The specs template testing section allows the contributor to *justify* why they think the feature is worth accepting despite the lack of testing. The reviewers make a judgement call on whether the justification is valid or not. This is a pragmatic approach to the problem. I am further perplexed by what Daniel Berrange, the patch author, meant when he commented [2] Regardless of the outcome of the testing discussion we believe this is a useful feature to have. Who is 'we'? Because I don't see how that can be nova-core or even nova-specs-core, especially considering how many members of those groups are +2 on the revert. So if 'we' is neither of those groups then who is 'we'? By 'we' I'm referring to the people who submitted and approved the patch. As explained so many times now, this version cap concept is something that is useful to end users even if this testing debate were not happening and libvirt had 100% testing coverage. ie consider we test on libvirt 1.2.0 but a cloud admin has deployed on libvirt
Re: [openstack-dev] [tc][ceilometer] Some background on the gnocchi project
Doesn't InfluxDB do the same? InfluxDB stores timeseries data primarily. Gnocchi is intended to store strongly-typed OpenStack resource representations (instances, images, etc.) in addition to providing a means to access timeseries data associated with those resources. So to answer your question: no, IIUC, it doesn't do the same thing. Ok, I think I'm getting closer on this. Great! Thanks for the clarification. Sadly, I have more questions :) Any time, Sandy :) Is this closer? a metadata repo for resources (instances, images, etc) + an abstraction to some TSDB(s)? Somewhat closer (more clarification below on the metadata repository aspect, and the completeness/authority of same). Hmm, thinking out loud ... if it's a metadata repo for resources, who is the authoritative source for what the resource is? Ceilometer/Gnocchi or the source service? The source service is authoritative. For example, if I want to query instance power state do I ask ceilometer or Nova? In that scenario, you'd ask nova. If, on the other hand, you wanted to average out the CPU utilization over all instances with a certain metadata attribute set (e.g. some user metadata set by Heat that indicated membership of an autoscaling group), then you'd ask ceilometer. Or is it metadata about the time-series data collected for that resource? Both. But the focus of my preceding remarks was on the latter. In which case, I think most tsdb's have some sort of series description facilities. Sure, and those should be used for metadata related directly to the timeseries (granularity, retention etc.) I guess my question is, what makes this metadata unique and how would it differ from the metadata ceilometer already collects? The primary difference from the way ceilometer currently stores metadata is the avoidance of per-sample snapshots of resource metadata (as stated in the initial mail on this thread). Will it be using Glance, now that Glance is becoming a pure metadata repo?
No, we have no plans to use glance for this. By becoming a pure metadata repo, presumably you mean this spec: https://github.com/openstack/glance-specs/blob/master/specs/juno/metadata-schema-catalog.rst I don't see this on the glance roadmap for Juno: https://blueprints.launchpad.net/glance/juno so presumably the integration of graffiti and glance is still more of a longer term intent, than a present-tense becoming. I'm totally open to correction on this by markwash and others, but my reading of the debate around the recent change in glance's mission statement was that the primary focus in the immediate term was to expand into providing an artifact repository (for artifacts such as Heat templates), while not to *precluding* any future expansion into also providing a metadata repository. The fossil-record of that discussion is here: https://review.openstack.org/98002 Though of course these things are not a million miles from each other, one is just a step up in the abstraction stack, having a wider and more OpenStack-specific scope. Could it be a generic timeseries service? Is it openstack specific because it uses stackforge/python/oslo? No, I meant OpenStack-specific in terms of it understanding something of the nature of OpenStack resources and their ownership (e.g. instances, with some metadata, each being associated with a user tenant etc.) Not OpenStack-specific in the sense that it takes dependencies from oslo or stackforge. As for using python: yes, gnocchi is implemented in python, like much of the rest of OpenStack. However, no, I don't think that choice of implementation language makes it OpenStack-specific. I assume the rules and schemas will be data-driven (vs. hard-coded)? Well one of the ideas was to move away from loosely typed representations of resources in ceilometer, in the form of a dict of metadata containing whatever it contains, and instead decide upfront what was the specific minimal information per resource type that we need to store. ... 
and since the ceilometer collectors already do the bridge work, is it a pre-packaging of definitions that target openstack specifically? I'm not entirely sure of what you mean by the bridge work in this context. The ceilometer collector effectively acts as a concentrator, by persisting the metering messages emitted by the other ceilometer agents (i.e. the compute, central, notification agents) to the metering store. These samples are stored by the collector pretty much as-is, so there's no real bridging going on currently in the collector (in the sense of mapping or transforming). However, the collector is indeed the obvious hook point for ceilometer to emit data to gnocchi. (not sure about wider and more specific) I presume you're thinking oxymoron with wider and more specific? I meant: * wider in the sense that it covers more ground than generic timeseries data storage * more specific in the sense that some of
Re: [openstack-dev] [ceilometer] tox -epy26 failed because of insufficient test environment
Hisashi Osanai, yes, that is blocking the Ceilometer gate entirely for now. The reason might be the updated centos6 image in the gate, but I have not had a chance to verify that. Infra folks, can you help us? Thanks, Dina On Tue, Aug 12, 2014 at 1:47 PM, Osanai, Hisashi osanai.hisa...@jp.fujitsu.com wrote: Hi, I got an error message when Jenkins executed tox -epy26 in the following fix. https://review.openstack.org/#/c/112771/ I think the reason for the error is that mongod isn't installed in the test environment. (it works in my test env) Do you have any idea how to solve this? - setup-test-env.sh 16 export PATH=${PATH:+$PATH:}/sbin:/usr/sbin 17 if ! which mongod >/dev/null 2>&1 18 then 19 echo "Could not find mongod command" 1>&2 20 exit 1 21 fi - console.log 2014-08-12 07:25:03.329 | + tox -epy26 2014-08-12 07:25:03.542 | py26 create: /home/jenkins/workspace/gate-ceilometer-python26/.tox/py26 2014-08-12 07:25:05.255 | py26 installdeps: -r/home/jenkins/workspace/gate-ceilometer-python26/requirements.txt, -r/home/jenkins/workspace/gate-ceilometer-python26/test-requirements.txt 2014-08-12 07:28:01.581 | py26 develop-inst: /home/jenkins/workspace/gate-ceilometer-python26 2014-08-12 07:28:07.861 | py26 runtests: commands[0] | bash -x /home/jenkins/workspace/gate-ceilometer-python26/setup-test-env.sh python setup.py testr --slowest --testr-args= 2014-08-12 07:28:07.864 | + set -e 2014-08-12 07:28:07.865 | ++ mktemp -d CEILO-MONGODB-XXXXX 2014-08-12 07:28:07.866 | + MONGO_DATA=CEILO-MONGODB-t6f5p 2014-08-12 07:28:07.866 | + MONGO_PORT=29000 2014-08-12 07:28:07.866 | + trap clean_exit EXIT 2014-08-12 07:28:07.867 | + mkfifo CEILO-MONGODB-t6f5p/out 2014-08-12 07:28:07.868 | + export PATH=/home/jenkins/workspace/gate-ceilometer-python26/.tox/py26/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin 2014-08-12 07:28:07.869 | + PATH=/home/jenkins/workspace/gate-ceilometer-python26/.tox/py26/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin 2014-08-12 07:28:07.869 | + which mongod
2014-08-12 07:28:07.870 | + echo 'Could not find mongod command' 2014-08-12 07:28:07.870 | Could not find mongod command 2014-08-12 07:28:07.871 | + exit 1 2014-08-12 07:28:07.871 | + clean_exit 2014-08-12 07:28:07.872 | + local error_code=1 2014-08-12 07:28:07.872 | + rm -rf CEILO-MONGODB-t6f5p 2014-08-12 07:28:07.873 | ++ jobs -p 2014-08-12 07:28:07.873 | + kill 2014-08-12 07:28:07.874 | kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec] 2014-08-12 07:28:07.875 | ERROR: InvocationError: '/bin/bash -x /home/jenkins/workspace/gate-ceilometer-python26/setup-test-env.sh python setup.py testr --slowest --testr-args=' Best Regards, Hisashi Osanai ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Dina Belova Software Engineer Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?
Hey Nova-istas, While I was hacking on [1] I was considering how to approach the fact that we now need to track one more thing (NUMA node utilization) in our resources. I went with 'I'll add it to the compute nodes table', thinking it's a fundamental enough property of a compute host that it deserves to be there, although I was considering Extensible Resource Tracker at one point (ERT from now on - see [2]) but looking at the code - it did not seem to provide anything I desperately needed, so I went with keeping it simple. So fast-forward a few days, and I caught myself solving a problem that I kept thinking ERT should have solved - but apparently hasn't, and I think it is fundamentally a broken design without it - so I'd really like to see it re-visited. The problem can be described by the following lemma (if you take 'lemma' to mean 'a sentence I came up with just now' :)): Due to the way scheduling works in Nova (roughly: pick a host based on stale(ish) data, rely on claims to trigger a re-schedule), the _same exact_ information that the scheduling service used when making a placement decision needs to be available to the compute service when testing the placement. This is not the case right now, and the ERT does not propose any way to solve it (see how I hacked around needing to be able to get extra_specs when making claims in [3], without hammering the DB). The result will be that any resource that we add, and that needs user-supplied info for scheduling an instance against it, will need a buggy re-implementation of gathering all the bits from the request that the scheduler sees, to be able to work properly. This is obviously a bigger concern when we want to allow users to pass data (through image or flavor) that can affect scheduling, but still a huge concern IMHO.
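The lemma can be illustrated with a toy example (illustrative code with made-up names, not nova's actual scheduler or resource tracker): a scheduler filter that reads flavor extra_specs can only be faithfully re-tested by the compute-side claim if the claim sees those same extra_specs.

```python
class ComputeResourcesUnavailable(Exception):
    """Raised when a claim fails, which would trigger a re-schedule."""


def numa_filter(host_state, flavor):
    # Scheduler side: placement decision based on stale-ish host data
    # plus user-supplied request info carried in the flavor.
    wanted = int(flavor["extra_specs"].get("hw:numa_nodes", 1))
    return host_state["free_numa_nodes"] >= wanted


def claim_numa(host_state, flavor):
    # Compute side: the claim must re-derive `wanted` from the *same*
    # flavor extra_specs the scheduler used, or it cannot re-test the
    # placement decision correctly.
    wanted = int(flavor["extra_specs"].get("hw:numa_nodes", 1))
    if host_state["free_numa_nodes"] < wanted:
        raise ComputeResourcesUnavailable()
    host_state["free_numa_nodes"] -= wanted
```

If the compute service does not receive the flavor (or whatever request data the filter consulted), each new resource type has to re-gather it ad hoc, which is the buggy re-implementation the mail warns about.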
As I see that there are already BPs proposing to use this IMHO broken ERT ([4] for example), which will surely add to the proliferation of code that hacks around these design shortcomings in what is already a messy, but also crucial (for perf as well as features) bit of Nova code. I propose to revert [2] ASAP since it is still fresh, and see how we can come up with a cleaner design. Would like to hear opinions on this, before I propose the patch though! Thanks all, Nikola [1] https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement [2] https://review.openstack.org/#/c/109643/ [3] https://review.openstack.org/#/c/111782/ [4] https://review.openstack.org/#/c/89893 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ceilometer] tox -epy26 failed because of insufficient test environment
On Tuesday, August 12, 2014 7:05 PM, Dina Belova wrote: that is blocking the Ceilometer gate at all for now. Thank you for your quick response. I understand current situation. Thanks again! Hisashi Osanai ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ceilometer] tox -epy26 failed because of insufficient test environment
Folks, it looks like the mongo packages were retired as a result of this ticket: https://fedorahosted.org/rel-eng/ticket/5963 Although it also looks like this mistake was quickly reverted here: http://pkgs.fedoraproject.org/cgit/mongodb.git/commit/?h=el6 Let's wait and see if that fixed the issue; it looks like it should be OK now. Thanks, Dina On Tue, Aug 12, 2014 at 2:23 PM, Osanai, Hisashi osanai.hisa...@jp.fujitsu.com wrote: On Tuesday, August 12, 2014 7:05 PM, Dina Belova wrote: that is blocking the Ceilometer gate at all for now. Thank you for your quick response. I understand the current situation. Thanks again! Hisashi Osanai ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Dina Belova Software Engineer Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [keystone] Configuring protected API functions to allow public access
Hi All, Correct me if I am wrong, but I don't think you can configure the Keystone policy.json to allow public access to an API function; as far as I can tell, you can allow access to any authenticated user regardless of role assignments, but not public access. My use case is a client which allows users to query for a list of supported identity providers / protocols so that the user can then select which provider to authenticate with - as the user is unauthenticated at the time of the query, the request needs to allow public access to the 'List Identity Providers' API function. I can remove the protected decorator from the required functions, but this is a nasty hack. I suggest that it should be possible to configure this kind of access rule on a deployment-by-deployment basis, and I was just hoping to get some thoughts on this. Many thanks, Kristy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
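For illustration, an empty policy rule in policy.json grants an action to any authenticated caller (the rule name below is an assumed example for the federation API):

```json
{
    "identity:list_identity_providers": ""
}
```

As the mail notes, though, this still does not give *public* access: the protected decorator rejects a request with a missing or invalid token before the policy rule is ever evaluated, which is why a deployment-level knob would be needed.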
Re: [openstack-dev] [neutron] make mac address updatable: which plugins?
Hi Chuck, I'll comment regarding the Mellanox plug-in and ML2 mech driver in the review. BR, Irena -Original Message- From: Carlino, Chuck (OpenStack TripleO, Neutron) [mailto:chuck.carl...@hp.com] Sent: Wednesday, August 06, 2014 10:42 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] make mac address updatable: which plugins? Yamamoto has reviewed the changes for this, and has raised the following issue (among others). * iirc mellanox uses mac address as port identifier. what happens on address change? Can someone who knows mellanox please comment, either here or in the review? Thanks, Chuck On Aug 5, 2014, at 1:22 PM, Carlino, Chuck (OpenStack TripleO, Neutron) chuck.carl...@hp.com wrote: Thanks for the quick responses. Here's the WIP review: https://review.openstack.org/112129. The base plugin doesn't contribute to the notification decision right now, so I've modified the actual plugin code. Chuck On Aug 5, 2014, at 12:51 PM, Amir Sadoughi amir.sadou...@rackspace.com wrote: I agree with Kevin here. Just a note, don't bother with the openvswitch and linuxbridge plugins as they are marked for deletion this cycle, imminently (already deprecated) [0]. Amir [0] http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-04-21.02.html Announcements 2e. From: Kevin Benton [blak...@gmail.com] Sent: Tuesday, August 05, 2014 2:40 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] make mac address updatable: which plugins? How are you implementing the change? It would be good to see some code in a review to get an idea of what needs to be updated. If it's just a change in the DB base plugin, just let those changes propagate to the plugins that haven't overridden the inherited behavior.
On Tue, Aug 5, 2014 at 1:28 PM, Charles Carlino chuckjcarl...@gmail.com wrote: Hi all, I need some help regarding a bug [1] I'm working on. The bug is basically a request to make the mac address of a port updatable. The use case is a baremetal (Ironic) node that has a bad NIC which must be replaced, resulting in a new mac address. The bad NIC has an associated neutron port which of course holds the NIC's IP address. The reason to make mac_address updatable (as opposed to having the user create a new port and delete the old one) is that during the recovery process the IP address must be retained and assigned to the new NIC/port, which is not guaranteed in the above work-around. I'm coding the changes to do this in the ml2, openvswitch, and linuxbridge plugins, but I'm not sure how to handle the other plugins since I don't know if the associated backends are prepared to handle such updates. My first thought is to disallow the update in the other plugins, but I would really appreciate your advice. Kind regards, Chuck Carlino [1] https://bugs.launchpad.net/neutron/+bug/1341268 -- Kevin Benton ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
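The 'disallow the update in the other plugins' option could look roughly like this (a sketch with made-up names, not the code under review): a plugin whose backend cannot propagate mac changes simply rejects them while still allowing other attribute updates.

```python
class MacAddressUpdateNotSupported(Exception):
    """Raised by a plugin whose backend cannot handle a mac change."""


def update_port(db_port, new_attrs):
    # Reject a mac_address change, but let every other update through.
    new_mac = new_attrs.get("mac_address")
    if new_mac is not None and new_mac != db_port["mac_address"]:
        raise MacAddressUpdateNotSupported(
            "mac_address is not updatable by this plugin")
    db_port.update(new_attrs)
    return db_port
```

A plugin that does support the update would instead push the new mac to its backend before committing the DB change.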
Re: [openstack-dev] Which program for Rally
On Aug 11, 2014, at 12:00 PM, David Kranz dkr...@redhat.com wrote: On 08/06/2014 05:48 PM, John Griffith wrote: I have to agree with Duncan here. I also don't know if I fully understand the limit in options. Stress test seems like it could/should be different (again overlap isn't a horrible thing) and I don't see it as siphoning off resources so not sure of the issue. We've become quite wrapped up in projects, programs and the like lately and it seems to hinder forward progress more than anything else. I'm also not convinced that Tempest is where all things belong, in fact I've been thinking more and more that a good bit of what Tempest does today should fall more on the responsibility of the projects themselves. For example functional testing of features etc, ideally I'd love to have more of that fall on the projects and their respective teams. That might even be something as simple to start as saying if you contribute a new feature, you have to also provide a link to a contribution to the Tempest test-suite that checks it. Sort of like we do for unit tests, cross-project tracking is difficult of course, but it's a start. The other idea is maybe functional test harnesses live in their respective projects. Honestly I think who better to write tests for a project than the folks building and contributing to the project. At some point IMO the QA team isn't going to scale. I wonder if maybe we should be thinking about proposals for delineating responsibility and goals in terms of functional testing? All good points. Your last paragraph was discussed by the QA team leading up to and at the Atlanta summit. The conclusion was that the api/functional tests focused on a single project should be part of that project. As Sean said, we can envision there being half (or some other much smaller number) as many such tests in tempest going forward. 
Details are under discussion, but the way this is likely to play out is that individual projects will start by creating their own functional tests outside of tempest. Swift already does this and neutron seems to be moving in that direction. There is a spec to break out parts of tempest (https://github.com/openstack/qa-specs/blob/master/specs/tempest-library.rst) into a library that might be used by projects implementing functional tests. Once a project has sufficient functional testing, we can consider removing its api tests from tempest. This is a bit tricky because tempest needs to cover *all* cross-project interactions. In this respect, there is no clear line in tempest between scenario tests which have this goal explicitly, and api tests which may also involve interactions that might not be covered in a scenario. So we will need a principled way to make sure there is complete cross-project coverage in tempest with a smaller number of api tests. -David We need to be careful about dumping the tests from tempest now that the DefCore group is relying on them as well. Tempest is no longer just a developer/QA/operations tool. It’s also being used as the basis of a trademark enforcement tool. That’s not to say we can’t change the test suite, but we have to consider a new angle when doing so. Doug ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron] Feature Proposal Freeze is 9 days away
Just a reminder that Neutron observes FPF [1], and it's 9 days away. We have an incredible number of BPs which do not have code submitted yet. My suggestion to those who own one of these BPs would be to think hard about whether or not you can realistically land this code in Juno before jamming things up at the last minute. I hope we as a team can refocus on the remaining Juno tasks now and land the items of importance to the community at the end. Thanks! Kyle [1] https://wiki.openstack.org/wiki/FeatureProposalFreeze ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] oslo.concurrency repo review
On Mon, Aug 11 2014, Joshua Harlow wrote: Sounds great, I've been wondering why https://github.com/stackforge/tooz/commit/f3e11e40f9871f8328 happened/merged (maybe it should be changed?). For the simple reason that there are people wanting to use a lock shared between several processes without it being distributed across several nodes. In that case, having a ZK or memcached backend is useless, as IPC is good enough. -- Julien Danjou ;; Free Software hacker ;; http://julien.danjou.info ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
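A minimal sketch of the kind of single-node, inter-process lock being described (hypothetical class, not tooz's actual API): fcntl advisory file locks are scoped to one machine and are released by the kernel when the holding process exits, so no ZK or memcached backend is needed for the IPC case.

```python
import fcntl
import os


class InterProcessLock:
    """Minimal fcntl-based lock shared between processes on one node.

    The kernel releases the lock automatically if the holder dies,
    which avoids stale-lock problems on abnormal termination.
    """

    def __init__(self, path):
        self.path = path
        self.fd = None

    def acquire(self):
        # Open (creating if needed) and take an exclusive advisory lock;
        # blocks until any other holder releases it or exits.
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        fcntl.flock(self.fd, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)
        self.fd = None

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()
```

This is POSIX-only; a portable implementation (e.g. for Windows) needs a different primitive, which is one of the concerns raised later in the thread.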
Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase
On Tue, Aug 12 2014, Osanai, Hisashi wrote: I did cherry-pick for https://bugs.launchpad.net/ceilometer/+bug/1326250; and executed git review (https://review.openstack.org/#/c/112806/). In the review phase I got an error message from Jenkins. The reason for the error is that happybase 0.8 (the latest one) uses the execfile function, which has been removed from Python 3. The py33 gate shouldn't be activated for stable/icehouse. I'm no infra-config expert, but we should be able to patch it for that (hint?). -- Julien Danjou /* Free Software hacker http://julien.danjou.info */ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Ceilometer] Add a pollster plugin
On Aug 12, 2014, at 4:11 AM, Duan, Li-Gong (Gary@HPServers-Core-OE-PSC) li-gong.d...@hp.com wrote: Hi Folks, Are there any best practices or a good way to debug whether a new pollster plugin works fine for ceilometer? I’d like to add a new pollster plugin into Ceilometer by - adding a new item under enterypoint/ceilometer.poll.central in the setup.cfg file - adding the implementation code inheriting “plugin.CentralPollster”. But when I run “sudo python setup.py install” and restart ceilometer-related services in devstack, NO new meter is displayed by “ceilometer meter-list”, and I expect that there should be a new meter showing the item defined in setup.cfg. Are there any other source/config files I need to modify or add? You need to define a pipeline [1] to include the data from your new pollster and schedule it to be run. Doug [1] http://docs.openstack.org/developer/ceilometer/configuration.html#pipelines Thanks in advance, Gary ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
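For example (the meter name and most values here are made up for illustration), after registering the pollster entry point in setup.cfg, the meter also has to appear in a pipeline source before the central agent will schedule it and publish its samples:

```yaml
# /etc/ceilometer/pipeline.yaml -- a hypothetical "my.meter" pollster
# wired into a source; without an entry like this the pollster is
# never scheduled, so "ceilometer meter-list" shows nothing new.
sources:
    - name: my_meter_source
      interval: 600            # poll every 10 minutes
      meters:
          - "my.meter"         # meter name emitted by the new CentralPollster
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - rpc://
```

The exact schema depends on the ceilometer release; see the pipeline documentation linked above.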
Re: [openstack-dev] [Nova] PCI support
Thanks, the concern is for the code in Nova and not in Neutron. That is, there is quite a lot of PCI code being added and no way of knowing that it actually works (unless we trust the developers working on it :)). Thanks Gary From: Irena Berezovsky ire...@mellanox.com Date: Tuesday, August 12, 2014 at 10:25 AM To: OpenStack List openstack-dev@lists.openstack.org Cc: Gary Kotton gkot...@vmware.com Subject: RE: [openstack-dev] [Nova] PCI support Hi Gary, Mellanox already established CI support on Mellanox SR-IOV NICs, as one of the jobs of the Mellanox External Testing CI (Check-MLNX-Neutron-ML2-Sriov-driver: http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver). Meanwhile it is not voting, but will be soon. BR, Irena From: Gary Kotton [mailto:gkot...@vmware.com] Sent: Monday, August 11, 2014 5:17 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Nova] PCI support Thanks for the update. From: Robert Li (baoli) ba...@cisco.com Reply-To: OpenStack List openstack-dev@lists.openstack.org Date: Monday, August 11, 2014 at 5:08 PM To: OpenStack List openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] PCI support Gary, Cisco is adding it in our CI testbed. I guess that mlnx is doing the same for their MD as well. -Robert On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote: Hi, At the moment all of the drivers are required to have CI support. Are there any plans regarding the PCI support.
I understand that this is something that requires specific hardware. Are there any plans to add this? Thanks Gary ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] fair standards for all hypervisor drivers
On 08/12/2014 05:54 AM, Daniel P. Berrange wrote: I am less concerned about the contents of this patch, and more concerned with how such a big de facto change in nova policy (we accept untested code sometimes) was made without any discussion or consensus. In your comment on the revert [2], you say the 'whether not-CI-tested features should be allowed to be merged' debate is 'clearly unresolved.' How did you get to that conclusion? This was never brought up in the mid-cycles as an unresolved topic to be discussed. In our specs template we say Is this untestable in gate given current limitations (specific hardware / software configurations available)? If so, are there mitigation plans (3rd party testing, gate enhancements, etc) [3]. We have been blocking untested features for some time now. Those last lines are nonsense. We have never unconditionally blocked untested features, nor do I recommend that we do so. The specs template testing section allows the contributor to *justify* why they think the feature is worth accepting despite the lack of testing. The reviewers make a judgement call on whether the justification is valid or not. This is a pragmatic approach to the problem. That has been my interpretation and approach as well: we strongly prefer functional testing for everything, but take a pragmatic approach and evaluate proposals on a case-by-case basis. It's clear we need to be a bit more explicit here. -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Implications of moving functional tests to projects (was Re: Which program for Rally)
Changing subject line to continue thread about new $subj On 08/12/2014 08:56 AM, Doug Hellmann wrote: On Aug 11, 2014, at 12:00 PM, David Kranz dkr...@redhat.com mailto:dkr...@redhat.com wrote: On 08/06/2014 05:48 PM, John Griffith wrote: I have to agree with Duncan here. I also don't know if I fully understand the limit in options. Stress test seems like it could/should be different (again overlap isn't a horrible thing) and I don't see it as siphoning off resources so not sure of the issue. We've become quite wrapped up in projects, programs and the like lately and it seems to hinder forward progress more than anything else. I'm also not convinced that Tempest is where all things belong, in fact I've been thinking more and more that a good bit of what Tempest does today should fall more on the responsibility of the projects themselves. For example functional testing of features etc, ideally I'd love to have more of that fall on the projects and their respective teams. That might even be something as simple to start as saying if you contribute a new feature, you have to also provide a link to a contribution to the Tempest test-suite that checks it. Sort of like we do for unit tests, cross-project tracking is difficult of course, but it's a start. The other idea is maybe functional test harnesses live in their respective projects. Honestly I think who better to write tests for a project than the folks building and contributing to the project. At some point IMO the QA team isn't going to scale. I wonder if maybe we should be thinking about proposals for delineating responsibility and goals in terms of functional testing? All good points. Your last paragraph was discussed by the QA team leading up to and at the Atlanta summit. The conclusion was that the api/functional tests focused on a single project should be part of that project. As Sean said, we can envision there being half (or some other much smaller number) as many such tests in tempest going forward. 
Details are under discussion, but the way this is likely to play out is that individual projects will start by creating their own functional tests outside of tempest. Swift already does this and neutron seems to be moving in that direction. There is a spec to break out parts of tempest (https://github.com/openstack/qa-specs/blob/master/specs/tempest-library.rst) into a library that might be used by projects implementing functional tests. Once a project has sufficient functional testing, we can consider removing its api tests from tempest. This is a bit tricky because tempest needs to cover *all* cross-project interactions. In this respect, there is no clear line in tempest between scenario tests which have this goal explicitly, and api tests which may also involve interactions that might not be covered in a scenario. So we will need a principled way to make sure there is complete cross-project coverage in tempest with a smaller number of api tests. -David We need to be careful about dumping the tests from tempest now that the DefCore group is relying on them as well. Tempest is no longer just a developer/QA/operations tool. It's also being used as the basis of a trademark enforcement tool. That's not to say we can't change the test suite, but we have to consider a new angle when doing so. Doug Thanks, Doug. We need to get away from acceptance test == tempest while retaining the ability to define and run an acceptance test as easily as tempest can be run now. My view is that functional tests in projects should have the capability to be run against real clouds, and that in-project functional tests should look like, and be interchangeable with, the api tests in tempest. The in-project tests would be focused on completeness of api testing and the tempest tests focused on cross-project interaction, but they could be run in similar ways. Then a trademark enforcement tool, or any other kind of acceptance test, could select which tests to run. 
I think this view may be a bit controversial but your point obviously needs to be addressed. -David ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [devstack] Core team proposals
These changes have been completed. Welcome Ian! dt -- Dean Troyer dtro...@gmail.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [neutron][cisco] Cisco Nexus requires patched ncclient
Hey all, as per [1], the Cisco Nexus ML2 plugin requires a patched version of ncclient from github. I wonder: - whether this information is still current; - why we don't depend on ncclient through our requirements.txt file. [1]: https://wiki.openstack.org/wiki/Neutron/ML2/MechCiscoNexus Cheers, /Ihar ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [oslo] oslo.concurrency repo review
Sure, that's great as to why a feature like it might exist (and I think such a feature is great to have, for cases when a bigger *distributed* system isn't desired). Just the one there taken from lockutils has some issues that IMHO tooz would be better to avoid for the time being (until pylockfile is updated to have a more reliable implementation). Some of the current issues I can think of off the top of my head:

1. https://bugs.launchpad.net/oslo/+bug/1327946 (this means the usage in tooz will be similarly not resistant to program termination, which in a library like tooz seems more severe, since tooz has no way of knowing how it, as a library, will be used). With this bug, future acquisition after *forceful* program termination will result in the acquire() method not working for the same IPClock name (ever).

2. The double API: tooz configures lockutils one way, someone else can go under tooz and use `set_defaults()` (or other ways that I'm not aware of that can be used to configure oslo.config) and expect that the settings they set will actually do something, when in fact they will not (or will they???). This seems like a bad point of confusion for an API to have, where some of its API is from methods/functions... and some is from oslo.config...

3. Bringing in oslo.config as a dependency (tooz isn't configured via oslo.config, but now it has code that looks like it is configured via it). What happens if some parts of tooz are now set by oslo.config and some parts aren't? This seems bad from a user-experience point of view (where the user is the library user) and a testability point of view (and probably other points of view that I can't think of), when there are new options that can be set via a *secret* API that now affect how tooz works...

4. What happens with Windows here (since tooz is a library it's hard to predict how it will be used, unless Windows is not supported)?
Windows will fall back to using a filelock, which will default to using whatever oslo.config file path was set for tooz, which again goes back to #2 and #3 and having two APIs, one public and one *secret* that can affect how tooz operates... In this case it seems `default=os.environ.get(TOOZ_LOCK_PATH)` will be used, and when that's not set tooz now blows up with a weird configuration error @ https://github.com/stackforge/tooz/blob/master/tooz/openstack/common/lockutils.py#L222 (this all seems bad for users of tooz)... What do you think about waiting until pylockfile is ready and avoiding 1-4 from above? At least if taskflow uses tooz I surely don't want taskflow to have to deal with #1-4 (which it will inherit from tooz if taskflow starts to use tooz by the very nature of taskflow using tooz as a library). Thoughts? -Josh On Tue, Aug 12, 2014 at 6:12 AM, Julien Danjou jul...@danjou.info wrote: On Mon, Aug 11 2014, Joshua Harlow wrote: Sounds great, I've been wondering why https://github.com/stackforge/tooz/commit/f3e11e40f9871f8328 happened/merged (maybe it should be changed?). For the simple reason that there are people wanting to use a lock distributed across several processes without being distributed across several nodes. In that case, having a ZK or memcached backend is useless as IPC is good enough. -- Julien Danjou ;; Free Software hacker ;; http://julien.danjou.info ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
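For reference on issue #1: POSIX flock()-style locks live on an open file description and are released by the kernel when the holding process dies, which is exactly the resilience to forceful termination that the lockutils-derived file lock lacks. A minimal, hypothetical sketch of that approach (not tooz's or pylockfile's actual API; the class name is illustrative):

```python
import fcntl
import os


class InterProcessLock(object):
    """Hypothetical fcntl-based inter-process lock.

    The lock is attached to the open file description, so a holder
    killed with SIGKILL cannot leave the lock stuck forever -- the
    failure mode described for bug 1327946 above.
    """

    def __init__(self, path):
        self.path = path
        self.fd = None

    def acquire(self):
        # Creating/opening the file is safe to repeat: the lock state
        # lives in the kernel, not in the existence of the file.
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        fcntl.flock(self.fd, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)
        self.fd = None

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()
```

Note this sketch is POSIX-only, so it sidesteps rather than answers the Windows question in #4.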
Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?
(sorry for reposting, missed 2 links...) Hi Nikola, Le 12/08/2014 12:21, Nikola Đipanov a écrit : Hey Nova-istas, While I was hacking on [1] I was considering how to approach the fact that we now need to track one more thing (NUMA node utilization) in our resources. I went with - I'll add it to compute nodes table thinking it's a fundamental enough property of a compute host that it deserves to be there, although I was considering Extensible Resource Tracker at one point (ERT from now on - see [2]) but looking at the code - it did not seem to provide anything I desperately needed, so I went with keeping it simple. So fast-forward a few days, and I caught myself solving a problem that I kept thinking ERT should have solved - but apparently hasn't, and I think it is fundamentally a broken design without it - so I'd really like to see it re-visited. The problem can be described by the following lemma (if you take 'lemma' to mean 'a sentence I came up with just now' :)): Due to the way scheduling works in Nova (roughly: pick a host based on stale(ish) data, rely on claims to trigger a re-schedule), _same exact_ information that scheduling service used when making a placement decision, needs to be available to the compute service when testing the placement. This is not the case right now, and the ERT does not propose any way to solve it - (see how I hacked around needing to be able to get extra_specs when making claims in [3], without hammering the DB). The result will be that any resource that we add and needs user supplied info for scheduling an instance against it, will need a buggy re-implementation of gathering all the bits from the request that scheduler sees, to be able to work properly. Well, ERT does provide a plugin mechanism for testing resources at the claim level. 
It is the plugin's responsibility to implement a test() method [2.1], which will be called by test_claim() [2.2]. So, provided this method is implemented, a local host check can be done based on the host's view of resources. This is obviously a bigger concern when we want to allow users to pass data (through image or flavor) that can affect scheduling, but still a huge concern IMHO. And here is where I agree with you: at the moment, ResourceTracker (and consequently Extensible RT) only provides the view of the resources the host knows about (see my point above), and possibly some other resources are missing. So, whatever your choice of going with or without ERT, your patch [3] still deserves to land if we want to avoid a DB lookup each time a claim is made. As I see that there are already BPs proposing to use this IMHO broken ERT ([4] for example), which will surely add to the proliferation of code that hacks around these design shortcomings in what is already a messy, but also crucial (for perf as well as features) bit of Nova code. Two distinct implementations of that spec (i.e. instances and flavors) have been proposed [2.3] [2.4], so reviews are welcome. If you look at the test() method, it's a no-op for both plugins. I'm open to comments because I have the stated problem: how can we define a limit on just a counter of instances and flavors? I propose to revert [2] ASAP since it is still fresh, and see how we can come up with a cleaner design. Would like to hear opinions on this, before I propose the patch tho! IMHO, I think the problem is more likely that the regular RT misses some information for each host, so it requires handling on a case-by-case basis, but I don't think ERT either increases complexity or creates another issue.
Thanks, -Sylvain Thanks all, Nikola [1] https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement [2] https://review.openstack.org/#/c/109643/ [3] https://review.openstack.org/#/c/111782/ [4] https://review.openstack.org/#/c/89893 [2.1] https://github.com/openstack/nova/blob/master/nova/compute/resources/__init__.py#L75 [2.2] https://github.com/openstack/nova/blob/master/nova/compute/claims.py#L134 [2.3] https://review.openstack.org/112578 [2.4] https://review.openstack.org/113373 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
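To make the test() hook discussion concrete: here is a hedged sketch of what an ERT-style plugin answering the "limit on just a counter of instances" question might look like, in the spirit of the plugin API referenced in [2.1] — the class name, method signatures and limit semantics here are illustrative assumptions, not Nova's actual code:

```python
class InstanceCountResource(object):
    """Illustrative ERT-style plugin tracking a simple counter.

    test() is the hook called at claim time; it returns None on
    success or an error string when the proposed usage would exceed
    the limit the scheduler handed down.
    """

    def __init__(self):
        self.usage = 0

    def add_instance(self, usage):
        # Called when a claim succeeds.
        self.usage += 1

    def remove_instance(self, usage):
        # Called when an instance is removed.
        self.usage -= 1

    def test(self, usage, limit):
        if limit is None:
            # No limit configured: treat as unlimited.
            return None
        if self.usage + 1 > limit:
            return ('instance count %d exceeds limit %d'
                    % (self.usage + 1, limit))
        return None
```

The open design question from the thread remains: where does `limit` come from, and is it the *same* data the scheduler used when it made the placement decision?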
Re: [openstack-dev] [oslo] oslo.concurrency repo review
On Tue, Aug 12 2014, Joshua Harlow wrote: […] What do you think about waiting until pylockfile is ready and avoiding 1-4 from above? At least if taskflow uses tooz I surely don't want taskflow to have to deal with #1-4 (which it will inherit from tooz if taskflow starts to use tooz by the very nature of taskflow using tooz as a library). I definitely agree with you! The thing is that I wanted to have this for Gnocchi and the patch https://review.openstack.org/#/c/110260/ which is going to be simplified by that. So I went ahead and implemented a first version of the IPC driver, which as you point out, is far from being perfect. Now, if you think – and you have good points – that the IPC driver in tooz could be better, I'm not going to disagree, and patches are welcome! :-) -- Julien Danjou ;; Free Software hacker ;; http://julien.danjou.info ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] Retrospective veto revert policy
Hey (Terrible name for a policy, I know) From the version_cap saga here: https://review.openstack.org/110754 I think we need a better understanding of how to approach situations like this. Here's my attempt at documenting what I think we're expecting the procedure to be: https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy If it sounds reasonably sane, I can propose its addition to the Development policies doc. Mark. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Retrospective veto revert policy
On 08/12/2014 10:56 AM, Mark McLoughlin wrote: Hey (Terrible name for a policy, I know) From the version_cap saga here: https://review.openstack.org/110754 I think we need a better understanding of how to approach situations like this. Here's my attempt at documenting what I think we're expecting the procedure to be: https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy If it sounds reasonably sane, I can propose its addition to the Development policies doc. Looks reasonable to me. -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][cisco] Cisco Nexus requires patched ncclient
On 8/12/2014 10:27 AM, Ihar Hrachyshka wrote: as per [1], Cisco Nexus ML2 plugin requires a patched version of ncclient from github. I wonder: - - whether this information is still current; Please see: https://review.openstack.org/112175 But we need to do backports before updating the wiki. - - why don't we depend on ncclient thru our requirements.txt file. Do we want to have requirements on things that are only used by a specific vendor plugin? So far it has worked by vendor-specific documentation instructing to manually install the requirement, or vendor-tailored deployment tools/scripts. [1]: https://wiki.openstack.org/wiki/Neutron/ML2/MechCiscoNexus ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Retrospective veto revert policy
Looks reasonable to me. +1 --Dan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Gantt project
Thanks for the info. It does seem like most OpenStack projects have some concept of a scheduler, as you mentioned. Perhaps that's expected in any distributed system. Is it expected or assumed that Gantt will become the common scheduler for all OpenStack projects? That is, is it Gantt's plan and/or a design goal to provide scheduling (or a scheduling framework) for all OpenStack projects? Perhaps this is a question for the TC rather than Don. [1] Since Gantt is initially intended to be used by Nova, will it be under the compute program or will there be a new program created for it? --John [1] You'll forgive me, but I've certainly seen OpenStack projects move from "you can use it if you want" to "you must start using this" in the past. On Aug 11, 2014, at 11:09 PM, Dugger, Donald D donald.d.dug...@intel.com wrote: This is to make sure that everyone knows about the Gantt project and to make sure that no one has a strong aversion to what we are doing. The basic goal is to split the scheduler out of Nova and create a separate project that, ultimately, can be used by other OpenStack projects that have a need for scheduling services. Note that we have no intention of forcing people to use Gantt, but it seems silly to have a scheduler inside Nova, another scheduler inside Cinder, another scheduler inside Neutron, and so forth. This is clearly predicated on the idea that we can create a common, flexible scheduler that can meet everyone's needs but, as I said, there is no rule that any project has to use Gantt; if we don't meet your needs you are free to roll your own scheduler. We will start out by just splitting the scheduler code out of Nova into a separate project that will initially only be used by Nova. This will be followed by enhancements, like a common API, that can then be utilized by other projects.
We are cleaning up the internal interfaces in the Juno release with the expectation that early in the Kilo cycle we will be able to do the split and create a Gantt project that is completely compatible with the current Nova scheduler. Hopefully our initial goal (a separate project that is completely compatible with the Nova scheduler) is not too controversial but feel free to reply with any concerns you may have. -- Don Dugger Censeo Toto nos in Kansa esse decisse. - D. Gale Ph: 303/443-3786 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container
On Tue, Aug 12, 2014 at 5:53 AM, Jay Lau jay.lau@gmail.com wrote: I do not have the environment set up now, but by reviewing code, I think that the logic should be as follows: 1) When using the nova docker driver, we can use cloud-init and/or CMD in docker images to run post-install scripts.

  myapp:
    Type: OS::Nova::Server
    Properties:
      flavor: m1.small
      image: my-app:latest  # docker image
      user-data:

2) When using the heat docker driver, we can only use CMD in the docker image or the heat template to run post-install scripts.

  wordpress:
    type: DockerInc::Docker::Container
    depends_on: [database]
    properties:
      image: wordpress
      links:
        db: mysql
      port_bindings:
        80/tcp: [{HostPort: 80}]
      docker_endpoint:
        str_replace:
          template: http://host:2345/
          params:
            host: {get_attr: [docker_host, networks, private, 0]}
      cmd: /bin/bash

I can confirm this is correct for both use-cases. Currently, using Nova, one may only specify the CMD in the image itself, or as glance metadata. The cloud metadata service should be accessible and usable from Docker. The Heat plugin allows setting the CMD as a resource property. The user-data is only passed to the instance that runs Docker, not the containers. Configuring the CMD and/or environment variables for the container is the correct approach. -- Regards, Eric Windisch ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [keystone] Configuring protected API functions to allow public access
Hi Kristy, Have you tried the [] or @ rule, as mentioned here? https://github.com/openstack/keystone/blob/master/keystone/openstack/common/policy.py#L71 Guang -Original Message- From: K.W.S.Siu [mailto:k.w.s@kent.ac.uk] Sent: Tuesday, August 12, 2014 3:44 AM To: openstack Mailing List Subject: [openstack-dev] [keystone] Configuring protected API functions to allow public access Hi All, Correct me if I am wrong, but I don't think you can configure the Keystone policy.json to allow public access to an API function; as far as I can tell you can allow access to any authenticated user regardless of role assignments, but not public access. My use case is a client which allows users to query for a list of supported identity providers / protocols so that the user can then select which provider to authenticate with - as the user is unauthenticated at the time of the query, the request needs to allow public access to the 'List Identity Providers' API function. I can remove the protected decorator from the required functions, but this is a nasty hack. I suggest that it should be possible to configure this kind of access rule on a deployment-by-deployment basis and I was just hoping to get some thoughts on this. Many thanks, Kristy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
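For what it's worth, the empty rule Guang points at ("" / [] / @) evaluates as always-true in the common policy engine, so a deployment could in principle open up a single call in policy.json along these lines — note the exact policy target name below is an assumption and may vary by release:

```json
{
    "identity:list_identity_providers": "@"
}
```

Whether the @protected decorator honors the rule before requiring a token is the part Kristy would still need to verify.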
Re: [openstack-dev] [nova] Retrospective veto revert policy
On 08/12/2014 10:56 AM, Mark McLoughlin wrote: Hey (Terrible name for a policy, I know) From the version_cap saga here: https://review.openstack.org/110754 I think we need a better understanding of how to approach situations like this. Here's my attempt at documenting what I think we're expecting the procedure to be: https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy If it sounds reasonably sane, I can propose its addition to the Development policies doc. Eminently reasonable. +1 -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] introducing cyclops
On Tue, Aug 12, 2014 at 05:47:49PM +1200, Fei Long Wang wrote: Our suggestion for the first IRC meeting is 25/August 8PM-10PM UTC time on Freenode's #openstack-rating channel. Thoughts? Please reply with the best date/time for you so we can figure out a time to start. I'd like to participate in this meeting, but one of my colleagues will not be available on the 25th. Maybe we can shift the date to the 26th? Thanks ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Retrospective veto revert policy
On Tue, Aug 12, 2014 at 9:56 AM, Mark McLoughlin mar...@redhat.com wrote: Hey (Terrible name for a policy, I know) From the version_cap saga here: https://review.openstack.org/110754 I think we need a better understanding of how to approach situations like this. Here's my attempt at documenting what I think we're expecting the procedure to be: https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy If it sounds reasonably sane, I can propose its addition to the Development policies doc. This looks good to me as well, and personally, I think this type of thing should be project wide. I'd be keen to adopt this for Neutron as well. Kyle Mark. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?
Hi, I'm trying to use the built-in secure decorator in Pecan for access control, and I'd like to get the name of the method that is wrapped from within the decorator. For instance, if I'm wrapping MetersController.get_all with an @secure decorator, is there a way for the decorator code to know it was called by MetersController.get_all? I don't see any global objects that provide this information. I can get the endpoint, v2/meters, with pecan.request.path, but that's not as elegant. Is there a way to derive the caller or otherwise pass this information to the decorator? Thanks Eric Pendergrass ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
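In plain Python — setting aside Pecan's actual secure() machinery, whose internals aren't reproduced here — a decorator closes over the function object it wraps, so the wrapped method's __name__ is available to the check without any request-global state. A hypothetical sketch of that pattern:

```python
import functools


def secure_with_name(check):
    """Hypothetical access-control decorator (not Pecan's secure()).

    The check callable receives the wrapped method's name, which is
    known at class-definition time via the closed-over function object.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # func.__name__ is available here with no global lookups.
            if not check(func.__name__):
                raise PermissionError('access denied: %s' % func.__name__)
            return func(*args, **kwargs)
        return wrapper
    return decorator


class MetersController(object):
    @secure_with_name(lambda name: name == 'get_all')
    def get_all(self):
        return ['meter-a', 'meter-b']

    @secure_with_name(lambda name: False)
    def get_one(self):
        return 'meter-a'
```

Whether Pecan invokes its check-permissions callable in a context where this closure trick can be applied is the open question for the list.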
Re: [openstack-dev] [nova] Retrospective veto revert policy
On Tue, Aug 12, 2014 at 8:46 AM, Jay Pipes jaypi...@gmail.com wrote: On 08/12/2014 10:56 AM, Mark McLoughlin wrote: Hey (Terrible name for a policy, I know) From the version_cap saga here: https://review.openstack.org/110754 I think we need a better understanding of how to approach situations like this. Here's my attempt at documenting what I think we're expecting the procedure to be: https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy If it sounds reasonably sane, I can propose its addition to the Development policies doc. Eminently reasonable. +1 +1 -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward
Per the blueprint spec [1], what has been proposed are optional extensions which complement the existing Neutron core resources' model: The main advantage of the extensions described in this blueprint is that they allow for an application-centric interface to Neutron that complements the existing network-centric interface. It has been pointed out earlier in this thread that this is not a replacement for the current Neutron core resources/API. [1] https://review.openstack.org/#/c/89469/10/specs/juno/group-based-policy-abstraction.rst,cm On Tue, Aug 12, 2014 at 1:22 AM, loy wolfe loywo...@gmail.com wrote: Hi Paul, Below are some other useful GBP reference pages: https://wiki.opendaylight.org/view/Project_Proposals:Group_Based_Policy_Plugin http://www.cisco.com/en/US/prod/collateral/netmgtsw/ps13004/ps13460/white-paper-c11-729906_ns1261_Networking_Solutions_White_Paper.html I think the root cause of this long argument, is that GBP core model was not designed native for Neutron, and they are introduced into Neutron so radically, without careful tailoring and adaption. Maybe the GBP team also don't want to do so, their intention is to maintain a unified model across all kinds of platform including Neutron, Opendaylight, ACI/Opflex, etc. However, redundancy and duplication exists between EP/EPG/BD/RD and Port/Network/Subnet. So mapping is used between these objects, and I think this is why so many voice to request moving GBP out and on top of Neutron. Will GBP simply be an *addition*? It absolutely COULD be, but objectively speaking, it's core model also allow it to BE-ABLE-TO take over Neutron core resource (see the wiki above). GBP mapping spec suggested a nova -nic extension to handle EP/EPG resource directly, thus all original Neutron core resource can be shadowed away from user interface: GBP became the new openstack network API :-) However no one can say depreciate Neutron core here and now, but shall we leave Neutron core just as *traditional/legacy*? 
Personally I prefer not to throw NW-Policy out of Neutron, but with the prerequisite that its core model should be reviewed and tailored. A new lightweight model carefully designed natively for Neutron is needed, not a direct copy of a whole bunch of monolithic core resources from another existing system. Here is the very basic suggestion: because the core value of GBP is the policy template with contracts, throw away the EP/EPG/L2P/L3P model rather than just renaming them again and again. APPLY the policy template to existing Neutron core resources, but do not reinvent similar concepts in GBP and then do the mapping. On Mon, Aug 11, 2014 at 9:12 PM, CARVER, PAUL pc2...@att.com wrote: loy wolfe [mailto:loywo...@gmail.com] wrote: Then since Network/Subnet/Port will never be treated just as a LEGACY COMPATIBLE role, there is no need to extend the Nova-Neutron interface to follow the GBP resource. Anyway, an optional service plugin inside Neutron shouldn't have any impact on the Nova side. This gets to the root of why I was getting confused about Jay and others having Nova-related concerns. I was/am assuming that GBP is simply an *additional* mechanism for manipulating Neutron, not a deprecation of any part of the existing Neutron API. I think Jay's concern, and the reason why he keeps mentioning Nova as the biggest and most important consumer of Neutron's API, stems from an assumption that Nova would need to change to use the GBP API. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] Feature Proposal Freeze is 9 days away
What would be the best practice for those who realize their work will not make it in Juno? Is it enough to not submit code for review? Would it be better to also request a change in milestone? Thanks, Mohammad From: Kyle Mestery mest...@mestery.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 08/12/2014 09:15 AM Subject: [openstack-dev] [neutron] Feature Proposal Freeze is 9 days away Just a reminder that Neutron observes FPF [1], and it's 9 days away. We have an incredible number of BPs which do not have code submitted yet. My suggestion to those who own one of these BPs would be to think hard about whether or not you can realistically land this code in Juno before jamming things up at the last minute. I hope we as a team can refocus on the remaining Juno tasks for the rest of Juno now and land items of importance to the community at the end. Thanks! Kyle [1] https://wiki.openstack.org/wiki/FeatureProposalFreeze ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] Continuing on Calling driver interface on every API request
Hi, I think we are debating an edge case. An important part of the flavor framework is the ability of me, the operator, to say failover from Octavia to an F5. So as an operator I would ensure I only offer the features in that flavor which both support. So in order to arrive at Brandon's example I would have misconfigured my environment and rightfully would get errors at the driver level – which might be hard to understand for end users but hopefully pretty clear for me, the operator… German From: Eugene Nikanorov [mailto:enikano...@mirantis.com] Sent: Monday, August 11, 2014 9:56 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on Calling driver interface on every API request Well, that's exactly what we've tried to solve with tags in the flavor. Considering your example with the whole configuration being sent to the driver - I think it will be fine to not apply unsupported parts of the configuration (like such an HM) and mark the HM object with an error status/status description. Thanks, Eugene. On Tue, Aug 12, 2014 at 12:33 AM, Brandon Logan brandon.lo...@rackspace.commailto:brandon.lo...@rackspace.com wrote: Hi Eugene, An example of the HM issue (and really this can happen with any entity) is if the driver the API sends the configuration to does not actually support the value of an attribute. For example: Provider A supports the PING health monitor type, Provider B does not. The API allows the PING health monitor type to go through. Once a load balancer has been linked with that health monitor and the LoadBalancer chose to use Provider B, that entire configuration is then sent to the driver. The driver errors out not on the LoadBalancer create, but on the health monitor create. I think that's the issue.
Thanks, Brandon On Tue, 2014-08-12 at 00:17 +0400, Eugene Nikanorov wrote: Hi folks, That is actually going in the opposite direction to what the flavor framework is trying to do (and for dispatching it's doing the same as providers). REST call dispatching should really go via the root object. I don't quite get the issue with health monitors. If an HM is incorrectly configured prior to association with a pool - the API layer should handle that. I don't think driver implementations should differ in their constraints on HM parameters. So I'm -1 on adding provider (or flavor) to each entity. After all, it looks just like data denormalization, which will actually affect lots of API aspects in a negative way. Thanks, Eugene. On Mon, Aug 11, 2014 at 11:20 PM, Vijay Venkatachalam vijay.venkatacha...@citrix.commailto:vijay.venkatacha...@citrix.com wrote: Yes, the point was to say the plugin need not restrict, and should let the driver decide what to do with the API. Even if the call was made to the driver instantaneously, I understand, the driver might decide to ignore first and schedule later. But, if the call is present, there is scope for validation. Also, the driver might be scheduling an async API call to the backend, in which case a deployment error cannot be shown to the user instantaneously. W.r.t. identifying a provider/driver, how would it be to make tenant the default root object? tenantid is already associated with each of these entities, so no additional pain. For the tenant who wants to override, let him specify provider in each of the entities. If you think of this in terms of the UI, let's say if the loadbalancer configuration is exposed as a single wizard (which has loadbalancer, listener, pool, monitor properties) then provider is chosen only once. Curious question: is the flavour framework expected to address this problem? Thanks, Vijay V.
-Original Message- From: Doug Wiegley [mailto:do...@a10networks.commailto:do...@a10networks.com] Sent: 11 August 2014 22:02 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron][LBaaS] Continuing on Calling driver interface on every API request Hi Sam, Very true. I think that Vijay’s objection is that we are currently imposing a logical structure on the driver, when it should be a driver decision. Certainly, it goes both ways. And I also agree that the mechanism for returning multiple errors, and the ability to specify whether those errors are fatal or not, individually, is currently weak. Doug On 8/11/14, 10:21 AM, Samuel Bercovici samu...@radware.commailto:samu...@radware.com wrote: Hi Doug, In some implementations Driver !== Device. I think this is also true for HA Proxy. This might mean that there is a
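The kind of entity-level check Eugene says the API layer should handle (for example, a health monitor whose timeout exceeds its delay) can be sketched as a plain validation function run before anything reaches a driver. The names here are illustrative, not actual Neutron code:

```python
def validate_health_monitor(monitor):
    """Entity-level check: attribute-level validation can ensure timeout
    is an int, but only entity-level validation can compare two fields."""
    timeout = monitor.get('timeout')
    delay = monitor.get('delay')
    if timeout is not None and delay is not None and timeout > delay:
        raise ValueError(
            'timeout (%s) must not exceed delay (%s)' % (timeout, delay))
    return monitor

validate_health_monitor({'timeout': 5, 'delay': 10})    # accepted
try:
    validate_health_monitor({'timeout': 30, 'delay': 10})
except ValueError as exc:
    print(exc)   # rejected before any driver is involved
```

Running such checks at the REST layer surfaces misconfiguration immediately, instead of the delayed driver-level error the thread describes.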
Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?
Do you know if ceilometer is using six.wraps? If so, that helper adds the `__wrapped__` attribute to decorated methods (which can be used to find the original decorated function). If just plain functools is used (and Python 3.x isn't used) then it will be pretty hard, AFAIK, to find the original decorated function (if that's the desire). six.wraps() is new in six 1.7.x so it might not be used in ceilometer yet (although maybe it should start to be used?). -Josh On Tue, Aug 12, 2014 at 9:08 AM, Pendergrass, Eric eric.pendergr...@hp.com wrote: Hi, I'm trying to use the built-in secure decorator in Pecan for access control, and I'd like to get the name of the method that is wrapped from within the decorator. For instance, if I'm wrapping MetersController.get_all with an @secure decorator, is there a way for the decorator code to know it was called by MetersController.get_all? I don't see any global objects that provide this information. I can get the endpoint, v2/meters, with pecan.request.path, but that's not as elegant. Is there a way to derive the caller or otherwise pass this information to the decorator? Thanks Eric Pendergrass ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
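As a minimal illustration of the behavior Josh describes: under Python 3, functools.wraps itself sets `__wrapped__` (the attribute six.wraps backports for Python 2). A sketch, not ceilometer code:

```python
import functools

def secure(func):
    # functools.wraps copies __name__/__doc__ onto the wrapper and, on
    # Python 3, also sets wrapper.__wrapped__ = func (six.wraps backports
    # the same behavior for Python 2).
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # an access-control check would run here
        return func(*args, **kwargs)
    return wrapper

@secure
def get_all():
    return []

print(get_all.__name__)                 # 'get_all', not 'wrapper'
print(get_all.__wrapped__.__name__)     # the original, undecorated function
```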
Re: [openstack-dev] [cinder] Bug#1231298 - size parameter for volume creation
On 11 August 2014 21:03, Dean Troyer dtro...@gmail.com wrote: On Mon, Aug 11, 2014 at 5:34 PM, Duncan Thomas duncan.tho...@gmail.com wrote: Making a previously mandatory parameter optional, at least on the command line, doesn't break backward compatibility though, does it? Everything that worked before will still work. By itself, maybe that is ok. You're right, nothing _should_ break. But then the following is legal: cinder create What does that do? It returns an error. The following becomes legal though: cinder create --src-volume aaa-bbb-ccc-ddd cinder create --snapshot aaa-bbb-ccc-ddd cinder create --image aaa-bbb-ccc-ddd -- Duncan Thomas ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
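Illustratively, the CLI behavior being discussed (size optional only when a source is supplied) could be enforced client-side along these lines. This is a hedged sketch with argparse, not the actual python-cinderclient implementation:

```python
import argparse

# Illustrative sketch: make the size positional optional while still
# rejecting a bare "cinder create". Size may be omitted only when a
# source argument supplies it implicitly.
parser = argparse.ArgumentParser(prog='cinder create')
parser.add_argument('size', nargs='?', type=int)
parser.add_argument('--snapshot')
parser.add_argument('--src-volume')
parser.add_argument('--image')

def validate(argv):
    args = parser.parse_args(argv)
    if args.size is None and not (args.snapshot or args.src_volume
                                  or args.image):
        # "cinder create" with no size and no source is still an error
        raise SystemExit('error: size is required unless a source is given')
    return args

validate(['10'])                             # explicit size: ok
validate(['--snapshot', 'aaa-bbb-ccc-ddd'])  # size taken from the source: ok
```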
Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?
This should give you what you need: from pecan.core import state state.controller On 08/12/14 04:08 PM, Pendergrass, Eric wrote: Hi, I'm trying to use the built-in secure decorator in Pecan for access control, and I'd like to get the name of the method that is wrapped from within the decorator. For instance, if I'm wrapping MetersController.get_all with an @secure decorator, is there a way for the decorator code to know it was called by MetersController.get_all? I don't see any global objects that provide this information. I can get the endpoint, v2/meters, with pecan.request.path, but that's not as elegant. Is there a way to derive the caller or otherwise pass this information to the decorator? Thanks Eric Pendergrass -- Ryan Petrello Senior Developer, DreamHost ryan.petre...@dreamhost.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore
If this plugin is being deprecated in Juno, the code will still be there for this release, so I would expect the CI to keep running until the code is completely removed from the Neutron tree. Anyway, the Infra guys will have the last word here! Edgar On 8/11/14, 5:38 PM, Anita Kuno ante...@anteaya.info wrote: On 08/11/2014 06:31 PM, Henry Gessau wrote: On 8/11/2014 7:56 PM, Anita Kuno wrote: On 08/11/2014 05:46 PM, Henry Gessau wrote: Anita Kuno ante...@anteaya.info wrote: On 08/11/2014 05:05 PM, Edgar Magana wrote: Cisco Folks, I don't see the CI for Cisco NX-OS anymore. Is this being deprecated? I don't ever recall seeing that as a name of a third party gerrit account in my list[0], Edgar. Do you happen to have a link to a patchset that has that name attached to a comment? The Cisco Neutron CI tests at least five different configurations. By NX-OS Edgar is referring to the Cisco Nexus switch configurations. The CI used to run both the monolithic_nexus and ml2_nexus configurations, but the monolithic Cisco plugin for Nexus is being deprecated for Juno and its configuration has already been removed from testing. Thanks Henry: Do we have a URL for the patch in gerrit for this, or was this an internal code change? This was a change only in the internal 3rd party Jenkins/Zuul settings. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Okay. Perhaps going forward this could be an item for the third party meeting under the topic of Deadlines Deprecations: https://wiki.openstack.org/wiki/Meetings/ThirdParty Then at the very least if someone missed the announcement we could have a log of it and point someone to the conversation. Thanks Henry, Anita.
Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?
Thanks Ryan, but for some reason the controller attribute is None: (Pdb) from pecan.core import state (Pdb) state.__dict__ {'hooks': [ceilometer.api.hooks.ConfigHook object at 0x31894d0, ceilometer.api.hooks.DBHook object at 0x3189650, ceilometer.api.hooks.PipelineHook object at 0x39871d0, ceilometer.api.hooks.TranslationHook object at 0x3aa5510], 'app': pecan.core.Pecan object at 0x2e76390, 'request': Request at 0x3ed7390 GET http://localhost:8777/v2/meters, 'controller': None, 'response': Response at 0x3ed74d0 200 OK} -Original Message- From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com] Sent: Tuesday, August 12, 2014 10:34 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators? This should give you what you need: from pecan.core import state state.controller On 08/12/14 04:08 PM, Pendergrass, Eric wrote: Hi, I'm trying to use the built in secure decorator in Pecan for access control, and I'ld like to get the name of the method that is wrapped from within the decorator. For instance, if I'm wrapping MetersController.get_all with an @secure decorator, is there a way for the decorator code to know it was called by MetersController.get_all? I don't see any global objects that provide this information. I can get the endpoint, v2/meters, with pecan.request.path, but that's not as elegant. Is there a way to derive the caller or otherwise pass this information to the decorator? 
Thanks Eric Pendergrass ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ryan Petrello Senior Developer, DreamHost ryan.petre...@dreamhost.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
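If state.controller turns out to be unavailable at the time the check runs (as in the pdb output above), one framework-independent fallback is to capture the wrapped method's identity at decoration time. This is a stdlib-only sketch (Python 3, using `__qualname__`), not Pecan's recommended approach:

```python
import functools

def secure_with_name(check):
    """Hypothetical variant of an @secure-style decorator: it captures the
    wrapped method's identity at decoration time, so the check does not
    depend on framework state (like pecan's state.controller) being set."""
    def decorator(func):
        target = func.__qualname__   # e.g. 'MetersController.get_all' (Py3.3+)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            check(target)            # the check knows which method it guards
            return func(*args, **kwargs)
        return wrapper
    return decorator

seen = []

class MetersController(object):
    @secure_with_name(seen.append)
    def get_all(self):
        return []

MetersController().get_all()
print(seen)   # ['MetersController.get_all']
```

On Python 2 (which lacks `__qualname__`), the name could instead be passed to the decorator explicitly.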
Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?
On 08/12/2014 04:49 PM, Sylvain Bauza wrote: (sorry for reposting, missed 2 links...) Hi Nikola, Le 12/08/2014 12:21, Nikola Đipanov a écrit : Hey Nova-istas, While I was hacking on [1] I was considering how to approach the fact that we now need to track one more thing (NUMA node utilization) in our resources. I went with - I'll add it to compute nodes table thinking it's a fundamental enough property of a compute host that it deserves to be there, although I was considering Extensible Resource Tracker at one point (ERT from now on - see [2]) but looking at the code - it did not seem to provide anything I desperately needed, so I went with keeping it simple. So fast-forward a few days, and I caught myself solving a problem that I kept thinking ERT should have solved - but apparently hasn't, and I think it is fundamentally a broken design without it - so I'd really like to see it re-visited. The problem can be described by the following lemma (if you take 'lemma' to mean 'a sentence I came up with just now' :)): Due to the way scheduling works in Nova (roughly: pick a host based on stale(ish) data, rely on claims to trigger a re-schedule), _same exact_ information that scheduling service used when making a placement decision, needs to be available to the compute service when testing the placement. This is not the case right now, and the ERT does not propose any way to solve it - (see how I hacked around needing to be able to get extra_specs when making claims in [3], without hammering the DB). The result will be that any resource that we add and needs user supplied info for scheduling an instance against it, will need a buggy re-implementation of gathering all the bits from the request that scheduler sees, to be able to work properly. Well, ERT does provide a plugin mechanism for testing resources at the claim level. 
It is the plugin's responsibility to implement a test() method [2.1], which will be called by test_claim() [2.2]. So, provided this method is implemented, a local host check can be done based on the host's view of resources. Yes - the problem is there is no clear API to get all the needed bits to do so - especially the user-supplied ones from images and flavors. On top of that, in the current implementation we only pass a hand-wavy 'usage' blob in. This makes anyone wanting to use this in conjunction with some of the user-supplied bits roll their own 'extract_data_from_instance_metadata_flavor_image' or similar, which is horrible and also likely bad for performance. This is obviously a bigger concern when we want to allow users to pass data (through image or flavor) that can affect scheduling, but still a huge concern IMHO. And here is where I agree with you: at the moment, ResourceTracker (and consequently Extensible RT) only provides the view of the resources the host knows about (see my point above) and possibly some other resources are missing. So, whatever your choice of going with or without ERT, your patch [3] is still worthwhile if we want to avoid a DB lookup each time a claim is made. As I see that there are already BPs proposing to use this IMHO broken ERT ([4] for example), which will surely add to the proliferation of code that hacks around these design shortcomings in what is already a messy, but also crucial (for perf as well as features) bit of Nova code. Two distinct implementations of that spec (i.e. instances and flavors) have been proposed [2.3] [2.4], so reviews are welcome. If you look at the test() method, it's a no-op for both plugins. I'm open to comments because I have the stated problem: how can we define a limit on just a counter of instances and flavors? Will look at these - but none of them seem to hit the issue I am complaining about, and that is that it will need to consider other request data for claims, not only data available on instances.
Also - the fact that you don't implement test() in flavor ones tells me that the implementation is indeed racy (but it is racy atm as well) and two requests can indeed race for the same host, and since no claims are done, both can succeed. This is I believe (at least in case of single flavor hosts) unlikely to happen in practice, but you get the idea. I propose to revert [2] ASAP since it is still fresh, and see how we can come up with a cleaner design. Would like to hear opinions on this, before I propose the patch tho! IMHO, I think the problem is more likely that the regular RT misses some information for each host so it requires to handle it on a case-by-case basis, but I don't think ERT either increases complexity or creates another issue. RT does not miss info about the host, but about the particular request which we have to fish out of different places like image_metadata extra_specs etc, yet - it can't really work without them. This is definitely a RT issue that is not specific to ERT. However, I still see several issues
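For readers unfamiliar with the plugin shape being debated, a toy version of an ERT-style resource plugin with a claim-time test() might look like the following. This is purely illustrative; the method names follow the discussion above, not actual Nova code:

```python
class InstanceCountResource(object):
    """Hypothetical ERT-style plugin tracking a per-host instance count.
    Not actual Nova code; names mirror the discussion only."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def add_instance(self, usage):
        # Called when a claim succeeds, updating the host's local view.
        self.used += 1

    def test(self, usage, limit=None):
        # Called during claim testing on the compute host. Returning a
        # reason string fails the claim (triggering a re-schedule);
        # returning None lets the claim pass.
        limit = self.limit if limit is None else limit
        if self.used + 1 > limit:
            return 'instance count %d would exceed limit %d' % (
                self.used + 1, limit)
        return None

plugin = InstanceCountResource(limit=1)
print(plugin.test({}))      # None: the first instance fits
plugin.add_instance({})
print(plugin.test({}))      # a reason string: the host is now full
```

A plugin that leaves test() a no-op, as noted above, performs no claim-time check at all, which is what makes two racing requests able to both succeed.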
Re: [openstack-dev] [neutron] Feature Proposal Freeze is 9 days away
If you know it won't make it, please let me know so I can remove your BP from the LP milestone. Thanks! Kyle On Tue, Aug 12, 2014 at 11:18 AM, Mohammad Banikazemi m...@us.ibm.com wrote: What would be the best practice for those who realize their work will not make it into Juno? Is it enough to not submit code for review? Would it be better to also request a change in milestone? Thanks, Mohammad From: Kyle Mestery mest...@mestery.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 08/12/2014 09:15 AM Subject: [openstack-dev] [neutron] Feature Proposal Freeze is 9 days away Just a reminder that Neutron observes FPF [1], and it's 9 days away. We have an incredible number of BPs which do not have code submitted yet. My suggestion to those who own one of these BPs would be to think hard about whether or not you can realistically land this code in Juno before jamming things up at the last minute. I hope we as a team can refocus on the remaining Juno tasks now and land items of importance to the community at the end. Thanks! Kyle [1] https://wiki.openstack.org/wiki/FeatureProposalFreeze ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [keystone] Configuring protected API functions to allow public access
On Tue, Aug 12, 2014 at 10:30 AM, Yee, Guang guang@hp.com wrote: Hi Kristy, Have you tried the [] or @ rule as mentioned here? That still requires valid authentication though, just not any specific authorization. I don't think we have a way to express truly public resources in oslo.policy. https://github.com/openstack/keystone/blob/master/keystone/openstack/common/policy.py#L71 Guang -Original Message- From: K.W.S.Siu [mailto:k.w.s@kent.ac.uk] Sent: Tuesday, August 12, 2014 3:44 AM To: openstack Mailing List Subject: [openstack-dev] [keystone] Configuring protected API functions to allow public access Hi All, Correct me if I am wrong, but I don't think you can configure the Keystone policy.json to allow public access to an API function; as far as I can tell you can allow access to any authenticated user regardless of role assignments, but not public access. My use case is a client which allows users to query for a list of supported identity providers / protocols so that the user can then select which provider to authenticate with - as the user is unauthenticated at the time of the query, the request needs to allow public access to the 'List Identity Providers' API function. I can remove the protected decorator from the required functions but this is a nasty hack. I suggest that it should be possible to configure this kind of access rule on a deployment-by-deployment basis, and I was just hoping to get some thoughts on this. Many thanks, Kristy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
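For reference, the relaxation Guang suggests would look roughly like this in a keystone policy.json. This is a sketch: the target names are assumed to match keystone's v3 federation API, and in oslo's policy language "@" (like an empty rule "") matches unconditionally. As noted above, this only relaxes authorization; the auth middleware still requires a valid token before the policy check runs, so it does not grant truly anonymous access:

```json
{
    "identity:list_identity_providers": "@",
    "identity:list_protocols": "@"
}
```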
Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?
Can you share some code? What do you mean by, is there a way for the decorator code to know it was called by MetersController.get_all On 08/12/14 04:46 PM, Pendergrass, Eric wrote: Thanks Ryan, but for some reason the controller attribute is None: (Pdb) from pecan.core import state (Pdb) state.__dict__ {'hooks': [ceilometer.api.hooks.ConfigHook object at 0x31894d0, ceilometer.api.hooks.DBHook object at 0x3189650, ceilometer.api.hooks.PipelineHook object at 0x39871d0, ceilometer.api.hooks.TranslationHook object at 0x3aa5510], 'app': pecan.core.Pecan object at 0x2e76390, 'request': Request at 0x3ed7390 GET http://localhost:8777/v2/meters, 'controller': None, 'response': Response at 0x3ed74d0 200 OK} -Original Message- From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com] Sent: Tuesday, August 12, 2014 10:34 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators? This should give you what you need: from pecan.core import state state.controller On 08/12/14 04:08 PM, Pendergrass, Eric wrote: Hi, I'm trying to use the built in secure decorator in Pecan for access control, and I'ld like to get the name of the method that is wrapped from within the decorator. For instance, if I'm wrapping MetersController.get_all with an @secure decorator, is there a way for the decorator code to know it was called by MetersController.get_all? I don't see any global objects that provide this information. I can get the endpoint, v2/meters, with pecan.request.path, but that's not as elegant. Is there a way to derive the caller or otherwise pass this information to the decorator? 
Thanks Eric Pendergrass ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ryan Petrello Senior Developer, DreamHost ryan.petre...@dreamhost.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ryan Petrello Senior Developer, DreamHost ryan.petre...@dreamhost.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [all] The future of the integrated release
On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote: On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote: On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org wrote: Hi everyone, With the incredible growth of OpenStack, our development community is facing complex challenges. How we handle those might determine the ultimate success or failure of OpenStack. With this cycle we hit new limits in our processes, tools and cultural setup. This resulted in new limiting factors on our overall velocity, which is frustrating for developers. This resulted in the burnout of key firefighting resources. This resulted in tension between people who try to get specific work done and people who try to keep a handle on the big picture. It all boils down to an imbalance between strategic and tactical contributions. At the beginning of this project, we had a strong inner group of people dedicated to fixing all loose ends. Then a lot of companies got interested in OpenStack and there was a surge in tactical, short-term contributions. We put on a call for more resources to be dedicated to strategic contributions like critical bugfixing, vulnerability management, QA, infrastructure... and that call was answered by a lot of companies that are now key members of the OpenStack Foundation, and all was fine again. But OpenStack contributors kept on growing, and we grew the narrowly-focused population way faster than the cross-project population. At the same time, we kept on adding new projects to incubation and to the integrated release, which is great... but the new developers you get on board with this are much more likely to be tactical than strategic contributors. This also contributed to the imbalance. 
The penalty for that imbalance is twofold: we don't have enough resources available to solve old, known OpenStack-wide issues; but we also don't have enough resources to identify and fix new issues. We have several efforts under way, like calling for new strategic contributors, driving towards in-project functional testing, making solving rare issues a more attractive endeavor, or hiring resources directly at the Foundation level to help address those. But there is a topic we haven't raised yet: should we concentrate on fixing what is currently in the integrated release rather than adding new projects? TL;DR: Our development model is having growing pains. Until we sort out the growing pains, adding more projects spreads us too thin. +100 In addition to the issues mentioned above, with the scale of OpenStack today we have many major cross-project issues to address and no good place to discuss them. We do have the ML, as well as the cross-project meeting every Tuesday [1], but we as a project need to do a better job of actually bringing up relevant issues here. [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting We seem to be unable to address some key issues in the software we produce, and part of it is due to strategic contributors (and core reviewers) being overwhelmed just trying to stay afloat of what's happening. For such projects, is it time for a pause? Is it time to define key cycle goals and defer everything else? I really like this idea; as Michael and others alluded to above, we are attempting to set cycle goals for Kilo in Nova, but I think it is worth doing for all of OpenStack. We would like to make a list of key goals before the summit so that we can plan our summit sessions around the goals. On a really high level, one way to look at this is: in Kilo we need to pay down our technical debt. The slots/runway idea is somewhat separate from defining key cycle goals; we can approve blueprints based on key cycle goals without doing slots.
But with so many concurrent blueprints up for review at any given time, the review teams are doing a lot of multitasking and humans are not very good at multitasking. Hopefully slots can help address this issue, and hopefully allow us to actually merge more blueprints in a given cycle. I'm not 100% sold on what the slots idea buys us. What I've seen this cycle in Neutron is that we have a LOT of BPs proposed. We approve them after review. And then we hit one of two issues: Slow review cycles, and slow code turnaround issues. I don't think slots would help this, and in fact may cause more issues. If we approve a BP and give it a slot for which the eventual result is slow review and/or code review turnaround, we're right back where we started. Even worse, we may have not picked a BP for which the code submitter would have turned around reviews faster. So we've now doubly hurt ourselves. I have no idea how to solve this issue, but by over subscribing the
Re: [openstack-dev] Which program for Rally
On Mon, Aug 11, 2014 at 07:06:11PM -0400, Zane Bitter wrote: On 11/08/14 16:21, Matthew Treinish wrote: I'm sorry, but the fact that the docs in the rally tree has a section for user testimonials [4] I feel speaks a lot about the intent of the project. What... does that even mean? Yeah, I apologize for that sentence, it was an unfair thing to say and uncalled for. Looking at it with fresh eyes this morning I'm not entirely sure what my intent was by pointing out that section. I personally feel that those user stories would probably be more appropriate as a blog post, and shouldn't necessarily be in a doc tree. But, that's not the stinging indictment which didn't need any explanation that I apparently thought it was yesterday; it definitely isn't something worth calling out on this thread. They seem like just the type of guys that would help Keystone with performance benchmarking! Burn them! I'm pretty sure that's not what I meant. :) I apologize if any of this is somewhat incoherent, I'm still a bit jet-lagged so I'm not sure that I'm making much sense. Ah. Yeah, let's chalk it up to dulled senses from insufficient sleep and trying to get back on my usual schedule from a trip down under. [4] http://git.openstack.org/cgit/stackforge/rally/tree/doc/user_stories -Matt Treinish ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron][cisco] Cisco Nexus requires patched ncclient
On 12/08/14 17:12, Henry Gessau wrote: On 8/12/2014 10:27 AM, Ihar Hrachyshka wrote: as per [1], the Cisco Nexus ML2 plugin requires a patched version of ncclient from github. I wonder: - whether this information is still current; Please see: https://review.openstack.org/112175 But we need to do backports before updating the wiki. Thanks for the link! - why don't we depend on ncclient through our requirements.txt file. Do we want to have requirements on things that are only used by a specific vendor plugin? So far it has worked by vendor-specific documentation instructing users to manually install the requirement, or by vendor-tailored deployment tools/scripts. In downstream, it's hard to maintain all plugin dependencies if they are not explicitly mentioned in e.g. requirements.txt. Red Hat ships those plugins (with no commercial support or testing done on our side), and we didn't know that to make the plugin actually usable we need to install that ncclient module, until a person from Cisco reported the issue to us. We don't usually monitor random wiki pages to get an idea of what we need to package and depend on. :) I think we should have every third-party module that we directly use in requirements.txt. We have code in the tree that imports ncclient (btw, is it unit tested?), so I think it's enough to make that dependency explicit. Now, maybe putting the module into requirements.txt is overkill (though I doubt it). In that case, we could be interested in getting the info in some other centralized way.
/Ihar ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Retrospective veto revert policy
On Tue, Aug 12, 2014 at 03:56:44PM +0100, Mark McLoughlin wrote: Hey (Terrible name for a policy, I know) From the version_cap saga here: https://review.openstack.org/110754 I think we need a better understanding of how to approach situations like this. Here's my attempt at documenting what I think we're expecting the procedure to be: https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy If it sounds reasonably sane, I can propose its addition to the Development policies doc. A bit cumbersome, but given we have to work within Gerrit's limitations, it looks like a valid approach / process to me. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore
On 2014-08-12 16:35:18 + (+), Edgar Magana wrote: If this plugin will be deprecated in Juno it means that the code will be there for this release; I would expect the CI to keep running until the code is completely removed from the Neutron tree. Anyway, Infra guys will have the last word here! It's really not up to the Project Infrastructure Team to decide this (we merely provide guidance, assistance and, sometimes, arbitration for such matters). It's ultimately the Neutron developer community who needs to determine whether they're willing to support an untested feature through deprecation or insist on continued testing until its full removal can be realized. -- Jeremy Stanley
Re: [openstack-dev] [all] The future of the integrated release
On Aug 12, 2014, at 1:44 PM, Dolph Mathews dolph.math...@gmail.com wrote: On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote: On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote: On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org wrote: Hi everyone, With the incredible growth of OpenStack, our development community is facing complex challenges. How we handle those might determine the ultimate success or failure of OpenStack. With this cycle we hit new limits in our processes, tools and cultural setup. This resulted in new limiting factors on our overall velocity, which is frustrating for developers. This resulted in the burnout of key firefighting resources. This resulted in tension between people who try to get specific work done and people who try to keep a handle on the big picture. It all boils down to an imbalance between strategic and tactical contributions. At the beginning of this project, we had a strong inner group of people dedicated to fixing all loose ends. Then a lot of companies got interested in OpenStack and there was a surge in tactical, short-term contributions. We put out a call for more resources to be dedicated to strategic contributions like critical bugfixing, vulnerability management, QA, infrastructure... and that call was answered by a lot of companies that are now key members of the OpenStack Foundation, and all was fine again. But OpenStack contributors kept on growing, and we grew the narrowly-focused population way faster than the cross-project population. At the same time, we kept on adding new projects to incubation and to the integrated release, which is great... but the new developers you get on board with this are much more likely to be tactical than strategic contributors. This also contributed to the imbalance. 
The penalty for that imbalance is twofold: we don't have enough resources available to solve old, known OpenStack-wide issues; but we also don't have enough resources to identify and fix new issues. We have several efforts under way, like calling for new strategic contributors, driving towards in-project functional testing, making solving rare issues a more attractive endeavor, or hiring resources directly at the Foundation level to help address those. But there is a topic we haven't raised yet: should we concentrate on fixing what is currently in the integrated release rather than adding new projects? TL;DR: Our development model is having growing pains. Until we sort out the growing pains, adding more projects spreads us too thin. +100 In addition to the issues mentioned above, with the scale of OpenStack today we have many major cross-project issues to address and no good place to discuss them. We do have the ML, as well as the cross-project meeting every Tuesday [1], but we as a project need to do a better job of actually bringing up relevant issues here. [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting We seem to be unable to address some key issues in the software we produce, and part of it is due to strategic contributors (and core reviewers) being overwhelmed just trying to stay afloat of what's happening. For such projects, is it time for a pause? Is it time to define key cycle goals and defer everything else? I really like this idea; as Michael and others alluded to above, we are attempting to set cycle goals for Kilo in Nova, but I think it is worth doing for all of OpenStack. We would like to make a list of key goals before the summit so that we can plan our summit sessions around the goals. On a really high level one way to look at this is: in Kilo we need to pay down our technical debt. The slots/runway idea is somewhat separate from defining key cycle goals; we can approve blueprints based on key cycle goals without doing slots. 
But with so many concurrent blueprints up for review at any given time, the review teams are doing a lot of multitasking and humans are not very good at multitasking. Hopefully slots can help address this issue, and hopefully allow us to actually merge more blueprints in a given cycle. I'm not 100% sold on what the slots idea buys us. What I've seen this cycle in Neutron is that we have a LOT of BPs proposed. We approve them after review. And then we hit one of two issues: Slow review cycles, and slow code turnaround issues. I don't think slots would help this, and in fact may cause more issues. If we approve a BP and give it a slot for which the eventual result is slow review and/or code review turnaround, we're right back where we started. Even worse, we may have not picked a BP for which the code submitter would have turned around reviews faster. So we've now doubly hurt ourselves. I
Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job
And just when the patch was only missing a +A, another bug slipped in! The nova patch to fix it is available at [1]. And while we're there, it wouldn't be a bad idea to also push the neutron full job, as non-voting, into the integrated gate [2]. Thanks in advance, (especially to the nova and infra cores who'll review these patches!) Salvatore [1] https://review.openstack.org/#/c/113554/ [2] https://review.openstack.org/#/c/113562/ On 7 August 2014 17:51, Salvatore Orlando sorla...@nicira.com wrote: Thanks Armando, The fix for the bug you pointed out was the reason for the failure we've been seeing. The follow-up patch merged and I've removed the WIP status from the patch for the full job [1] Salvatore [1] https://review.openstack.org/#/c/88289/ On 7 August 2014 16:50, Armando M. arma...@gmail.com wrote: Hi Salvatore, I did notice the issue and I flagged this bug report: https://bugs.launchpad.net/nova/+bug/1352141 I'll follow up. Cheers, Armando On 7 August 2014 01:34, Salvatore Orlando sorla...@nicira.com wrote: I had to put the patch back on WIP because yesterday a bug causing a 100% failure rate slipped in. It should be an easy fix, and I'm already working on it. Situations like this, exemplified by [1], are a bit frustrating for all the people working on improving neutron quality. Now, if you allow me a little rant: as Neutron is receiving a lot of attention for all the ongoing discussion regarding this group policy stuff, would it be possible for us to receive a bit of attention to ensure both the full job and the grenade one are switched to voting before the juno-3 review crunch? 
We've already had the attention of the QA team; it would probably be good if we could get the attention of the infra core team to ensure: 1) the jobs are also deemed by them stable enough to be switched to voting, and 2) the relevant patches for openstack-infra/config are reviewed. Regards, Salvatore [1] http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwie3UnbWVzc2FnZSc6IHUnRmxvYXRpbmcgaXAgcG9vbCBub3QgZm91bmQuJywgdSdjb2RlJzogNDAwfVwiIEFORCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1kc3ZtLW5ldXRyb24tZnVsbFwiIEFORCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiMTcyODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNzQwMDExMDIwNywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ== On 23 July 2014 14:59, Matthew Treinish mtrein...@kortar.org wrote: On Wed, Jul 23, 2014 at 02:40:02PM +0200, Salvatore Orlando wrote: Here I am again bothering you with the state of the full job for Neutron. The patch for fixing an issue in nova's server external events extension merged yesterday [1]. We do not yet have enough data points to make a reliable assessment, but out of 37 runs since the patch merged we had only 5 failures, which puts the failure rate at about 13%. This is ugly compared with the current failure rate of the smoketest (3%). However, I think it is good enough to start making the full job voting at least for neutron patches. Once we're able to bring the failure rate down to around 5%, we can then enable the job everywhere. I think that sounds like a good plan. I'm also curious how the failure rates compare to the other non-neutron jobs; that might be a useful comparison too for deciding when to flip the switch everywhere. As much as I hate asymmetric gating, I think this is a good compromise for avoiding a situation where developers working on other projects are badly affected by the higher failure rate in the neutron full job. 
So we discussed this during the project meeting a couple of weeks ago [3] and there was a general agreement that doing it asymmetrically at first would be better. Everyone should be wary of the potential harms with doing it asymmetrically and I think priority will be given to fixing issues that block the neutron gate should they arise. I will therefore resume work on [2] and remove the WIP status as soon as I can confirm a failure rate below 15% with more data points. Thanks for keeping on top of this Salvatore. It'll be good to finally be at least partially gating with a parallel job. -Matt Treinish [1] https://review.openstack.org/#/c/103865/ [2] https://review.openstack.org/#/c/88289/ [3] http://eavesdrop.openstack.org/meetings/project/2014/project.2014-07-08-21.03.log.html#l-28 On 10 July 2014 11:49, Salvatore Orlando sorla...@nicira.com wrote: On 10 July 2014 11:27, Ihar Hrachyshka ihrac...@redhat.com wrote: On 10/07/14 11:07, Salvatore Orlando wrote: The patch for bug 1329564 [1] merged about 11 hours ago. From [2] it seems there has been an improvement on the failure rate, which seems to have dropped to 25% from over 40%. Still, since the patch merged there have been 11 failures already in the full job out of 42 jobs
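As an aside on the "not enough data points" caveat above: with runs numbering in the low tens, the observed failure rate carries a wide margin of error. A quick sketch (not from the thread) of putting a 95% Wilson score interval around the 5-failures-in-37-runs figure quoted earlier:

```python
import math


def wilson_interval(failures, runs, z=1.96):
    """95% Wilson score interval for an observed failure rate."""
    p = failures / float(runs)
    denom = 1 + z ** 2 / runs
    center = (p + z ** 2 / (2 * runs)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / runs + z ** 2 / (4 * runs ** 2))
    return center - half, center + half


low, high = wilson_interval(5, 37)
# The point estimate is ~13.5%, but with only 37 runs the interval is
# roughly 6%-28%, so more runs are needed before a fair comparison
# against the 3% smoketest rate.
print('%.1f%% .. %.1f%%' % (low * 100, high * 100))
```

This is why waiting for more runs before flipping a job to voting is statistically sensible, not just cautious.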
Re: [openstack-dev] [all] The future of the integrated release
On Tue, Aug 12, 2014 at 10:44 AM, Dolph Mathews dolph.math...@gmail.com wrote: On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote: Slow review: by limiting the number of blueprints up we hope to focus our efforts on fewer concurrent things. Slow code turnaround: when a blueprint is given a slot (runway) we will first make sure the author/owner is available for fast code turnaround. If a blueprint review stalls out (slow code turnaround, stalemate in review discussions, etc.) we will take the slot and give it to another blueprint. How is that more efficient than today's do-the-best-we-can approach? It just sounds like bureaucracy to me. Reading between the lines throughout this thread, it sounds like what we're lacking is a reliable method to communicate review prioritization to core reviewers. AIUI, that is precisely what the proposed slots would do -- allow the PTL (or the drivers team) to reliably communicate review prioritization to the core review team, in a way that is *not* just more noise on IRC, and is visible to all contributors. -Deva
Re: [openstack-dev] [all] The future of the integrated release
On Aug 12, 2014, at 11:08 AM, Doug Hellmann d...@doughellmann.com wrote: On Aug 12, 2014, at 1:44 PM, Dolph Mathews dolph.math...@gmail.com wrote: On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote: On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote: On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org wrote: Hi everyone, With the incredible growth of OpenStack, our development community is facing complex challenges. How we handle those might determine the ultimate success or failure of OpenStack. With this cycle we hit new limits in our processes, tools and cultural setup. This resulted in new limiting factors on our overall velocity, which is frustrating for developers. This resulted in the burnout of key firefighting resources. This resulted in tension between people who try to get specific work done and people who try to keep a handle on the big picture. It all boils down to an imbalance between strategic and tactical contributions. At the beginning of this project, we had a strong inner group of people dedicated to fixing all loose ends. Then a lot of companies got interested in OpenStack and there was a surge in tactical, short-term contributions. We put out a call for more resources to be dedicated to strategic contributions like critical bugfixing, vulnerability management, QA, infrastructure... and that call was answered by a lot of companies that are now key members of the OpenStack Foundation, and all was fine again. But OpenStack contributors kept on growing, and we grew the narrowly-focused population way faster than the cross-project population. At the same time, we kept on adding new projects to incubation and to the integrated release, which is great... but the new developers you get on board with this are much more likely to be tactical than strategic contributors. This also contributed to the imbalance. 
The penalty for that imbalance is twofold: we don't have enough resources available to solve old, known OpenStack-wide issues; but we also don't have enough resources to identify and fix new issues. We have several efforts under way, like calling for new strategic contributors, driving towards in-project functional testing, making solving rare issues a more attractive endeavor, or hiring resources directly at the Foundation level to help address those. But there is a topic we haven't raised yet: should we concentrate on fixing what is currently in the integrated release rather than adding new projects? TL;DR: Our development model is having growing pains. Until we sort out the growing pains, adding more projects spreads us too thin. +100 In addition to the issues mentioned above, with the scale of OpenStack today we have many major cross-project issues to address and no good place to discuss them. We do have the ML, as well as the cross-project meeting every Tuesday [1], but we as a project need to do a better job of actually bringing up relevant issues here. [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting We seem to be unable to address some key issues in the software we produce, and part of it is due to strategic contributors (and core reviewers) being overwhelmed just trying to stay afloat of what's happening. For such projects, is it time for a pause? Is it time to define key cycle goals and defer everything else? I really like this idea; as Michael and others alluded to above, we are attempting to set cycle goals for Kilo in Nova, but I think it is worth doing for all of OpenStack. We would like to make a list of key goals before the summit so that we can plan our summit sessions around the goals. On a really high level one way to look at this is: in Kilo we need to pay down our technical debt. The slots/runway idea is somewhat separate from defining key cycle goals; we can approve blueprints based on key cycle goals without doing slots. 
But with so many concurrent blueprints up for review at any given time, the review teams are doing a lot of multitasking and humans are not very good at multitasking. Hopefully slots can help address this issue, and hopefully allow us to actually merge more blueprints in a given cycle. I'm not 100% sold on what the slots idea buys us. What I've seen this cycle in Neutron is that we have a LOT of BPs proposed. We approve them after review. And then we hit one of two issues: Slow review cycles, and slow code turnaround issues. I don't think slots would help this, and in fact may cause more issues. If we approve a BP and give it a slot for which the eventual result is slow review and/or code review turnaround, we're right back where we started. Even worse, we may have not picked a BP for which the
[openstack-dev] [Horizon] Feature Proposal Freeze date Aug 14
It came to my attention today that I've only communicated this in Horizon team meetings. Due to the high number of blueprints already targeting Juno-3 and the resource contention of reviewers, I have set the Horizon Feature Proposal Deadline at August 14 (August 12 actually, but since I didn't include the mailing list, adding 2 days). This will hopefully reduce some of the noise as we approach the J-3 milestone. Thanks, David
Re: [openstack-dev] [all] The future of the integrated release
It seems like this is exactly what the slots give us, though. The core review team picks a number of slots indicating how much work they think they can actually do (less than the available number of blueprints), and then blueprints queue up to get a slot based on priorities and turnaround time and other criteria that try to make slot allocation fair. By having the slots, not only is the review priority communicated to the review team, it is also communicated to anyone watching the project. One thing I'm not seeing shine through in this discussion of slots is whether any notion of individual cores, or small subsets of the core team with aligned interests, can champion blueprints that they have a particular interest in. For example it might address some pain-point they've encountered, or impact on some functional area that they themselves have worked on in the past, or line up with their thinking on some architectural point. But for whatever motivation, such small groups of cores currently have the freedom to self-organize in a fairly emergent way and champion individual BPs that are important to them, simply by *independently* giving those BPs review attention. Whereas under the slots initiative, presumably this power would be subsumed by the group will, as expressed by the prioritization applied to the holding pattern feeding the runways? I'm not saying this is good or bad, just pointing out a change that we should have our eyes open to. Cheers, Eoghan
Re: [openstack-dev] [neutron][cisco] Cisco Nexus requires patched ncclient
On 8/12/2014 1:53 PM, Ihar Hrachyshka wrote: On 12/08/14 17:12, Henry Gessau wrote: On 8/12/2014 10:27 AM, Ihar Hrachyshka wrote: as per [1], Cisco Nexus ML2 plugin requires a patched version of ncclient from github. I wonder: - whether this information is still current; Please see: https://review.openstack.org/112175 But we need to do backports before updating the wiki. Thanks for the link! - why don't we depend on ncclient thru our requirements.txt file. Do we want to have requirements on things that are only used by a specific vendor plugin? So far it has worked by vendor-specific documentation instructing to manually install the requirement, or vendor-tailored deployment tools/scripts. In downstream, it's hard to maintain all plugin dependencies if they are not explicitly mentioned in e.g. requirements.txt. Red Hat ships those plugins (with no commercial support or testing done on our side), and we didn't know that to make the plugin actually usable we need to install that ncclient module, until a person from Cisco reported the issue to us. We don't usually monitor random wiki pages to get an idea what we need to package and depend on. :) I think we should have every third-party module that we directly use in requirements.txt. We have code in the tree that imports ncclient (btw, is it unit tested?), so I think it's enough to make that dependency explicit. The unit tests mock the import of ncclient. Now, maybe putting the module into requirements.txt is overkill (though I doubt it). In that case, we could be interested in getting the info in some other centralized way. I am not familiar with other ways, but let me know if I can be of any help. Note: it seems that the Brocade plugin also imports ncclient.
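Henry's remark that "the unit tests mock the import of ncclient" refers to a common pattern for optional vendor dependencies: the tests install a fake module so the driver can be imported and exercised without the real library. A minimal sketch of the technique (not the actual Neutron test code; the class and host names here are illustrative):

```python
import sys
import unittest
from unittest import mock  # the standalone 'mock' package on Python 2


class TestNexusDriverImport(unittest.TestCase):
    """Illustrates mocking an optional dependency such as ncclient."""

    def setUp(self):
        # Install a fake ncclient in sys.modules so code that does
        # 'import ncclient' works even when the real (patched) library
        # is not installed on the test machine.
        self.fake_ncclient = mock.MagicMock()
        patcher = mock.patch.dict(sys.modules, {'ncclient': self.fake_ncclient})
        patcher.start()
        self.addCleanup(patcher.stop)

    def test_driver_code_sees_the_fake_module(self):
        import ncclient  # resolved from the patched sys.modules
        self.assertIs(ncclient, self.fake_ncclient)
        # Calls made by driver code land on the mock instead of
        # reaching a real switch.
        ncclient.manager.connect(host='203.0.113.1')
        ncclient.manager.connect.assert_called_once_with(host='203.0.113.1')
```

The downside, as Ihar points out, is that a mocked import hides the dependency from packagers entirely, which is exactly why listing it in requirements.txt (or some other centralized place) helps downstream distributions.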
Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?
Sure, here's the decorated method from v2.py:

    class MetersController(rest.RestController):
        """Works on meters."""

        @pecan.expose()
        def _lookup(self, meter_name, *remainder):
            return MeterController(meter_name), remainder

        @wsme_pecan.wsexpose([Meter], [Query])
        @secure(RBACController.check_permissions)
        def get_all(self, q=None):

and here's the decorator called by the secure tag:

    class RBACController(object):
        global _ENFORCER
        if not _ENFORCER:
            _ENFORCER = policy.Enforcer()

        @classmethod
        def check_permissions(cls):
            # do some stuff

In check_permissions I'd like to know the class and method with the @secure tag that caused check_permissions to be invoked. In this case, that would be MetersController.get_all. Thanks Can you share some code? What do you mean by "is there a way for the decorator code to know it was called by MetersController.get_all"? On 08/12/14 04:46 PM, Pendergrass, Eric wrote: Thanks Ryan, but for some reason the controller attribute is None:

    (Pdb) from pecan.core import state
    (Pdb) state.__dict__
    {'hooks': [<ceilometer.api.hooks.ConfigHook object at 0x31894d0>,
               <ceilometer.api.hooks.DBHook object at 0x3189650>,
               <ceilometer.api.hooks.PipelineHook object at 0x39871d0>,
               <ceilometer.api.hooks.TranslationHook object at 0x3aa5510>],
     'app': <pecan.core.Pecan object at 0x2e76390>,
     'request': <Request at 0x3ed7390 GET http://localhost:8777/v2/meters>,
     'controller': None,
     'response': <Response at 0x3ed74d0 200 OK>}

-Original Message- From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com] Sent: Tuesday, August 12, 2014 10:34 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators? 
This should give you what you need:

    from pecan.core import state
    state.controller

On 08/12/14 04:08 PM, Pendergrass, Eric wrote: Hi, I'm trying to use the built-in secure decorator in Pecan for access control, and I'd like to get the name of the method that is wrapped from within the decorator. For instance, if I'm wrapping MetersController.get_all with an @secure decorator, is there a way for the decorator code to know it was called by MetersController.get_all? I don't see any global objects that provide this information. I can get the endpoint, v2/meters, with pecan.request.path, but that's not as elegant. Is there a way to derive the caller or otherwise pass this information to the decorator? Thanks Eric Pendergrass -- Ryan Petrello Senior Developer, DreamHost ryan.petre...@dreamhost.com
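For what it's worth, one workaround (not suggested in the thread, and bypassing Pecan's @secure machinery entirely) is a decorator factory that closes over the wrapped function, so the permission check receives the qualified method name directly instead of trying to recover it from global request state. The names secure_with_context and the simplified controllers below are illustrative only:

```python
import functools


def secure_with_context(check):
    """Like @secure(check), but passes the protected method's
    qualified name to the check function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            # Build 'ClassName.method_name' from the instance and the
            # wrapped function, then run the permission check first.
            target = '%s.%s' % (type(self).__name__, func.__name__)
            check(target)
            return func(self, *args, **kwargs)
        return wrapper
    return decorator


class RBACController(object):
    seen = []

    @classmethod
    def check_permissions(cls, target):
        # A real implementation would consult the policy enforcer here;
        # we just record which method triggered the check.
        cls.seen.append(target)


class MetersController(object):
    @secure_with_context(RBACController.check_permissions)
    def get_all(self, q=None):
        return []


MetersController().get_all()
print(RBACController.seen)  # ['MetersController.get_all']
```

Unlike inspecting pecan.core.state (which, as shown above, can have controller set to None at check time), the closure always knows exactly which method it wraps.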
Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore
On 8/12/2014 2:04 PM, Jeremy Stanley wrote: On 2014-08-12 16:35:18 + (+), Edgar Magana wrote: If this plugin will be deprecated in Juno it means that the code will be there for this release; I would expect the CI to keep running until the code is completely removed from the Neutron tree. Anyway, Infra guys will have the last word here! It's really not up to the Project Infrastructure Team to decide this (we merely provide guidance, assistance and, sometimes, arbitration for such matters). It's ultimately the Neutron developer community who needs to determine whether they're willing to support an untested feature through deprecation or insist on continued testing until its full removal can be realized. The Cisco Nexus sub-plugin is broken because the OVS plugin that it depends on is broken. The Neutron Project switched from the OVS plugin to ML2 for testing a long time ago, and the OVS plugin will be removed from the tree in Juno. There are no plans to fix the OVS plugin, so the Cisco Nexus sub-plugin will not be fixed either. There are bugs [1,2] open to remove the deprecated plugins from the tree. [1] https://bugs.launchpad.net/neutron/+bug/1323729 [2] https://bugs.launchpad.net/neutron/+bug/1350387
Re: [openstack-dev] [nova] 9 days until feature proposal freeze
On 08/12/2014 04:13 AM, Michael Still wrote: Hi, this is just a friendly reminder that we are now 9 days away from feature proposal freeze for nova. If you think your blueprint isn't going to make it in time, then now would be a good time to let me know so that we can defer it until Kilo. That will free up reviewer time for other blueprints. Some people have more than one blueprint still under development... Perhaps they could defer some of those to Kilo? I removed https://blueprints.launchpad.net/nova/+spec/allocation-ratio-to-resource-tracker from the Juno cycle, and noted the reasons why in the whiteboard (ongoing discussions around scheduler separation and the scope of the resource tracker with regard to claim processing). Best, -jay
Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore
Henry, That makes a lot of sense to me. If the code will be removed in Juno, then there is nothing else to discuss. Thank you so much for providing detailed information and sorry for bothering you with this issue. Edgar On 8/12/14, 11:49 AM, Henry Gessau ges...@cisco.com wrote: On 8/12/2014 2:04 PM, Jeremy Stanley wrote: On 2014-08-12 16:35:18 + (+), Edgar Magana wrote: If this plugin will be deprecated in Juno it means that the code will be there for this release; I would expect the CI to keep running until the code is completely removed from the Neutron tree. Anyway, Infra guys will have the last word here! It's really not up to the Project Infrastructure Team to decide this (we merely provide guidance, assistance and, sometimes, arbitration for such matters). It's ultimately the Neutron developer community who needs to determine whether they're willing to support an untested feature through deprecation or insist on continued testing until its full removal can be realized. The Cisco Nexus sub-plugin is broken because the OVS plugin that it depends on is broken. The Neutron Project switched from the OVS plugin to ML2 for testing a long time ago, and the OVS plugin will be removed from the tree in Juno. There are no plans to fix the OVS plugin, so the Cisco Nexus sub-plugin will not be fixed either. There are bugs [1,2] open to remove the deprecated plugins from the tree. [1] https://bugs.launchpad.net/neutron/+bug/1323729 [2] https://bugs.launchpad.net/neutron/+bug/1350387
Re: [openstack-dev] [all] The future of the integrated release
On Tue, Aug 12, 2014 at 1:08 PM, Doug Hellmann d...@doughellmann.com wrote: On Aug 12, 2014, at 1:44 PM, Dolph Mathews dolph.math...@gmail.com wrote: On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote: On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote: On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org wrote: Hi everyone, With the incredible growth of OpenStack, our development community is facing complex challenges. How we handle those might determine the ultimate success or failure of OpenStack. With this cycle we hit new limits in our processes, tools and cultural setup. This resulted in new limiting factors on our overall velocity, which is frustrating for developers. This resulted in the burnout of key firefighting resources. This resulted in tension between people who try to get specific work done and people who try to keep a handle on the big picture. It all boils down to an imbalance between strategic and tactical contributions. At the beginning of this project, we had a strong inner group of people dedicated to fixing all loose ends. Then a lot of companies got interested in OpenStack and there was a surge in tactical, short-term contributions. We put out a call for more resources to be dedicated to strategic contributions like critical bugfixing, vulnerability management, QA, infrastructure... and that call was answered by a lot of companies that are now key members of the OpenStack Foundation, and all was fine again. But OpenStack contributors kept on growing, and we grew the narrowly-focused population way faster than the cross-project population. At the same time, we kept on adding new projects to incubation and to the integrated release, which is great... but the new developers you get on board with this are much more likely to be tactical than strategic contributors. This also contributed to the imbalance. 
The penalty for that imbalance is twofold: we don't have enough resources available to solve old, known OpenStack-wide issues; but we also don't have enough resources to identify and fix new issues. We have several efforts under way, like calling for new strategic contributors, driving towards in-project functional testing, making solving rare issues a more attractive endeavor, or hiring resources directly at the Foundation level to help address those. But there is a topic we haven't raised yet: should we concentrate on fixing what is currently in the integrated release rather than adding new projects? TL;DR: Our development model is having growing pains. Until we sort out the growing pains, adding more projects spreads us too thin. +100 In addition to the issues mentioned above, with the scale of OpenStack today we have many major cross-project issues to address and no good place to discuss them. We do have the ML, as well as the cross-project meeting every Tuesday [1], but we as a project need to do a better job of actually bringing up relevant issues here. [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting We seem to be unable to address some key issues in the software we produce, and part of it is due to strategic contributors (and core reviewers) being overwhelmed just trying to stay afloat of what's happening. For such projects, is it time for a pause? Is it time to define key cycle goals and defer everything else? I really like this idea; as Michael and others alluded to above, we are attempting to set cycle goals for Kilo in Nova, but I think it is worth doing for all of OpenStack. We would like to make a list of key goals before the summit so that we can plan our summit sessions around the goals. On a really high level one way to look at this is: in Kilo we need to pay down our technical debt. The slots/runway idea is somewhat separate from defining key cycle goals; we can approve blueprints based on key cycle goals without doing slots. 
But with so many concurrent blueprints up for review at any given time, the review teams are doing a lot of multitasking and humans are not very good at multitasking. Hopefully slots can help address this issue, and hopefully allow us to actually merge more blueprints in a given cycle. I'm not 100% sold on what the slots idea buys us. What I've seen this cycle in Neutron is that we have a LOT of BPs proposed. We approve them after review. And then we hit one of two issues: Slow review cycles, and slow code turnaround issues. I don't think slots would help this, and in fact may cause more issues. If we approve a BP and give it a slot for which the eventual result is slow review and/or code review turnaround, we're right back where we started. Even worse, we may have not picked a BP for which the
[openstack-dev] [Neutron] [LBaaS] Followup on Service Ports and IP Allocation - IPAM from LBaaS Mid Cycle meeting
Hi Mark, Going through the notes from our midcycle meeting (see https://etherpad.openstack.org/p/juno-lbaas-mid-cycle-hackathon) I noticed your name next to the Service Port and IPAM items:

Service Ports
* Owner: Mark
* Nova hacks
* Nova port that nova borrows but doesn't destroy when VM is

IP allocation - IPAM
* TBD: Large task: Owner: Mark
* ability to assoc an IP that is not associated with a port/vm
* can we create a faster way of moving IPs? (Susanne)

With all the other LBaaS work we sort of lost track of that, but now that we have started planning for Octavia I am wondering if there is any progress on those topics. Thanks a dozen, German ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] Use cases with regards to VIP and routers
From the perspective of Blue Box: * Load balancing appliances will often (usually?) live outside the same subnet as back-end member VMs. * The network in which the load balancing appliances live will usually have a default router (gateway) * We don't anticipate the need for using extra_routes at this time, though I suspect other operators might need this. * We also anticipate occasionally needing the load balancing appliances to have layer-2 connectivity to some back-end member VMs. On Tue, Aug 12, 2014 at 12:32 AM, Susanne Balle sleipnir...@gmail.com wrote: In the context of Octavia and Neutron LBaaS. Susanne On Mon, Aug 11, 2014 at 5:44 PM, Stephen Balukoff sbaluk...@bluebox.net wrote: Susanne, Are you asking in the context of Load Balancer services in general, or in terms of the Neutron LBaaS project or the Octavia project? Stephen On Mon, Aug 11, 2014 at 9:04 AM, Doug Wiegley do...@a10networks.com wrote: Hi Susanne, While there are a few operators involved with LBaaS that would have good input, you might want to also ask this on the non-dev mailing list, for a larger sample size. Thanks, doug On 8/11/14, 3:05 AM, Susanne Balle sleipnir...@gmail.com wrote: Gang, I was asked the following questions around our Neutron LBaaS use cases: 1. Will there be a scenario where the "VIP" port will be in a different node from all the member "VMs" in a pool? 2. Also, how likely is it for the LBaaS-configured subnet to not have a "router" and just use the "extra_routes" option? 3. Is there a valid use case where customers will be using "extra_routes" with subnets instead of "routers"? (It would be great if you have some use case picture for this.) Feel free to chime in here and I'll summarize the answers. 
Regards, Susanne -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] fair standards for all hypervisor drivers
On Mon, Aug 11, 2014 at 08:05:26AM -0400, Russell Bryant wrote: On 08/11/2014 07:58 AM, Russell Bryant wrote: On 08/11/2014 05:53 AM, Daniel P. Berrange wrote: There is work to add support for this in devstack already, which I prefer since it makes it easy for developers to get an environment which matches the build system: https://review.openstack.org/#/c/108714/ Ah, cool. Devstack is indeed a better place to put the build scripting. So, I think we should: 1) Get the above patch working, and then merged. 2) Get an experimental job going to use the above while we work on #3. 3) Before the job can move into the check queue and potentially become voting, it needs to not rely on downloading the source on every run. IIRC, we can have nodepool build an image to use for these jobs that includes the bits already installed. I'll switch my efforts over to helping get the above completed. I still think the devstack patch is good, but after some more thought, I think a better long-term CI job setup would just be a Fedora image with the virt-preview repo. So, effectively, you're trying to add a minimal Fedora image w/ the virt-preview repo (as part of some post-install kickstart script). If so, where would the image be stored? I'm asking because Sean Dague previously mentioned mirroring issues with Fedora images (which later turned out to be intermittent network issues with OpenStack infra cloud providers), and floated the idea of storing an updated image on tarballs.openstack.org, like Trove[1] does. But OpenStack infra folks (fungi) raised some valid points on why not to do that. IIUC, if you intend to run tests w/ this CI job with this new image, there has to be a mechanism in place to ensure the cached copy (on tarballs.o.o) is updated. If I misunderstood what you said, please correct me. [1] http://tarballs.openstack.org/trove/images/ I think I'll try that ... 
-- /kashyap ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [NFV] Meeting cancelled and time for next week.
Hi all, I am not available to run the meeting tomorrow and was not able to identify someone to step in, given this I think it makes sense to cancel for this week. For next week I would like to trial the new alternate time we discussed, 1600 UTC on a Thursday, and assuming there is reasonable attendance alternating weekly from there. Are there any objections to this? As the Feature Proposal Freeze [1] is fast approaching for projects that enforce it I will endeavour to track down any of the blueprints listed on the wiki that were approved but don't have code submissions associated with them yet and highlight this on the mailing list in lieu of a meeting. Thanks, Steve [1] https://wiki.openstack.org/wiki/FeatureProposalFreeze ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Octavia] Agenda for 13 Aug 2014 meeting
Hi folks! This is what I have for my tentative agenda for tomorrow's Octavia meeting. Please e-mail me if you want anything else added to this list. (Also, I will start putting these weekly agendas in the wiki in the near future.) * Discuss future of Octavia in light of Neutron-incubator project proposal. * Discuss operator networking requirements (carryover from last week) * Discuss v0.5 component design proposal: https://review.openstack.org/#/c/113458/ * Discuss timeline on moving these meetings to IRC. As usual, please e-mail me if you'd like information on connecting to the webex we're presently using for these meetings. Thanks, Stephen -- Stephen Balukoff Blue Box Group, LLC (800)613-4305 x807 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] [third-party] Cisco NXOS is not tested anymore
On 08/12/2014 01:16 PM, Edgar Magana wrote: Henry, That makes a lot of sense to me. If the code will be removed in Juno, then there is nothing else to discuss. Thank you so much for providing detailed information and sorry for bothering you with this issue. Edgar I don't think it is a bother, I think it is good information to have. Now we just have to figure out the process for the future so we also know the best path of communication. Thanks, Anita. On 8/12/14, 11:49 AM, Henry Gessau ges...@cisco.com wrote: On 8/12/2014 2:04 PM, Jeremy Stanley wrote: On 2014-08-12 16:35:18 + (+), Edgar Magana wrote: If this plugin will be deprecated in Juno it means that the code will be there for this release; I would expect to have the CI still running until the code is completely removed from the Neutron tree. Anyway, Infra guys will have the last word here! It's really not up to the Project Infrastructure Team to decide this (we merely provide guidance, assistance and, sometimes, arbitration for such matters). It's ultimately the Neutron developer community who needs to determine whether they're willing to support an untested feature through deprecation or insist on continued testing until its full removal can be realized. The Cisco Nexus sub-plugin is broken because the OVS plugin that it depends on is broken. The Neutron project switched from the OVS plugin to ML2 for testing a long time ago, and the OVS plugin will be removed from the tree in Juno. There are no plans to fix the OVS plugin, so the Cisco Nexus sub-plugin will not be fixed either. There are bugs[1,2] open to remove the deprecated plugins from the tree. 
[1] https://bugs.launchpad.net/neutron/+bug/1323729 [2] https://bugs.launchpad.net/neutron/+bug/1350387 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] fair standards for all hypervisor drivers
On 08/12/2014 03:40 PM, Kashyap Chamarthy wrote: On Mon, Aug 11, 2014 at 08:05:26AM -0400, Russell Bryant wrote: On 08/11/2014 07:58 AM, Russell Bryant wrote: On 08/11/2014 05:53 AM, Daniel P. Berrange wrote: There is work to add support for this in devstack already, which I prefer since it makes it easy for developers to get an environment which matches the build system: https://review.openstack.org/#/c/108714/ Ah, cool. Devstack is indeed a better place to put the build scripting. So, I think we should: 1) Get the above patch working, and then merged. 2) Get an experimental job going to use the above while we work on #3. 3) Before the job can move into the check queue and potentially become voting, it needs to not rely on downloading the source on every run. IIRC, we can have nodepool build an image to use for these jobs that includes the bits already installed. I'll switch my efforts over to helping get the above completed. I still think the devstack patch is good, but after some more thought, I think a better long-term CI job setup would just be a Fedora image with the virt-preview repo. So, effectively, you're trying to add a minimal Fedora image w/ the virt-preview repo (as part of some post-install kickstart script). If so, where would the image be stored? I'm asking because Sean Dague previously mentioned mirroring issues with Fedora images (which later turned out to be intermittent network issues with OpenStack infra cloud providers), and floated the idea of storing an updated image on tarballs.openstack.org, like Trove[1] does. But OpenStack infra folks (fungi) raised some valid points on why not to do that. IIUC, if you intend to run tests w/ this CI job with this new image, there has to be a mechanism in place to ensure the cached copy (on tarballs.o.o) is updated. If I misunderstood what you said, please correct me. 
Patches for this here: https://review.openstack.org/#/c/113349/ https://review.openstack.org/#/c/113350/ The first one is the important part about how the image is created. nodepool runs some prep scripts against the cloud's distro image and then snapshots it. That's the image stored to be used later for testing. In this case, it enables the virt-preview repo and then calls out to the regular devstack prep scripts to cache all packages needed for the test locally on the image. If there are issues with the reliability of fedorapeople.org, it will indeed cause problems, but at least it's local to image creation and not every test run. -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Retrospective veto revert policy
Dan Smith wrote: Looks reasonable to me. +1 +1 -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?
On 12/08/2014 18:54, Nikola Đipanov wrote: On 08/12/2014 04:49 PM, Sylvain Bauza wrote: (sorry for reposting, missed 2 links...) Hi Nikola, On 12/08/2014 12:21, Nikola Đipanov wrote: Hey Nova-istas, While I was hacking on [1] I was considering how to approach the fact that we now need to track one more thing (NUMA node utilization) in our resources. I went with - I'll add it to the compute nodes table - thinking it's a fundamental enough property of a compute host that it deserves to be there, although I was considering the Extensible Resource Tracker at one point (ERT from now on - see [2]), but looking at the code it did not seem to provide anything I desperately needed, so I went with keeping it simple. So fast-forward a few days, and I caught myself solving a problem that I kept thinking ERT should have solved - but apparently hasn't, and I think it is fundamentally a broken design without it - so I'd really like to see it re-visited. The problem can be described by the following lemma (if you take 'lemma' to mean 'a sentence I came up with just now' :)): Due to the way scheduling works in Nova (roughly: pick a host based on stale(ish) data, rely on claims to trigger a re-schedule), the _same exact_ information that the scheduling service used when making a placement decision needs to be available to the compute service when testing the placement. This is not the case right now, and the ERT does not propose any way to solve it (see how I hacked around needing to be able to get extra_specs when making claims in [3], without hammering the DB). The result will be that any resource we add that needs user-supplied info for scheduling an instance against it will need a buggy re-implementation of gathering all the bits from the request that the scheduler sees, to be able to work properly. Well, ERT does provide a plugin mechanism for testing resources at the claim level. 
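[Editor's sketch] The schedule-then-claim flow described in the lemma above can be sketched in a few lines of Python. All names here are illustrative, not Nova's actual code: the scheduler picks from a stale snapshot, and the compute-side claim is the authoritative re-test that triggers a retry on failure.

```python
# Illustrative sketch of Nova's schedule-then-claim loop (hypothetical
# names; not the real scheduler or resource tracker API).
import random

# Live resource view, as each compute host sees it.
hosts = {"host-a": {"free_ram": 512}, "host-b": {"free_ram": 4096}}

def scheduler_pick(req_ram, stale_view):
    # The scheduler only has a (possibly stale) snapshot to work with.
    candidates = [h for h, r in stale_view.items() if r["free_ram"] >= req_ram]
    return random.choice(candidates)

def compute_claim(host, req_ram):
    # The live view on the compute host may differ from the scheduler's
    # snapshot; the claim is the authoritative placement test.
    if hosts[host]["free_ram"] < req_ram:
        return False
    hosts[host]["free_ram"] -= req_ram
    return True

def boot(req_ram, stale_view, max_retries=3):
    # A failed claim triggers a re-schedule, up to a retry limit.
    for _ in range(max_retries):
        host = scheduler_pick(req_ram, stale_view)
        if compute_claim(host, req_ram):
            return host
    raise RuntimeError("NoValidHost")

stale = {h: dict(r) for h, r in hosts.items()}  # snapshot that goes stale
print(boot(256, stale))
```

The point of the lemma is that compute_claim() must be able to re-run the same test the scheduler ran, with the same inputs (including user-supplied flavor/image data), which is exactly what the current code makes hard.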
It is the plugin's responsibility to implement a test() method [2.1], which will be called by test_claim() [2.2]. So, provided this method is implemented, a local host check can be done based on the host's view of resources. Yes - the problem is there is no clear API to get all the needed bits to do so - especially the user-supplied ones from the image and flavor. On top of that, in the current implementation we only pass a hand-wavy 'usage' blob in. This makes anyone wanting to use this in conjunction with some of the user-supplied bits roll their own 'extract_data_from_instance_metadata_flavor_image' or similar, which is horrible and also likely bad for performance. I see your concern where there is no interface for user-facing resources like flavor or image metadata. I also think indeed that the big 'usage' blob is not a good choice for the long-term vision. That said, I don't think we should, as we say in French, throw the baby out with the bathwater... i.e. the problem is with the RT, not the ERT (apart from the mention of the third-party API that you noted - I'll get to it later below). This is obviously a bigger concern when we want to allow users to pass data (through image or flavor) that can affect scheduling, but still a huge concern IMHO. And here is where I agree with you: at the moment, the ResourceTracker (and consequently the Extensible RT) only provides the view of the resources the host knows about (see my point above), and possibly some other resources are missing. So, whatever your choice of going with or without ERT, your patch [3] still deserves it if we want to avoid looking up the DB each time a claim goes through. As I see it, there are already BPs proposing to use this IMHO broken ERT ([4] for example), which will surely add to the proliferation of code that hacks around these design shortcomings in what is already a messy, but also crucial (for perf as well as features) bit of Nova code. Two distinct implementations of that spec (ie. instances and flavors) have been proposed [2.3] [2.4], so reviews are welcome. 
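[Editor's sketch] For illustration, here is a minimal, hypothetical sketch of an ERT-style plugin whose test() hook does real entity-level work at claim time. The class and method names are invented for this example and are not the actual Nova ERT API; the shape is only meant to show what a non-no-op test() could look like.

```python
# Hypothetical ERT-style plugin: test() runs against the host's local
# view of usage at claim time. Returning a reason string rejects the
# claim (triggering a re-schedule upstream); returning None accepts it.

class InstanceCountPlugin:
    """Tracks a simple counter resource: number of instances on a host."""

    def __init__(self, max_instances):
        self.max_instances = max_instances
        self.used = 0

    def test(self, usage, limit=None):
        # Entity-level check, mirroring what test_claim() would invoke
        # on each registered plugin.
        limit = self.max_instances if limit is None else limit
        if self.used + usage > limit:
            return "instance count %d exceeds limit %d" % (self.used + usage, limit)
        return None

    def claim(self, usage):
        # Without this test-then-record step, two racing requests could
        # both land on the host, as discussed below.
        reason = self.test(usage)
        if reason is not None:
            raise ValueError(reason)
        self.used += usage

plugin = InstanceCountPlugin(max_instances=2)
plugin.claim(1)
plugin.claim(1)
# A third claim is rejected locally instead of silently succeeding.
assert plugin.test(1) is not None
```

A plugin that leaves test() as a no-op (as in the current instance/flavor implementations) simply skips the rejection branch, which is exactly why two racing requests can both succeed.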
If you look at the test() method, it's a no-op for both plugins. I'm open to comments because I have the stated problem: how can we define a limit on just a counter of instances and flavors? Will look at these - but none of them seem to hit the issue I am complaining about, which is that it will need to consider other request data for claims, not only data available on instances. Also - the fact that you don't implement test() in the flavor one tells me that the implementation is indeed racy (but it is racy atm as well) and two requests can indeed race for the same host, and since no claims are done, both can succeed. This is, I believe (at least in the case of single-flavor hosts), unlikely to happen in practice, but you get the idea. Agreed, these 2 patches probably require another iteration, in particular to make sure it won't be racy. So I need another run to think about what to test() for these 2 examples. Another patch has to be done for aggregates, but it's still WIP so not mentioned here. Anyway, as discussed during today's
Re: [openstack-dev] [nova] Retrospective veto revert policy
On Tue, Aug 12, 2014 at 9:56 AM, Mark McLoughlin mar...@redhat.com wrote: Hey (Terrible name for a policy, I know) From the version_cap saga here: https://review.openstack.org/110754 I think we need a better understanding of how to approach situations like this. Here's my attempt at documenting what I think we're expecting the procedure to be: https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy If it sounds reasonably sane, I can propose its addition to the Development policies doc. Thanks for the write up, Mark. When I first read the thread I thought it'd be about the case where a core takes a vacation or is unreachable _after_ marking a review -2. Can this case be considered in this policy as well (or is it already and I don't know it?) Thanks, Anne Mark. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Retrospective veto revert policy
This looks reasonable to me, with a slight concern that I don't know what step five looks like... What if we can never reach a consensus on an issue? Michael On Wed, Aug 13, 2014 at 12:56 AM, Mark McLoughlin mar...@redhat.com wrote: Hey (Terrible name for a policy, I know) From the version_cap saga here: https://review.openstack.org/110754 I think we need a better understanding of how to approach situations like this. Here's my attempt at documenting what I think we're expecting the procedure to be: https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy If it sounds reasonably sane, I can propose its addition to the Development policies doc. Mark. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?
On Wed, 2014-07-30 at 15:34 -0700, Clark Boylan wrote: On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote: On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote: While forcing people to move to a newer version of libvirt is doable on most environments, do we want to do that now? What is the benefit of doing so? [...] The only dog I have in this fight is that using the split-out libvirt-python on PyPI means we finally get to run Nova unit tests in virtualenvs which aren't built with system-site-packages enabled. It's been a long-running headache which I'd like to see eradicated everywhere we can. I understand though if we have to go about it more slowly, I'm just excited to see it finally within our grasp. -- Jeremy Stanley We aren't quite forcing people to move to newer versions. Only those installing nova test-requirements need newer libvirt. Yeah, I'm a bit confused about the problem here. Is it that people want to satisfy test-requirements through packages rather than using a virtualenv? (i.e. if people just use virtualenvs for unit tests, there's no problem right?) If so, is it possible/easy to create new, alternate packages of the libvirt python bindings (from PyPI) on their own separately from the libvirt.so and libvirtd packages? Mark. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
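[Editor's sketch] On the virtualenv point: the Python stdlib alone can demonstrate the isolation being discussed. This sketch (not Nova's actual tox setup) creates a venv without system site-packages - the mode that a libvirt-python release on PyPI finally makes viable for Nova unit tests, since the bindings can then come from pip rather than distro packages.

```python
# Create an isolated virtualenv (no system site-packages) and verify
# the isolation flag is recorded in pyvenv.cfg. Purely illustrative of
# the discussion above; not Nova's real test tooling.
import tempfile
import venv
from pathlib import Path

target = Path(tempfile.mkdtemp()) / "nova-test-env"

# system_site_packages=False cuts the env off from distro packages, so
# python bindings like libvirt-python must be installed from PyPI.
venv.create(target, system_site_packages=False, with_pip=False)

cfg = (target / "pyvenv.cfg").read_text()
print(cfg)
assert "include-system-site-packages = false" in cfg
```

With system-site-packages enabled (the old workaround), the env would instead fall through to whatever libvirt bindings the OS happened to ship, which is the headache being eradicated here.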
Re: [openstack-dev] [all] The future of the integrated release
On Tue, Aug 12, 2014 at 11:08 AM, Doug Hellmann d...@doughellmann.com wrote: On Aug 12, 2014, at 1:44 PM, Dolph Mathews dolph.math...@gmail.com wrote: On Tue, Aug 12, 2014 at 12:30 AM, Joe Gordon joe.gord...@gmail.com wrote: On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote: On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com wrote: On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org wrote: Hi everyone, With the incredible growth of OpenStack, our development community is facing complex challenges. How we handle those might determine the ultimate success or failure of OpenStack. With this cycle we hit new limits in our processes, tools and cultural setup. This resulted in new limiting factors on our overall velocity, which is frustrating for developers. This resulted in the burnout of key firefighting resources. This resulted in tension between people who try to get specific work done and people who try to keep a handle on the big picture. It all boils down to an imbalance between strategic and tactical contributions. At the beginning of this project, we had a strong inner group of people dedicated to fixing all loose ends. Then a lot of companies got interested in OpenStack and there was a surge in tactical, short-term contributions. We put out a call for more resources to be dedicated to strategic contributions like critical bugfixing, vulnerability management, QA, infrastructure... and that call was answered by a lot of companies that are now key members of the OpenStack Foundation, and all was fine again. But OpenStack contributors kept on growing, and we grew the narrowly-focused population way faster than the cross-project population. At the same time, we kept on adding new projects to incubation and to the integrated release, which is great... but the new developers you get on board with this are much more likely to be tactical than strategic contributors. This also contributed to the imbalance. 
The penalty for that imbalance is twofold: we don't have enough resources available to solve old, known OpenStack-wide issues; but we also don't have enough resources to identify and fix new issues. We have several efforts under way, like calling for new strategic contributors, driving towards in-project functional testing, making solving rare issues a more attractive endeavor, or hiring resources directly at the Foundation level to help address those. But there is a topic we haven't raised yet: should we concentrate on fixing what is currently in the integrated release rather than adding new projects? TL;DR: Our development model is having growing pains. Until we sort out the growing pains, adding more projects spreads us too thin. +100 In addition to the issues mentioned above, with the scale of OpenStack today we have many major cross-project issues to address and no good place to discuss them. We do have the ML, as well as the cross-project meeting every Tuesday [1], but we as a project need to do a better job of actually bringing up relevant issues here. [1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting We seem to be unable to address some key issues in the software we produce, and part of it is due to strategic contributors (and core reviewers) being overwhelmed just trying to stay abreast of what's happening. For such projects, is it time for a pause? Is it time to define key cycle goals and defer everything else? I really like this idea. As Michael and others alluded to above, we are attempting to set cycle goals for Kilo in Nova, but I think it is worth doing for all of OpenStack. We would like to make a list of key goals before the summit so that we can plan our summit sessions around the goals. On a really high level, one way to look at this is: in Kilo we need to pay down our technical debt. The slots/runway idea is somewhat separate from defining key cycle goals; we can approve blueprints based on key cycle goals without doing slots. 
But with so many concurrent blueprints up for review at any given time, the review teams are doing a lot of multitasking and humans are not very good at multitasking. Hopefully slots can help address this issue, and hopefully allow us to actually merge more blueprints in a given cycle. I'm not 100% sold on what the slots idea buys us. What I've seen this cycle in Neutron is that we have a LOT of BPs proposed. We approve them after review. And then we hit one of two issues: Slow review cycles, and slow code turnaround issues. I don't think slots would help this, and in fact may cause more issues. If we approve a BP and give it a slot for which the eventual result is slow review and/or code review turnaround, we're right back where we started. Even worse, we may have not picked a BP for which the
Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core
On Wed, 2014-07-30 at 14:02 -0700, Michael Still wrote: Greetings, I would like to nominate Jay Pipes for the nova-core team. Jay has been involved with nova for a long time now. He's previously been a nova core, as well as a glance core (and PTL). He's been around so long that there are probably other types of core status I have missed. Please respond with +1s or any concerns. Was away, but +1 for the record. Would have been happy to see this some time ago. Mark. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] SoftwareDeployment resource is always in progress
On 11/08/14 20:42, david ferahi wrote: Hello, I'm trying to create a simple stack with heat (Icehouse release). The template contains SoftwareConfig, SoftwareDeployment and a single server resource. The problem is that the SoftwareDeployment resource is always in progress! So first I'm going to assume you're using an image that you have created with diskimage-builder which includes the heat-config-script element: https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements When I am diagnosing deployments which don't signal back I do the following: - ssh into the server and sudo to root - stop the os-collect-config service: systemctl stop os-collect-config - run os-collect-config manually and check for errors: os-collect-config --one-time --debug After waiting for more than an hour the stack deployment failed and I got this error: TRACE heat.engine.resource HTTPUnauthorized: ERROR: Authentication failed. Please try again with option --include-password or export HEAT_INCLUDE_PASSWORD=1 TRACE heat.engine.resource Authentication required This looks like a different issue; you should find out what is happening to your server configuration first. When I checked the log file (/var/log/heat/heat-engine.log), it shows the following message (every second): 2014-08-10 19:41:09.622 2391 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.122.10 2014-08-10 19:41:10.648 2391 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.122.10 2014-08-10 19:41:11.671 2391 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.122.10 2014-08-10 19:41:12.690 2391 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.122.10 Here is the template I am using: https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/wordpress/WordPress_software-config_1-instance.yaml Please help! 
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] [third-party] Update on third party CI in Neutron
Kyle, One Convergence third-party CI is failing due to https://bugs.launchpad.net/neutron/+bug/1353309. Let me know if we should turn off the CI logs until this is fixed or if we need to fix anything on the CI end. I think one other third-party CI (Mellanox) is failing due to the same issue. Regards, -hemanth On Tue, Jul 29, 2014 at 6:02 AM, Kyle Mestery mest...@mestery.com wrote: On Mon, Jul 28, 2014 at 1:42 PM, Hemanth Ravi hemanthrav...@gmail.com wrote: Kyle, One Convergence CI has been fixed (setup issue) and is running without the failures for ~10 days now. Updated the etherpad. Thanks for the update Hemanth, much appreciated! Kyle Thanks, -hemanth On Fri, Jul 11, 2014 at 4:50 PM, Fawad Khaliq fa...@plumgrid.com wrote: On Fri, Jul 11, 2014 at 8:56 AM, Kyle Mestery mest...@noironetworks.com wrote: PLUMgrid Not saving enough logs All Jenkins slaves were just updated to upload all required logs. PLUMgrid CI should be good now. Thanks, Fawad Khaliq ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Retrospective veto revert policy
Should subsequent patches that depended on the change in question be reverted as well? On Tue, Aug 12, 2014 at 7:56 AM, Mark McLoughlin mar...@redhat.com wrote: Hey (Terrible name for a policy, I know) From the version_cap saga here: https://review.openstack.org/110754 I think we need a better understanding of how to approach situations like this. Here's my attempt at documenting what I think we're expecting the procedure to be: https://etherpad.openstack.org/p/nova-retrospective-veto-revert-policy If it sounds reasonably sane, I can propose its addition to the Development policies doc. Mark. -- Kevin Benton ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] [third-party] Update on third party CI in Neutron
On 08/12/2014 03:23 PM, Hemanth Ravi wrote:
> Kyle,
>
> One Convergence third-party CI is failing due to
> https://bugs.launchpad.net/neutron/+bug/1353309. Let me know if we
> should turn off the CI logs until this is fixed or if we need to fix
> anything on the CI end. I think one other third-party CI (Mellanox) is
> failing due to the same issue.
>
> Regards,
> -hemanth

Are you One Convergence CI, hemanth? Sorry I don't know who is admin'ing this account.

Thanks,
Anita.

> On Tue, Jul 29, 2014 at 6:02 AM, Kyle Mestery <mest...@mestery.com> wrote:
>> On Mon, Jul 28, 2014 at 1:42 PM, Hemanth Ravi <hemanthrav...@gmail.com> wrote:
>>> Kyle,
>>>
>>> One Convergence CI has been fixed (setup issue) and is running without
>>> the failures for ~10 days now. Updated the etherpad.
>>
>> Thanks for the update Hemanth, much appreciated!
>>
>> Kyle
>>
>>> Thanks,
>>> -hemanth
>>>
>>> On Fri, Jul 11, 2014 at 4:50 PM, Fawad Khaliq <fa...@plumgrid.com> wrote:
>>>> On Fri, Jul 11, 2014 at 8:56 AM, Kyle Mestery <mest...@noironetworks.com> wrote:
>>>>> PLUMgrid: Not saving enough logs
>>>>
>>>> All Jenkins slaves were just updated to upload all required logs.
>>>> PLUMgrid CI should be good now.
>>>>
>>>> Thanks,
>>>> Fawad Khaliq
Re: [openstack-dev] [OpenStack][Docker][HEAT] Cloud-init and docker container
Thanks Eric for the confirmation ;-)

2014-08-12 23:30 GMT+08:00 Eric Windisch <ewindi...@docker.com>:
> On Tue, Aug 12, 2014 at 5:53 AM, Jay Lau <jay.lau@gmail.com> wrote:
>> I don't have the environment set up now, but from reviewing the code, I
>> think the logic is as follows:
>>
>> 1) When using the nova docker driver, we can use cloud-init and/or CMD
>> in the docker image to run post-install scripts.
>>
>>   myapp:
>>     Type: OS::Nova::Server
>>     Properties:
>>       flavor: m1.small
>>       image: my-app:latest   # docker image
>>       user-data:
>>
>> 2) When using the heat docker driver, we can only use CMD in the docker
>> image or in the heat template to run post-install scripts.
>>
>>   wordpress:
>>     type: DockerInc::Docker::Container
>>     depends_on: [database]
>>     properties:
>>       image: wordpress
>>       links:
>>         db: mysql
>>       port_bindings:
>>         80/tcp: [{HostPort: 80}]
>>       docker_endpoint:
>>         str_replace:
>>           template: http://host:2345/
>>           params:
>>             host: {get_attr: [docker_host, networks, private, 0]}
>>       cmd: /bin/bash
>
> I can confirm this is correct for both use-cases.
>
> Currently, using Nova, one may only specify the CMD in the image itself,
> or as glance metadata. The cloud metadata service should be accessible
> and usable from Docker.
>
> The Heat plugin allows setting the CMD as a resource property. The
> user-data is only passed to the instance that runs Docker, not to the
> containers. Configuring the CMD and/or environment variables for the
> container is the correct approach.
>
> --
> Regards,
> Eric Windisch

--
Thanks,
Jay
Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?
Yep, you're right, this doesn't seem to work. The issue is that security is enforced at routing time (while the controller is still actually being discovered). In order to do this sort of thing with the `check_permissions`, we'd probably need to add a feature to pecan.

On 08/12/14 06:38 PM, Pendergrass, Eric wrote:
> Sure, here's the decorated method from v2.py:
>
>     class MetersController(rest.RestController):
>         """Works on meters."""
>
>         @pecan.expose()
>         def _lookup(self, meter_name, *remainder):
>             return MeterController(meter_name), remainder
>
>         @wsme_pecan.wsexpose([Meter], [Query])
>         @secure(RBACController.check_permissions)
>         def get_all(self, q=None):
>
> and here's the decorator called by the secure tag:
>
>     class RBACController(object):
>         global _ENFORCER
>         if not _ENFORCER:
>             _ENFORCER = policy.Enforcer()
>
>         @classmethod
>         def check_permissions(cls):
>             # do some stuff
>
> In check_permissions I'd like to know the class and method with the
> @secure tag that caused check_permissions to be invoked. In this case,
> that would be MetersController.get_all.
>
> Thanks
>
>> Can you share some code?
What do you mean by "is there a way for the decorator code to know it was called by MetersController.get_all"?

On 08/12/14 04:46 PM, Pendergrass, Eric wrote:
> Thanks Ryan, but for some reason the controller attribute is None:
>
>     (Pdb) from pecan.core import state
>     (Pdb) state.__dict__
>     {'hooks': [<ceilometer.api.hooks.ConfigHook object at 0x31894d0>,
>      <ceilometer.api.hooks.DBHook object at 0x3189650>,
>      <ceilometer.api.hooks.PipelineHook object at 0x39871d0>,
>      <ceilometer.api.hooks.TranslationHook object at 0x3aa5510>],
>      'app': <pecan.core.Pecan object at 0x2e76390>,
>      'request': <Request at 0x3ed7390 GET http://localhost:8777/v2/meters>,
>      'controller': None,
>      'response': <Response at 0x3ed74d0 200 OK>}
>
> -----Original Message-----
> From: Ryan Petrello [mailto:ryan.petre...@dreamhost.com]
> Sent: Tuesday, August 12, 2014 10:34 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ceilometer] Way to get wrapped method's name/class using Pecan secure decorators?
>
> This should give you what you need:
>
>     from pecan.core import state
>     state.controller
>
> On 08/12/14 04:08 PM, Pendergrass, Eric wrote:
>> Hi,
>>
>> I'm trying to use the built-in secure decorator in Pecan for access
>> control, and I'd like to get the name of the method that is wrapped
>> from within the decorator. For instance, if I'm wrapping
>> MetersController.get_all with an @secure decorator, is there a way for
>> the decorator code to know it was called by MetersController.get_all?
>> I don't see any global objects that provide this information. I can
>> get the endpoint, v2/meters, with pecan.request.path, but that's not
>> as elegant. Is there a way to derive the caller or otherwise pass this
>> information to the decorator?
Thanks,
Eric Pendergrass

--
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com
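For context on the limitation Ryan describes: Pecan's @secure check runs at routing time, before the controller is resolved, so the check cannot see its caller. Outside of Pecan's routing, an ordinary decorator *can* capture the wrapped function at decoration time and pass its qualified name to the check. A minimal sketch with hypothetical names (`secure_with_caller` is not Pecan's @secure):

```python
import functools


def secure_with_caller(check):
    """Decorator factory: call `check` with the wrapped method's
    qualified name (e.g. 'MetersController.get_all') on each invocation.
    Hypothetical sketch -- not Pecan's @secure, which has no access to
    the controller at enforcement time."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            # type(self) and func.__name__ identify the caller at call time
            check("%s.%s" % (type(self).__name__, func.__name__))
            return func(self, *args, **kwargs)
        return wrapper
    return decorator


checked = []  # stand-in for an RBAC enforcer recording what it was asked


class MetersController(object):
    @secure_with_caller(checked.append)
    def get_all(self, q=None):
        return []


MetersController().get_all()
print(checked)  # ['MetersController.get_all']
```

Making Pecan hand this information to `check_permissions` itself would still need the pecan feature Ryan mentions, since routing-time enforcement happens before `state.controller` is populated.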
[openstack-dev] [TripleO] lists and merges
Just ran into a merge conflict with https://review.openstack.org/#/c/105878/ which looks like this:

        - name: nova_osapi
          port: 8774
          net_binds: *public_binds
        - name: nova_metadata
          port: 8775
          net_binds: *public_binds
        - name: ceilometer
          port: 8777
          net_binds: *public_binds
        - name: swift_proxy_server
          port: 8080
          net_binds: *public_binds
    <<<<<<< HEAD
        - name: rabbitmq
          port: 5672
          options:
            - timeout client 0
            - timeout server 0
    =======
        - name: mysql
          port: 3306
          extra_server_params:
            - backup
    >>>>>>> Change overcloud to use VIP for MySQL

I'd like to propose that we make it a standard - possibly lint on it, certainly fix things up when we see it's wrong - to alpha-sort such structures: that avoids the textual-merge failure mode of 'append to the end'.

-Rob

--
Robert Collins <rbtcoll...@hp.com>
Distinguished Technologist
HP Converged Cloud

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
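The alpha-sort rule Rob proposes is easy to lint for. A minimal sketch of such a check (hypothetical helper, not an existing TripleO tool), assuming the structures are lists of mappings keyed by `name`:

```python
def alpha_sorted(entries, key='name'):
    """Return True when the mappings are already in alphabetical order
    by `key`, so new entries merge into the middle of the list instead
    of colliding in an append-at-the-end conflict."""
    names = [e[key] for e in entries]
    return names == sorted(names)


# The service list from the conflict above, with both conflicting
# entries (rabbitmq and mysql) slotted into their sorted positions:
services = [
    {'name': 'ceilometer', 'port': 8777},
    {'name': 'mysql', 'port': 3306},
    {'name': 'nova_metadata', 'port': 8775},
    {'name': 'nova_osapi', 'port': 8774},
    {'name': 'rabbitmq', 'port': 5672},
    {'name': 'swift_proxy_server', 'port': 8080},
]

print(alpha_sorted(services))  # True
```

With both additions landing mid-list rather than at the tail, the two patches would have touched different lines and merged cleanly.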
[openstack-dev] [Neutron] Ryu plugin deprecation
hi,

As announced in the last neutron meeting [1], the Ryu plugin is being deprecated. Juno is the last release to support the Ryu plugin. The Ryu team will be focusing on ofagent going forward.

btw, i'll be mostly offline from Aug 16 to Aug 31. sorry for the inconvenience.

[1] http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html

YAMAMOTO Takashi