Re: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse
Hi,

We have submitted the following LBaaS-related blueprints:

https://blueprints.launchpad.net/neutron/+spec/lbaas-support-routed-service-insertion
https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-driver
https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-extra-features

I can't attend the meeting, but I will check the meeting log later.

Thanks,
Itsuro Oda

On Wed, 23 Oct 2013 21:57:13 +0400, Eugene Nikanorov enikano...@mirantis.com wrote:
> So currently it moves to 10AM PDT
> Thanks,
> Eugene.

--
Itsuro ODA o...@valinux.co.jp

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse
Hi,

Please find a summary of the talks and discussion related to LBaaS for the summit at:
https://docs.google.com/document/d/1Vjm57lh7PnXDelOy-VxsJkzc8QRiNN368sS11ePs_vA/edit?pli=1#heading=h.6doqijxd389j

I have also added the list below to it. We can review it in the meeting today.

Regards,
-Sam.

-----Original Message-----
From: Itsuro ODA [mailto:o...@valinux.co.jp]
Sent: Thursday, October 24, 2013 9:53 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron][LBaaS] LBaaS plans for Icehouse

> Hi,
> We have submitted the following LBaaS-related blueprints:
> https://blueprints.launchpad.net/neutron/+spec/lbaas-support-routed-service-insertion
> https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-driver
> https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-extra-features
> I can't attend the meeting, but I will check the meeting log later.
> Thanks,
> Itsuro Oda
>
> On Wed, 23 Oct 2013 21:57:13 +0400, Eugene Nikanorov enikano...@mirantis.com wrote:
>> So currently it moves to 10AM PDT
>> Thanks,
>> Eugene.
>
> --
> Itsuro ODA o...@valinux.co.jp
[openstack-dev] [Climate] Weekly IRC team meeting
Hi all,

Climate is growing, and the time has come to hold a weekly meeting among all of us. There is a large number of reviews in progress, and at least the first agenda item will be triaging those: making sure they either land in trunk as soon as possible or are split into smaller chunks of code.

The Icehouse summit is also coming, and I would like to take the opportunity to discuss any topics we could raise during the Summit.

Is Monday 10:00 UTC [1] a convenient time for you?

[1] http://www.timeanddate.com/worldclock/meetingdetails.html?year=2013&month=10&day=28&hour=10&min=0&sec=0&p1=195&p2=166

-Sylvain
[openstack-dev] Move pep8 requirements to a separate file
Hello all,

I noticed that when I run tests using tox, I get some redundant modules installed into the tox virtual environments. At the moment, pep8-specific requirements (such as pep8, flake8, and so on) are listed in the test-requirements.txt file, which is meant to contain testing-specific modules (nose, testtools, etc.). So when we run tox, it installs those unnecessary pep8 libraries into the py26 and py27 environments. The same is true for the pep8 environment: tox installs all libraries from test-requirements.txt there, but only the pep8-specific modules are actually needed.

I think it would be nice to move the pep8-specific requirements from test-requirements.txt to a separate file (pep8-requirements.txt, perhaps) to avoid this situation. It would save network traffic and reduce the time it takes to create virtual environments.

Thoughts? Does it sound reasonable?

Victor.
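For illustration, the proposed split could look roughly like this in tox.ini — a hypothetical sketch only (the pep8-requirements.txt file name and the exact section layout are assumptions, not an existing convention):

```ini
# Hypothetical layout: unit-test environments install only the test
# requirements, while the pep8 environment installs only the style checkers.
[testenv]
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt

[testenv:pep8]
deps = -r{toxinidir}/pep8-requirements.txt
commands = flake8
```

With a layout like this, `tox -e py27` would no longer pull in flake8, and `tox -e pep8` would skip nose/testtools.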
Re: [openstack-dev] [Neutron] Distributed Virtual Router Discussion
Hi Swami,

I am also very interested in this topic. We have been looking into this domain for months and will be happy to share what we have. Please include me in the discussion loop.

Thanks,
--
Jaesuk Ahn, Ph.D.
Team Leader, Next Generation Cloud Platform Development Project
KT (Korea Telecom)
T. +82-10-9888-0328 | F. +82-303-0993-5340
Active member of the OpenStack Korea Community

On Oct 24, 2013, 5:07 AM, Yapeng Wu yapeng...@huawei.com wrote:
> Hello, Swami,
> I am interested in this topic. Please include me in the discussion.
> Thanks,
> Yapeng

From: Vasudevan, Swaminathan (PNB Roseville) [mailto:swaminathan.vasude...@hp.com]
Sent: Tuesday, October 22, 2013 2:50 PM
To: cloudbengo; Artem Dmytrenko; yong sheng gong (gong...@unitedstack.com); OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Distributed Virtual Router Discussion

Hi Folks,

Thanks for your interest in the DVR feature. We should get together to start discussing the details of the DVR. Please let me know who else is interested, and probably the time slot, and we can start nailing down the details.

https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
https://wiki.openstack.org/wiki/Distributed_Router_for_OVS

Thanks
Swami

From: Robin Wang [mailto:cloudbe...@gmail.com]
Sent: Tuesday, October 22, 2013 11:45 AM
To: Artem Dmytrenko; yong sheng gong (gong...@unitedstack.com); OpenStack Development Mailing List; Vasudevan, Swaminathan (PNB Roseville)
Subject: Re: Re: [openstack-dev] [Neutron] Distributed Virtual Router Discussion

Hi Artem,

Very happy to see more stackers working on this feature. :)

> Note that the images in your document are badly corrupted - maybe my questions could already be answered by your diagrams.

I met the same issue at first. Downloading the doc and opening it locally may help; it works for me. Also, a wiki page for the DVR/VDR feature has been created, including some interesting performance test output. Thanks.
https://wiki.openstack.org/wiki/Distributed_Router_for_OVS

Best,
Robin Wang

From: Artem Dmytrenko
Date: 2013-10-22 02:51
To: yong sheng gong (gong...@unitedstack.com); cloudbe...@gmail.com; OpenStack Development Mailing List
Subject: Re: [openstack-dev] Distributed Virtual Router Discussion

Hi Swaminathan,

I work for a virtual networking startup called Midokura, and I'm very interested in joining the discussion. We currently have a distributed router implementation using the existing Neutron API. Could you clarify why distributed vs. centrally located routing implementations need to be distinguished? Another question: are you proposing a distributed routing implementation for tenant routers, or for the router connecting the virtual cloud to the external network? The reason I'm asking is that our company would also like to propose a router implementation that would eliminate single-point uplink failures. We have submitted a couple of blueprints on that topic (https://blueprints.launchpad.net/neutron/+spec/provider-router-support, https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing) and would appreciate an opportunity to collaborate on making it a reality.

Note that the images in your document are badly corrupted - maybe my questions could already be answered by your diagrams. Could you update your document with legible diagrams?

Looking forward to further discussing this topic with you!

Sincerely,
Artem Dmytrenko

On Mon, 10/21/13, Vasudevan, Swaminathan (PNB Roseville) swaminathan.vasude...@hp.com wrote:
> Subject: [openstack-dev] Distributed Virtual Router Discussion
> To: yong sheng gong (gong...@unitedstack.com); cloudbe...@gmail.com; OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
> Date: Monday, October 21, 2013, 12:18 PM
>
> Hi Folks,
> I am currently working on a blueprint for Distributed Virtual Router. If anyone is interested in being part of the discussion, please let me know. I have put together a first draft of my blueprint and have posted it on Launchpad for review.
> https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
>
> Thanks.
> Swaminathan Vasudevan
> Systems Software Engineer (TC)
> HP Networking, Hewlett-Packard
> 8000 Foothills Blvd M/S 5541, Roseville, CA - 95747
> tel: 916.785.0937 | fax: 916.785.1815
> email: swaminathan.vasude...@hp.com
Re: [openstack-dev] [Neutron] Distributed Virtual Router Discussion
Hi Swami,

I am interested in this; please keep me in the loop.

/Alan

From: Vasudevan, Swaminathan (PNB Roseville) [mailto:swaminathan.vasude...@hp.com]
Sent: October-22-13 8:50 PM
To: cloudbengo; Artem Dmytrenko; yong sheng gong (gong...@unitedstack.com); OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Distributed Virtual Router Discussion

> Hi Folks,
> Thanks for your interest in the DVR feature. We should get together to start discussing the details of the DVR. Please let me know who else is interested, and probably the time slot, and we can start nailing down the details.
> https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
> https://wiki.openstack.org/wiki/Distributed_Router_for_OVS
> Thanks
> Swami
Re: [openstack-dev] [Heat] HOT Software configuration proposal
Hi Clint,

Thank you! I have a few replies/questions in-line.

Cheers,
Patrick

On 10/23/13 8:36 PM, Clint Byrum wrote:
> Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:
>> Dear Steve and All,
>> If I may add to this already busy thread, I'd like to share our experience with using Heat in large and complex software deployments.
>
> Thanks for sharing Patrick, I have a few replies in-line.
>
>> I work on a project which precisely provides additional value at the articulation point between resource orchestration automation and configuration management. We rely on Heat and chef-solo respectively for these base management functions. On top of this, we have developed an event-driven workflow to manage the life-cycles of complex software stacks whose primary purpose is to support middleware components, as opposed to end-user apps. Our use cases are peculiar in the sense that software setup (install, config, contextualization) is not a one-time operation but a continuous thing that can happen at any time in the life-span of a stack. Users can deploy (and undeploy) apps long after the stack is created. Auto-scaling may also result in asynchronous app deployment. More about this later.
>> The framework we have designed works well for us. It clearly refers to a PaaS-like environment, which I understand is not the topic of the HOT software configuration proposal(s), and that's absolutely fine with us. However, the question for us is whether the separation of software config from resources would make our life easier or not. I think the answer is definitely yes, but on the condition that the DSL extension preserves almost everything of the expressiveness of the resource element. In practice, I think that a strict separation between resource and component will be hard to achieve, because we'll always need a little bit of application-specific detail in the resources. Take for example the case of SecurityGroups: the ports open in a SecurityGroup are application specific.
>
> Components can only be made up of the things that are common to all users of said component. Also, components would, if I understand the concept correctly, just be for things that are at the sub-resource level. Security groups and open ports would be across multiple resources, and thus would be separately specified from your app's component (though it might be useful to allow components to export static values so that the port list can be referred to along with the app component).
>
>> Then, designing a Chef or Puppet component type may be harder than it looks at first glance. Speaking of our use cases, we still need a little bit of scripting in the instance's user-data block to set up a working chef-solo environment. For example, we run librarian-chef prior to starting chef-solo to resolve the cookbook dependencies. A cookbook can present itself as a downloadable tarball, but that's not always the case. A chef component type would have to support getting a cookbook from a public or private git repo (maybe subversion), handle situations where there is one cookbook per repo or multiple cookbooks per repo, let the user choose a particular branch or label, provide ssh keys if it's a private repo, and so forth. We support all of these scenarios, so we can provide more detailed requirements if needed.
>
> Correct me if I'm wrong though: all of those scenarios are just variations on standard inputs into chef. So the chef component really just has to allow a way to feed data to chef.
>
>> I am not sure adding component relations like 'depends-on' would really help us, since it is the job of config management to handle software dependencies. Also, it doesn't address the issue of circular dependencies. Circular dependencies occur in complex software stack deployments. Example:
>> When we set up a Slurm virtual cluster, both the head node and the compute nodes depend on one another to complete their configuration, and so they would wait for each other indefinitely if we were to rely on 'depends-on'. In addition, I think it's critical to distinguish between configuration parameters which are known ahead of time, like a db name or a user name and password, versus contextualization parameters which are known after the fact, generally when the instance is created. Typically those contextualization parameters are IP addresses, but not only. The fact that packages x, y, z have been properly installed and services a, b, c successfully started is contextualization information (a.k.a. facts) which may indicate that other components can move on to the next setup stage.
>
> The form of contextualization you mention above can be handled by a slightly more capable wait condition mechanism than we have now. I've been suggesting that this is the interface that workflow systems should use.

The case of complex deployments with or without circular dependencies is typically resolved by making the system converge toward the desirable end-state through running idempotent recipes.
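The head-node/compute-node circular dependency described above is worth illustrating: a static 'depends-on' ordering deadlocks because each side waits on the other, whereas repeatedly applying idempotent steps until nothing changes reaches a fixed point. Below is a minimal sketch of that convergence idea — hypothetical node and fact names, plain Python, not the Heat or chef API:

```python
# Minimal convergence sketch (hypothetical names, not the Heat/chef API):
# each node's idempotent step consumes facts published by the others and
# publishes its own; looping to a fixed point resolves the circular
# dependency that a static 'depends-on' ordering would deadlock on.
facts = {}

def head_node_step(facts):
    # The head node can't finish until it knows the compute nodes' addresses...
    facts.setdefault("head_ip", "10.0.0.1")
    if "compute_ips" in facts:
        facts["head_configured"] = True

def compute_node_step(facts):
    # ...and the compute nodes can't finish until they know the head's address.
    facts.setdefault("compute_ips", ["10.0.0.2", "10.0.0.3"])
    if "head_ip" in facts:
        facts["compute_configured"] = True

def converge(steps, facts, max_rounds=10):
    for _ in range(max_rounds):
        before = dict(facts)
        for step in steps:
            step(facts)          # idempotent: safe to re-run any number of times
        if facts == before:      # fixed point reached, nothing new this round
            return facts
    raise RuntimeError("did not converge")

state = converge([head_node_step, compute_node_step], facts)
print(state["head_configured"], state["compute_configured"])  # True True
```

Each step may run any number of times; the loop merely detects the round in which no new facts appear, which is exactly what idempotent recipes buy you over a one-shot dependency graph.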
Re: [openstack-dev] [Heat] HOT Software configuration proposal
Sorry, I clicked the 'send' button too quickly.

On 10/24/13 11:54 AM, Patrick Petit wrote:

Hi Clint,

Thank you! I have a few replies/questions in-line.

Cheers,
Patrick

On 10/23/13 8:36 PM, Clint Byrum wrote:
> Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:
>> Dear Steve and All,
>> If I may add to this already busy thread, I'd like to share our experience with using Heat in large and complex software deployments.
>
> Thanks for sharing Patrick, I have a few replies in-line.
>
>> I work on a project which precisely provides additional value at the articulation point between resource orchestration automation and configuration management. We rely on Heat and chef-solo respectively for these base management functions. On top of this, we have developed an event-driven workflow to manage the life-cycles of complex software stacks whose primary purpose is to support middleware components, as opposed to end-user apps. Our use cases are peculiar in the sense that software setup (install, config, contextualization) is not a one-time operation but a continuous thing that can happen at any time in the life-span of a stack. Users can deploy (and undeploy) apps long after the stack is created. Auto-scaling may also result in asynchronous app deployment. More about this later.
>> The framework we have designed works well for us. It clearly refers to a PaaS-like environment, which I understand is not the topic of the HOT software configuration proposal(s), and that's absolutely fine with us. However, the question for us is whether the separation of software config from resources would make our life easier or not. I think the answer is definitely yes, but on the condition that the DSL extension preserves almost everything of the expressiveness of the resource element. In practice, I think that a strict separation between resource and component will be hard to achieve, because we'll always need a little bit of application-specific detail in the resources. Take for example the case of SecurityGroups: the ports open in a SecurityGroup are application specific.
>
> Components can only be made up of the things that are common to all users of said component. Also, components would, if I understand the concept correctly, just be for things that are at the sub-resource level. Security groups and open ports would be across multiple resources, and thus would be separately specified from your app's component (though it might be useful to allow components to export static values so that the port list can be referred to along with the app component).

Okay, got it. If that's the case, then that would work.

>> Then, designing a Chef or Puppet component type may be harder than it looks at first glance. Speaking of our use cases, we still need a little bit of scripting in the instance's user-data block to set up a working chef-solo environment. For example, we run librarian-chef prior to starting chef-solo to resolve the cookbook dependencies. A cookbook can present itself as a downloadable tarball, but that's not always the case. A chef component type would have to support getting a cookbook from a public or private git repo (maybe subversion), handle situations where there is one cookbook per repo or multiple cookbooks per repo, let the user choose a particular branch or label, provide ssh keys if it's a private repo, and so forth. We support all of these scenarios, so we can provide more detailed requirements if needed.
>
> Correct me if I'm wrong though: all of those scenarios are just variations on standard inputs into chef. So the chef component really just has to allow a way to feed data to chef.

That's correct. It boils down to specifying correctly, in its component description, all the constraints that apply to deploying a cookbook in an instance.

>> I am not sure adding component relations like 'depends-on' would really help us, since it is the job of config management to handle software dependencies. Also, it doesn't address the issue of circular dependencies. Circular dependencies occur in complex software stack deployments. Example:
>> When we set up a Slurm virtual cluster, both the head node and the compute nodes depend on one another to complete their configuration, and so they would wait for each other indefinitely if we were to rely on 'depends-on'. In addition, I think it's critical to distinguish between configuration parameters which are known ahead of time, like a db name or a user name and password, versus contextualization parameters which are known after the fact, generally when the instance is created. Typically those contextualization parameters are IP addresses, but not only. The fact that packages x, y, z have been properly installed and services a, b, c successfully started is contextualization information (a.k.a. facts) which may indicate that other components can move on to the next setup stage.
>
> The form of contextualization you mention above can be handled by a slightly more capable wait condition
Re: [openstack-dev] Move pep8 requirements to a separate file
On 10/24/2013 03:52 AM, Victor Sergeyev wrote:
> Hello all,
> I noticed that when I run tests using tox, I get some redundant modules installed into the tox virtual environments. At the moment, pep8-specific requirements (such as pep8, flake8, and so on) are listed in test-requirements.txt, which is meant to contain testing-specific modules (nose, testtools, etc.). So when we run tox, it installs those unnecessary pep8 libraries into the py26 and py27 environments. The same is true for the pep8 environment: tox installs all libraries from test-requirements.txt there, but only the pep8-specific modules are actually needed.
> I think it would be nice to move the pep8-specific requirements from test-requirements.txt to a separate file (pep8-requirements.txt, perhaps) to avoid this situation. It would save network traffic and reduce the time it takes to create virtual environments.
> Thoughts? Does it sound reasonable?

Actually, no. The point of the test requirements is that we've got everything you need for tox; breaking it up and making it more complicated seems counterproductive.

# this massively speeds up pip install
export PIP_DOWNLOAD_CACHE=~/.pip/cache

Added to your .bashrc, this will reduce the network traffic needed for tox. Also, I believe 1.6.1 will reuse envs by default.

-Sean

--
Sean Dague
http://dague.net
Re: [openstack-dev] [Nova] Blueprint review process
On Wed, Oct 23, 2013 at 4:33 PM, Russell Bryant rbry...@redhat.com wrote:
> Greetings,
>
> At the last Nova meeting we started talking about some updates to the Nova blueprint process for the Icehouse cycle. I had hoped we could talk about and finalize this in a Nova design summit session on Nova Project Structure and Process [1], but I think we need to push forward on finalizing this as soon as possible so that it doesn't block current work being done. Here is a first cut at the process. Let me know what you think is missing or should change. I'll get the result of this thread posted on the wiki.
>
> 1) Proposing a Blueprint
>
> Proposing a blueprint for Nova is not much different than for other projects. You should follow the instructions here: https://wiki.openstack.org/wiki/Blueprints
>
> The particularly important step that seems to be missed by most is: once it is ready for PTL review, you should set "Milestone: which part of the release cycle you think your work will be proposed for merging". That is really important. Due to the volume of Nova blueprints, it probably will not be seen until you do this.
>
> 2) Blueprint Review Team
>
> Ensuring blueprints get reviewed is one of the responsibilities of the PTL. However, due to the volume of Nova blueprints, it's not practical for me to do it alone. A team of people (nova-drivers) [2], a subset of nova-core, will be doing blueprint reviews. By having more people reviewing blueprints, we can do a more thorough job and have a higher-quality result. Note that even though there is a nova-drivers team, *everyone* is encouraged to participate in the review process by providing feedback on the mailing list.
>
> 3) Blueprint Review Criteria
>
> Here are some things that the team reviewing blueprints should look for. The blueprint ...
>
> - is assigned to the person signing up to do the work
> - has been targeted to the milestone when the code is planned to be completed
> - is an appropriate feature for Nova. This means it fits with the vision for Nova and OpenStack overall. This is obviously very subjective, but the result should represent consensus.
> - includes enough detail to be able to complete an initial design review before approving the blueprint. In many cases, the design review may result in a discussion on the mailing list to work through details. A link to this discussion should be left in the whiteboard of the blueprint for reference. This initial design review should be completed before the blueprint is approved.
> - includes information that describes the user impact (or lack of it). Between the blueprint and the text that comes with the DocImpact flag [3] in commits, the docs team should have *everything* they need to thoroughly document the feature.
>
> Once the review has been completed, the blueprint should be marked as approved and the priority should be set. A set priority is how we know from the blueprint list which ones have already been reviewed.
>
> 4) Blueprint Prioritization
>
> I would like to do a better job of using priorities in Icehouse. The priority field serves a couple of purposes:
> - helps reviewers prioritize their time
> - helps set expectations for the submitter for how reviewing this work stacks up against other things
>
> In the last meeting we discussed an idea that I think is worth trying, at least for icehouse-1, to see if we like it or not. The idea is that *every* blueprint starts out at a Low priority, which means "best effort, but no promises". For a blueprint to get prioritized higher, it should have 2 nova-core members signed up to review the resulting code. If we do this, I suspect we may end up with more blueprints at Low, but I also think we'll end up with a more realistic list of blueprints. The reality is that if a feature doesn't have reviewers agreeing to do the review, it really is in a "best effort, but no promises" situation.
>
> 5) Blueprint Fall Cleaning
>
> Finally, it's about time we do some cleaning of the blueprint backlog. There are a bunch not currently being worked on. I propose that we close out all blueprints not targeted at a release milestone by November 22 (2 weeks after the end of the design summit), with the exception of anything just recently filed and still being drafted.
>
> [1] http://summit.openstack.org/cfp/details/341
> [2] https://launchpad.net/~nova-drivers/+members#active
> [3] http://justwriteclick.com/2013/09/17/openstack-docimpact-flag-walk-through/
>
> --
> Russell Bryant

++ to the entire process; two comments though:

* It would be great to get this into a wiki for future reference.
* We shouldn't merge patches with unapproved blueprints, and when that happens, having a wiki page to point to would be great.
Re: [openstack-dev] [Climate] Weekly IRC team meeting
+1

On 24/10/2013 09:45, Sylvain Bauza wrote:
> Hi all,
> Climate is growing, and the time has come to hold a weekly meeting among all of us. There is a large number of reviews in progress, and at least the first agenda item will be triaging those: making sure they either land in trunk as soon as possible or are split into smaller chunks of code.
> The Icehouse summit is also coming, and I would like to take the opportunity to discuss any topics we could raise during the Summit.
> Is Monday 10:00 UTC [1] a convenient time for you?
> [1] http://www.timeanddate.com/worldclock/meetingdetails.html?year=2013&month=10&day=28&hour=10&min=0&sec=0&p1=195&p2=166
> -Sylvain

--
Swann Croiset
[openstack-dev] Cinder: create volume hold 'error' state.
Hi all,

I tried to create a volume in Horizon, but it reports an error message. Before that, I had taken the following actions:

1. stop_service tgt
2. mv /etc/init/tgt.conf /etc/init/tgt.conf.disabled
3. restart_service iscsitarget

The /var/log/cinder/* logs show:

-
cinder-api.log:2013-10-24 20:24:20 DEBUG [cinder.service] publish_errors : False
cinder-api.log:2013-10-24 20:24:20 DEBUG [cinder.service] fatal_exception_format_errors : False
cinder-api.log:2013-10-24 20:24:48 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 10, 24, 12, 24, 45), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'7ebff319-838a-4f09-807b-372be8b26c13', 'size': 1L, 'user_id': u'90b47b1766924e078ca9fc03e5153fd0', 'attach_time': None, 'display_description': u'', 'project_id': u'f822eef7155046a68d20d71f3c37ac43', 'launched_at': None, 'scheduled_at': datetime.datetime(2013, 10, 24, 12, 24, 45), 'status': u'error', 'volume_type_id': None, 'deleted': False, 'provider_location': None, 'volume_glance_metadata': [], 'host': u'SDE-main-controller', 'source_volid': None, 'provider_auth': None, 'display_name': u'12', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 10, 24, 12, 24, 44), 'attach_status': u'detached', 'volume_type': None}
cinder-scheduler.log:2013-10-24 20:24:20 DEBUG [cinder.service] publish_errors : False
cinder-scheduler.log:2013-10-24 20:24:20 DEBUG [cinder.service] fatal_exception_format_errors : False
cinder-volume.log:2013-10-24 20:24:45 ERROR [cinder.volume.manager] volume volume-7ebff319-838a-4f09-807b-372be8b26c13: create failed
cinder-volume.log:2013-10-24 20:24:45 ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling
cinder-volume.log:LOG.error(_("volume %s: create failed"), volume_ref['name'])
cinder-volume.log:ProcessExecutionError: Unexpected error while running command.
-

Has anybody seen the same state? Or has anyone resolved it?
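When triaging this kind of failure, it can help to pull the failing volume names out of the logs first. A small sketch — the sample line is copied from the output above, while the regular expression is an assumption about the log format, not a Cinder utility:

```python
import re

# Sketch: extract "<volume>: create failed" ERROR lines from a cinder-volume
# log. The sample line is taken from this thread; the regex is an assumed
# log format, not something shipped with Cinder.
sample = (
    "2013-10-24 20:24:45 ERROR [cinder.volume.manager] "
    "volume volume-7ebff319-838a-4f09-807b-372be8b26c13: create failed"
)

def failed_volumes(lines):
    pat = re.compile(r"ERROR \[cinder\.volume\.manager\] volume (\S+): create failed")
    return [m.group(1) for line in lines for m in [pat.search(line)] if m]

print(failed_volumes([sample]))  # ['volume-7ebff319-838a-4f09-807b-372be8b26c13']
```

In practice you would feed it the real log, e.g. `failed_volumes(open("/var/lib/cinder/cinder-volume.log"))` (path assumed), and then look at the ProcessExecutionError just after each hit — here that points at the tgt/iscsitarget commands affected by the service changes listed above.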
regards, Thanks, ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance
So, we have two places for configuration management: the database and the config file. The config file tunes all datastore type behavior during installation, and the database holds all configuration that is changeable during usage and administration of a Trove installation.

Database use cases:
- update/custom image
- update/custom packages
- activating/deactivating datastore_type

Config file use cases:
- security group policy
- provisioning mechanism
- guest configuration parameters per database engine
- provisioning parameters, templates
- manager class
...

Suppose I need to register one more MySQL installation with the following customization:
- custom heat template
- custom packages and an additional monitoring tool package
- a specific port opened on the instance for my monitoring tool

According to the current concept, should I add one more section in addition to the existing mysql one, like below, and put the additional packages into the database configuration?

[monitored_mysql]
mount_point=/var/lib/mysql
#8080 is the port of my monitoring tool
trove_security_group_rule_ports = 3306, 8080
heat_template=/etc/trove/heat_templates/monitored_mysql.yaml
...

With best regards, Ilya Sviridov http://www.mirantis.ru/

On Wed, Oct 23, 2013 at 9:37 PM, Michael Basnight mbasni...@gmail.com wrote: On Oct 23, 2013, at 10:54 AM, Ilya Sviridov wrote: Besides the strategy of selecting the default behavior, let me share with you my ideas on configuration management in Trove and how the datastore concept can help with that. Initially there was only one database and all configuration was in one config file. With the addition of new databases and the Heat provisioning mechanism, we are introducing more options: not only assigning a specific image_id, but custom packages, heat templates, and probably specific strategies for working with security groups. Such needs already exist because we have a lot of optional things in the config, and any new feature is implemented with an eye to already existing legacy installations of Trove.
What is datastore_type + datastore_version, actually? The model which glues all the bricks together, so let us use it for all variable parts of *service type* configuration. From the current config file:

# Trove DNS
trove_dns_support = False
# Trove Security Groups for Instances
trove_security_groups_support = True
trove_security_groups_rules_support = False
trove_security_group_rule_protocol = tcp
trove_security_group_rule_port = 3306
trove_security_group_rule_cidr = 0.0.0.0/0
#guest_config = $pybasedir/etc/trove/trove-guestagent.conf.sample
#cloudinit_location = /etc/trove/cloudinit
block_device_mapping = vdb
device_path = /dev/vdb
mount_point = /var/lib/mysql

All of these configurations can be moved to the datastore (some defined in heat templates) and made manageable by the operator whenever any default behavior should be changed. The trove config then becomes specific to core functionality only.

It's fine for it to be in the config or the heat templates… I'm not sure it matters. What I would like to see is that things specific to each service live in their own config group in the configuration:

[mysql]
mount_point=/var/lib/mysql
…

[redis]
volume_support=False
…

and so on.

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
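Michael's suggestion of one config group per service maps directly onto standard INI-style parsing. A hedged sketch with Python's stdlib configparser — the section and option names follow the examples in this thread, not Trove's actual schema:

```python
import configparser

# Hypothetical trove.conf fragment with one group per datastore, as
# suggested in the thread; all names here are illustrative.
conf_text = """
[mysql]
mount_point = /var/lib/mysql
trove_security_group_rule_ports = 3306

[monitored_mysql]
mount_point = /var/lib/mysql
# 8080 is the port of the hypothetical monitoring tool
trove_security_group_rule_ports = 3306, 8080
heat_template = /etc/trove/heat_templates/monitored_mysql.yaml

[redis]
volume_support = False
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)

def datastore_options(datastore):
    """Return all options for one datastore group as a plain dict."""
    return dict(parser.items(datastore))

print(datastore_options("monitored_mysql")["trove_security_group_rule_ports"])  # 3306, 8080
```

The point of the sketch: registering a customized variant like `monitored_mysql` is just one more section, while the lookup code stays identical for every datastore.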
Re: [openstack-dev] VPNaaS questions...
I put the client code out for review as WIP: https://review.openstack.org/#/c/53602/ Regards, PCM (Paul Michali) MAIL p...@cisco.com IRC pcm_ (irc.freenode.net) TW @pmichali On Oct 23, 2013, at 4:53 PM, Nachi Ueno na...@ntti3.com wrote: Hi Paul I rebased the patch, and working on unit testing too https://review.openstack.org/#/c/41827/ 2013/10/23 Paul Michali p...@cisco.com: See PCM: in-line. PCM (Paul Michali) MAIL p...@cisco.com IRC pcm_ (irc.freenode.net) TW @pmichali On Oct 23, 2013, at 9:41 AM, Akihiro Motoki amot...@gmail.com wrote: Hi Paul, On Wed, Oct 23, 2013 at 9:56 PM, Paul Michali p...@cisco.com wrote: Hi guys, Some questions on VPNaaS… Can we get the review reopened of the service type framework changes for VPN on the server side? I was thinking of trying to rebase that patch, based on the latest from master, but before doing so, I ran TOX on the latest master commit. TOX fails with a bunch of errors, some reporting that the system is out of memory. I have a 4GB Ubuntu 12.04 VM for this and I see it max out on memory, when TOX is run on the whole Neutron code for py27. Anyone seen this? I see this too. On 4GB Ubuntu 13.04 VM, I have over 1GB swap while running the whole test and the test slows down after swap begins…. PCM: Whew! I was worried that it was something in my setup. Any idea on a root cause/workaround? Is this happening when Jenkins runs? I have tried the current patch of service type framework, and found that client changes are needed too. I have changes ready for review, should I post them, or do we need to wait (or indicate some dependency on the server side changes)? My suggestion is to post a patch with WIP status. We can test the server side patch with CLI. It really helps us all. PCM: Thanks! I wasn't sure how to proceed as the client change is useless w/o the server change. Yeah, please push wip :) I see that there is VPN connection status and VPN service status. What is the purpose of the latter? 
What is the status if the service has multiple connections in different states? I see the same. PCM: Yeah, need to understand what the desired meaning is for the service status in this context. In the openswan implementation, the vpnservice state is the state of the openswan process, and the ipsec-site-connection state is the actual connection state. So let's say we have two sites: the vpnservice will be ACTIVE and one ipsec-site-connection's state will be DOWN after we set up only one site. Have you guys tried VPNaaS with Havana and the now-default ML2 plugin? I got a failure on connection create, saying that it could not find the get_l3_agents_hosting_routers() attribute. I haven't looked into this yet, but will try as soon as I can. I think https://bugs.launchpad.net/neutron/+bug/1238846 is the same as what you encountered. I believe this bug was fixed in the final RC. Doesn't it work? PCM: Ah, I missed that bug review. I probably need to update my repo with the latest to pick this up. Thanks! Regards, PCM Thanks, Akihiro Thanks! PCM (Paul Michali) Contact info for Cisco users http://twiki.cisco.com/Main/pcm ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
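The semantics Nachi describes — service status tracking the VPN process, connection status tracking each tunnel independently — can be sketched as a tiny aggregation helper. This is purely illustrative, not Neutron's actual code:

```python
def aggregate_status(process_running, connection_states):
    """Mirror the reported semantics: the service is ACTIVE whenever the
    (hypothetical) VPN process runs, regardless of per-tunnel state.

    `connection_states` maps a site-connection name to its own status.
    """
    service = "ACTIVE" if process_running else "DOWN"
    return service, dict(connection_states)

# Two sites configured, but only one tunnel actually established:
service, conns = aggregate_status(
    process_running=True,
    connection_states={"site-a": "ACTIVE", "site-b": "DOWN"},
)
print(service, conns["site-b"])  # ACTIVE DOWN
```

This is exactly the situation in the thread: a vpnservice reporting ACTIVE while one ipsec-site-connection is still DOWN is consistent, not a bug.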
Re: [openstack-dev] Remove vim modelines?
On 10/24/2013 08:38 AM, Joe Gordon wrote: Since the beginning of OpenStack we have had vim modelines all over the codebase, but after seeing this patch https://review.openstack.org/#/c/50891/ I took a further look into vim modelines and think we should remove them. Before going any further, I should point out these lines don't bother me too much, but I figured if we could get consensus, then we could shrink our codebase by a little bit. Sidenote: This discussion is being moved to the mailing list because it 'would be better to have a mailing list thread about this rather than bits and pieces of discussion in gerrit' as this change requires multiple patches. https://review.openstack.org/#/c/51295/.

Why remove them?
* Modelines aren't supported by default in debian or ubuntu due to security reasons: https://wiki.python.org/moin/Vim
* Having modelines for vim means that if someone wants, we should support modelines for emacs (http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables) etc. as well. And having a bunch of headers for different editors in each file seems like extra overhead.
* There are other ways of making sure tabstop is set correctly for python files, see https://wiki.python.org/moin/Vim. I am a Vim user myself and have never used modelines.
* We have vim modelines in only 828 out of 1213 python files in nova (68%), so if anyone is using modelines today, then it only works 68% of the time in nova.
* Why have the same config 828 times for one repo alone? This violates the DRY principle (Don't Repeat Yourself).

Related Patches:
https://review.openstack.org/#/c/51295/
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z

I agree with everything - both not caring about this topic really, and that we should just kill them and be done with it. Luckily, this is a super easy global search and replace.
Also, since we gate on pep8, if your editor is configured incorrectly, you'll figure it out soon enough. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
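For anyone auditing a tree for the "828 out of 1213 files" kind of statistic quoted above, counting files that carry a vim modeline is a few lines of stdlib Python. The regex and the sample file contents here are illustrative:

```python
import re

# Matches a conventional vim modeline comment at the start of a line,
# e.g. "# vim: tabstop=4 shiftwidth=4 softtabstop=4".
MODELINE = re.compile(r"^#\s*vim:", re.IGNORECASE)

def has_modeline(source):
    """True if any line of `source` looks like a vim modeline comment."""
    return any(MODELINE.match(line) for line in source.splitlines())

# Two hypothetical file bodies instead of a real checkout:
files = {
    "with.py": "# vim: tabstop=4 shiftwidth=4 softtabstop=4\nimport os\n",
    "without.py": "import sys\n",
}
count = sum(1 for body in files.values() if has_modeline(body))
print(count, len(files))  # 1 2
```

Against a real repository one would walk `*.py` files with `os.walk` and feed each file's contents to `has_modeline`; the same predicate could back a hacking check so the lines never return.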
Re: [openstack-dev] Remove vim modelines?
+1 to remove them. -- dims On Thu, Oct 24, 2013 at 8:44 AM, Monty Taylor mord...@inaugust.com wrote: On 10/24/2013 08:38 AM, Joe Gordon wrote: Since the beginning of OpenStack we have had vim modelines all over the codebase, but after seeing this patch https://review.openstack.org/#/c/50891/ I took a further look into vim modelines and think we should remove them. Before going any further, I should point out these lines don't bother me too much, but I figured if we could get consensus, then we could shrink our codebase by a little bit. Sidenote: This discussion is being moved to the mailing list because it 'would be better to have a mailing list thread about this rather than bits and pieces of discussion in gerrit' as this change requires multiple patches. https://review.openstack.org/#/c/51295/. Why remove them? * Modelines aren't supported by default in debian or ubuntu due to security reasons: https://wiki.python.org/moin/Vim * Having modelines for vim means that if someone wants, we should support modelines for emacs (http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables) etc. as well. And having a bunch of headers for different editors in each file seems like extra overhead. * There are other ways of making sure tabstop is set correctly for python files, see https://wiki.python.org/moin/Vim. I am a Vim user myself and have never used modelines. * We have vim modelines in only 828 out of 1213 python files in nova (68%), so if anyone is using modelines today, then it only works 68% of the time in nova * Why have the same config 828 times for one repo alone? This violates the DRY principle (Don't Repeat Yourself).
Related Patches: https://review.openstack.org/#/c/51295/ https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z I agree with everything - both not caring about this topic really, and that we should just kill them and be done with it. Luckily, this is a super easy global search and replace. Also, since we gate on pep8, if your editor is configured incorrectly, you'll figure it out soon enough. -- Davanum Srinivas :: http://davanum.wordpress.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] HOT Software configuration proposal
Hi all, maybe a bit off track with respect to the latest concrete discussions, but I noticed the announcement of project Solum on openstack-dev. Maybe this is playing on a different level, but I still see some relation to all the software orchestration work we are doing. What are your opinions on this? BTW, I just posted a similar short question in reply to the Solum announcement mail, but some of us have mail filters and might read [Heat] mail with higher priority, and I was interested in the Heat view. Cheers, Thomas Patrick Petit patrick.pe...@bull.net wrote on 24.10.2013 12:15:13: From: Patrick Petit patrick.pe...@bull.net To: OpenStack Development Mailing List openstack-dev@lists.openstack.org, Date: 24.10.2013 12:18 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal Sorry, I clicked the 'send' button too quickly. On 10/24/13 11:54 AM, Patrick Petit wrote: Hi Clint, Thank you! I have a few replies/questions in-line. Cheers, Patrick On 10/23/13 8:36 PM, Clint Byrum wrote: Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700: Dear Steve and All, If I may add to this already busy thread, I'd like to share our experience with using Heat in large and complex software deployments. Thanks for sharing Patrick, I have a few replies in-line. I work on a project which precisely provides additional value at the articulation point between resource orchestration automation and configuration management. We rely on Heat and chef-solo respectively for these base management functions. On top of this, we have developed an event-driven workflow to manage the life-cycles of complex software stacks whose primary purpose is to support middleware components as opposed to end-user apps. Our use cases are peculiar in the sense that software setup (install, config, contextualization) is not a one-time operation but a continuous thing that can happen at any time in the life-span of a stack. Users can deploy (and undeploy) apps long after the stack is created.
Auto-scaling may also result in asynchronous app deployment. More about this later. The framework we have designed works well for us. It clearly refers to a PaaS-like environment, which I understand is not the topic of the HOT software configuration proposal(s), and that's absolutely fine with us. However, the question for us is whether the separation of software config from resources would make our life easier or not. I think the answer is definitely yes, but on the condition that the DSL extension preserves almost everything from the expressiveness of the resource element. In practice, I think that a strict separation between resource and component will be hard to achieve because we'll always need a little bit of application specifics in the resources. Take for example the case of SecurityGroups: the ports open in a SecurityGroup are application specific. Components can only be made up of the things that are common to all users of said component. Also components would, if I understand the concept correctly, just be for things that are at the sub-resource level. Security groups and open ports would be across multiple resources, and thus would be separately specified from your app's component (though it might be useful to allow components to export static values so that the port list can be referred to along with the app component). Okay, got it. If that's the case then that would work. Then, designing a Chef or Puppet component type may be harder than it looks at first glance. Speaking of our use cases, we still need a little bit of scripting in the instance's user-data block to set up a working chef-solo environment. For example, we run librarian-chef prior to starting chef-solo to resolve the cookbook dependencies. A cookbook can present itself as a downloadable tarball, but that's not always the case.
A chef component type would have to support getting a cookbook from a public or private git repo (maybe subversion), handle situations where there is one cookbook per repo or multiple cookbooks per repo, let the user choose a particular branch or label, provide ssh keys if it's a private repo, and so forth. We support all of these scenarios and so we can provide more detailed requirements if needed. Correct me if I'm wrong though, all of those scenarios are just variations on standard inputs into chef. So the chef component really just has to allow a way to feed data to chef. That's correct. It boils down to correctly specifying all the constraints that apply to deploying a cookbook in an instance from its component description. I am not sure adding component relations like 'depends-on' would really help us, since it is the job of config management to handle software dependencies. Also, it doesn't address the issue of circular dependencies. Circular dependencies occur in complex software
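The circular-dependency concern above can be made concrete with a small sketch: a plain 'depends-on' relation is just a topological sort over the dependency graph, and it simply has no answer once a cycle appears. Cookbook names below are hypothetical:

```python
def topo_order(deps):
    """Return an install order for `deps` (name -> list of prerequisites),
    or None when the graph contains a cycle (depth-first back-edge check)."""
    order, state = [], {}  # state: 1 = visiting, 2 = done

    def visit(node):
        if state.get(node) == 1:
            return False          # back edge: cycle detected
        if state.get(node) == 2:
            return True
        state[node] = 1
        for dep in deps.get(node, []):
            if not visit(dep):
                return False
        state[node] = 2
        order.append(node)
        return True

    for node in deps:
        if not visit(node):
            return None
    return order

print(topo_order({"app": ["db"], "db": []}))   # ['db', 'app']
print(topo_order({"a": ["b"], "b": ["a"]}))    # None: circular, no valid order
```

This is why handing dependency resolution to the config-management tool (which can converge iteratively) is attractive: an orchestration-level 'depends-on' must either reject such graphs or invent extra semantics for them.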
Re: [openstack-dev] Announcing Project Solum
Hi Adrian, really interesting! I wonder what the relation is to all the software orchestration discussions in Heat that have been going on for a while now. Regards, Thomas Adrian Otto adrian.o...@rackspace.com wrote on 23.10.2013 21:03:10: From: Adrian Otto adrian.o...@rackspace.com To: OpenStack Development Mailing List openstack-dev@lists.openstack.org, Date: 23.10.2013 21:07 Subject: [openstack-dev] Announcing Project Solum OpenStack, OpenStack has emerged as the preferred choice for open cloud software worldwide. We use it to power our cloud, and we love it. We’re proud to be a part of growing its capabilities to address more needs every day. When we ask customers, partners, and community members about what problems they want to solve next, we have consistently found a few areas where OpenStack has room to grow in addressing the needs of software developers:
1) Ease of application development and deployment via integrated support for Git, CI/CD, and IDEs
2) Ease of application lifecycle management across dev, test, and production types of environments -- supported by the Heat project’s automated orchestration (resource deployment, monitoring-based self-healing, auto-scaling, etc.)
3) Ease of application portability between public and private clouds -- with no vendor-driven requirements within the application stack or control system
Along with eBay, RedHat, Ubuntu/Canonical, dotCloud/Docker, Cloudsoft, and Cumulogic, we at Rackspace are happy to announce we have started project Solum as an OpenStack Related open source project. Solum is a community-driven initiative currently in its open design phase amongst the seven contributing companies, with more to come. We plan to leverage the capabilities already offered in OpenStack in addressing these needs so anyone running an OpenStack cloud can make it easier to use for developers.
By leveraging your existing OpenStack cloud, the aim of Project Solum is to reduce the number of services you need to manage in tackling these developer needs. You can use all the OpenStack services you already run instead of standing up overlapping, vendor-specific capabilities to accomplish this. We welcome you to join us to build this exciting new addition to the OpenStack ecosystem.
Project Wiki: https://wiki.openstack.org/wiki/Solum
Launchpad Project: https://launchpad.net/solum
IRC: Public IRC meetings are held on Tuesdays 1600 UTC, irc://irc.freenode.net:6667/solum
Thanks, Adrian Otto ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Remove vim modelines?
On 24/10/13 13:38 +0100, Joe Gordon wrote: Since the beginning of OpenStack we have had vim modelines all over the codebase, but after seeing this patch https://review.openstack.org/#/c/50891/ I took a further look into vim modelines and think we should remove them. Before going any further, I should point out these lines don't bother me too much, but I figured if we could get consensus, then we could shrink our codebase by a little bit. Sidenote: This discussion is being moved to the mailing list because it 'would be better to have a mailing list thread about this rather than bits and pieces of discussion in gerrit' as this change requires multiple patches. https://review.openstack.org/#/c/51295/. Why remove them? * Modelines aren't supported by default in debian or ubuntu due to security reasons: https://wiki.python.org/moin/Vim * Having modelines for vim means that if someone wants, we should support modelines for emacs (http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables) etc. as well. And having a bunch of headers for different editors in each file seems like extra overhead. * There are other ways of making sure tabstop is set correctly for python files, see https://wiki.python.org/moin/Vim. I am a Vim user myself and have never used modelines. * We have vim modelines in only 828 out of 1213 python files in nova (68%), so if anyone is using modelines today, then it only works 68% of the time in nova * Why have the same config 828 times for one repo alone? This violates the DRY principle (Don't Repeat Yourself). Related Patches: https://review.openstack.org/#/c/51295/ https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z /me is a vim user! +1 on removing those lines!
best, Joe -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Remove vim modelines?
+1 on the topic. How about we catch them in hacking so that they won't ever come back? On Thu, Oct 24, 2013 at 4:53 PM, Davanum Srinivas dava...@gmail.com wrote: +1 to remove them. -- dims On Thu, Oct 24, 2013 at 8:44 AM, Monty Taylor mord...@inaugust.com wrote: On 10/24/2013 08:38 AM, Joe Gordon wrote: Since the beginning of OpenStack we have had vim modelines all over the codebase, but after seeing this patch https://review.openstack.org/#/c/50891/ I took a further look into vim modelines and think we should remove them. Before going any further, I should point out these lines don't bother me too much, but I figured if we could get consensus, then we could shrink our codebase by a little bit. Sidenote: This discussion is being moved to the mailing list because it 'would be better to have a mailing list thread about this rather than bits and pieces of discussion in gerrit' as this change requires multiple patches. https://review.openstack.org/#/c/51295/. Why remove them? * Modelines aren't supported by default in debian or ubuntu due to security reasons: https://wiki.python.org/moin/Vim * Having modelines for vim means that if someone wants, we should support modelines for emacs (http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables) etc. as well. And having a bunch of headers for different editors in each file seems like extra overhead. * There are other ways of making sure tabstop is set correctly for python files, see https://wiki.python.org/moin/Vim. I am a Vim user myself and have never used modelines. * We have vim modelines in only 828 out of 1213 python files in nova (68%), so if anyone is using modelines today, then it only works 68% of the time in nova * Why have the same config 828 times for one repo alone? This violates the DRY principle (Don't Repeat Yourself).
Related Patches: https://review.openstack.org/#/c/51295/ https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z I agree with everything - both not caring about this topic really, and that we should just kill them and be done with it. Luckily, this is a super easy global search and replace. Also, since we gate on pep8, if your editor is configured incorrectly, you'll figure it out soon enough. -- Davanum Srinivas :: http://davanum.wordpress.com -- Kind regards, Yuriy. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Cinder: create volume hold 'error' state.
Can you pastebin the full cinder-volume.log please, from the moment the create RPC comes in until the error (or just the full file if that is easier)?

On 24 October 2013 13:36, ifzing ifz...@126.com wrote: Hi all, I want create volume in horizon, but it report error msg. And I have following action, 1. stop_service tgt 2. mv /etc/init/tgt.conf /etc/init/tgt.conf.disabled 3. restart_service iscsitarget And /var/log/cinder/* log is:
-
cinder-api.log:2013-10-24 20:24:20 DEBUG [cinder.service] publish_errors : False
cinder-api.log:2013-10-24 20:24:20 DEBUG [cinder.service] fatal_exception_format_errors : False
cinder-api.log:2013-10-24 20:24:48 AUDIT [cinder.api.v1.volumes] vol={'volume_metadata': [], 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2013, 10, 24, 12, 24, 45), 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'7ebff319-838a-4f09-807b-372be8b26c13', 'size': 1L, 'user_id': u'90b47b1766924e078ca9fc03e5153fd0', 'attach_time': None, 'display_description': u'', 'project_id': u'f822eef7155046a68d20d71f3c37ac43', 'launched_at': None, 'scheduled_at': datetime.datetime(2013, 10, 24, 12, 24, 45), 'status': u'error', 'volume_type_id': None, 'deleted': False, 'provider_location': None, 'volume_glance_metadata': [], 'host': u'SDE-main-controller', 'source_volid': None, 'provider_auth': None, 'display_name': u'12', 'instance_uuid': None, 'created_at': datetime.datetime(2013, 10, 24, 12, 24, 44), 'attach_status': u'detached', 'volume_type': None}
cinder-scheduler.log:2013-10-24 20:24:20 DEBUG [cinder.service] publish_errors : False
cinder-scheduler.log:2013-10-24 20:24:20 DEBUG [cinder.service] fatal_exception_format_errors : False
cinder-volume.log:2013-10-24 20:24:45 ERROR [cinder.volume.manager] volume volume-7ebff319-838a-4f09-807b-372be8b26c13: create failed
cinder-volume.log:2013-10-24 20:24:45 ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling
cinder-volume.log:LOG.error(_("volume %s: create failed"), volume_ref['name'])
cinder-volume.log:ProcessExecutionError: Unexpected error while running command.
-
Any body meet the same state? or, Any one have resolved it? regards, Thanks,
-- Duncan Thomas ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Remove vim modelines?
On 10/24/2013 08:38 AM, Joe Gordon wrote: Why remove them? * Modelines aren't supported by default in debian or ubuntu due to security reasons: https://wiki.python.org/moin/Vim * Having modelines for vim means that if someone wants, we should support modelines for emacs (http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables) etc. as well. And having a bunch of headers for different editors in each file seems like extra overhead. * There are other ways of making sure tabstop is set correctly for python files, see https://wiki.python.org/moin/Vim. I am a Vim user myself and have never used modelines. * We have vim modelines in only 828 out of 1213 python files in nova (68%), so if anyone is using modelines today, then it only works 68% of the time in nova * Why have the same config 828 times for one repo alone? This violates the DRY principle (Don't Repeat Yourself). Another +1 from a Vim user. These patches are No Fun to review, so anyone who wants these gone, please pitch in. -- David Ripton Red Hat drip...@redhat.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps
It seems like adopting 0.1.8 is the right approach. If it doesn't work with other projects, we should work to help those projects get updated to work with it. --Morgan On Thursday, October 24, 2013, Zhi Yan Liu wrote: Hi all, Adopt 0.1.8 as the iso8601 minimum version: https://review.openstack.org/#/c/53567/ zhiyan On Thu, Oct 24, 2013 at 4:09 AM, Dolph Mathews dolph.math...@gmail.com wrote: On Wed, Oct 23, 2013 at 2:30 PM, Robert Collins robe...@robertcollins.net wrote: On 24 October 2013 07:34, Mark Washenberger mark.washenber...@markwash.net wrote: Hi folks! 1) Adopt 0.1.8 as the minimum version in openstack-requirements. 2) Do nothing (i.e. let Glance behavior depend on iso8601 in this way, and just fix the tests so they don't care about these extra formats) 3) Make Glance work with the added formats even if 0.1.4 is installed. I think we should do (1), because (2) will permit surprising, nonobvious changes in behaviour and (3) is just nasty engineering. Alternatively, add a (4) which is (2) plus a whinge on startup if 0.1.4 is installed, to make identifying this situation easy. I'm in favor of (1), unless there's a reason why 0.1.8 is not viable for another project or packager, in which case, I've never heard the term whinge before so there should definitely be some of that. The last thing a new / upgraded deployment wants is something like nova, or a third-party API script, failing in nonobvious ways with no breadcrumbs to lead them to 'upgrade iso8601' as an answer.
-Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud -- -Dolph ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
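Option (3) — making Glance tolerate the extra formats even with the old library installed — was called "nasty engineering" above because the caller ends up re-implementing format tolerance itself. A stdlib-only sketch of what that looks like; the format list is an assumption for illustration, not the actual set of formats iso8601 0.1.8 added:

```python
from datetime import datetime

# Hypothetical set of accepted layouts; a real fix would defer to the
# iso8601 library rather than hand-roll this list.
FORMATS = ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S", "%Y-%m-%d")

def parse_tolerant(text):
    """Try several ISO-8601-ish layouts in turn; raise ValueError if none fit."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized timestamp: %r" % text)

print(parse_tolerant("2013-10-24").date())  # 2013-10-24
```

Every consumer carrying a shim like this is exactly the duplicated, behavior-masking code that pinning a single minimum library version avoids — hence the preference for option (1).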
Re: [openstack-dev] Remove vim modelines?
+1 and likely this should be added to hacking so they don't sneak back in by accident / reviewers missing the line since we've had them for do long. On Thursday, October 24, 2013, Flavio Percoco wrote: On 24/10/13 13:38 +0100, Joe Gordon wrote: Since the beginning of OpenStack we have had vim modelines all over the codebase, but after seeing this patch https://review.opeenstack.org/** #/c/50891/ https://review.opeenstack.org/#/c/50891/ I took a further look into vim modelines and think we should remove them. Before going any further, I should point out these lines don't bother me too much but I figured if we could get consensus, then we could shrink our codebase by a little bit. Sidenote: This discussion is being moved to the mailing list because it 'would be better to have a mailing list thread about this rather than bits and pieces of discussion in gerrit' as this change requires multiple patches. https:// review.openstack.org/#/c/**51295/http://review.openstack.org/#/c/51295/ . Why remove them? * Modelines aren't supported by default in debian or ubuntu due to security reasons: https://wiki.python.org/moin/**Vimhttps://wiki.python.org/moin/Vim * Having modelines for vim means if someone wants we should support modelines for emacs (http://www.gnu.org/software/**emacs/manual/html_mono/emacs.** html# http://www.gnu.org/software/emacs/manual/html_mono/emacs.html# Specifying-File-Variables) etc. as well. And having a bunch of headers for different editors in each file seems like extra overhead. * There are other ways of making sure tabstop is set correctly for python files, see https://wiki.python.org/moin/**Vimhttps://wiki.python.org/moin/Vim. I am a vIm user myself and have never used modelines. * We have vim modelines in only 828 out of 1213 python files in nova (68%), so if anyone is using modelines today, then it only works 68% of the time in nova * Why have the same config 828 times for one repo alone? This violates the DRY principle (Don't Repeat Yourself). 
Related Patches:
https://review.openstack.org/#/c/51295/
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z

/me is a vim user! +1 on removing those lines!

best, Joe

--
@flaper87
Flavio Percoco

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
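The "add it to hacking" suggestion above could look roughly like the following sketch of a hacking-style physical-line check. The function name, regex, and the N3xx error code are made up for illustration; a real hacking check would register an actual project-specific code:

```python
import re

# Matches the classic vim modeline header, e.g. "# vim: tabstop=4 shiftwidth=4"
VIM_MODELINE_RE = re.compile(r'#\s*vim:')


def no_vim_modeline(physical_line):
    """Flag vim modelines so they can't sneak back into the codebase.

    Hacking-style checks return (column, message) on a violation and
    None when the line is clean.
    """
    if VIM_MODELINE_RE.search(physical_line):
        return 0, 'N3xx: vim modeline detected; editor config does not belong in source files'
```

Hooked into flake8 via hacking's entry points, this would fail any review that tries to reintroduce a modeline.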
Re: [openstack-dev] [Nova] Blueprint review process
In the last meeting we discussed an idea that I think is worth trying at least for icehouse-1 to see if we like it or not. The idea is that *every* blueprint starts out at a Low priority, which means best effort, but no promises. For a blueprint to get prioritized higher, it should have 2 nova-core members signed up to review the resulting code. Huge +1 to this. I'm in favor of the whole plan, but specifically the prioritization piece is very important, IMHO. --Dan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] LBaaS: should we check for associations when deleting health monitor?
Hi, Currently in LBaaS, when a health monitor object is deleted, all associations with pools are deleted automatically. A bug was reported recently in Neutron where the reporter considers this behavior wrong: https://bugs.launchpad.net/neutron/+bug/1243129 I have no strong opinion, so I'd like to hear others' thoughts on this (devs and vendors). One option may be to add a kind of 'force_delete' parameter to the delete operation, but that would require changes in api/docs/neutronclient/horizon, which is probably overkill. Please add your comments on the bug in Launchpad. Thanks, Oleg ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Nova] Stable/Havana Gate broken
Hi, The gate for stable Havana is broken in Nova with the following error (full log: http://logs.openstack.org/95/53595/1/check/check-tempest-devstack-vm-full/14c606f/console.html):

FAIL: setUpClass (tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern)
----------------------------------------------------------------------
_StringException: Traceback (most recent call last):
  File "tempest/scenario/manager.py", line 204, in setUpClass
    cls.manager = OfficialClientManager(username, password, tenant_name)
  File "tempest/scenario/manager.py", line 69, in __init__
    tenant_name)
  File "tempest/scenario/manager.py", line 101, in _get_compute_client
    http_log_debug=True)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 450, in Client
    client_class = get_client_class(version)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 446, in get_client_class
    return utils.import_class(client_path)
  File "/opt/stack/new/python-novaclient/novaclient/utils.py", line 336, in import_class
    __import__(mod_str)
  File "/opt/stack/new/python-novaclient/novaclient/v1_1/__init__.py", line 17, in <module>
    from novaclient.v1_1.client import Client  # noqa
  File "/opt/stack/new/python-novaclient/novaclient/v1_1/client.py", line 18, in <module>
    from novaclient.v1_1 import agents
  File "/opt/stack/new/python-novaclient/novaclient/v1_1/agents.py", line 22, in <module>
    from novaclient import base
  File "/opt/stack/new/python-novaclient/novaclient/base.py", line 166, in <module>
Re: [openstack-dev] RFC - Icehouse logging harmonization
Example 1: n-conductor log in tempest/devstack - http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz
Total log lines: 84076
Total non-DEBUG lines: 61
Question: do we need more than 1 level of DEBUG? 3 orders of magnitude information change between INFO -> DEBUG seems too steep a cliff.

Some of them are not useful to me (but might be to others), like the amqp channel lines. However, everything else has been pretty crucial at one point or another when debugging issues that span between the two tightly-coupled services.

--Dan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Warning, entering meeting time confusion zone
This is your semestrial public service announcement. This Sunday a large chunk of the world will drop DST, while most countries in North America will not (until November 3). This generally results in widespread confusion and chaos until things settle a few weeks later. Remember *our meetings are all set in UTC time*, which does not observe any DST. Therefore your favorite meeting time may or may not change next week. In doubt, check UTC times on https://wiki.openstack.org/wiki/Meetings and convert them to whatever timezone you are in using things like http://www.timeanddate.com/worldclock/fixedtime.html?hour=21&min=0&sec=0 First one to miss a meeting will be considered time-impaired and forced to wear a UTC watch at all times. -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] RFC - Icehouse logging harmonization
On Thu, Oct 24, 2013 at 07:05:19AM -0700, Dan Smith wrote: Some of them are not useful to me (but might be to others), like the amqp channel lines. However, everything else has been pretty crucial at one point or another when debugging issues that span between the two tightly-coupled services.

I am completely unfamiliar with the code, so I apologize if these are dumb questions:

- Is everything making use of Python's logging module?
- Would this be a good use case for that module's support of file-based configuration (http://docs.python.org/2/howto/logging.html#configuring-logging)?

This would let a cloud deployer have much more granular control over what log messages show up (and even what log messages go where). For example, maybe I don't care about messages from quantum.openstack.common.rpc.impl_qpid, and I generally only want to log WARN and above, but I want to see DEBUG messages for quantum.plugins.openvswitch.agent.ovs_quantum_agent. Or is that Too Much Flexibility?

-- Lars Kellogg-Stedman l...@redhat.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
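To make the question concrete, here is a minimal sketch of the stdlib's file-based configuration via logging.config.fileConfig. The logger names are the ones from the example above; the file layout, levels, and handler choices are purely illustrative, not taken from any real deployment:

```python
import logging
import logging.config
import tempfile

# Default everything to WARNING, but turn on DEBUG for one specific agent
# logger -- exactly the per-logger control described in the email.
LOGGING_CONF = """\
[loggers]
keys = root, ovs_agent

[handlers]
keys = console

[formatters]
keys = plain

[logger_root]
level = WARNING
handlers = console

[logger_ovs_agent]
level = DEBUG
handlers = console
qualname = quantum.plugins.openvswitch.agent.ovs_quantum_agent
propagate = 0

[handler_console]
class = StreamHandler
level = NOTSET
formatter = plain
args = (sys.stderr,)

[formatter_plain]
format = %(name)s %(levelname)s %(message)s
"""

# Write the config to a file and load it, as a deployer would.
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write(LOGGING_CONF)
    conf_path = f.name

logging.config.fileConfig(conf_path)

# The agent logger now emits DEBUG; the qpid logger stays at WARNING.
agent = logging.getLogger('quantum.plugins.openvswitch.agent.ovs_quantum_agent')
qpid = logging.getLogger('quantum.openstack.common.rpc.impl_qpid')
print(agent.isEnabledFor(logging.DEBUG), qpid.isEnabledFor(logging.DEBUG))
```

A deployer could then point the service at a different file (or edit this one) without touching any code.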
Re: [openstack-dev] Disable async network allocation
Yep, that was the feature I was referring to. As I said, I don't have anything definite that shows this to be not working (and the code looks fine) - just wanted to try and simplify the world a bit for a while. -----Original Message----- From: Melanie Witt [mailto:melw...@yahoo-inc.com] Sent: 24 October 2013 02:48 To: OpenStack Development Mailing List Subject: Re: [openstack-dev] Disable async network allocation On Oct 23, 2013, at 5:56 PM, Aaron Rosen aro...@nicira.com wrote: I believe he's referring to: https://github.com/openstack/nova/blob/master/nova/network/model.py#L335 https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1211 I found some more background on the feature (not configurable) which might help in trying to revert it for testing. https://blueprints.launchpad.net/nova/+spec/async-network-alloc There was also the addition of a config option 'network_allocate_retries' which defaults to 0: https://review.openstack.org/#/c/34473/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Disable async network allocation
This is quite an interesting finding. So if we use httplib, this won't happen? That's my understanding. It also looks like you might be able to configure the retries in later versions of httplib2. -----Original Message----- From: Nachi Ueno [mailto:na...@ntti3.com] Sent: 24 October 2013 00:38 To: OpenStack Development Mailing List Subject: Re: [openstack-dev] Disable async network allocation Hi Phil 2013/10/21 Day, Phil philip@hp.com: Hi Folks, I'm trying to track down a couple of obscure issues in network port creation where it would be really useful if I could disable the async network allocation so that everything happens in the context of a single eventlet rather than two (and also rule out whether there is some obscure eventlet threading issue in here). I thought it was configurable - but I don't see anything obvious in the code to go back to the old (slower) approach of doing network allocation in-line in the main create thread? May I ask the meaning of async network allocation? One of the issues I'm trying to track is Neutron occasionally creating more than one port - I suspect a retry mechanism in httplib2 is sending the port create request multiple times if Neutron is slow to reply, resulting in Neutron processing it multiple times. Looks like only the Neutron client has chosen to use httplib2 rather than httplib - anyone got any insight here? This is quite an interesting finding. So if we use httplib, this won't happen? Sometimes of course the Neutron timeout results in the create request being re-scheduled onto another node (which can in turn generate its own set of port create requests). It's the thread behavior around how the timeout exception is handled that I'm slightly nervous of (some of the retries seem to occur after the original network thread should have terminated). I agree. This kind of unintentional retry causes issues.
Thanks Phil Best Nachi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] OpenStack Diagnostics project proposal
Hey folks! I'm totally excited to present to you a proposal for an OpenStack Diagnostics project (codenamed Rubick). This project aims to provide a simple and convenient tool (or a few tools) for OpenStack cloud operators to inspect and validate the consistency and correctness of the configuration of their clouds. The 'configuration' I'm talking about is not limited to parameters in configuration files across all components of the platform, but also includes environment parameters (like hardware or network configurations). Look for the extended project proposal document, as well as some additional notes and suggestions, on the OpenStack Wiki: https://wiki.openstack.org/wiki/Rubick. Source code is on GitHub: https://github.com/MirantisLabs/rubick. We've also recorded a short video which shows the basic workflow and gives some reasoning behind the project: http://www.youtube.com/watch?v=zTfRopx5bcA We're looking forward to your feedback. And if you want to improve it or integrate it with other tools and initiatives - reach out to us; we always welcome new collaborators. -- Best regards, Oleg Gelbukh Mirantis Labs ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] RFC - Icehouse logging harmonization
On Thu, Oct 24, 2013 at 3:24 PM, Lars Kellogg-Stedman l...@redhat.com wrote: On Thu, Oct 24, 2013 at 07:05:19AM -0700, Dan Smith wrote: Some of them are not useful to me (but might be to others), like the amqp channel lines. However, everything else has been pretty crucial at one point or another when debugging issues that span between the two tightly-coupled services. I am completely unfamiliar with the code, so I apologize if these are dumb questions: - Is everything making use of Python's logging module? AFAIK yes, and if not we should be. - Would this be a good use case for that module's support of file-based configuration (http://docs.python.org/2/howto/logging.html#configuring-logging) Yes, we do that in nova already and should do it in neutron as well, see http://git.openstack.org/cgit/openstack/nova/tree/etc/nova/nova.conf.sample#n1408 http://git.openstack.org/cgit/openstack/nova/tree/etc/nova/logging_sample.conf This would let a cloud deployer have much more granular control over what log messages show up (and even what log messages go where). For example, maybe I don't care about messages from quantum.openstack.common.rpc.impl_qpid, and I generally only want to log WARN and above, but I want to see DEBUG messages for quantum.plugins.openvswitch.agent.ovs_quantum_agent. Or is that Too Much Flexibility? While I think giving the deployer such fine-grained control is a good idea, we want to include sane defaults when possible. -- Lars Kellogg-Stedman l...@redhat.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Blueprint review process
On 10/24/13 4:46 PM, Dan Smith d...@danplanet.com wrote: In the last meeting we discussed an idea that I think is worth trying at least for icehouse-1 to see if we like it or not. The idea is that *every* blueprint starts out at a Low priority, which means best effort, but no promises. For a blueprint to get prioritized higher, it should have 2 nova-core members signed up to review the resulting code. Huge +1 to this. I'm in favor of the whole plan, but specifically the prioritization piece is very important, IMHO. I too am in favor of the idea. It is just not clear how 2 Nova cores will be signed up. --Dan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Announcing Project Solum
On 10/24/2013 08:51 AM, Thomas Spatzier wrote: Hi Adrian, really interesting! I wonder what the relation is to all the software orchestration in Heat discussions that have been going on for a while now. It's a good question. Personally, I would expect Heat to be a key element of how Solum works. There are a number of things Solum needs to do on top of Heat, though, such as the git integration bit. -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Blueprint review process
On 10/24/2013 07:43 AM, Joe Gordon wrote: On Wed, Oct 23, 2013 at 4:33 PM, Russell Bryant rbry...@redhat.com mailto:rbry...@redhat.com wrote: Here is a first cut at the process. Let me know what you think is missing or should change. I'll get the result of this thread posted on the wiki. ++ to the entire process, two comments though. * It would be great to get this into a wiki for future reference * We shouldn't merge patches with un-approved blueprints. And when that happens having a wiki page to point to would be great. Yep, I do want to get this on the wiki. I just put it out on the list for comments before making it official. -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] RFC - Icehouse logging harmonization
On 10/24/2013 10:05 AM, Dan Smith wrote: Example 1: n-conductor log in tempest/devstack - http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz Total log lines: 84076 Total non-DEBUG lines: 61 Question: do we need more than 1 level of DEBUG? 3 orders of magnitude information change between INFO -> DEBUG seems too steep a cliff. Some of them are not useful to me (but might be to others), like the amqp channel lines. However, everything else has been pretty crucial at one point or another when debugging issues that span between the two tightly-coupled services.

Right, which is definitely why it's a conversation, to figure out what's useful, and what isn't. We definitely don't want to remove things that are useful. The amqp lines are 49562 of the DEBUG lines, so dropping those would cut our DEBUG output by more than half, which would be cool if it didn't have an impact on folks. I also just wanted to raise the question: are there multiple levels of DEBUG that might make sense here? For instance, every received seems to be followed by an unpacked, which actually has info that was in the received hash - http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz#_2013-10-23_12_25_22_524 If we had DEBUG and DEBUG2 levels, where one of them would only be seen at the higher debug level, would that be useful? I'm not actually trying to pick on conductor here, but it makes a good example of a service where the DEBUG level is extremely useful to development, and is used heavily, and might make us think about multiple levels of DEBUG to go deeper down the rabbit hole only if we really need to. -Sean -- Sean Dague http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Blueprint review process
On 10/24/2013 10:52 AM, Gary Kotton wrote: On 10/24/13 4:46 PM, Dan Smith d...@danplanet.com wrote: In the last meeting we discussed an idea that I think is worth trying at least for icehouse-1 to see if we like it or not. The idea is that *every* blueprint starts out at a Low priority, which means best effort, but no promises. For a blueprint to get prioritized higher, it should have 2 nova-core members signed up to review the resulting code. Huge +1 to this. I'm in favor of the whole plan, but specifically the prioritization piece is very important, IMHO. I too am in favor of the idea. It is just not clear how 2 Nova cores will be signed up. Good point, there was no detail on that. I propose just comments on the blueprint whiteboard. It can be something simple like this to indicate that Dan and I have agreed to review the code for something: nova-core reviewers: russellb, dansmith -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] RFC - Icehouse logging harmonization
If we had DEBUG and DEBUG2 levels, where one of them would only be seen at the higher debug level, would that be useful? I'm fine with not seeing those for devstack runs, yeah. --Dan ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Climate] Weekly IRC team meeting
+1 On Thu, Oct 24, 2013 at 5:43 PM, Nikolay Starodubtsev nstarodubt...@mirantis.com wrote: +1, but we need to wait for Dina. She has more problems with her schedule than me Nikolay Starodubtsev Software Engineer Mirantis Inc. Skype: dark_harlequine1 On Thu, Oct 24, 2013 at 4:11 PM, Swann Croiset swann.croi...@bull.net wrote: +1 Le 24/10/2013 09:45, Sylvain Bauza a écrit : Hi all, Climate is growing and the time has come for a weekly meeting between all of us. There is a huge number of reviews in progress, and at least the first agenda will be triaging those, making sure they are either coming to trunk as soon as possible, or split into smaller chunks of code. The Icehouse summit is also coming, and I would like to take the opportunity to discuss any topics we could raise during the Summit. Is Mondays 10:00am UTC [1] a convenient time for you ? http://www.timeanddate.com/worldclock/meetingdetails.html?year=2013&month=10&day=28&hour=10&min=0&sec=0&p1=195&p2=166 -Sylvain -- Swann Croiset ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Kind regards, Yuriy. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Blueprint review process
On 10/24/13 at 11:07am, Russell Bryant wrote: On 10/24/2013 10:52 AM, Gary Kotton wrote: On 10/24/13 4:46 PM, Dan Smith d...@danplanet.com wrote: In the last meeting we discussed an idea that I think is worth trying at least for icehouse-1 to see if we like it or not. The idea is that *every* blueprint starts out at a Low priority, which means best effort, but no promises. For a blueprint to get prioritized higher, it should have 2 nova-core members signed up to review the resulting code. Huge +1 to this. I'm in favor of the whole plan, but specifically the prioritization piece is very important, IMHO. I too am in favor of the idea. It is just not clear how 2 Nova cores will be signed up. Good point, there was no detail on that. I propose just comments on the blueprint whiteboard. It can be something simple like this to indicate that Dan and I have agreed to review the code for something: nova-core reviewers: russellb, dansmith +1 to everything in Russell's original email. But for this point specifically, I see it as resulting from conversations amongst Nova developers. If some of us decide that a blueprint is important or very nice to have, then we should sign up to help it through. But there's nothing wrong with a low-priority blueprint. We may want to communicate that core members don't need to be hunted and recruited for absolutely every blueprint that's proposed. -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Climate] Weekly IRC team meeting
+1 On Thu, Oct 24, 2013 at 5:43 PM, Nikolay Starodubtsev nstarodubt...@mirantis.com wrote: +1, but we need to wait for Dina. She has more problems with her schedule than me Nikolay Starodubtsev Software Engineer Mirantis Inc. Skype: dark_harlequine1 On Thu, Oct 24, 2013 at 4:11 PM, Swann Croiset swann.croi...@bull.net wrote: +1 Le 24/10/2013 09:45, Sylvain Bauza a écrit : Hi all, Climate is growing and the time has come for a weekly meeting between all of us. There is a huge number of reviews in progress, and at least the first agenda will be triaging those, making sure they are either coming to trunk as soon as possible, or split into smaller chunks of code. The Icehouse summit is also coming, and I would like to take the opportunity to discuss any topics we could raise during the Summit. Is Mondays 10:00am UTC [1] a convenient time for you ? http://www.timeanddate.com/worldclock/meetingdetails.html?year=2013&month=10&day=28&hour=10&min=0&sec=0&p1=195&p2=166 -Sylvain -- Swann Croiset -- Best regards, Dina Belova Software Engineer Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] RFC - Icehouse logging harmonization
On 10/24/2013 11:00 AM, Sean Dague wrote: On 10/24/2013 10:05 AM, Dan Smith wrote: Example 1: n-conductor log in tempest/devstack - http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz Total log lines: 84076 Total non-DEBUG lines: 61 Question: do we need more than 1 level of DEBUG? 3 orders of magnitude information change between INFO -> DEBUG seems too steep a cliff. Some of them are not useful to me (but might be to others), like the amqp channel lines. However, everything else has been pretty crucial at one point or another when debugging issues that span between the two tightly-coupled services. Right, which is definitely why it's a conversation, to figure out what's useful, and what isn't. We definitely don't want to remove things that are useful. The amqp lines are 49562 of the DEBUG lines, so dropping those would cut our DEBUG output by more than half, which would be cool if it didn't have an impact on folks. I also just wanted to raise the question: are there multiple levels of DEBUG that might make sense here? For instance, every received seems to be followed by an unpacked, which actually has info that was in the received hash - http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz#_2013-10-23_12_25_22_524 If we had DEBUG and DEBUG2 levels, where one of them would only be seen at the higher debug level, would that be useful? I'm not actually trying to pick on conductor here, but it makes a good example of a service where the DEBUG level is extremely useful to development, and is used heavily, and might make us think about multiple levels of DEBUG to go deeper down the rabbit hole only if we really need to. Note that we can't actually change the level used by other libs, like amqp. However, we can set more granular logger levels.
We could re-define debug=True to only set up debug for openstack stuff, and add a new option debug_all=True that enables debugging for *everything*. -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
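The DEBUG/DEBUG2 idea discussed above can be sketched with the stdlib logging module. The DEBUG2 name, its numeric value, and the logger name are illustrative only; nothing like this existed in Oslo at the time:

```python
import logging

logging.basicConfig()  # attach a handler so messages actually go somewhere

# A hypothetical second debug level below DEBUG (10).
DEBUG2 = 5
logging.addLevelName(DEBUG2, 'DEBUG2')


def debug2(self, msg, *args, **kwargs):
    """Log the deepest-detail messages (e.g. 'unpacked ...' lines)."""
    if self.isEnabledFor(DEBUG2):
        self._log(DEBUG2, msg, args, **kwargs)


logging.Logger.debug2 = debug2

log = logging.getLogger('nova.conductor')
log.setLevel(logging.DEBUG)   # today's debug=True behaviour
log.debug('received: %s', {'method': 'object_class_action'})   # emitted
log.debug2('unpacked context: %s', {'user': 'demo'})           # suppressed

log.setLevel(DEBUG2)          # opt in to going deeper down the rabbit hole
log.debug2('unpacked context: %s', {'user': 'demo'})           # now emitted
```

With this split, debug=True could map to DEBUG and a hypothetical debug_all=True (or a per-logger override) to DEBUG2.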
Re: [openstack-dev] [Nova] Blueprint review process
Russell Bryant wrote: At the last Nova meeting we started talking about some updates to the Nova blueprint process for the Icehouse cycle. I had hoped we could talk about and finalize this in a Nova design summit session on Nova Project Structure and Process [1], but I think we need to push forward on finalizing this as soon as possible so that it doesn't block current work being done. Here is a first cut at the process. Let me know what you think is missing or should change. I'll get the result of this thread posted on the wiki. [...] +1 That's pretty much how I would like every project to handle their blueprints. For smaller projects I guess the "2 core signed up for >=Medium blueprints" requirement can be handled informally, but the rest is spot-on and should be applicable everywhere. -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Climate] Weekly IRC team meeting
Cool. All the core devs gave a go, so let's begin with Mondays 10:00 UTC on #openstack-meeting. I modified the Meetings wiki page [1] and created a wiki page for our own agenda [2] [1] : https://wiki.openstack.org/wiki/Meetings [2] : https://wiki.openstack.org/wiki/Meetings/Climate Thanks, -Sylvain Le 24/10/2013 17:23, Dina Belova a écrit : +1 On Thu, Oct 24, 2013 at 5:43 PM, Nikolay Starodubtsev nstarodubt...@mirantis.com mailto:nstarodubt...@mirantis.com wrote: +1, but we need to wait for Dina. She has more problems with her schedule than me Nikolay Starodubtsev Software Engineer Mirantis Inc. Skype: dark_harlequine1 On Thu, Oct 24, 2013 at 4:11 PM, Swann Croiset swann.croi...@bull.net mailto:swann.croi...@bull.net wrote: +1 Le 24/10/2013 09:45, Sylvain Bauza a écrit : Hi all, Climate is growing and the time has come for a weekly meeting between all of us. There is a huge number of reviews in progress, and at least the first agenda will be triaging those, making sure they are either coming to trunk as soon as possible, or split into smaller chunks of code. The Icehouse summit is also coming, and I would like to take the opportunity to discuss any topics we could raise during the Summit. Is Mondays 10:00am UTC [1] a convenient time for you ? http://www.timeanddate.com/worldclock/meetingdetails.html?year=2013&month=10&day=28&hour=10&min=0&sec=0&p1=195&p2=166 -Sylvain -- Swann Croiset -- Best regards, Dina Belova Software Engineer Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] RFC - Icehouse logging harmonization
On a previous project, I wrote a library that provided a command-line option to set the logging levels for different loggers. This was handy for developers and for support. An example translated to Keystone would be like keystone-all --logging=keystone.identity=DEBUG Now the keystone.identity loggers are DEBUG while the rest of the loggers are still at INFO or whatever their default is. This would be used if you think the problem is in the identity backend. It's more convenient than editing a config file. Also, in our config file we listed the important loggers and had the default levels for them... for keystone it would be like # keystone.identity=INFO # keystone.assignment=INFO # dogpile=WARNING This was useful for developers and customers alike, because it was then easier to figure out what the loggers are. - Brant On Thu, Oct 24, 2013 at 10:22 AM, Russell Bryant rbry...@redhat.com wrote: On 10/24/2013 11:00 AM, Sean Dague wrote: On 10/24/2013 10:05 AM, Dan Smith wrote: Example 1: n-conductor log in tempest/devstack - http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz Total log lines: 84076 Total non-DEBUG lines: 61 Question: do we need more than 1 level of DEBUG? 3 orders of magnitude information change between INFO -> DEBUG seems too steep a cliff. Some of them are not useful to me (but might be to others), like the amqp channel lines. However, everything else has been pretty crucial at one point or another when debugging issues that span between the two tightly-coupled services. Right, which is definitely why it's a conversation, to figure out what's useful, and what isn't. We definitely don't want to remove things that are useful. The amqp lines are 49562 of the DEBUG lines, so dropping those would cut our DEBUG output by more than half, which would be cool if it didn't have an impact on folks. I also just wanted to raise the question: are there multiple levels of DEBUG that might make sense here?
For instance, every received seems to be followed by an unpacked, which actually has info that was in the received hash - http://logs.openstack.org/70/52870/3/check/check-tempest-devstack-vm-full/f46b756/logs/screen-n-cond.txt.gz#_2013-10-23_12_25_22_524

If we had DEBUG and DEBUG2 levels, where one of them would only be seen at the higher debug level, would that be useful? I'm not actually trying to pick on conductor here, but it makes a good example of a service where the DEBUG level is extremely useful to development, and is used heavily, and might make us think about multiple levels of DEBUG to go deeper down the rabbit hole only if we really need to.

Note that we can't actually change the level used by other libs, like amqp. However, we can set more granular logger levels. We could re-define debug=True to only set up debug for openstack stuff, and add a new option debug_all=True that enables debugging for *everything*.

-- Russell Bryant

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance
So if we decide to support any number of config options for each various datastore version, eventually we'll have large config files that will be hard to manage. What about storing the extra config info for each datastore version in its own independent config file? So rather than having one increasingly bloated config file used by everything, you could optionally specify a file in the datastore_versions table of the database that would be looked up, similar to how we load template files on demand.

- Tim

From: Ilya Sviridov [isviri...@mirantis.com] Sent: Thursday, October 24, 2013 7:40 AM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

So, we have two places for configuration management - the database and the config file. The config file is for tuning all datastore type behavior during installation, and the database is for all configurations that change during usage and administration of a Trove installation.

Database use cases:
- update/custom image
- update/custom packages
- activating/deactivating datastore_type

Config file use cases:
- security group policy
- provisioning mechanism
- guest configuration parameters per database engine
- provisioning parameters, templates
- manager class
...

If I need to register one more MySQL installation with the following customizations:
- custom heat template
- custom packages and an additional monitoring tool package
- open a specific port for working with my monitoring tool on the instance

then according to the current concept, should I add one more section in addition to the existing mysql one, like below?

[monitored_mysql]
mount_point=/var/lib/mysql
# 8080 is the port of my monitoring tool
trove_security_group_rule_ports = 3306, 8080
heat_template=/etc/trove/heat_templates/monitored_mysql.yaml
...

and put the additional packages into the database configuration?
With best regards, Ilya Sviridov http://www.mirantis.ru/

On Wed, Oct 23, 2013 at 9:37 PM, Michael Basnight mbasni...@gmail.com wrote: On Oct 23, 2013, at 10:54 AM, Ilya Sviridov wrote:

Besides the strategy of selecting the default behavior, let me share my ideas on configuration management in Trove and how the datastore concept can help with that. Initially there was only one database and all configuration was in one config file. With the addition of new databases and the heat provisioning mechanism, we are introducing more options: not only assigning a specific image_id, but custom packages, heat templates, and probably specific strategies for working with security groups. Such needs already exist because we have a lot of optional things in config, and any new feature is implemented with an eye to already existing legacy installations of Trove.

What is actually datastore_type + datastore_version? The model which glues all the bricks together, so let us use it for all the variable parts of the *service type* configuration.

From the current config file:

# Trove DNS
trove_dns_support = False
# Trove Security Groups for Instances
trove_security_groups_support = True
trove_security_groups_rules_support = False
trove_security_group_rule_protocol = tcp
trove_security_group_rule_port = 3306
trove_security_group_rule_cidr = 0.0.0.0/0
#guest_config = $pybasedir/etc/trove/trove-guestagent.conf.sample
#cloudinit_location = /etc/trove/cloudinit
block_device_mapping = vdb
device_path = /dev/vdb
mount_point = /var/lib/mysql

All these configurations can be moved to the datastore (some defined in heat templates) and be manageable by the operator in case any default behavior should be changed. The trove config then becomes specific to core functionality only.

It's fine for it to be in the config or the heat templates... I'm not sure it matters. What I would like to see is the settings specific to each service in their own config group in the configuration.
[mysql]
mount_point=/var/lib/mysql
...
[redis]
volume_support=False
...

and so on.

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
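A rough sketch of what these per-service config groups could look like when read back. This uses stdlib configparser purely for illustration (Trove actually uses oslo.config); the section and option names come from the examples above, and `datastore_option` is a hypothetical helper relying on configparser's built-in [DEFAULT] fallback.

```python
import configparser

# Hypothetical config: common options in [DEFAULT], per-datastore
# overrides in their own groups, as proposed in the thread.
CONF = """
[DEFAULT]
block_device_mapping = vdb

[mysql]
mount_point = /var/lib/mysql
trove_security_group_rule_ports = 3306

[redis]
volume_support = False
"""

def datastore_option(cfg, datastore, option):
    # configparser falls back to [DEFAULT] automatically when the
    # datastore section doesn't override the option.
    return cfg.get(datastore, option)

cfg = configparser.ConfigParser()
cfg.read_string(CONF)
print(datastore_option(cfg, "mysql", "mount_point"))           # /var/lib/mysql
print(datastore_option(cfg, "redis", "block_device_mapping"))  # vdb (inherited)
```

The design point being illustrated: each datastore section only carries what differs from the defaults, so adding a `[monitored_mysql]` group like the one Ilya sketches stays small.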
Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance
I am 100% behind that idea. It makes things easier to manage.

On Thu, Oct 24, 2013 at 10:44 AM, Tim Simpson tim.simp...@rackspace.com wrote: [snip - full quote of the preceding messages in this thread]

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [State-Management] Agenda for today meeting at 2000 UTC
Hi all, The [state-management] project team holds a weekly meeting in #openstack-meeting on Thursdays at 2000 UTC. The next meeting is today, 2013-10-24! As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):
- Discuss any action items from last meeting.
- Discuss ongoing status of the overall effort and any needed coordination.
- Continue missed items from last week.
- Discuss blueprints for icehouse.
- Discuss any other potential new use-cases for said library.
- Discuss any other ideas, problems, open reviews, issues, solutions, questions (and more!).

Any other topics are welcome :-) See you all soon!

-- Joshua Harlow It's openstack, relax... | harlo...@yahoo-inc.com

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Distributed Virtual Router Discussion
On Wed, Oct 23, at 7:35 am, Sylvain Afchain sylvain.afch...@enovance.com wrote: I'm interested as well. On our side we are working on this BP: https://blueprints.launchpad.net/neutron/+spec/l3-high-availability

It might also be good to revisit the Multi-host DHCP and L3 blueprint: https://blueprints.launchpad.net/neutron/+spec/quantum-multihost

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] RFC - Icehouse logging harmonization
On 10/24/2013 11:46 AM, Brant Knudson wrote: [snip - suggestion to set logging levels for individual loggers via a command-line option, and to list the important loggers with their default levels in the config file]

We support something like that (from oslo-incubator's logging code):

# list of logger=LEVEL pairs (list value)
#default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN

-- Russell Bryant

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
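The default_log_levels option quoted above is a comma-separated list of logger=LEVEL pairs. As a sketch of how such a string could be parsed and applied with plain stdlib logging (`apply_log_levels` is illustrative, not the oslo-incubator implementation):

```python
import logging

# Value format used by the option: comma-separated logger=LEVEL pairs.
DEFAULT_LOG_LEVELS = ("amqplib=WARN,sqlalchemy=WARN,boto=WARN,"
                      "suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN")

def apply_log_levels(pairs):
    """Split 'logger=LEVEL' pairs and set each named logger's level."""
    for pair in pairs.split(","):
        logger_name, _, level_name = pair.partition("=")
        # Map the level name (e.g. 'WARN') to its numeric value.
        logging.getLogger(logger_name).setLevel(getattr(logging, level_name))

apply_log_levels(DEFAULT_LOG_LEVELS)
```

After this runs, noisy third-party loggers like sqlalchemy sit at WARN while keystone stays at INFO, without touching the root logger's level.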
Re: [openstack-dev] [Nova] Blueprint review process
On 10/24/2013 11:32 AM, Thierry Carrez wrote: Russell Bryant wrote: At the last Nova meeting we started talking about some updates to the Nova blueprint process for the Icehouse cycle. I had hoped we could talk about and finalize this in a Nova design summit session on Nova Project Structure and Process [1], but I think we need to push forward on finalizing this as soon as possible so that it doesn't block current work being done. Here is a first cut at the process. Let me know what you think is missing or should change. I'll get the result of this thread posted on the wiki. [...]

+1 That's pretty much how I would like every project to handle their blueprints. For smaller projects I guess the two-cores-signed-up requirement for >=Medium blueprints can be handled informally, but the rest is spot-on and should be applicable everywhere.

If that's the case, then I can just work on updating the main Blueprints page [1] with a little bit more detail. I suppose there's not that much missing. In particular, I would add:

- notes on blueprint review criteria
- the use of -driver teams by some projects to review
- some more info on prioritization based on review bandwidth, and nova's specific requirement of reviewer support to raise a blueprint's priority above Low

[1] https://wiki.openstack.org/wiki/Blueprints

-- Russell Bryant

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Remove vim modelines?
I guess I get to buck the trend :).

On 25 October 2013 01:38, Joe Gordon joe.gord...@gmail.com wrote: Since the beginning of OpenStack we have had vim modelines all over the codebase, but after seeing this patch https://review.openstack.org/#/c/50891/ I took a further look into vim modelines and think we should remove them. Before going any further, I should point out these lines don't bother me too much, but I figured if we could get consensus, then we could shrink our codebase by a little bit. Sidenote: This discussion is being moved to the mailing list because it 'would be better to have a mailing list thread about this rather than bits and pieces of discussion in gerrit', as this change requires multiple patches. https://review.openstack.org/#/c/51295/

I don't deeply care about them, but I think the logic being used to promote their removal is flawed.

Why remove them?

* Modelines aren't supported by default in debian or ubuntu due to security reasons: https://wiki.python.org/moin/Vim

This affects folk who haven't turned it on. We have no idea how many that is. The presumption is that if it's off by default everyone has it off - but one of the first things most folk end up doing with vim as they head down the path to poweruser is customizing their vimrc...

* Having modelines for vim means if someone wants we should support modelines for emacs (http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables) etc. as well. And having a bunch of headers for different editors in each file seems like extra overhead.

This is a slippery slope argument. We could equally say 'it's up to the editors to support our style declaration, use a vim-compatible plugin in your editor'. We can also say 'we'll directly support the three most common editors' and gather data on that from our developer surveys. (We /do/ gather developer data, don't we? :) )
* There are other ways of making sure tabstop is set correctly for python files, see https://wiki.python.org/moin/Vim. I am a Vim user myself and have never used modelines.

The other ways won't help folk that

* We have vim modelines in only 828 out of 1213 python files in nova (68%), so if anyone is using modelines today, then it only works 68% of the time in nova

That seems like a reason to add it to all files to me ;).

* Why have the same config 828 times for one repo alone? This violates the DRY principle (Don't Repeat Yourself).

The same argument applies to copyright licences, .py file suffixes and common imports. I do agree that the repeated unchanging nature of modelines is suboptimal, and it would be nice to be able to define the style hints to vim for an entire subtree. This is probably possible through a little bit of scripting.

Since you skipped it, I should add a case for pushing modelines everywhere:

*) They help everyone when editing files where the format is less well known than Python (.yaml for instance) or where our style guide doesn't match a global document (shell scripts, docbook).
*) They help casual contributors *more* than long-time core contributors - and those are the folk that are most likely to give up and walk away. Keeping barriers to entry low is an important part of making OpenStack development accessible to new participants.
*) We can move them to the very end of the file where ~nobody will see them.
*) We can teach hacking to enforce a specific modeline per file type, avoiding accidental mistakes.
*) Possibly we can move the copyright licence grants to the end of the files as well, making opening our source code up much more pleasant.

Cheers, Rob

-- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
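As a sketch of the 'teach hacking to enforce a specific modeline' suggestion above: a hypothetical check (not an actual hacking rule; the regex and function name are invented for illustration) that accepts the modeline commonly seen in OpenStack files.

```python
import re

# Hypothetical expected modeline, e.g.:
#   # vim: tabstop=4 shiftwidth=4 softtabstop=4
MODELINE_RE = re.compile(
    r"^#\s*vim:\s*tabstop=4\s+shiftwidth=4\s+softtabstop=4\s*$")

def has_expected_modeline(source):
    """Return True if any line of the file is the expected vim modeline."""
    return any(MODELINE_RE.match(line) for line in source.splitlines())

good = "# vim: tabstop=4 shiftwidth=4 softtabstop=4\nimport os\n"
bad = "import os\n"
```

A real hacking check would flag files where `has_expected_modeline` returns False (or, for the removal camp, True), which addresses the 828-of-1213 inconsistency either way.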
[openstack-dev] [Neutron] L3 router service integration with Service Type Framework
Hi, I've registered a BP for L3 router service integration with the service framework: https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type In general, the implementation will align with how LBaaS is integrated with the framework. One consideration we heard from several team members is to be able to support vendor-specific features and extensions in the service plugin. Any comments are welcome. Thanks, Gary ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Stable/Havana Gate broken
This is being tracked in bug https://bugs.launchpad.net/tempest/+bug/1244055 and a fix is working its way through the gate now (https://review.openstack.org/#/c/53699/)

On Thu, Oct 24, 2013 at 2:51 PM, Gary Kotton gkot...@vmware.com wrote: Hi, The gate for stable Havana is broken in Nova with the following error (full log: http://logs.openstack.org/95/53595/1/check/check-tempest-devstack-vm-full/14c606f/console.html):

FAIL: setUpClass (tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern)
_StringException: Traceback (most recent call last):
  File "tempest/scenario/manager.py", line 204, in setUpClass
    cls.manager = OfficialClientManager(username, password, tenant_name)
  File "tempest/scenario/manager.py", line 69, in __init__
    tenant_name)
  File "tempest/scenario/manager.py", line 101, in _get_compute_client
    http_log_debug=True)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 450, in Client
    client_class = get_client_class(version)
  File "/opt/stack/new/python-novaclient/novaclient/client.py", line 446, in get_client_class
    return utils.import_class(client_path)
  File "/opt/stack/new/python-novaclient/novaclient/utils.py", line 336, in import_class
    __import__(mod_str)
  File "/opt/stack/new/python-novaclient/novaclient/v1_1/__init__.py", line 17, in <module>
    from novaclient.v1_1.client import Client  # noqa
  File "/opt/stack/new/python-novaclient/novaclient/v1_1/client.py", line 18, in <module>
    from novaclient.v1_1 import agents
  File "/opt/stack/new/python-novaclient/novaclient/v1_1/agents.py", line 22, in <module>
Re: [openstack-dev] [Nova] Stable/Havana Gate broken
Thanks for doing this. Gary

From: Joe Gordon joe.gord...@gmail.com Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Date: Thursday, October 24, 2013 9:48 PM To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] Stable/Havana Gate broken

This is being tracked in bug https://bugs.launchpad.net/tempest/+bug/1244055 and a fix is working its way through the gate now (https://review.openstack.org/#/c/53699/)

On Thu, Oct 24, 2013 at 2:51 PM, Gary Kotton gkot...@vmware.com wrote: [snip - quoted traceback, identical to the message above]
Re: [openstack-dev] [Nova] Blueprint review process
On 24 October 2013 04:33, Russell Bryant rbry...@redhat.com wrote: Greetings, At the last Nova meeting we started talking about some updates to the Nova blueprint process for the Icehouse cycle. I had hoped we could talk about and finalize this in a Nova design summit session on Nova Project Structure and Process [1], but I think we need to push forward on finalizing this as soon as possible so that it doesn't block current work being done. Cool Here is a first cut at the process. Let me know what you think is missing or should change. I'll get the result of this thread posted on the wiki. 1) Proposing a Blueprint Proposing a blueprint for Nova is not much different than other projects. You should follow the instructions here: https://wiki.openstack.org/wiki/Blueprints The particular important step that seems to be missed by most is: Once it is ready for PTL review, you should set: Milestone: Which part of the release cycle you think your work will be proposed for merging. That is really important. Due to the volume of Nova blueprints, it probably will not be seen until you do this. The other thing I'm seeing some friction on is 'significant features' : it sometimes feels like folk are filing blueprints for everything that isn't 'the code crashed' style problems, and while I appreciate folk wanting to work within the system, blueprints are a heavyweight tool, primarily suited for things that require significant coordination. 2) Blueprint Review Team Ensuring blueprints get reviewed is one of the responsibilities of the PTL. However, due to the volume of Nova blueprints, it's not practical for me to do it alone. A team of people (nova-drivers) [2], a subset of nova-core, will be doing blueprint reviews. Why a subset of nova-core? With nova-core defined as 'knows the code well *AND* reviews a lot', I can see that those folk are in a position to spot a large class of design defects. However, there are plenty of folk with expertise in e.g. 
SOA, operations, deployment @ scale, who are not nova-core but who will spot plenty of issues. Is there some way they can help out? By having more people reviewing blueprints, we can do a more thorough job and have a higher quality result. Note that even though there is a nova-drivers team, *everyone* is encouraged to participate in the review process by providing feedback on the mailing list. I'm not sure about this bit here: blueprints don't have the spec content, usually thats in an etherpad; etherpads are editable by everyone - wouldn't it be better to keep the conversation together? I guess part of my concern here comes back to the (ab)use of blueprints for shallow features. 3) Blueprint Review Criteria Here are some things that the team reviewing blueprints should look for: The blueprint ... - is assigned to the person signing up to do the work - has been targeted to the milestone when the code is planned to be completed - is an appropriate feature for Nova. This means it fits with the vision for Nova and OpenStack overall. This is obviously very subjective, but the result should represent consensus. - includes enough detail to be able to complete an initial design review before approving the blueprint. In many cases, the design review may result in a discussion on the mailing list to work through details. A link to this discussion should be left in the whiteboard of the blueprint for reference. This initial design review should be completed before the blueprint is approved. - includes information that describes the user impact (or lack of). Between the blueprint and text that comes with the DocImpact flag [3] in commits, the docs team should have *everything* they need to thoroughly document the feature. I'd like to add: - has an etherpad with the design (the blueprint summary has no markup and is a poor place for capturing the design). Once the review has been complete, the blueprint should be marked as approved and the priority should be set. 
A set priority is how we know from the blueprint list which ones have already been reviewed. 4) Blueprint Prioritization I would like to do a better job of using priorities in Icehouse. The priority field serves a couple of purposes: - helps reviewers prioritize their time - helps set expectations for the submitter for how reviewing this work stacks up against other things In the last meeting we discussed an idea that I think is worth trying at least for icehouse-1 to see if we like it or not. The idea is that *every* blueprint starts out at a Low priority, which means best effort, but no promises. For a blueprint to get prioritized higher, it should have 2 nova-core members signed up to review the resulting code. If we do this, I suspect we may end up with more blueprints at Low, but I also think we'll end up with a more realistic list of blueprints.
Re: [openstack-dev] [GIT] Reset a commit in Gerrit
Thanks all for the information! Floren 2013/10/21 Thierry Carrez thie...@openstack.org: Dolph Mathews wrote: On Sun, Oct 20, 2013 at 1:31 PM, Edgar Magana emag...@plumgrid.com wrote: You just need to send a patch to gerrit. From your local repo, do the necessary fixes and be sure everything is just as you want. Then simply run: #git commit -a --amend #git review The only gotcha here is that you need to maintain the same Change-Id that was appended to your original commit message (and push it to the same branch in gerrit, if you specified one originally). Removing or altering the Change-Id will prevent your patch from updating your existing review. The rest of your commit message can be freely rewritten. Additionally here are a few links to our doc: https://wiki.openstack.org/wiki/GerritWorkflow https://wiki.openstack.org/wiki/GerritJenkinsGit Regards, -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] extend Network topology view in horizon
Hi Raja, I'm sorry, I haven't created documents yet. When I do, I'll share them with you. Thanks, Toshi On Wed, Oct 23, 2013 at 6:39 AM, Raja Srinivasan raja.sriniva...@riverbed.com wrote: Hi Toshi If you have some documentation on the demo, please share it. Thanks Regards Raja Srinivasan E: raja.sriniva...@riverbed.com P: +1(408) 598-1175 -Original Message- From: Toshiyuki Hayashi [mailto:haya...@ntti3.com] Sent: Tuesday, October 22, 2013 10:47 PM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] extend Network topology view in horizon Hi, Regarding No.2, I'm going to support FWaaS/LBaaS/VPNaaS, and I've just started creating a demo for that. So I'll add the blueprint soon. Thanks, Toshi On Tue, Oct 22, 2013 at 2:02 AM, Ofer Blaut obl...@redhat.com wrote: Hi It would be helpful to extend the Network topology view in horizon: 1. Admin should be able to see the entire/per-tenant network topology (we might need a flag to enable/disable it). 2. Supporting icons for FWaaS/LBaaS/VPNaaS at both the admin and tenant level, so it will be easy to see the deployments. Are there any blueprints to support this? Thanks Ofer ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Toshiyuki Hayashi NTT Innovation Institute Inc. Tel:650-579-0800 ex4292 mail:haya...@ntti3.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Toshiyuki Hayashi NTT Innovation Institute Inc. Tel:650-579-0800 ex4292 mail:haya...@ntti3.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
-2 to 10 minute downtimes. +1 to doing the evolution gracefully. There is a spec for doing that from the H summit; someone just needs to implement it. -Rob On 25 October 2013 09:30, Michael Still mi...@stillhq.com wrote: Hi. Because I am a grumpy old man I have just -2'ed https://review.openstack.org/#/c/39685/ and I wanted to explain my rationale. Mostly I am hoping for a consensus to form -- if I am wrong then I'll happily remove my vote from this patch. This patch does the reasonably sensible thing of converting two columns from being text to varchar, which reduces their expense to the database. Given the data stored is already of limited length, it doesn't impact our functionality at all either. However, when I run it with medium sized (30 million instances) databases, the change does cause a 10 minute downtime. I don't personally think the change is worth such a large outage, but perhaps everyone else disagrees. Discuss. Thanks, Michael PS: I could see a more complicated approach where we did these changes in flight by adding columns, using a periodic task to copy data to the new columns, and then dropping the old. That's a lot more complicated to implement though. -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
Hi, 1) If you have 30 million instances it means that you have 300 million instance_system_metadata records. All these records will be downloaded every 6 seconds (in periodic tasks) to compute_nodes, which means that OpenStack at scale doesn't work out of the box. If you have 3 million instances the situation is the same (OpenStack doesn't work), but the migration will be done in 1 minute. So it is at most a 1 minute downtime (in an actually non-real case). This change is actually very important because VARCHAR works much, much faster than BLOB (Text) records. So this is an important change and shouldn't be -2'ed. Best regards, Boris Pavlovic On Fri, Oct 25, 2013 at 12:30 AM, Michael Still mi...@stillhq.com wrote: Hi. Because I am a grumpy old man I have just -2'ed https://review.openstack.org/#/c/39685/ and I wanted to explain my rationale. Mostly I am hoping for a consensus to form -- if I am wrong then I'll happily remove my vote from this patch. This patch does the reasonably sensible thing of converting two columns from being text to varchar, which reduces their expense to the database. Given the data stored is already of limited length, it doesn't impact our functionality at all either. However, when I run it with medium sized (30 million instances) databases, the change does cause a 10 minute downtime. I don't personally think the change is worth such a large outage, but perhaps everyone else disagrees. Discuss. Thanks, Michael PS: I could see a more complicated approach where we did these changes in flight by adding columns, using a periodic task to copy data to the new columns, and then dropping the old. That's a lot more complicated to implement though. -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Disable async network allocation
On Oct 24, 2013, at 7:47 AM, Day, Phil philip@hp.com wrote: Yep, that was the feature I was referring to. As I said I don't have anything definite that shows this to be not working (and the code looks fine) - just wanted to try and simplify the world a bit for a while. Of course. That's what I meant, that you might be able to use the info to revert it locally in your environment to help you track down the Neutron issue you're investigating. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [marconi] Sharding feature design
Folks, We’ve been discussing Marconi’s storage sharding architecture over the past couple of weeks and I took some time today to draw up our current thinking on the design. I’ve also added this link to the blueprint. http://grab.by/rsoI I’d like to get the team’s thoughts on this and get the design finalized. See also: https://blueprints.launchpad.net/marconi/+spec/storage-sharding Thanks! --- @kgriffs Kurt Griffiths ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
On Fri, Oct 25, 2013, Michael Still mi...@stillhq.com wrote: Because I am a grumpy old man I have just -2'ed https://review.openstack.org/#/c/39685/ and I wanted to explain my rationale. Mostly I am hoping for a consensus to form -- if I am wrong then I'll happily remove my vote from this patch. This patch does the reasonably sensible thing of converting two columns from being text to varchar, which reduces their expense to the database. Given the data stored is already of limited length, it doesn't impact our functionality at all either. However, when I run it with medium sized (30 million instances) databases, the change does cause a 10 minute downtime. I don't personally think the change is worth such a large outage, but perhaps everyone else disagrees. I'm not sure how you could have 30 million instances. That's a lot of hardware! :) However, in our Rackspace-sized deploys (less than 30 million instances), we've seen many migrations take longer than 10 minutes. DB migrations are one of the biggest problems we've been facing lately. A lot of the migrations done over the past number of months ended up causing a lot of pain considering the value they bring. For instance, migration 185 was particularly painful. It only renamed the indexes, but it required rebuilding them. This took a long time for such a simple task. So I'm very interested in figuring out some sort of solution that makes database migrations much less painful. That said, I'm hesitant to say that cleanups like these shouldn't be done. At a certain point we'll build up a significant amount of technical debt around the database that we're afraid to touch. PS: I could see a more complicated approach where we did these changes in flight by adding columns, using a periodic task to copy data to the new columns, and then dropping the old. That's a lot more complicated to implement though. You mean an expand/contract style of migrations? 
It's been discussed at previous summits, but it's a lot of work. It's also at the mercy of the underlying database engine. For instance, MySQL (depending on the version and the underlying storage engine) will recreate the table when adding columns. This will grab a lock and take a long time. JE ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
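The expand/contract approach Johannes mentions can be sketched roughly as follows. This is an illustrative sketch only, using stdlib sqlite3 and invented table/column names, not nova's actual migration code:

```python
# Hypothetical expand/contract migration sketch. sqlite3 is used for
# illustration; a real deployment would target MySQL/PostgreSQL.
import sqlite3

def expand(conn):
    # Phase 1 (expand): add the new column. Old code keeps writing the
    # old column; upgraded code writes both.
    conn.execute("ALTER TABLE instances ADD COLUMN task_state_new TEXT")

def backfill(conn, batch_size=1000):
    # Phase 2: copy data across in small id-ordered batches from a
    # background/periodic task, so no single statement holds a long lock.
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, task_state FROM instances "
            "WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size)).fetchall()
        if not rows:
            break
        conn.executemany(
            "UPDATE instances SET task_state_new = ? WHERE id = ?",
            [(state, id_) for id_, state in rows])
        conn.commit()
        last_id = rows[-1][0]

def contract(conn):
    # Phase 3 (contract): once all services read the new column, drop
    # the old one (sqlite supports DROP COLUMN from 3.35 onwards).
    conn.execute("ALTER TABLE instances DROP COLUMN task_state")
```

The key property is that the expand and backfill phases run while services stay up; only once every reader uses the new column does the contract phase remove the old one.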
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
On 10/24/2013 04:40 PM, Boris Pavlovic wrote: Hi, 1) If you have 30 million instance it means that you have 300 million instance_system_metadata records All these records will be downloaded every 6 seconds (in periodic tasks) to compute_nodes = which means that OpenStack on scale out of box doesn't work. If you have 3 million instance the situation is the same (OpenStack doesn't work). but migration will be done for 1 minute. So it is maximum 1 minute downtime (in actually non real case). This change is actually very important because VARCHAR works much much faster then BLOB (Text) records. This is actually not always true. It depends on: * What RDBMS you are using * What storage engine within MySQL, if using MySQL * What the data access/modification patterns on the field are The last bullet is very important. For fields that: * Are not used in predicates or aggregates (i.e. not used in JOIN conditions, WHERE clauses, or GROUP BY/HAVING clauses) * Are rarely read or updated * Are typically longer than 200-300 bytes of data It's typically more efficient (both from a fill factor perspective and from a log structure perspective) to leave the field as TEXT rather than using VARCHAR. The reason is twofold: 1) Using TEXT saves space in the main data pages of the storage engine since a pointer to external data file is stored in the data page instead of the data itself, meaning more records of that table can fit in a single fixed-byte-size data page, which means fewer disk and memory reads to find a record, which means faster seeks and scans. 2) When modifying the value of the TEXT field, there is no chance that a clustered index-organized table layout (like, say, InnoDB uses) would need to perform a rebalancing action due to the new size of the TEXT data causing the record to not fit on its containing data page -- something that would happen a lot more if VARCHAR was used. 
If the data is: 1) Often used in predicates or aggregates 2) Often updated and the size of the updated field stays a similar range 3) ALWAYS within a small, defined range of bytes (say... 32-64 bytes) Then it's often advantageous to use VARCHAR (or CHAR) over TEXT. But it's wrong to say that it is ALWAYS faster to use VARCHAR vs. BLOB/TEXT. Best, -jay So this is important change and shouldn't be -2. Best regards, Boris Pavlovic On Fri, Oct 25, 2013 at 12:30 AM, Michael Still mi...@stillhq.com mailto:mi...@stillhq.com wrote: Hi. Because I am a grumpy old man I have just -2'ed https://review.openstack.org/#/c/39685/ and I wanted to explain my rationale. Mostly I am hoping for a consensus to form -- if I am wrong then I'll happy remove my vote from this patch. This patch does the reasonably sensible thing of converting two columns from being text to varchar, which reduces their expense to the database. Given the data stored is already of limited length, it doesn't impact our functionality at all either. However, when I run it with medium sized (30 million instances) databases, the change does cause a 10 minute downtime. I don't personally think the change is worth such a large outage, but perhaps everyone else disagrees. Discuss. Thanks, Michael PS: I could see a more complicated approach where we did these changes in flight by adding columns, using a periodic task to copy data to the new columns, and then dropping the old. That's a lot more complicated to implement though. -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
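Jay's rules of thumb can be condensed into a small decision helper. The thresholds below are illustrative only, chosen to match the byte ranges mentioned in the thread, not an authoritative policy:

```python
def suggest_column_type(used_in_predicates, frequently_updated,
                        max_len_bytes):
    """Suggest 'VARCHAR' or 'TEXT' per the heuristics in this thread.

    max_len_bytes is the largest expected value size, or None if the
    field is effectively unbounded. Thresholds are illustrative.
    """
    # Fields used in JOIN/WHERE/GROUP BY benefit from living in the
    # main data pages, provided they fit a short VARCHAR.
    if used_in_predicates and max_len_bytes is not None and max_len_bytes <= 255:
        return "VARCHAR"
    # Frequently updated fields within a small, stable size range avoid
    # page rebalancing when stored inline.
    if frequently_updated and max_len_bytes is not None and max_len_bytes <= 64:
        return "VARCHAR"
    # Rarely-read, long, non-indexed data is better off-page: TEXT
    # stores a pointer in the data page, improving fill factor.
    return "TEXT"
```

This just encodes the trade-off Jay describes; any real schema decision would also depend on the RDBMS and storage engine in use.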
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
On Fri, Oct 25, 2013 at 8:19 AM, Johannes Erdfelt johan...@erdfelt.com wrote: On Fri, Oct 25, 2013, Michael Still mi...@stillhq.com wrote: However, when I run it with medium sized (30 million instances) databases, the change does cause a 10 minute downtime. I don't personally think the change is worth such a large outage, but perhaps everyone else disagrees. I'm not sure how you could have 30 million instances. That's a lot of hardware! :) This has come up a couple of times on this thread, so I want to reinforce -- that database is a real user database. There are users out there _right_now_ with 30 million rows in their instance tables and using nova quite happily. Now, not all those instances are _running_, but they're still in the table. Michael -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] FWaaS IceHouse summit prep and IRC meeting
This is a good discussion. +1 for using Neutron ports for defining zones. I see Kaiwei's point, but for DELL, neutron ports make more sense. I am not sure if I completely understood the bump-in-the-wire/zone discussion. The DELL security appliance allows using different zones with bump-in-the-wire. If the firewall is inserted in bump-in-the-wire mode between the router and LAN hosts, then it does make sense to apply different zones on the ports connected to the LAN and the router. There are cases where the end users apply the same zones on both sides, but this is a decision we should leave to the end customers. We should allow configuring zones in bump-in-the-wire mode as well. On Wed, Oct 23, 2013 at 12:08 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote: Log from today's meeting: http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-23-18.02.log.html Action items for some of the folks included. Please join us for the meeting next week. Thanks, ~Sumit. On Tue, Oct 22, 2013 at 2:00 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote: Reminder - we will have the Neutron FWaaS IRC meeting tomorrow Wednesday 18:00 UTC (11 AM PDT). Agenda: * Tempest tests * Definition and use of zones * Address Objects * Counts API * Service Objects * Integration with service type framework * Open discussion - any other topics you would like to bring up for discussion during the summit. https://wiki.openstack.org/wiki/Meetings/FWaaS Thanks, ~Sumit. On Sun, Oct 13, 2013 at 1:56 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote: Hi All, For the next phase of FWaaS development we will be considering a number of features. I am proposing an IRC meeting on Oct 16th Wednesday 18:00 UTC (11 AM PDT) to discuss this. The etherpad for the summit session proposal is here: https://etherpad.openstack.org/p/icehouse-neutron-fwaas and has a high level list of features under consideration. Thanks, ~Sumit. 
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
On Fri, Oct 25, 2013, Michael Still mi...@stillhq.com wrote: On Fri, Oct 25, 2013 at 8:19 AM, Johannes Erdfelt johan...@erdfelt.com wrote: On Fri, Oct 25, 2013, Michael Still mi...@stillhq.com wrote: However, when I run it with medium sized (30 million instances) databases, the change does cause a 10 minute downtime. I don't personally think the change is worth such a large outage, but perhaps everyone else disagrees. I'm not sure how you could have 30 million instances. That's a lot of hardware! :) This has come up a couple of times on this thread, so I want to reinforce -- that database is a real user database. There are users out there _right_now_ with 30 million rows in their instance tables and using nova quite happily. Now, not all those instances are _running_, but they're still in the table. Why no pruning? The easiest way to reduce the amount of time migrations take to run is to reduce the amount of rows that need to be migrated. The amount of unnecessary data in tables has been steadily increasing. I'm looking at you reservations table. JE ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
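Pruning along the lines Johannes suggests doesn't have to be one long-running DELETE; deleting in small batches keeps lock times short. A hypothetical sketch, using stdlib sqlite3 for illustration and an invented reservations schema (not nova's actual table definition):

```python
# Batched pruning sketch: delete expired rows in small chunks so the
# database never holds a long-running lock. Schema is hypothetical.
import sqlite3

def prune_expired(conn, now, batch_size=1000):
    """Delete reservations whose expire time is before `now`, in batches."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM reservations WHERE id IN ("
            "  SELECT id FROM reservations WHERE expire < ? LIMIT ?)",
            (now, batch_size))
        conn.commit()  # release the lock between batches
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total
```

A real purge task would also need to respect foreign keys and archive rather than destroy rows where auditing matters; this only illustrates the batching idea.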
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
On 25 October 2013 10:04, Chris Behrens cbehr...@codestud.com wrote: On Oct 24, 2013, at 1:33 PM, Robert Collins robe...@robertcollins.net wrote: -2 to 10 minute downtimes. +1 to doing the evolution gracefully. There is a spec for doing that from the H summit; someone just needs to implement it. +1. IMO, we need to move to a model where code can understand multiple schemas and migrate to the newer schema on the fly. The object code in nova will be able to help us do this. Combine this with some sort of background task if you need to speed up the conversion. Any migrations that need to run through all of the data in a table during downtime are just not going to scale. I am personally tired of having to deal with DB migrations having to run for 1 hour during upgrades, which happened numerous times throughout the Havana development cycle. We had a clear design at the H summit, and folk committed to working on it (Johannes and Mark W); not sure what happened... https://etherpad.openstack.org/p/HavanaNoDowntimeDBMigrations -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
Johannes, +1, purging should help here a lot. Best regards, Boris Pavlovic On Fri, Oct 25, 2013 at 2:01 AM, Johannes Erdfelt johan...@erdfelt.com wrote: On Fri, Oct 25, 2013, Michael Still mi...@stillhq.com wrote: On Fri, Oct 25, 2013 at 8:19 AM, Johannes Erdfelt johan...@erdfelt.com wrote: On Fri, Oct 25, 2013, Michael Still mi...@stillhq.com wrote: However, when I run it with medium sized (30 million instances) databases, the change does cause a 10 minute downtime. I don't personally think the change is worth such a large outage, but perhaps everyone else disagrees. I'm not sure how you could have 30 million instances. That's a lot of hardware! :) This has come up a couple of times on this thread, so I want to reinforce -- that database is a real user database. There are users out there _right_now_ with 30 million rows in their instance tables and using nova quite happily. Now, not all those instances are _running_, but they're still in the table. Why no pruning? The easiest way to reduce the amount of time migrations take to run is to reduce the amount of rows that need to be migrated. The amount of unnecessary data in tables has been steadily increasing. I'm looking at you, reservations table. JE ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] L3 router service integration with Service Type Framework
Hi, We are going to implement 2-arm type lbaas using LVS, and have submitted the following BPs. https://blueprints.launchpad.net/neutron/+spec/lbaas-support-routed-service-insertion https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-driver https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-extra-features Maybe the first one is the same as yours. We would be happy if we could just concentrate on making a provider driver. Thanks. Itsuro Oda On Thu, 24 Oct 2013 11:56:53 -0700 Gary Duan gd...@varmour.com wrote: Hi, I've registered a BP for L3 router service integration with service framework. https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type In general, the implementation will align with how LBaaS is integrated with the framework. One consideration we heard from several team members is to be able to support vendor specific features and extensions in the service plugin. Any comment is welcome. Thanks, Gary -- Itsuro ODA o...@valinux.co.jp ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Blueprint review process
Is this really a viable solution? I believe it's more democratic to ensure everyone gets a chance to present the blueprint someone has spent time to write. This way no favoritism or biased view will ever take place, and we let the community gauge the interest. /Alan -Original Message- From: Russell Bryant [mailto:rbry...@redhat.com] Sent: October-24-13 5:08 PM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Nova] Blueprint review process On 10/24/2013 10:52 AM, Gary Kotton wrote: On 10/24/13 4:46 PM, Dan Smith d...@danplanet.com wrote: In the last meeting we discussed an idea that I think is worth trying at least for icehouse-1 to see if we like it or not. The idea is that *every* blueprint starts out at a Low priority, which means best effort, but no promises. For a blueprint to get prioritized higher, it should have 2 nova-core members signed up to review the resulting code. Huge +1 to this. I'm in favor of the whole plan, but specifically the prioritization piece is very important, IMHO. I too am in favor of the idea. It is just not clear how 2 Nova cores will be signed up. Good point, there was no detail on that. I propose just comments on the blueprint whiteboard. It can be something simple like this to indicate that Dan and I have agreed to review the code for something: nova-core reviewers: russellb, dansmith -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
On Fri, Oct 25, 2013 at 9:07 AM, Boris Pavlovic bo...@pavlovic.me wrote: Johannes, +1, purging should help here a lot. Sure, but my point is more: - pruning isn't done by the system automatically, so we have to assume it never happens - we need to have a clearer consensus about what we think the maximum size of a nova deployment is. Are we really saying we don't support nova installs with a million instances? If so what is the maximum number of instances we're targeting? Having a top level size in mind isn't a bad thing, but I don't think we have one at the moment that we all agree on. Until that happens I'm going to continue targeting the largest databases people have told me about (plus a fudge factor). Michael -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron]LBaaS] - Questions on IP Allocation Logic for VIP
Hi, We encounter the following error in our lab when we auto-allocate the VIP IP address for the create_vip API. == ERROR: quantumclient.shell Unable to complete operation for network f51735cc-c62e-438d-bc9f-26792c2486b9. The IP address 172.21.72.8 is in use. == We basically have the following questions. 1. How does neutron/quantum check whether an IP address is already in use? (An engineer found data with a NULL ID in the ipallocation table, so he wonders if data with a NULL ID has something to do with IP address allocation.) 2. Will data with a NULL ID in ipallocations be removed at some point? If so, what would be the timing? 3. The value of first_ip in ipavailabilityranges seems to be increasing rapidly; when will this value be reset? Regards, Pattabi ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
On Thu, Oct 24, 2013 at 3:06 PM, Robert Collins robe...@robertcollins.netwrote: On 25 October 2013 10:04, Chris Behrens cbehr...@codestud.com wrote: On Oct 24, 2013, at 1:33 PM, Robert Collins robe...@robertcollins.net wrote: -2 to 10 minute downtimes. +1 to doing the evolution gracefully. There is a spec for doing that from the H summit; someone just needs to implement it. +1. IMO, we need to move to a model where code can understand multiple schemas and migrate to newer schema on the fly. The object code in nova will be able to help us do this. Combine this with some sort of background task if you need to speed up the conversion. Any migrations that need to run through all of the data in a table during downtime is just not going to scale. I am personally tired of having to deal with DB migrations having to run for 1 hour during upgrades that happened numerous times throughout the Havana development cycle. We had a clear design at the H summit, and folk committed to working on it (Johannes and Mark W); not sure what happened... /me runs from room crying https://etherpad.openstack.org/p/HavanaNoDowntimeDBMigrations -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
Michael, - pruning isn't done by the system automatically, so we have to assume it never happens We are working on it: https://blueprints.launchpad.net/nova/+spec/db-purge-engine - we need to have a clearer consensus about what we think the maximum size of a nova deployment is. Are we really saying we don't support nova installs with a million instances? If so what is the maximum number of instances we're targeting? Having a top level size in mind isn't a bad thing, but I don't think we have one at the moment that we all agree on. Until that happens I'm going to continue targeting the largest databases people have told me about (plus a fudge factor). Rally https://wiki.openstack.org/wiki/Rally should help us to determine this. At this moment I can only use theoretical knowledge (and that says even 1 million instances won't work in the current nova implementation). Best regards, Boris Pavlovic On Fri, Oct 25, 2013 at 2:35 AM, Michael Still mi...@stillhq.com wrote: On Fri, Oct 25, 2013 at 9:07 AM, Boris Pavlovic bo...@pavlovic.me wrote: Johannes, +1, purging should help here a lot. Sure, but my point is more: - pruning isn't done by the system automatically, so we have to assume it never happens - we need to have a clearer consensus about what we think the maximum size of a nova deployment is. Are we really saying we don't support nova installs with a million instances? If so what is the maximum number of instances we're targeting? Having a top level size in mind isn't a bad thing, but I don't think we have one at the moment that we all agree on. Until that happens I'm going to continue targeting the largest databases people have told me about (plus a fudge factor). Michael -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] L3 router service integration with Service Type Framework
Hi, Oda-san, Thanks for your response. The L3 agent function should remain the same, as one driver implementation of the L3 router plugin. My understanding is that your lbaas driver can run on top of the L3 agent and LVS's own routing services. Is my understanding correct? Thanks, Gary On Thu, Oct 24, 2013 at 3:16 PM, Itsuro ODA o...@valinux.co.jp wrote: Hi, We are going to implement 2-arm type lbaas using LVS, and have submitted the following BPs. https://blueprints.launchpad.net/neutron/+spec/lbaas-support-routed-service-insertion https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-driver https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-extra-features Maybe the first one is same as yours. We are happy if we just concentrate making a provider driver. Thanks. Itsuro Oda On Thu, 24 Oct 2013 11:56:53 -0700 Gary Duan gd...@varmour.com wrote: Hi, I've registered a BP for L3 router service integration with service framework. https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type In general, the implementation will align with how LBaaS is integrated with the framework. One consideration we heard from several team members is to be able to support vendor specific features and extensions in the service plugin. Any comment is welcome. Thanks, Gary -- Itsuro ODA o...@valinux.co.jp ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] L3 router service integration with Service Type Framework
Gary, In the context of the NVP plugin we use a mechanism for enabling 'advanced' capabilities of a router, leveraging a 'router_service_type' extension. This allows us to configure two types of routers: one which does just L3 forwarding, NAT and a few other things, and another which also has the capability of hosting advanced services such as firewall and load balancing. Is this similar to what you want to achieve with this blueprint? Regards, Salvatore On 25 October 2013 01:03, Gary Duan gd...@varmour.com wrote: Hi, Oda-san, Thanks for your response. L3 agent function should remain the same, as one driver implementation of L3 router plugin. My understanding is your lbaas driver can be running on top of L3 agent and LVS' own routing services. Is my understanding correct? Thanks, Gary On Thu, Oct 24, 2013 at 3:16 PM, Itsuro ODA o...@valinux.co.jp wrote: Hi, We are going to implement 2-arm type lbaas using LVS, and have submitted the following BPs. https://blueprints.launchpad.net/neutron/+spec/lbaas-support-routed-service-insertion https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-driver https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-extra-features Maybe the first one is same as yours. We are happy if we just concentrate making a provider driver. Thanks. Itsuro Oda On Thu, 24 Oct 2013 11:56:53 -0700 Gary Duan gd...@varmour.com wrote: Hi, I've registered a BP for L3 router service integration with service framework. https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type In general, the implementation will align with how LBaaS is integrated with the framework. One consideration we heard from several team members is to be able to support vendor specific features and extensions in the service plugin. Any comment is welcome. 
Thanks, Gary -- Itsuro ODA o...@valinux.co.jp ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] L3 router service integration with Service Type Framework
Hi Gary, Thanks for your response. Our plan is as follows: * The LVS driver is an LBaaS provider driver. It communicates with the l3_agent instead of the lbaas_agent. * On the l3_agent side, I think the implementation is the same as FWaaS: a class like LVSL3AgentRpcCallback is added, which L3NATAgent inherits and which communicates with the LVS provider driver. So the existing l3_agent functions are not changed; only the LB function is added. I think the implementation may change depending on the service chaining discussion. Thanks, Itsuro Oda # note that Toshihiro Iwamoto, a main developer of our BPs # may reply instead of me. He will attend the HK summit. On Thu, 24 Oct 2013 16:03:25 -0700 Gary Duan gd...@varmour.com wrote: Hi, Oda-san, Thanks for your response. L3 agent function should remain the same, as one driver implementation of L3 router plugin. My understanding is your lbaas driver can be running on top of L3 agent and LVS' own routing services. Is my understanding correct? Thanks, Gary On Thu, Oct 24, 2013 at 3:16 PM, Itsuro ODA o...@valinux.co.jp wrote: Hi, We are going to implement 2-arm type lbaas using LVS, and have submitted the following BPs. https://blueprints.launchpad.net/neutron/+spec/lbaas-support-routed-service-insertion https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-driver https://blueprints.launchpad.net/neutron/+spec/lbaas-lvs-extra-features Maybe the first one is same as yours. We are happy if we just concentrate making a provider driver. Thanks. Itsuro Oda On Thu, 24 Oct 2013 11:56:53 -0700 Gary Duan gd...@varmour.com wrote: Hi, I've registered a BP for L3 router service integration with service framework. https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type In general, the implementation will align with how LBaaS is integrated with the framework. One consideration we heard from several team members is to be able to support vendor specific features and extensions in the service plugin. Any comment is welcome. 
Thanks, Gary -- Itsuro ODA o...@valinux.co.jp ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Itsuro ODA o...@valinux.co.jp ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
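The FWaaS-style pattern Itsuro describes — extending the L3 agent with an extra RPC callback class rather than modifying it — can be sketched roughly as follows. All class and method names here are illustrative stand-ins (including LVSL3AgentRpcCallback, which is Itsuro's proposed name, not existing Neutron code), and the RPC plumbing is elided:

```python
# Hypothetical sketch of extending the L3 agent with LB callbacks via
# inheritance, as FWaaS does. Not actual Neutron code.

class L3NATAgent(object):
    """Stand-in for the existing L3 agent; its behaviour is unchanged."""

    def routers_updated(self, context, routers):
        return "routing: %s" % routers


class LVSL3AgentRpcCallback(object):
    """Added callbacks for LB operations, driven by the LVS provider driver."""

    def create_vip(self, context, vip):
        # A real agent would program LVS here (e.g. via ipvsadm).
        return "vip created: %s" % vip["id"]


class LVSL3NATAgent(LVSL3AgentRpcCallback, L3NATAgent):
    """L3 agent with LB support mixed in; existing L3 functions untouched."""
    pass
```

The point of the structure is the last class: the existing agent's methods are inherited unmodified, and only the new LB entry points are added.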
Re: [openstack-dev] [Neutron] L3 router service integration with Service Type Framework
I’m getting a “Not allowed here” error when I click through to the BP. (Yes, I’m subscribed.) On Oct 24, 2013, at 11:56 AM, Gary Duan gd...@varmour.com wrote: Hi, I've registered a BP for L3 router service integration with service framework. https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type In general, the implementation will align with how LBaaS is integrated with the framework. One consideration we heard from several team members is to be able to support vendor specific features and extensions in the service plugin. Any comment is welcome. Thanks, Gary ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Ironic] Nominating Lucas Gomes to ironic core
Hi all, I'd like to nominate Lucas Gomes for ironic-core. He's been consistently doing reviews for several months and has led a lot of the effort on the API and client libraries. Thanks for the great work! -Deva http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] HOT Software configuration proposal
Hi Thomas, here's my opinion: Heat and Solum contributors will work closely together to figure out where specific feature implementations belong... But, in general, Solum is working at a level above Heat. To write a Heat template, you have to know about infrastructure setup and configuration settings of infrastructure and API services. I believe Solum intends to provide the ability to tweak and configure the amount of complexity that gets exposed or hidden, so that it becomes easier for cloud consumers to just deal with their application and not have to necessarily know or care about the underlying infrastructure and API services, but that level of detail can be exposed to them if necessary. Solum will know what infrastructure and services to set up to run applications, and it will leverage Heat and Heat templates for this. The Solum project has been very vocal about leveraging Heat under the hood for the functionality and vision of orchestration that it intends to provide. It seems, based on this thread (and a +1 from me), that enough people are interested in having Heat provide some level of software orchestration, even if it's just bootstrapping other CM tools and coordinating the "when are you done?" part, and I haven't heard any Solum folks object to Heat implementing software orchestration capabilities... So, I'm looking forward to great discussions on this topic for Heat at the summit. If you recall, Adrian Otto (who announced project Solum) was also the one who was vocal at the Portland summit about the need for HOT syntax. I think both projects are on a good path with a lot of fun collaboration time ahead. Kind regards, -Keith On 10/24/13 7:56 AM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote: Hi all, maybe a bit off track with respect to latest concrete discussions, but I noticed the announcement of project Solum on openstack-dev. Maybe this is playing on a different level, but I still see some relation to all the software orchestration we are having. 
What are your opinions on this? BTW, I just posted a similar short question in reply to the Solum announcement mail, but some of us have mail filters and might read [Heat] mail with higher prio, and I was interested in the Heat view. Cheers, Thomas Patrick Petit patrick.pe...@bull.net wrote on 24.10.2013 12:15:13: From: Patrick Petit patrick.pe...@bull.net To: OpenStack Development Mailing List openstack-dev@lists.openstack.org, Date: 24.10.2013 12:18 Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal Sorry, I clicked the 'send' button too quickly. On 10/24/13 11:54 AM, Patrick Petit wrote: Hi Clint, Thank you! I have a few replies/questions in-line. Cheers, Patrick On 10/23/13 8:36 PM, Clint Byrum wrote: Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700: Dear Steve and All, If I may add up on this already busy thread to share our experience with using Heat in large and complex software deployments. Thanks for sharing Patrick, I have a few replies in-line. I work on a project which precisely provides additional value at the articulation point between resource orchestration automation and configuration management. We rely on Heat and chef-solo respectively for these base management functions. On top of this, we have developed an event-driven workflow to manage the life-cycles of complex software stacks whose primary purpose is to support middleware components as opposed to end-user apps. Our use cases are peculiar in the sense that software setup (install, config, contextualization) is not a one-time operation issue but a continuous thing that can happen at any time in the life-span of a stack. Users can deploy (and undeploy) apps a long time after the stack is created. Auto-scaling may also result in an asynchronous apps deployment. More about this later. The framework we have designed works well for us. 
It clearly refers to a PaaS-like environment which I understand is not the topic of the HOT software configuration proposal(s), and that's absolutely fine with us. However, the question for us is whether the separation of software config from resources would make our life easier or not. I think the answer is definitely yes, but on the condition that the DSL extension preserves almost everything from the expressiveness of the resource element. In practice, I think that a strict separation between resource and component will be hard to achieve because we'll always need a little bit of application-specific detail in the resources. Take for example the case of the SecurityGroups. The ports open in a SecurityGroup are application specific. Components can only be made up of the things that are common to all users of said component. Also components would, if I understand the concept correctly, just be for things that are at the sub-resource level. Security groups and open ports would be across multiple resources, and thus would be
Re: [openstack-dev] [Neutron] L3 router service integration with Service Type Framework
Hi Geoff, This is because I haven't added a spec to the BP yet. Gary On Thu, Oct 24, 2013 at 4:51 PM, Geoff Arnold ge...@geoffarnold.com wrote: I’m getting a *“**Not allowed here”* error when I click through to the BP. (Yes, I’m subscribed.) On Oct 24, 2013, at 11:56 AM, Gary Duan gd...@varmour.com wrote: Hi, I've registered a BP for L3 router service integration with service framework. https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type In general, the implementation will align with how LBaaS is integrated with the framework. One consideration we heard from several team members is to be able to support vendor specific features and extensions in the service plugin. Any comment is welcome. Thanks, Gary ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
So, I observe a consensus here that long migrations suck; +1 to that. I also observe a consensus that we need to get no-downtime schema changes working. It seems super important. Also +1 to that. Getting back to the original review, it got -2'd because Michael would like to make sure that the benefit outweighs the cost of the downtime. I completely agree with that; so far we've heard arguments from both Jay and Boris as to why this is faster/slower, but I think some sort of evidence other than hearsay is needed. Can we get some sort of benchmark result that clearly illustrates the performance consequences of the migration in the long run? -Mike On Thu, Oct 24, 2013 at 4:53 PM, Boris Pavlovic bo...@pavlovic.me wrote: Michael, - pruning isn't done by the system automatically, so we have to assume it never happens We are working on it: https://blueprints.launchpad.net/nova/+spec/db-purge-engine - we need to have a clearer consensus about what we think the maximum size of a nova deployment is. Are we really saying we don't support nova installs with a million instances? If so what is the maximum number of instances we're targeting? Having a top level size in mind isn't a bad thing, but I don't think we have one at the moment that we all agree on. Until that happens I'm going to continue targeting the largest databases people have told me about (plus a fudge factor). Rally https://wiki.openstack.org/wiki/Rally should help us determine this. At the moment I can only rely on theoretical knowledge (and people have said that even 1 million instances won't work in the current nova implementation). Best regards, Boris Pavlovic On Fri, Oct 25, 2013 at 2:35 AM, Michael Still mi...@stillhq.com wrote: On Fri, Oct 25, 2013 at 9:07 AM, Boris Pavlovic bo...@pavlovic.me wrote: Johannes, +1, purging should help here a lot. 
Sure, but my point is more: - pruning isn't done by the system automatically, so we have to assume it never happens - we need to have a clearer consensus about what we think the maximum size of a nova deployment is. Are we really saying we don't support nova installs with a million instances? If so what is the maximum number of instances we're targeting? Having a top level size in mind isn't a bad thing, but I don't think we have one at the moment that we all agree on. Until that happens I'm going to continue targeting the largest databases people have told me about (plus a fudge factor). Michael -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
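A benchmark along the lines Mike asks for could start as small as this sketch: populate a table to a target row count, run the candidate migration, and report wall-clock time. Everything here is illustrative — SQLite stands in for MySQL, the table and column names are made up, and a meaningful run would target MySQL at production scale with realistic data:

```python
# Illustrative migration-timing harness. SQLite stands in for MySQL;
# table/column names are invented for the example.
import sqlite3
import time


def time_migration(conn, migrate, rows=10000):
    """Populate a throwaway table, run the migration, return elapsed seconds."""
    conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, info TEXT)")
    conn.executemany("INSERT INTO instances (info) VALUES (?)",
                     [("x" * 36,) for _ in range(rows)])
    conn.commit()
    start = time.time()
    migrate(conn)
    conn.commit()
    return time.time() - start


def copy_rebuild_migration(conn):
    """Table rebuild standing in for a TEXT -> VARCHAR conversion that
    forces a full table copy (as older MySQL does on ALTER TABLE)."""
    conn.execute("CREATE TABLE instances_new "
                 "(id INTEGER PRIMARY KEY, info VARCHAR(36))")
    conn.execute("INSERT INTO instances_new SELECT id, info FROM instances")
    conn.execute("DROP TABLE instances")
    conn.execute("ALTER TABLE instances_new RENAME TO instances")
```

Running it at several row counts (1e5, 1e6, 1e7) would give the growth curve of downtime versus database size, which is the evidence the review is missing.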
Re: [openstack-dev] [Heat] HOT Software configuration proposal
On 24/10/13 11:54 +0200, Patrick Petit wrote: Hi Clint, Thank you! I have few replies/questions in-line. Cheers, Patrick On 10/23/13 8:36 PM, Clint Byrum wrote: Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700: Dear Steve and All, If I may add up on this already busy thread to share our experience with using Heat in large and complex software deployments. Thanks for sharing Patrick, I have a few replies in-line. I work on a project which precisely provides additional value at the articulation point between resource orchestration automation and configuration management. We rely on Heat and chef-solo respectively for these base management functions. On top of this, we have developed an event-driven workflow to manage the life-cycles of complex software stacks which primary purpose is to support middleware components as opposed to end-user apps. Our use cases are peculiar in the sense that software setup (install, config, contextualization) is not a one-time operation issue but a continuous thing that can happen any time in life-span of a stack. Users can deploy (and undeploy) apps long time after the stack is created. Auto-scaling may also result in an asynchronous apps deployment. More about this latter. The framework we have designed works well for us. It clearly refers to a PaaS-like environment which I understand is not the topic of the HOT software configuration proposal(s) and that's absolutely fine with us. However, the question for us is whether the separation of software config from resources would make our life easier or not. I think the answer is definitely yes but at the condition that the DSL extension preserves almost everything from the expressiveness of the resource element. In practice, I think that a strict separation between resource and component will be hard to achieve because we'll always need a little bit of application's specific in the resources. Take for example the case of the SecurityGroups. 
The ports open in a SecurityGroup are application specific. Components can only be made up of the things that are common to all users of said component. Also components would, if I understand the concept correctly, just be for things that are at the sub-resource level. Security groups and open ports would be across multiple resources, and thus would be separately specified from your app's component (though it might be useful to allow components to export static values so that the port list can be referred to along with the app component). Then, designing a Chef or Puppet component type may be harder than it looks at first glance. Speaking of our use cases, we still need a little bit of scripting in the instance's user-data block to set up a working chef-solo environment. For example, we run librarian-chef prior to starting chef-solo to resolve the cookbook dependencies. A cookbook can present itself as a downloadable tarball, but that's not always the case. A chef component type would have to support getting a cookbook from a public or private git repo (maybe subversion), handle situations where there is one cookbook per repo or multiple cookbooks per repo, let the user choose a particular branch or label, provide ssh keys if it's a private repo, and so forth. We support all of these scenarios and can provide more detailed requirements if needed. Correct me if I'm wrong though: all of those scenarios are just variations on standard inputs into chef. So the chef component really just has to allow a way to feed data to chef. I am not sure adding component relations like the 'depends-on' would really help us, since it is the job of config management to handle software dependencies. Also, it doesn't address the issue of circular dependencies. Circular dependencies occur in complex software stack deployments. Example. 
When we set up a Slurm virtual cluster, both the head node and compute nodes depend on one another to complete their configuration, and so they would wait for each other indefinitely if we were to rely on the 'depends-on'. In addition, I think it's critical to distinguish between configuration parameters which are known ahead of time, like a db name or user name and password, versus contextualization parameters which are known after the fact, generally when the instance is created. Typically those contextualization parameters are IP addresses, but not only that. The fact that packages x, y, z have been properly installed and services a, b, c successfully started is contextualization information (a.k.a. facts) which may be indicative that other components can move on to the next setup stage. The form of contextualization you mention above can be handled by a slightly more capable wait condition mechanism than we have now. I've been suggesting that this is the interface that workflow systems should use. The case of complex deployments with or without circular dependencies is typically resolved by making the system converge toward the desirable
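The convergence idea discussed in this thread — re-running each node's configuration until every node reports ready, instead of a one-shot 'depends-on' ordering that deadlocks on circular dependencies — can be sketched abstractly like this. The node model is hypothetical; real systems would re-run chef/puppet and check facts, not Python callables:

```python
# Abstract sketch of convergence: keep re-applying each node's configuration
# step until all nodes report ready, so mutually dependent nodes (e.g. a
# head node and compute nodes) can complete over several rounds.

def converge(nodes, max_rounds=10):
    """nodes: dict of name -> idempotent callable returning True once ready."""
    for _ in range(max_rounds):
        pending = [name for name, step in nodes.items() if not step()]
        if not pending:
            return True  # every node converged
    return False  # gave up; some dependency never became satisfiable
```

With a one-shot ordering, two nodes that each wait on the other never start; with convergence, each round makes whatever progress is currently possible until the mutual facts are in place.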
Re: [openstack-dev] Cinder: create volume hold 'error' state. ( Full cinder-volume.log)
Done! On this page, https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/issues/89 , I found someone hitting the same problem as mine. It's because --tid=1 already exists for ietadm. Delete the newly created volume with status 'error', and type the following command in the console: ietadm --op delete --tid=1 At 2013-10-25 09:46:46, ifzing ifz...@126.com wrote: Hi Thomas, all, Thomas, thank you for your proposal. This time I bring my full 'cinder-volume.log'. Indeed, the log file contains a large amount of repeated information. The following section (which seems to be the error to me) is from cinder-volume.log, and I have attached the full log file. -- 2013-10-24 20:24:20 DEBUG [cinder.utils] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf vgs --noheadings --nosuffix --unit=G -o name,size,free cinder-volumes 2013-10-24 20:24:20 DEBUG [cinder.manager] Notifying Schedulers of capabilities ... 2013-10-24 20:24:20 DEBUG [cinder.openstack.common.rpc.amqp] Making asynchronous fanout cast... 2013-10-24 20:24:20 DEBUG [cinder.openstack.common.rpc.amqp] UNIQUE_ID is b22a669463ce4ad49613ab6d60b599c4. 
2013-10-24 20:24:20DEBUG [cinder.openstack.common.rpc.amqp] Pool creating new connection 2013-10-24 20:24:20 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on localhost:5672 2013-10-24 20:24:20 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on localhost:5672 2013-10-24 20:24:20DEBUG [cinder.service] Creating Consumer connection for Service cinder-volume 2013-10-24 20:24:45DEBUG [cinder.openstack.common.rpc.amqp] received {u'_context_roles': [u'_member_', u'Member', u'admin'], u'_context_request_id': u'req-dc624558-e49a-4e3d-b9a2-16f5bd9babfd', u'_context_quota_class': None, u'_unique_id': u'e0295e309aaa4b1d8a48c434be8a0307', u'args': {u'request_spec': {u'volume_id': u'7ebff319-838a-4f09-807b-372be8b26c13', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': u'12', u'availability_zone': u'nova', u'size': 1, u'attach_status': u'detached', u'source_volid': None, u'volume_metadata': [], u'display_description': u'', u'snapshot_id': None, u'user_id': u'90b47b1766924e078ca9fc03e5153fd0', u'project_id': u'f822eef7155046a68d20d71f3c37ac43', u'id': u'7ebff319-838a-4f09-807b-372be8b26c13', u'metadata': {}}, u'source_volid': None, u'image_id': None, u'volume_type': {}, u'snapshot_id': None, u'resource_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': u'12', u'availability_zo ne': u'nova', u'attach_status': u'detached', u'source_volid': None, u'metadata': {}, u'volume_metadata': [], u'display_description': u'', u'snapshot_id': None, u'user_id': u'90b47b1766924e078ca9fc03e5153fd0', u'project_id': u'f822eef7155046a68d20d71f3c37ac43', u'id': u'7ebff319-838a-4f09-807b-372be8b26c13', u'size': 1}}, u'volume_id': u'7ebff319-838a-4f09-807b-372be8b26c13', u'allow_reschedule': True, u'filter_properties': {u'request_spec': {u'volume_id': u'7ebff319-838a-4f09-807b-372be8b26c13', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': u'12', 
u'availability_zone': u'nova', u'size': 1, u'attach_status': u'detached', u'source_volid': None, u'volume_metadata': [], u'display_description': u'', u'snapshot_id': None, u'user_id': u'90b47b1766924e078ca9fc03e5153fd0', u'project_id': u'f822eef7155046a68d20d71f3c37ac43', u'id': u'7ebff319-838a-4f09-807b-372be8b26c13', u'metadata': {}}, u'source_volid': None, u'image_id': None, u'volume_type': {}, u'snapshot_id': None, u'resource_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': u'12', u'availability_zone': u'nova', u'attach_status': u'detached', u'source_volid': None, u'metadata': {}, u'volume_metadata': [], u'display_description': u'', u'snapshot_id': None, u'user_id': u'90b47b1766924e078ca9fc03e5153fd0', u'project_id': u'f822eef7155046a68d20d71f3c37ac43', u'id': u'7ebff319-838a-4f09-807b-372be8b26c13', u'size': 1}}, u'user_id': u'90b47b1766924e078ca9fc03e5153fd0', u'availability_zone': u'nova', u'volume_type': {}, u'config_options': {}, u'retry': {u'num_attempts': 1, u'hosts': [u'SDE-main-controller']}, u'size': 1, u'resource_type': {}, u'metadata': {}}, u'source_volid': None, u'image_id': None, u'snapshot_id': None}, u'_context_tenant': u'f822eef7155046a68d20d71f3c37ac43', u'_context_auth_token': 'SANITIZED', u'_context_timestamp': u'2013-10-24T12:24:44.857271', u'_context_is_admin': False, u'version': u'1.4', u'_context_project_id': u'f82 2eef7155046a68d20d71f3c37ac43', u'_context_user': u'90b47b1766924e078ca9fc03e5153fd0', u'_context_read_deleted': u'no', u'_context_user_id': u'90b47b1766924e078ca9fc03e5153fd0', u'method': u'create_volume', u'_context_remote_address': u'9.186.91.128'} 2013-10-24 20:24:45DEBUG [cinder.openstack.common.rpc.amqp] unpacked context: {'user_id': u'90b47b1766924e078ca9fc03e5153fd0', 'roles':
[openstack-dev] [savanna] team meeting minutes Oct 24
Thanks everyone who have joined Savanna meeting. Here are the logs from the meeting: Minutes: http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-10-24-18.05.html Minutes (text): http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-10-24-18.05.txt Log: http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-10-24-18.05.log.html Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Does DB schema hygiene warrant long migrations?
On 10/24/2013 05:19 PM, Johannes Erdfelt wrote: On Fri, Oct 25, 2013, Michael Still mi...@stillhq.com wrote: Because I am a grumpy old man I have just -2'ed https://review.openstack.org/#/c/39685/ and I wanted to explain my rationale. Mostly I am hoping for a consensus to form -- if I am wrong then I'll happily remove my vote from this patch. This patch does the reasonably sensible thing of converting two columns from being text to varchar, which reduces their expense to the database. Given the data stored is already of limited length, it doesn't impact our functionality at all either. However, when I run it with medium sized (30 million instances) databases, the change does cause a 10 minute downtime. I don't personally think the change is worth such a large outage, but perhaps everyone else disagrees. I'm not sure how you could have 30 million instances. That's a lot of hardware! :) However, in our Rackspace sized deploys (less than 30 million instances), we've seen many migrations take longer than 10 minutes. DB migrations are one of the biggest problems we've been facing lately. A lot of the migrations done over the past number of months have ended up causing a lot of pain considering the value they bring. For instance, migration 185 was particularly painful. It only renamed the indexes, but it required rebuilding them. This took a long time for such a simple task. So I'm very interested in figuring out some sort of solution that makes database migrations much less painful. That said, I'm hesitant to say that cleanups like these shouldn't be done. At a certain point we'll build a significant amount of technical debt around the database that we're afraid to touch. PS: I could see a more complicated approach where we did these changes in flight by adding columns, using a periodic task to copy data to the new columns, and then dropping the old. That's a lot more complicated to implement though. You mean an expand/contract style of migrations? 
It's been discussed at previous summits, but it's a lot of work. It's also at the mercy of the underlying database engine. For instance, MySQL (depending the version and the underlying database engine) will recreate the table when adding columns. This will grab a lock and take a long time. http://dev.mysql.com/doc/refman/5.6/en/innodb-online-ddl.html http://dev.mysql.com/doc/refman/5.6/en/innodb-create-index-overview.html#innodb-online-ddl-summary-grid Add column is an online operation in modern MySQL. If you are running a real production system, you should ALWAYS use current MySQL. If you are out there, and you have a schema large enough for this to be an issue, you need to be running modern MySQL. That said - I TOTALLY support all of the statements above about doing the schema upgrades in a sane manner. It's the right thing to do. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
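For readers unfamiliar with the expand/contract style Johannes refers to, here is a rough sketch of its three separately deployable phases (which is also essentially Michael's PS). Table and column names are invented for illustration; SQLite stands in for MySQL, and real Nova migrations of this era would go through sqlalchemy-migrate rather than raw SQL:

```python
# Rough sketch of expand/contract as three separately deployable phases.
# Names are illustrative, not real Nova migrations.
import sqlite3


def expand(conn):
    # Phase 1: add the new column. Old code keeps reading/writing the old one.
    conn.execute("ALTER TABLE instances ADD COLUMN hostname_v2 VARCHAR(255)")


def migrate_data(conn, batch_size=1000):
    # Phase 2: backfill in small batches (e.g. from a periodic task) while
    # both columns coexist, so no long lock is ever held.
    while True:
        rows = conn.execute(
            "SELECT id, hostname FROM instances "
            "WHERE hostname_v2 IS NULL LIMIT ?", (batch_size,)).fetchall()
        if not rows:
            break
        conn.executemany(
            "UPDATE instances SET hostname_v2 = ? WHERE id = ?",
            [(hostname, row_id) for row_id, hostname in rows])
        conn.commit()


def contract(conn):
    # Phase 3: once all code reads only the new column, drop the old one.
    # SQLite needs a table rebuild; MySQL would use ALTER ... DROP COLUMN.
    conn.execute("CREATE TABLE instances_new "
                 "(id INTEGER PRIMARY KEY, hostname_v2 VARCHAR(255))")
    conn.execute("INSERT INTO instances_new SELECT id, hostname_v2 FROM instances")
    conn.execute("DROP TABLE instances")
    conn.execute("ALTER TABLE instances_new RENAME TO instances")
```

The cost, as noted above, is that every schema change becomes three deploys plus code that tolerates both schemas in the middle phase — which is exactly why it's "a lot of work".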
[openstack-dev] [Savanna] Savanna on Bare Metal and Base Requirements
Hello Savanna team, I've just skimmed through the online documentation and I'm very interested in this project. We have a grizzly environment with all the latest patches as well as several Havana backports applied. We are doing bare metal provisioning through Nova. It is limited to flat networking. Would Savanna work in this environment? What are the requirements? What is the minimum set of API calls that need to be supported (for example, we can't support snapshots)? Thank you, Travis ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Announcing Project Solum
Thomas and Russell, On Oct 24, 2013, at 8:03 AM, Russell Bryant rbry...@redhat.com wrote: On 10/24/2013 08:51 AM, Thomas Spatzier wrote: Hi Adrian, really interesting! I wonder what the relation is to all the software orchestration discussions in Heat that have been going on for a while now. It's a good question. Personally, I would expect Heat to be a key element of how Solum works. There are a number of things Solum needs to do on top of Heat, though, such as the git integration bit. Yes, that's exactly right. Heat is a key component for Solum. We are going in a direction that reaches further into the realm of developer productivity, but are not planning to re-implement stuff that's already in OpenStack. As Heat and Nova and Glance and Keystone, etc… all get better, so will Solum by nature of leveraging them. Adrian -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove] About single entry point in trove-guestagent
On Oct 23, 2013, at 7:03 AM, Illia Khudoshyn wrote: Hi Denis, Michael, Vipul and all, I noticed a discussion in irc about adding a single entry point (sort of 'SuperManager') to the guestagent. Let me add my 5 cents. I agree that we should ultimately avoid code duplication. But from my experience, only a very small part of the GA Manager can be considered really duplicated code, namely Manager#prepare(). A 'backup' part may be another candidate, but I'm not sure yet. It may still be rather service type specific. All the rest of the code was just delegating. Yes, currently that is the case :) If we add a 'SuperManager' all we'll have is just more delegation: 1. There is no use for dynamic loading of the corresponding Manager implementation, because there will never be more than one service type supported on a concrete guest. So the current implementation with a configurable service_type-ManagerImpl dictionary looks good to me. 2. Nor does the 'SuperManager' provide a common interface for Manager -- due to the dynamic nature of Python. As has been said, trove.guestagent.api.API provides the list of methods with parameters we need to implement. What I'd like to have is a description of types for those params as well as return types. (Man, I miss static typing). All we can do for that is make sure we have proper unittests with REAL values for params and returns. As for the common part of the Manager's code, I'd go for extracting that into a mixin. When we started talking about it, I mentioned to one of the rackspace trove developers privately we might be able to solve this effectively w/ a mixin instead of more parent classes :) I would like to see an example of both of them. At the end of the day all I care about is not having more copy pasta between manager impls as we grow the common stuff, even if that is just a method call in each guest to call each bit of common code. Thanks for your attention. -- Best regards, Illia Khudoshyn, Software Engineer, Mirantis, Inc. 38, Lenina ave. 
Kharkov, Ukraine www.mirantis.com www.mirantis.ru Skype: gluke_work ikhudos...@mirantis.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev signature.asc Description: Message signed with OpenPGP using GPGMail ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
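The mixin approach discussed in this thread can be sketched roughly as below. This is an illustration only, not actual Trove code: all class and method names (GuestManagerMixin, MySqlManager, and so on) are hypothetical. The idea is that the genuinely shared flow (here, prepare()) lives once in the mixin, while each service-type manager supplies only its service-specific pieces, so no extra delegation layer is needed.

```python
class GuestManagerMixin(object):
    """Common guest-agent behaviour, mixed into each service-type manager."""

    def prepare(self, packages, overrides=None):
        """The one genuinely shared step: install, configure, start."""
        actions = []
        actions.extend(self._install_packages(packages))
        actions.extend(self._apply_overrides(overrides or {}))
        # start_service() is supplied by the concrete manager class.
        actions.append(self.start_service())
        return actions

    def _install_packages(self, packages):
        # Stand-in for real package installation; records what would happen.
        return ["install:%s" % p for p in packages]

    def _apply_overrides(self, overrides):
        # Stand-in for applying configuration overrides.
        return ["override:%s=%s" % kv for kv in sorted(overrides.items())]


class MySqlManager(GuestManagerMixin):
    """Service-specific manager; only the differing bits live here."""

    def start_service(self):
        return "start:mysql"


class RedisManager(GuestManagerMixin):
    def start_service(self):
        return "start:redis"
```

Each concrete manager keeps its own entry point (matching the existing service_type -> ManagerImpl configuration), so nothing about dynamic loading changes; the mixin only removes the copy-pasted common code.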
Re: [openstack-dev] [Neutron] FWaaS IceHouse summit prep and IRC meeting
The bump-in-the-wire mode we were referring to here is the one where the firewall has both legs on the same subnet/network. The point being made was that applying zones in that case would not make as much sense. At this point, though, there is no proposal to validate and restrict this particular case, or for that matter any combination of ports for a zone. If anyone has suggestions on what criteria to use to restrict the port membership of zones, we can definitely discuss them, but there are none on the table at the moment.

Thanks,
~Sumit.

On Thu, Oct 24, 2013 at 2:48 PM, Rajesh Mohan rajesh.mli...@gmail.com wrote:

> This is good discussion. +1 for using Neutron ports for defining zones. I
> see Kaiwei's point, but for Dell, Neutron ports make more sense.
>
> I am not sure I completely understood the bump-in-the-wire/zone
> discussion. The Dell security appliance allows using different zones with
> bump-in-the-wire. If the firewall is inserted in bump-in-the-wire mode
> between the router and the LAN hosts, then it does make sense to apply
> different zones to the ports connected to the LAN and the router. There
> are cases where end users apply the same zone on both sides, but this is
> a decision we should leave to the end customers. We should allow
> configuring zones in bump-in-the-wire mode as well.
>
> On Wed, Oct 23, 2013 at 12:08 PM, Sumit Naiksatam
> sumitnaiksa...@gmail.com wrote:
>
>> Log from today's meeting:
>> http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-23-18.02.log.html
>> Action items for some of the folks included. Please join us for the
>> meeting next week.
>>
>> Thanks,
>> ~Sumit.
>>
>> On Tue, Oct 22, 2013 at 2:00 PM, Sumit Naiksatam
>> sumitnaiksa...@gmail.com wrote:
>>
>>> Reminder - we will have the Neutron FWaaS IRC meeting tomorrow,
>>> Wednesday, 18:00 UTC (11 AM PDT).
>>>
>>> Agenda:
>>> * Tempest tests
>>> * Definition and use of zones
>>> * Address Objects
>>> * Counts API
>>> * Service Objects
>>> * Integration with the service type framework
>>> * Open discussion - any other topics you would like to bring up for
>>>   discussion during the summit
>>>
>>> https://wiki.openstack.org/wiki/Meetings/FWaaS
>>>
>>> Thanks,
>>> ~Sumit.
>>>
>>> On Sun, Oct 13, 2013 at 1:56 PM, Sumit Naiksatam
>>> sumitnaiksa...@gmail.com wrote:
>>>
>>>> Hi All,
>>>>
>>>> For the next phase of FWaaS development we will be considering a
>>>> number of features. I am proposing an IRC meeting on Wednesday, Oct
>>>> 16th, 18:00 UTC (11 AM PDT) to discuss this. The etherpad for the
>>>> summit session proposal is here:
>>>> https://etherpad.openstack.org/p/icehouse-neutron-fwaas
>>>> and has a high-level list of features under consideration.
>>>>
>>>> Thanks,
>>>> ~Sumit.

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
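The zone model under discussion in this thread (Neutron ports grouped into named zones, with firewall rules scoped by a source and destination zone) can be sketched roughly as below. This is not an actual Neutron FWaaS API; the data layout, function names, and port IDs are all illustrative assumptions. It shows why zones still work in bump-in-the-wire mode: even when both firewall legs sit on the same subnet, the LAN-side and router-side ports can belong to different zones purely by port membership.

```python
# Zones defined as named groups of Neutron port UUIDs (IDs are made up).
zones = {
    "lan":    {"11111111-aaaa", "22222222-bbbb"},  # ports toward LAN hosts
    "router": {"33333333-cccc"},                   # port toward the router
}

def zone_of(port_id):
    """Return the zone a port belongs to, or None if it is in no zone."""
    for name, ports in zones.items():
        if port_id in ports:
            return name
    return None

def rule_matches(rule, src_port, dst_port):
    """A firewall rule applies when the ports fall in the rule's zones."""
    return (zone_of(src_port) == rule["src_zone"]
            and zone_of(dst_port) == rule["dst_zone"])

# Traffic from LAN-side ports to the router-side port is allowed;
# the reverse direction does not match this rule.
allow_lan_to_router = {"src_zone": "lan", "dst_zone": "router",
                       "action": "allow"}
```

Restricting which ports may join a zone (the open question in the thread) would amount to adding validation when a port ID is added to one of these groups.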