Re: [Openstack] Cactus Release Preparation
Rick Clark wrote:
> Bexar was a feature release. We pushed lots of new features. The focus of Nova development in Cactus is going to be testing and stabilization.

I wonder if we shouldn't say "consistency, testing and stabilization". Feature work should be concentrated in areas where the resulting software is not consistent, covering the gaps left after a featureful release. The different groups have been pursuing specific scenarios, but as a project we want to make sure that the other combinations also work. Supporting IPv6 on FlatManager, for example, is clearly part of that. A complete toolset around the OpenStack API, maybe a plan to deprecate the objectstore...

--
Thierry Carrez (ttx)
Release Manager, OpenStack

___
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
Re: [Openstack] Cactus Release Preparation
Hello,

I wonder whether a deferred project has to submit a new blueprint, or just change the series goal described at the URL below:
https://blueprints.launchpad.net/nova/+spec/bexar-migration-live

Please let me know what I should do...

Regards,
Kei Masumoto
Re: [Openstack] Cactus Release Preparation
masumo...@nttdata.co.jp wrote:
> I wonder whether a deferred project has to submit a new blueprint, or just change the series goal described at the URL below:
> https://blueprints.launchpad.net/nova/+spec/bexar-migration-live
> Please let me know what I should do...

You should just use the existing deferred blueprint and change the series goal to Cactus (you can set Implementation to "Beta Available" at the same time).

Regards,

--
Thierry Carrez (ttx)
Release Manager, OpenStack
Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
Hello,

We, NTT DATA, also agree with the majority of folks. Shooting for the Diablo time frame for the new network service is realistic. Here are my suggestions:

- I know that several documents on the new network service were exchanged locally so far. Why not collect them in one place and share them publicly?
- I know that the discussion went a bit into implementation details. What about restarting the discussion from the higher-level design, especially from the requirements level?

Any thoughts?

Masanori

From: John Purrier j...@openstack.org
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
Date: Sat, 29 Jan 2011 06:06:26 +0900

> You are correct, the networking service will be more complex than the volume service. The existing blueprint is pretty comprehensive, encompassing not only the functionality that exists in today's network service in Nova, but also forward-looking functionality around flexible networking/openvswitch and layer 2 network bridging between cloud deployments. This will be a longer-term project and will serve as the bedrock for many future OpenStack capabilities.
>
> John

From: Thierry Carrez
Sent: Friday, January 28, 2011 1:52 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

John Purrier wrote:
> Here is the suggestion. It is clear from the responses on the list that refactoring Nova in the Cactus timeframe would be too risky, particularly as we are focusing Cactus on Stability, Reliability, and Deployability (along with a complete OpenStack API). For Cactus we should leave the network and volume services alone in Nova to minimize destabilizing the code base. In parallel, we can initiate the Network and Volume Service projects in Launchpad and allow the teams that form around these efforts to move in parallel, perhaps seeding their projects from the existing Nova code. Once we complete Cactus we can have discussions at the Diablo design summit about the progress these efforts have made, how best to move forward with Nova integration, and release targets.

I agree that there is value in starting the proof-of-concept work around the network services, without sacrificing too many developers to it, so that a good plan can be presented and discussed at the Diablo Summit. While volume sounds relatively simple to me, network sounds significantly more complex (just looking at the code, the network manager code is currently used both by nova-compute and nova-network to modify the local networking stack, so it's more than just handing out IP addresses through an API).

Cheers,

--
Thierry Carrez (ttx)
Release Manager, OpenStack
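Thierry's coupling point is worth spelling out: the same manager object serves two different daemons. A rough, hypothetical sketch (these are stand-in classes, not actual Nova code; names like `setup_compute_network` are illustrative only):

```python
# Hypothetical stand-ins, NOT actual Nova classes: they only illustrate
# why nova-network is hard to split out -- both daemons drive the same
# manager to touch the local networking stack.

class NetworkManager:
    """Stand-in for the shared network manager."""

    def __init__(self):
        self.calls = []  # record what each daemon asked for

    def allocate_fixed_ip(self, instance_id):
        # nova-network side: hand out an address from a pool.
        ip = "10.0.0.%d" % (instance_id + 2)
        self.calls.append(("allocate", instance_id, ip))
        return ip

    def setup_compute_network(self, instance_id):
        # nova-compute side: configure bridges/VLANs on the compute host.
        # This is what makes the service more than an IP-handing API.
        self.calls.append(("setup_bridge", instance_id))


def nova_network_service(manager, instance_id):
    """What the nova-network daemon would call."""
    return manager.allocate_fixed_ip(instance_id)


def nova_compute_service(manager, instance_id):
    """What the nova-compute daemon would call, on a different host."""
    manager.setup_compute_network(instance_id)


manager = NetworkManager()
ip = nova_network_service(manager, 1)  # "10.0.0.3"
nova_compute_service(manager, 1)
```

Putting the manager behind a pure remote API would leave nova-compute with no way to drive the local bridge setup, which is the complexity Thierry is pointing at.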
Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
I will collect the documents together as you suggest, and I agree that we need to get the requirements laid out again. Please subscribe to the blueprint on Launchpad -- that way you will be notified of updates.

https://blueprints.launchpad.net/nova/+spec/bexar-network-service

Thanks,

Ewan.
Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
In order to bring this discussion to a close and get everyone on the same page for Cactus development, here is where we have landed:

1. We will *not* be separating the network and volume controllers and API servers from the Nova project.

2. Ongoing work to extend the Nova capabilities in these areas will be done within the existing project and will be based on extending the existing implementation. The folks working on these projects will determine the best approach for code re-use, extending functionality, and potential integration of additional community contributions in each area.

3. Like all efforts for Cactus, correct trade-offs must be made to maintain deployability, stability, and reliability (key themes of the release).

4. Core design concepts must be maintained: each service can scale horizontally and independently, presents public/management/event interfaces through a documented OpenStack API, and can be deployed independently of the others. If issues arise that do not allow the current code structure to support these concepts, the teams should raise them and open discussions on how best to address them.

We will target the Diablo design summit to discuss and review the progress made on these services and determine whether we have taken the best approach to the project.

Thoughts?

John

From: Andy Smith
Sent: Friday, January 28, 2011 4:06 PM
To: John Purrier
Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; openstack@lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

On Fri, Jan 28, 2011 at 1:19 PM, John Purrier j...@openstack.org wrote:

> Thanks for the response, Andy. I think we actually agree on this :-).
>
> You said: "This statement is invalid, nova is already broken into services, each of which can be dealt with individually and scaled as such; whether the code is part of the same repository has little bearing on that. The goals of scaling are orthogonal to the location of the code and are much more related to separation of concerns in the code, making sure that volume code does not rely on compute code for example (which at this point it doesn't particularly)."
>
> The fact that the volume code and the compute code are not coupled makes the separation easy. One factor that I did not mention is that each service will present public, management, and optional extension APIs, allowing each service to be deployed independently.

So far that is all possible under the existing auspices of Nova. DirectAPI will happily sit in front of any of the services independently; any of the services, when run, can be configured to point at different instances of RabbitMQ; DirectAPI supports a large amount of extensibility, and pluggable managers/drivers support a bunch more. Decoupling of the code has always been a goal, as has been providing public, management, and extension APIs, and we aren't doing so badly. I don't think we disagree about wanting to run things independently, but for the moment I have seen no convincing arguments for separating the codebase.

> You said: "That suggestion is contradictory: first you say not to separate, then you suggest creating separate projects. I am against creating separate projects; the development is part of Nova until at least Cactus."
>
> This is exactly my suggestion below. Keep Nova monolithic until Cactus, then integrate the new services once Cactus is shipped. There is work to be done to create the service frameworks, API engines, extension mechanisms, and to port the existing functionality. All of this can be done in parallel to the stability work being done in the Nova code base. As far as I know there are not major updates coming in either the volume or network management code for this milestone.

Where is this parallel work being done if not in a separate project?

--andy

John

From: Andy Smith
Sent: Friday, January 28, 2011 12:45 PM
To: John Purrier
Cc: Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; openstack@lists.launchpad.net
Subject: Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

On Fri, Jan 28, 2011 at 10:18 AM, John Purrier j...@openstack.org wrote:

> Some clarification and a suggestion regarding Nova and the two new proposed services (Network/Volume). To be clear, Nova today contains both volume and network services. We can specify, attach, and manage block devices and also specify network-related items, such as IP assignment and VLAN creation. I have heard there is some confusion on this, since we started talking about creating OpenStack services around these areas that will be separate from the cloud controller (Nova). The driving factors to consider creating independent services for VM, Images, Network, and Volumes are 1) To allow deployment
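Andy's claim that the services can already run against separate message queues comes down to per-service flagfiles. A minimal sketch, assuming the gflags-style flagfiles Nova used at the time (`--network_manager` and `--rabbit_host` follow the flag-naming conventions of that era; every value below is made up for illustration):

```
# Illustrative flagfile for a standalone nova-network deployment
--network_manager=nova.network.manager.VlanManager
--rabbit_host=rabbit-net.example.com

# Illustrative flagfile for a standalone nova-volume deployment
--rabbit_host=rabbit-vol.example.com
```

Each daemon reads its own flagfile, so the two services can point at entirely different RabbitMQ instances and plug in different managers without any change to the shared codebase.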
Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
This has my support. For our time frame and the goal of robustness and stability for the upcoming release, this is the most reasonable course of action.

Devin

On Jan 31, 2011, at 10:40 AM, John Purrier wrote:
> In order to bring this discussion to a close and get everyone on the same page for Cactus development, here is where we have landed: [...]
Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
On Mon, Jan 31, 2011 at 1:42 PM, Devin Carlen devin.car...@gmail.com wrote:
> This has my support. For our time frame and the goal of robustness and stability for the upcoming release, this is the most reasonable course of action.

Seconded.

-jay
Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
+1

On Jan 31, 2011, at 10:40 AM, John Purrier wrote:
> In order to bring this discussion to a close and get everyone on the same page for Cactus development, here is where we have landed: [...]
Re: [Openstack] Cactus Release Preparation
I would suggest that the theme(s) for the Cactus release be:

a. Deployability. This includes consistent packaging and deployment tools support, but also good, consistent documentation; approachability of the project (how quickly can a novice get a running system going for a proof of concept?); and deployability at larger scale (including reference materials around hardware and networking choices, operational concerns, and multi-machine deployment orchestration).

b. Stability. I agree with both Rick and Thierry: we need to get the existing features stable and available for additional and larger-scale testing environments. We will be focusing on providing additional test automation, extending into automated functional testing. Contributors such as Rackspace will be setting up larger testing environments (on the order of hundreds of machines) to ensure that we are stable at scale as well.

c. Reliability. Once a configuration is stood up and operational, it needs to run with only normal operational attention. This will mean additional attention to operational concerns such as longer-term test runs, memory leak detection, working set evaluation, etc.

d. Consistency. Thierry is right on: we need OpenStack to be consistent within each project and across projects. This will include looking at scenarios that break our goal of being hypervisor-agnostic, at API definitions and approach, at developer documentation, and at other areas where teams might be optimizing locally but creating an unfinished view of the project.

e. OpenStack API completed. We need to complete a working set of APIs that are consistent and inclusive of all the exposed functionality. The OpenStack API will be an amalgam of the underlying services; we need to ensure that the application developer experience is smooth and logical. The DirectAPI calls will be exposed to project developers and committers, but the public OpenStack API for application developers will need to be stable, repeatable, versioned, and extensible. Developer documentation will need to address the fact that the OpenStack API will consist of fixed and well-known core calls, plus additional calls introduced by services via the extension mechanisms.

Thoughts?

John
[Openstack] Multi Clusters in a Region ...
Hi y'all,

Now that the Network and API discussions have settled down a little, I thought I'd kick up the dust again. I'm slated to work on the Multi Cluster in a Region BP for Cactus. This also touches on Zone/Host Capabilities and Distributed Scheduler, so feedback is important.

https://blueprints.launchpad.net/nova/+spec/multi-cluster-in-a-region

Here is my first draft of a spec. I'm putting it out there as a strawman; please burn as needed. Links to previous specs/notes are at the top of the spec.

http://wiki.openstack.org/MultiClusterZones

I will adjust as feedback is gathered. We can discuss this in this thread or on the Etherpad (I prefer the Etherpad since it's linked to the wiki page):

http://etherpad.openstack.org/multiclusterdiscussion

Thanks in advance,
Sandy
Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint
On Mon, Jan 31, 2011 at 10:40 AM, John Purrier j...@openstack.org wrote: In order to bring this discussion to a close and get everyone on the same page for Cactus development, here is where we have landed: 1. We will **not** be separating the network and volume controllers and API servers from the Nova project. I think this is definitely the right move. 2. On-going work to extend the Nova capabilities in these areas will be done within the existing project and be based on extending the existing implementation. The folks working on these projects will determine the best approach for code re-use, extending functionality, and potential integration of additional community contributions in each area. 3. Like all efforts for Cactus, correct trade-offs must be made to maintain deployability, stability, and reliability (key themes of the release). 4. Core design concepts allowing each service to horizontally scale independently, present public/management/event interfaces through a documented OpenStack API, and allow services to be deployed independently of each other must be maintained. If issues arise that do not allow the current code structure to support these concepts the teams should raise the issues and open discussions on how to best address. We will target the Diablo design summit to discuss and review the progress made on these services and determine if the best approach to the project has been made. Thoughts? John *From:* Andy Smith [mailto:andys...@gmail.com] *Sent:* Friday, January 28, 2011 4:06 PM *To:* John Purrier *Cc:* Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; openstack@lists.launchpad.net *Subject:* Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint On Fri, Jan 28, 2011 at 1:19 PM, John Purrier j...@openstack.org wrote: Thanks for the response, Andy. I think we actually agree on this J. 
You said: *This statement is invalid; nova is already broken into services, each of which can be dealt with individually and scaled as such, and whether the code is part of the same repository has little bearing on that. The goals of scaling are orthogonal to the location of the code and are much more related to separation of concerns in the code, making sure that volume code does not rely on compute code, for example (which at this point it doesn't particularly).*

The fact that the volume code and the compute code are not coupled makes the separation easy. One factor that I did not mention is that each service will present public, management, and optional extension APIs, allowing each service to be deployed independently.

So far that is all possible under the existing auspices of Nova. DirectAPI will happily sit in front of any of the services independently, any of the services can be configured with different instances of RabbitMQ to point at, DirectAPI supports a large amount of extensibility, and pluggable managers/drivers support a bunch more. Decoupling the code has always been a goal, as has providing public, management, and extension APIs, and we aren't doing so badly. I don't think we disagree about wanting to run things independently, but for the moment I have seen no convincing arguments for separating the codebase.

You said: *That suggestion is contradictory; first you say not to separate, then you suggest creating separate projects. I am against creating separate projects; the development is part of Nova until at least Cactus.*

This is exactly my suggestion below. Keep Nova monolithic until Cactus, then integrate the new services once Cactus is shipped. There is work to be done to create the service frameworks, API engines, and extension mechanisms, and to port the existing functionality. All of this can be done in parallel to the stability work being done in the Nova code base.
As far as I know there are no major updates coming in either the volume or network management code for this milestone. Where is this parallel work being done, if not in a separate project?

--andy

John

*From:* Andy Smith [mailto:andys...@gmail.com]
*Sent:* Friday, January 28, 2011 12:45 PM
*To:* John Purrier
*Cc:* Rick Clark; Jay Pipes; Ewan Mellor; Søren Hansen; openstack@lists.launchpad.net
*Subject:* Re: [Openstack] Network Service for L2/L3 Network Infrastructure blueprint

On Fri, Jan 28, 2011 at 10:18 AM, John Purrier j...@openstack.org wrote:

Some clarification and a suggestion regarding Nova and the two new proposed services (Network/Volume). To be clear, Nova today contains both volume and network services. We can specify, attach, and manage block devices, and also specify network-related items such as IP assignment and VLAN creation. I have heard there is some confusion on this, since we started talking about creating OpenStack services around these areas that will be separate from the
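[Editor's note: the pluggable managers/drivers pattern Andy mentions above can be sketched in a few lines. This is a minimal stand-alone illustration, not Nova's actual loader; `load_driver`, `FakeVolumeDriver`, and the `--volume_driver`-style flag name are made up for the example.]

```python
import importlib


def load_driver(dotted_path):
    """Resolve 'package.module.ClassName' to an instance.

    A service picks its backend from a configuration flag at startup,
    so deployers can swap implementations without touching service code.
    """
    module_name, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)()


class FakeVolumeDriver:
    """Hypothetical stand-in backend; a real deployment would point the
    flag at an LVM, iSCSI, or vendor-specific driver class instead."""

    def create_volume(self, size_gb):
        return {"size_gb": size_gb, "status": "available"}


if __name__ == "__main__":
    # The dotted path would normally come from a flag; it is
    # hard-coded here for the demonstration.
    driver = load_driver(f"{__name__}.FakeVolumeDriver")
    print(driver.create_volume(10))  # {'size_gb': 10, 'status': 'available'}
```

The point of the pattern is that the service only ever talks to the driver's interface, so the repository the driver lives in is irrelevant to how the service runs or scales.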
Re: [Openstack] Cactus Release Preparation
John, I would agree with putting deployability at the top of the list. Right now it is operational from a developer's point of view; I think a true operations team would struggle to support it at scale.

A change I might suggest in priority is moving the API up the list. While the OS API is usable from a developer's perspective, it isn't yet in a place where it can drive real value to the community. If we miss the Cactus release without having a complete API, I think we run the risk of it not being relevant in the long term.

Paul

From: John Purrier j...@openstack.org
Date: Mon, 31 Jan 2011 13:05:34 -0600
To: 'Thierry Carrez' thie...@openstack.org, openstack@lists.launchpad.net
Subject: Re: [Openstack] Cactus Release Preparation

I would suggest that the theme(s) for the Cactus release be:

a. Deployability. This includes consistent packaging and deployment-tools support, but also good, consistent documentation, approachability of the project (how quickly can a novice get a running system going for a proof of concept), and deployability at larger scale (including reference materials around hardware and networking choices, operational concerns, and multi-machine deployment orchestration).

b. Stability. I agree with both Rick and Thierry: we need to get the existing features stable and available for additional and larger-scale testing environments. We will be focusing on providing additional test automation, moving beyond unit testing into automated functional testing. Contributors such as Rackspace will be setting up larger testing environments (on the order of hundreds of machines) to ensure that we are stable at scale as well.

c. Reliability. Once a configuration is stood up and operational, it needs to run with only normal operational attention. This will mean additional attention to operational concerns such as longer-term test runs, memory-leak detection, working-set evaluation, etc.

d. Consistency. Thierry is right on: we need OpenStack to be consistent intra-project and across projects. This will include looking at scenarios that break our goal of being hypervisor-agnostic, at API definitions and approach, at developer documentation, and at other areas where teams might be optimizing locally but creating an unfinished view of the project.

e. OpenStack API completed. We need to complete a working set of APIs that are consistent and inclusive of all the exposed functionality. The OpenStack API will be an amalgam of the underlying services; we need to ensure that the application-developer experience is smooth and logical. The DirectAPI calls will be exposed to project developers and committers, but the public OpenStack API for application developers will need to be stable, repeatable, versioned, and extensible. Developer documentation will need to address the fact that the OpenStack API will consist of fixed, well-known core calls plus additional calls introduced by services via the extension mechanisms.

Thoughts?

John

-----Original Message-----
From: openstack-bounces+john=openstack@lists.launchpad.net On Behalf Of Thierry Carrez
Sent: Monday, January 31, 2011 2:59 AM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Cactus Release Preparation

Rick Clark wrote: Bexar was a feature release. We pushed lots of new features. The focus of Nova development in Cactus is going to be testing and stabilization.

I wonder if we shouldn't say consistency, testing and stabilization. Feature work should be concentrated in areas where the resulting software is not consistent, in covering the gaps left after a featureful release. The different groups have been pursuing specific scenarios, but as a project we want to make sure that the other combinations also work.
Support for IPv6 on FlatManager, for example, is clearly part of that. A complete toolset around the OpenStack API, maybe a plan to deprecate the objectstore...

--
Thierry Carrez (ttx)
Release Manager, OpenStack
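[Editor's note: the "fixed, well-known core calls plus discoverable extensions" shape John describes in item e can be sketched as follows. This is a hedged illustration, not the actual OpenStack API surface; the route names, the `os-` alias convention, and the helper functions are invented for the example.]

```python
ROUTES = {}      # full dispatch table: core calls plus extensions
CORE = set()     # the fixed, well-known core calls
EXTENSIONS = {}  # alias -> route, kept separate so clients can discover them


def add_core(route, handler):
    """Core calls are versioned and stable; every deployment has them."""
    ROUTES[route] = handler
    CORE.add(route)


def register_extension(alias, route, handler):
    """Services add calls beyond core under their own alias, without
    touching the core set, so the core API stays repeatable."""
    if route in CORE:
        raise ValueError("extensions must not shadow core calls")
    ROUTES[route] = handler
    EXTENSIONS[alias] = route


def list_extensions():
    """A client discovers what a given deployment offers beyond core."""
    return sorted(EXTENSIONS)


add_core("GET /servers", lambda: {"servers": []})
register_extension("os-volumes", "GET /os-volumes", lambda: {"volumes": []})

print(list_extensions())               # ['os-volumes']
print(ROUTES["GET /os-volumes"]())     # {'volumes': []}
```

The design choice being argued for in the thread is visible here: because extensions are additive and namespaced, application developers can rely on the core set while still reaching service-specific calls on deployments that provide them.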