Re: [openstack-dev] distributed caching system in front of mysql server for openstack transactions
Dolph,
Thanks! It is good that a Python example is provided!
Qing

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Thursday, October 31, 2013 9:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] distributed caching system in front of mysql server for openstack transactions

On Mon, Oct 28, 2013 at 5:46 PM, Qing He <qing...@radisys.com> wrote:

In my hard-drive-less use case, I need an in-core db/cache that can be in the same db cluster as the real db (with hard drive), with the same SQL API, so that the current OpenStack code does not need to be changed; instead, just a plugin with some configuration.

This is pretty much the original use case that dogpile.cache grew out of; see: http://docs.sqlalchemy.org/en/rel_0_9/orm/examples.html#dogpile-caching

-----Original Message-----
From: Morgan Fainberg [mailto:m...@metacloud.com]
Sent: Monday, October 28, 2013 10:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] distributed caching system in front of mysql server for openstack transactions

In light of what Dolph said with regards to Keystone, we are using dogpile.cache to implement memoization in front of our driver calls. It has the ability to cache directly as well, but it has been effective (so far) for our use case. That being said, I am unsure if caching in front of MySQL is really what we want. I believe that we should be caching after processing work (hence the memoization mechanism) instead of at the SQL layer. This also means we can be measured in what we cache (oh hey, it makes no sense to cache X because it needs to be real time or there isn't a performance issue with that query/call, but Y does a ton of processing and is an expensive join/temp-table query).
In my experience, unless the whole application is designed with caching in mind, caching something as broad as MySQL calls (or any SQL store) is likely going to net exactly what Shawn Hartsock stated: adding a second performance issue.

--Morgan

On Mon, Oct 28, 2013 at 10:05 AM, Shawn Hartsock <hartso...@vmware.com> wrote:

I once heard a quote: "I had a performance problem, so I added caching. Now I have two performance problems."

this. 1,000 times this.

Just to float this thought ... make sure it's considered... I've seen a *lot* of people misuse caching when what they really want is memoization.

* http://stackoverflow.com/questions/1988804/what-is-memoization-and-how-can-i-use-it-in-python
* http://stackoverflow.com/questions/10879137/how-can-i-memoize-a-class-instantiation-in-python

... I'm not sure what you're trying to do. So YMMV, TTFN, BBQ.

# Shawn Hartsock

----- Original Message -----
From: Clint Byrum <cl...@fewbar.com>
To: openstack-dev <openstack-dev@lists.openstack.org>
Sent: Monday, October 28, 2013 12:12:49 PM
Subject: Re: [openstack-dev] distributed caching system in front of mysql server for openstack transactions

Excerpts from Dolph Mathews's message of 2013-10-28 08:40:19 -0700:

It's not specific to mysql (or sql at all), but keystone is using dogpile.cache around driver calls to a similar effect.

http://dogpilecache.readthedocs.org/en/latest/

It can persist to memcache, redis, etc.

"I once heard a quote: I had a performance problem, so I added caching. Now I have two performance problems."

Caching is unbelievably awesome in the jobs it can do well. When the problem is straightforward and the requirements are few, it is just the right thing to relieve engineering pressure to make an application more scalable. However, IMO, more than narrow, well-defined cache usage is a sign that the application needs some reworking to scale.
I like the principle of "let's use dogpile so we don't have to reinvent multi-level caching." However, let's make sure we look at each performance issue individually, rather than just throwing them all in a cache box and waving the memcache wand.

https://github.com/openstack/keystone/blob/master/keystone/common/cache/core.py

On Fri, Oct 25, 2013 at 6:53 PM, Qing He <qing...@radisys.com> wrote:

All,
Has anyone looked at the options of putting a distributed caching system in front of the mysql server to improve performance? This should be similar to Oracle Coherence, or VMware vFabric SQLFire.

Thanks,
Qing

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
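Morgan's "cache after processing work, at the driver layer" point can be sketched like this. This is a hypothetical stand-in for the pattern keystone gets from dogpile.cache's `cache_on_arguments` decorator, not the actual keystone code; the decorator, TTL value, and driver function are all illustrative.

```python
import time

def memoize_driver_call(ttl):
    """Memoize a driver-layer call with a TTL: the processed result is
    cached, not the raw SQL rows underneath it (sketch, not dogpile)."""
    def decorator(fn):
        cache = {}
        def wrapper(*args):
            key = (fn.__name__,) + args
            hit = cache.get(key)
            now = time.monotonic()
            if hit is not None and now - hit[0] < ttl:
                return hit[1]          # fresh enough: skip the driver
            value = fn(*args)
            cache[key] = (now, value)
            return value
        return wrapper
    return decorator

calls = []

@memoize_driver_call(ttl=60)
def get_user(user_id):
    calls.append(user_id)              # pretend this is the expensive DB hit
    return {"id": user_id, "name": "user-%s" % user_id}

get_user("abc")
get_user("abc")
assert calls == ["abc"]                # second call served from cache
```

The point of caching at this layer is that you can be selective: memoize the expensive, rarely-changing call (Y) and leave the real-time query (X) uncached, which is hard to express if the cache sits in front of MySQL itself.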
Re: [openstack-dev] [Heat] Locking and ZooKeeper - a space odyssey
Has anyone looked at any lock-free solution?

-----Original Message-----
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
Sent: Wednesday, October 30, 2013 12:20 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Locking and ZooKeeper - a space odyssey

On 10/30/2013 03:10 PM, Steven Dake wrote:

"I will -2 any patch that adds zookeeper as a dependency to Heat."

Certainly any distributed locking solution should be plugin based and optional. Just as a database-oriented solution could be the default plugin.

Re: the Java issue, we already have optional components in other languages. I know Java is a different league of pain, but if it's an optional component and left as a choice of the deployer, should we care?

-S

PS As an aside, what are your issues with ZK?
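The "plugin based and optional" locking Sandy describes could be shaped roughly like the sketch below: callers ask a factory for a named lock, the default backend is process-local, and a ZooKeeper- or database-backed implementation could be registered without touching callers. All names here are hypothetical; this is not Heat code.

```python
import threading
from contextlib import contextmanager

class LocalLockBackend:
    """Default, dependency-free backend: per-name in-process locks."""
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    @contextmanager
    def lock(self, name):
        with self._guard:
            lk = self._locks.setdefault(name, threading.Lock())
        with lk:
            yield

# A ZooKeeper (kazoo) or SELECT ... FOR UPDATE backend would register here.
_BACKENDS = {"local": LocalLockBackend}

def get_lock_backend(name="local"):
    return _BACKENDS[name]()

backend = get_lock_backend()
counter = 0

def bump():
    global counter
    for _ in range(1000):
        with backend.lock("stack-123"):  # serialize updates to one stack
            counter += 1

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 4000
```

The deployer then picks the backend in configuration, which is what makes the ZooKeeper (and Java) dependency optional rather than hard-wired.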
Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements ...
Sandy,
Does the framework account for customizable events? In my use case, I have a network-attached (or attached some other proprietary way, e.g., via a special bus protocol) device that does not fit into the OpenStack node structure. It can generate events. According to these events, the OpenStack orchestration layer (Heat?) may need to do things like failing over all VMs on one system to another system (compute node). If I would like to use the alarm/notification system, I would need to add a customized collector, my events would need to be routed to, say, Heat, and I would need to define the action (a plugin/callback?) corresponding to the event for Heat to take on my behalf. Is my approach right/supported under the framework and the current Heat (or some other component's) release?

Thanks,
Qing

-----Original Message-----
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
Sent: Tuesday, October 29, 2013 6:34 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ceilometer] Suggestions for alarm improvements ...

Hey y'all,

Here are a few notes I put together around some ideas for alarm improvements. In order to set it up I spent a little time talking about the Ceilometer architecture in general, including some of the things we have planned for Icehouse. I think Parts 1-3 will be useful to anyone looking into Ceilometer. Part 4 is where the meat of it is.

https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements

Look forward to feedback from everyone and chatting about it at the summit. If I missed something obvious, please mark it up so we can address it.

-S
Re: [openstack-dev] distributed caching system in front of mysql server for openstack transactions
Thanks Morgan. In my embedded situation, there is no hard drive where I run the OpenStack controller, keystone, and other control functions; thus, my only choice is to use an in-core db (cache) for the real-time transactions into the database, and then use another process to sync the data into the real database cluster, located off the controller node, through the network.

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Monday, October 28, 2013 8:40 AM
To: OpenStack Development Mailing List
Cc: Morgan Fainberg
Subject: Re: [openstack-dev] distributed caching system in front of mysql server for openstack transactions

It's not specific to mysql (or sql at all), but keystone is using dogpile.cache around driver calls to a similar effect.

http://dogpilecache.readthedocs.org/en/latest/

It can persist to memcache, redis, etc.

https://github.com/openstack/keystone/blob/master/keystone/common/cache/core.py

On Fri, Oct 25, 2013 at 6:53 PM, Qing He <qing...@radisys.com> wrote:

All,
Has anyone looked at the options of putting a distributed caching system in front of the mysql server to improve performance? This should be similar to Oracle Coherence, or VMware vFabric SQLFire.

Thanks,
Qing

--
-Dolph
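The write-behind arrangement described above (local in-memory store now, asynchronous sync to the off-node cluster later) can be sketched as follows. All names are hypothetical and the "real" database is stood in for by a dict; a real implementation would also need durability and failure handling for writes queued but not yet synced.

```python
import queue
import threading

memory_store = {}        # fast, local, diskless store for live transactions
real_db = {}             # stands in for the off-node database cluster
pending = queue.Queue()  # writes waiting to be synced over the network

def write(key, value):
    memory_store[key] = value   # transaction completes against local memory
    pending.put((key, value))   # queued for the background sync process

def sync_worker():
    while True:
        item = pending.get()
        if item is None:        # shutdown sentinel
            break
        key, value = item
        real_db[key] = value    # network write to the real cluster
        pending.task_done()

worker = threading.Thread(target=sync_worker)
worker.start()

write("instance-1", {"state": "ACTIVE"})
pending.join()                  # wait until the sync has caught up
pending.put(None)
worker.join()
assert real_db["instance-1"] == {"state": "ACTIVE"}
```

The risk this sketch glosses over is the one the thread keeps circling: between `write()` and the sync, the local copy is the only copy, so a node failure loses acknowledged transactions.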
[openstack-dev] [TripleO][Nova][neutron][Heat][Oslo][Ceilometer][Havana] Single Subscription Point for event notification
All,
I found multiple places/components where you can get event alarms/notifications, e.g., Heat, Ceilometer, Oslo, Nova, etc. But I fail to find any documents on how to do it in the respective components' documentation. I'm wondering if there is a document describing a single API entry point where you can subscribe to and get event notifications from all components, such as Nova and Neutron.

Thanks,
Qing
Re: [openstack-dev] [TripleO][Nova][neutron][Heat][Oslo][Ceilometer][Havana] Single Subscription Point for event notification
Sandy,
Thanks for your comprehensive report card and detailed explanation! That helps a lot! I'll follow the route of using Yagi for now.
Qing

-----Original Message-----
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
Sent: Monday, October 28, 2013 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Nova][neutron][Heat][Oslo][Ceilometer][Havana] Single Subscription Point for event notification

Here's the current adoption of notifications in OpenStack ... hope it helps!

http://www.sandywalsh.com/2013/09/notification-usage-in-openstack-report.html

-S

From: Qing He [qing...@radisys.com]
Sent: Monday, October 28, 2013 8:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Nova][neutron][Heat][Oslo][Ceilometer][Havana] Single Subscription Point for event notification

Thanks Angus! Yes, if this rpc notification mechanism works for all other components, e.g., Neutron, in addition to Nova, which seems to be the only documented component working with this notification system. For example, can we do something like:

Network.instance.shutdown/.end
Or Storage.instance.shutdown/.end
Or Image.instance.shutdown/.end
...

-----Original Message-----
From: Angus Salkeld [mailto:asalk...@redhat.com]
Sent: Monday, October 28, 2013 4:36 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Nova][neutron][Heat][Oslo][Ceilometer][Havana] Single Subscription Point for event notification

On 28/10/13 22:30 +0000, Qing He wrote:

All,
I found multiple places/components where you can get event alarms, e.g., Heat, Ceilometer, Oslo, Nova, etc. But I fail to find any documents on how to do it in the respective components' documentation. I'm wondering if there is a single API entry point where you can subscribe and get event notifications from all components, such as Nova and Neutron.
Hi,
If you are talking about rpc notifications, then this is one wiki page I know about:

https://wiki.openstack.org/wiki/SystemUsageData

(I have just added some heat notifications to it.)

-Angus

Thanks,
Qing
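For a sense of what consuming these rpc notifications involves without Yagi, here is a sketch. The dispatch function is testable on its own; the broker wiring shown in comments is the usual kombu topic-exchange pattern, and the exchange, queue, and routing-key names are assumptions that depend on each service's notification configuration rather than documented constants.

```python
def handle_notification(body):
    """Dispatch on event_type; returns the action taken (for the demo)."""
    event_type = body.get("event_type", "")
    if event_type.startswith("compute.instance.delete"):
        return "cleanup:%s" % body["payload"]["instance_id"]
    return "ignored"

# Actual consumption would look roughly like this (needs a running broker;
# names are illustrative):
#
#   from kombu import Connection, Exchange, Queue
#   nova = Exchange("nova", type="topic")
#   q = Queue("my-listener", exchange=nova, routing_key="notifications.info")
#   with Connection("amqp://guest:guest@localhost//") as conn:
#       def on_message(body, message):
#           handle_notification(body)
#           message.ack()
#       with conn.Consumer(q, callbacks=[on_message]):
#           while True:
#               conn.drain_events()

sample = {
    "event_type": "compute.instance.delete.end",
    "publisher_id": "compute.host-1",
    "payload": {"instance_id": "uuid-1234"},
}
assert handle_notification(sample) == "cleanup:uuid-1234"
```

Yagi wraps this same consume-and-dispatch loop with durable queues and retry handling, which is why it is the safer route for production.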
[openstack-dev] distributed caching system in front of mysql server for openstack transactions
All,
Has anyone looked at the options of putting a distributed caching system in front of the mysql server to improve performance? This should be similar to Oracle Coherence, or VMware vFabric SQLFire.

Thanks,
Qing
Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux
Thanks, Balaji. Did you keep it up to date with OpenStack releases, or do you still stay with Diablo?
Qing

From: Balaji Patnala [mailto:patnala...@gmail.com]
Sent: Wednesday, October 23, 2013 3:17 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

Hi Qing,
Freescale SoCs like P4080 and T4240 etc. are supported for OpenStack as well. We have been using them from the OpenStack Diablo release onwards. We demonstrated at ONS 2013, Interop 2013 and the China Road Show.

Regards,
Balaji.P

On 23 October 2013 08:57, Qing He <qing...@radisys.com> wrote:

Matt,
Great. Yes, what processor and Freescale version are you running on? Do you have something for tryout?

Thanks,
Qing

From: Matt Riedemann [mailto:mrie...@us.ibm.com]
Sent: Tuesday, October 22, 2013 8:11 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

Yeah, my team does. We're using openvswitch 1.10, qpid 0.22, DB2 10.5 (but MySQL also works). Do you have specific issues/questions? We're working on getting continuous integration testing working for the nova powervm driver in the icehouse release, so you can see some more details about what we're doing with openstack on power in this thread:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/016395.html

Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development
Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com
3605 Hwy 52 N
Rochester, MN 55901-1407
United States

From: Qing He <qing...@radisys.com>
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: 10/22/2013 07:43 PM
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

Thanks Matt.
I'd like to know if anyone has tried to run the controller, API server and MySQL database, msg queue, etc. (the brain of the OpenStack) on ppc.
Qing

From: Matt Riedemann [mailto:mrie...@us.ibm.com]
Sent: Tuesday, October 22, 2013 4:17 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

We run openstack on ppc64 with RHEL 6.4 using the powervm nova virt driver. What do you want to know?

Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

From: Qing He <qing...@radisys.com>
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: 10/22/2013 05:49 PM
Subject: [openstack-dev] [nova] Openstack on power pc/Freescale linux

All,
I'm wondering if anyone has tried OpenStack on PowerPC / Freescale Linux?

Thanks,
Qing
[openstack-dev] [nova] Openstack on power pc/Freescale linux
All,
I'm wondering if anyone has tried OpenStack on PowerPC / Freescale Linux?

Thanks,
Qing
Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux
Thanks Matt.
I'd like to know if anyone has tried to run the controller, API server and MySQL database, msg queue, etc. (the brain of the OpenStack) on ppc.
Qing

From: Matt Riedemann [mailto:mrie...@us.ibm.com]
Sent: Tuesday, October 22, 2013 4:17 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

We run openstack on ppc64 with RHEL 6.4 using the powervm nova virt driver. What do you want to know?

Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development
Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com
3605 Hwy 52 N
Rochester, MN 55901-1407
United States

From: Qing He <qing...@radisys.com>
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: 10/22/2013 05:49 PM
Subject: [openstack-dev] [nova] Openstack on power pc/Freescale linux

All,
I'm wondering if anyone has tried OpenStack on PowerPC / Freescale Linux?

Thanks,
Qing
Re: [openstack-dev] [horizon] dashboard not showing all hard drive
Hi,
My system hard drive of 250G was divided into two volumes: one of 50G, and the rest. But the dashboard only shows 50G. I'm wondering if anyone knows how to make it show the other 200G?

Thanks,
Qing
Re: [openstack-dev] [Neutron] Data Forwarding Nodes
Hi,
There is a section in the OpenStack Network installation guide called "Install Software on Data Forwarding Nodes". I'm wondering what the difference is between the concept of Data Forwarding Nodes here and the SDN control and forwarding planes. Is it possible to have a node in OpenStack act as an SDN control plane while another acts as the forwarding plane (termed a Data Forwarding Node here)?

Thanks,
Qing
Re: [openstack-dev] [Heat] Does Heat support checkpointing for guest application
Steven,
Thanks! Will look into it.
Qing

From: Steven Dake [mailto:sd...@redhat.com]
Sent: Friday, September 20, 2013 12:48 PM
To: OpenStack Development Mailing List
Cc: Qing He
Subject: Re: [openstack-dev] [Heat] Does Heat support checkpointing for guest application

On 09/13/2013 02:18 PM, Qing He wrote:

All,
I'm wondering if Heat provides a service for checkpointing the guest application for HA/redundancy, similar to what corosync/pacemaker/openais provide for bare-metal applications.

Thanks,
Qing

Qing,
Heat is an orchestration framework, whereas corosync is a distributed data transfer service. I think Marconi would better solve the problem you have.

http://wiki.openstack.org/marconi

Regards
-steve
Re: [openstack-dev] Issues with IPTables
The follow-up question is: has anyone walked through the guides faithfully as posted there and seen if it works without back-door tricks / tricks not documented there?

-----Original Message-----
From: Qing He
Sent: Monday, September 16, 2013 10:37 AM
To: 'Solly Ross'
Cc: OpenStack Development Mailing List
Subject: RE: Issues with IPTables

Solly,
It would be great if you could share the notes. The reason I asked the question is that I'm trying to decide if I need to allocate development time to installation following the installation guide. The usual wisdom is that installation with detailed instructions would take no time. However, your experience and mine showed the contrary. I have not finished mine following the Ubuntu installation guide. Thus, I was interested in knowing the effort you spent on it, so that I would know that it was not just me who had issues with the supposedly plug-and-play installation with the packages.

Thanks,
Qing

-----Original Message-----
From: Solly Ross [mailto:sr...@redhat.com]
Sent: Monday, September 16, 2013 10:24 AM
To: Qing He
Cc: OpenStack Development Mailing List
Subject: Re: Issues with IPTables

Quite a while. RDO's documentation for configuring multinode Packstack with Neutron was a bit lacking, so after attempting to get that working for a while, I switched to following the Basic Install Guide (http://docs.openstack.org/trunk/basic-install/content/basic-install_intro.html). I also found the basic install guide catered for Fedora (http://docs.openstack.org/trunk/basic-install/yum/content/basic-install_intro.html), but that is sorely lacking in the actual instruction department, and is missing several steps. If you would like, I can attach the raw draft of my notes. Eventually, some of the changes or clarifications should make their way into the actual OpenStack Docs.
Best Regards,
Solly Ross

----- Original Message -----
From: Qing He <qing...@radisys.com>
To: sr...@redhat.com
Sent: Monday, September 16, 2013 1:14:42 PM
Subject: RE: Issues with IPTables

Solly,
A side question: how long did this process take you?

Thanks,
Qing

-----Original Message-----
From: Solly Ross [mailto:sr...@redhat.com]
Sent: Monday, September 16, 2013 10:11 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Issues with IPTables

In an effort to improve/verify the OpenStack documentation with regards to RHEL and Fedora, I've been attempting to follow the basic install guides. I've managed to create a working installation and set of instructions. However, to do so I needed to disable the Neutron IPTables firewall, as it was blocking non-VM traffic. Namely, it was blocking the GRE packets being used by Neutron. Did I miss something, or is this a bug?

Best Regards,
Solly Ross
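Rather than disabling the firewall outright, one narrower workaround for the blocked GRE traffic described above would be to open only the tunnel protocol. These rules are a sketch under the assumption that the default INPUT chain is where the drop happens; chain names and persistence locations vary by distribution (on RHEL/Fedora, persistent rules live in /etc/sysconfig/iptables).

```shell
# GRE is IP protocol 47; allow it so Neutron's tunnels pass between nodes.
iptables -I INPUT -p gre -j ACCEPT

# If the deployment used VXLAN tunnels instead, the analogous rule would
# open UDP port 4789:
# iptables -I INPUT -p udp --dport 4789 -j ACCEPT
```

Whether the guide should document this, or whether Neutron should install the rule itself, is exactly the open question in Solly's last paragraph.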
[openstack-dev] [Heat] Does Heat support checkpointing for guest application
All,
I'm wondering if Heat provides a service for checkpointing the guest application for HA/redundancy, similar to what corosync/pacemaker/openais provide for bare-metal applications.

Thanks,
Qing
Re: [openstack-dev] [heat] Propose Liang Chen for heat-core
+1

-----Original Message-----
From: Angus Salkeld [mailto:asalk...@redhat.com]
Sent: Thursday, August 22, 2013 4:33 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [heat] Propose Liang Chen for heat-core

On 22/08/13 16:57 +0100, Steven Hardy wrote:

Hi,
I'd like to propose that we add Liang Chen to the heat-core team[1]. Liang has been doing some great work recently, consistently providing good review feedback[2][3], and also sending us some nice patches[4][5], implementing several features and fixes for Havana. Please respond with +1/-1.

+1

Thanks!

[1] https://wiki.openstack.org/wiki/Heat/CoreTeam
[2] http://russellbryant.net/openstack-stats/heat-reviewers-90.txt
[3] https://review.openstack.org/#/q/reviewer:cbjc...@linux.vnet.ibm.com,n,z
[4] https://github.com/openstack/heat/graphs/contributors?from=2013-04-18&to=2013-08-18&type=c
[5] https://review.openstack.org/#/q/owner:cbjc...@linux.vnet.ibm.com,n,z
Re: [openstack-dev] [Neutron]can you create network between a VM and a physical machine in neutron?
Thanks Aaron!

From: Aaron Rosen [mailto:aro...@nicira.com]
Sent: Wednesday, August 21, 2013 12:24 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] can you create network between a VM and a physical machine in neutron?

Hi,
Yes, if you use the provider network extension you can create a network on a vlan and have physical hosts accessible on that vlan. If using the NVP plugin, another option is to use its network gateway extension to do this in conjunction with overlay networks.

Aaron

On Wed, Aug 21, 2013 at 11:25 AM, Qing He <qing...@radisys.com> wrote:

All,
I'm trying to find a way to create a network with a mixture of VMs and physical machines. Is this possible?

Thanks,
Qing
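The provider-network approach Aaron describes looks roughly like the commands below. The physical network label, VLAN ID, and subnet range are site-specific examples: `physnet1` must match a bridge mapping and `network_vlan_ranges` entry in the plugin's configuration, and physical hosts must sit on the same VLAN.

```shell
# Create a tenant-visible network mapped onto an existing physical VLAN,
# so VMs attached to it share L2 with the physical machines on VLAN 100.
neutron net-create mixed-net --shared \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 100

# Give it a subnet, leaving room outside the allocation pool for the
# statically addressed physical hosts.
neutron subnet-create mixed-net 192.168.100.0/24 --name mixed-subnet \
    --allocation-pool start=192.168.100.50,end=192.168.100.200
```

The allocation pool matters in mixed setups: addresses Neutron hands to VMs must not collide with those already configured on the physical machines.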
Re: [openstack-dev] [Nova] can openstack compute manage both physical and virtual machines?
Thanks, Yapeng!

From: Yapeng Wu [mailto:yapeng...@huawei.com]
Sent: Wednesday, August 21, 2013 1:20 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] can openstack compute manage both physical and virtual machines?

Qing, you can take a look at this link:

https://wiki.openstack.org/wiki/Baremetal

Yapeng

From: Qing He [mailto:qing...@radisys.com]
Sent: Wednesday, August 21, 2013 2:23 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] can openstack compute manage both physical and virtual machines?

All,
From the documents, it seems to me Nova can only manage virtual machines, but not mixed with physical machines. Am I wrong?

Thanks,
Qing
Re: [openstack-dev] clay.gerrard: Your email has virus?
Clay,
Is your computer affected by some virus? I don't believe this post has anything to do with OpenStack!

-----Original Message-----
From: Clay Gerrard [mailto:clay.gerr...@gmail.com]
Sent: Saturday, July 20, 2013 9:32 PM
To: stacy...@eazymail.org; m...@not.mn; sale-708497...@craigslist.org; cth...@gmail.com; openstack-dev@lists.openstack.org; supp...@rentmethod.com; brandon.n.stell...@wellsfargo.com; anthony.chris...@cbnorcal.com; swplot...@amherst.edu
Subject: [openstack-dev] clay.gerrard

http://domeincheck.belgon.nl/pfape/yob.cckwcysypwqaxpjciyf
Re: [openstack-dev] [Neutron] lbaas installation guide
In the network installation guide (http://docs.openstack.org/grizzly/openstack-network/admin/content/install_ubuntu.html) there is a sentence, "quantum-lbaas-agent, etc (see below for more information about individual services agents)," in the plugin installation section. However, lbaas is never mentioned again after that in the doc.
Re: [openstack-dev] [Neutron] lbaas installation guide
By the way, I'm wondering if lbaas has a separate doc somewhere else?

From: Anne Gentle [mailto:a...@openstack.org]
Sent: Friday, July 19, 2013 6:33 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] lbaas installation guide

Thanks for bringing it to the attention of the list -- I've logged this doc bug.

https://bugs.launchpad.net/openstack-manuals/+bug/1203230

Hopefully a Neutron team member can pick it up and investigate.
Anne

On Fri, Jul 19, 2013 at 7:35 PM, Qing He <qing...@radisys.com> wrote:

In the network installation guide (http://docs.openstack.org/grizzly/openstack-network/admin/content/install_ubuntu.html) there is a sentence, "quantum-lbaas-agent, etc (see below for more information about individual services agents)," in the plugin installation section. However, lbaas is never mentioned again after that in the doc.
Re: [openstack-dev] [Neutron] lbaas installation guide
Thanks Anne!

From: Anne Gentle [mailto:a...@openstack.org]
Sent: Friday, July 19, 2013 6:33 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] lbaas installation guide

Thanks for bringing it to the attention of the list -- I've logged this doc bug.

https://bugs.launchpad.net/openstack-manuals/+bug/1203230

Hopefully a Neutron team member can pick it up and investigate.
Anne

On Fri, Jul 19, 2013 at 7:35 PM, Qing He <qing...@radisys.com> wrote:

In the network installation guide (http://docs.openstack.org/grizzly/openstack-network/admin/content/install_ubuntu.html) there is a sentence, "quantum-lbaas-agent, etc (see below for more information about individual services agents)," in the plugin installation section. However, lbaas is never mentioned again after that in the doc.
Re: [openstack-dev] [Openstack] [cinder] Proposal for Ollie Leahy to join cinder-core
+1

From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: Wednesday, July 17, 2013 2:56 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Openstack] [cinder] Proposal for Ollie Leahy to join cinder-core

+1 for Ollie.

On Wed, Jul 17, 2013 at 2:42 PM, Avishay Traeger <avis...@il.ibm.com> wrote:

"Walter A. Boring IV" <walter.bor...@hp.com> wrote on 07/18/2013 12:04:07 AM:

<snip>

+1 to Ollie from me. +1 to John's points. If a company is colluding with other core members, from the same company, to do bad things within a project, it should become pretty obvious at some point and the project's community should take action. If someone is putting in an extra effort to provide quality code and reviews on a regular basis, then why wouldn't we want that person on the team? Besides, being a core member really just means that you are required to do reviews and help out with the community. You do get some gerrit privileges for reviews, but that's about it. I for one think that we absolutely can use more core members to help out with reviews during the milestone deadlines :)

Walt

As I said, I really wasn't worried about anyone colluding or doing bad things. As you said, that would be obvious and could be handled. I was concerned about creating a limited view, and I thank you and everyone who replied for easing those concerns. And BTW, I don't think there is an HP conspiracy to take over Cinder and make it FC-only :)

Thanks,
Avishay

--
Regards
Huang Zhiteng
[openstack-dev] Event Service
All,
Does OpenStack have a pub/sub event service? I would like to be notified of events such as VM creation/deletion/migration. What is the best way to do this?
Thanks,
Qing
Re: [openstack-dev] Event Service
Thanks Michael! I found it: https://wiki.openstack.org/wiki/NotificationEventExamples

-----Original Message-----
From: Michael Still [mailto:mi...@stillhq.com]
Sent: Friday, July 12, 2013 6:38 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Event Service

OpenStack has a system called notifications which does what you're looking for. I've never used it, but I'm sure it's documented.
Cheers,
Michael

On Sat, Jul 13, 2013 at 10:12 AM, Qing He qing...@radisys.com wrote:
All,
Does OpenStack have a pub/sub event service? I would like to be notified of events such as VM creation/deletion/migration. What is the best way to do this?
Thanks,
Qing

--
Rackspace Australia
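As a rough illustration of what consuming these notifications looks like: each notification carries an `event_type` (e.g. `compute.instance.create.end`, per the wiki page above) plus a `payload`. The sketch below is a minimal in-process dispatcher, not the actual OpenStack consumer code; in a real deployment the messages arrive over the AMQP bus, and the exact payload fields shown here are simplified assumptions modeled on the wiki examples.

```python
# Minimal sketch: dispatch OpenStack-style notification messages by
# event_type. Payload fields are illustrative; real notifications
# arrive over the message bus, not as local dicts.

def make_dispatcher():
    handlers = {}

    def subscribe(event_type, fn):
        handlers.setdefault(event_type, []).append(fn)

    def dispatch(message):
        # Fan the message out to every handler registered for its type.
        for fn in handlers.get(message.get("event_type"), []):
            fn(message["payload"])

    return subscribe, dispatch

subscribe, dispatch = make_dispatcher()

created = []
subscribe("compute.instance.create.end",
          lambda payload: created.append(payload["instance_id"]))

# A pared-down notification, modeled on the wiki's examples:
dispatch({
    "event_type": "compute.instance.create.end",
    "publisher_id": "compute.host1",
    "priority": "INFO",
    "payload": {"instance_id": "abc-123", "state": "active"},
})
```

Subscribing per event type is what makes this pub/sub rather than polling: the VM creation handler runs only when a matching notification is published.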
[openstack-dev] [openstack-community] [User-committee] [OpenStack Marketing] openstack only deployment
Hi All,
I'm wondering if anyone has deployed OpenStack without any proprietary software/components?
Thanks,
Qing
Re: [openstack-dev] Email is not registered problem
The emails from this list stopped coming to my email address -- is this related?

-----Original Message-----
From: Jeremy Stanley [mailto:fu...@yuggoth.org]
Sent: Wednesday, June 26, 2013 7:38 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Email is not registered problem

On 2013-06-26 17:31:26 +0800 (+0800), Wenhao Xu wrote:
I am reworking one of my old patches today. After that, when I tried to do git review, I got the following error message:
[...]
remote: ERROR: committer email address wen...@zelin.io
remote: ERROR: does not match your user account.
remote: ERROR:
remote: ERROR: The following addresses are currently registered:
remote: ERROR: xuwenhao2...@gmail.com
[...]
I have updated my primary email address to wen...@zelin.io in Launchpad, Gerrit, and my OpenStack Foundation profile as well. I am wondering if there is anything I missed?

It's possible you've got two accounts in Gerrit: one you're updating through the WebUI (tied to your current Launchpad OpenID), and another which is associated with your SSH username but has the same SSH key along with your old e-mail address. We've run into a few like that which needed cleaning up, usually following LP account/OpenID changes. I'll look into it and follow up with you privately to resolve the issue.
--
Jeremy Stanley
Re: [openstack-dev] Should RPC consume_in_thread() be more fault tolerant?
In this case, the exception is just hard to debug, and not something we expected in our design. The thorough solution would be to debug further to find the root cause and deal with it as I suggested in the code review. However, if it is too time-consuming to debug, we can put the patch in with logging to record the circumstances under which the 'unexpected' exception occurs, so that we can add new patches to deal with it once it becomes expected. After all, this is an iterative process: as the software evolves, a lot of the unexpected becomes expected.

-----Original Message-----
From: Raymond Pekowski (Code Review) [mailto:rev...@openstack.org]
Sent: Tuesday, June 25, 2013 9:52 AM
Cc: Dirk Mueller; Andrea Rosa; Ben Nemec; Chris Behrens; Eric Windisch; Russell Bryant; Qing He
Subject: Change in openstack/oslo-incubator[master]: Make AMQP based RPC consumer threads more robust

Raymond Pekowski has posted comments on this change.
Change subject: Make AMQP based RPC consumer threads more robust

Patch Set 12:
By definition, unexpected exceptions are unexpected, so there is nothing yet to do root cause analysis on. We could argue about what the appropriate action to take is, but that has already been discussed in the mailing list thread started here: http://lists.openstack.org/pipermail/openstack-dev/2013-June/010040.html Please comment on that thread, since you don't agree with the consensus that was reached.
--
To view, visit https://review.openstack.org/32235
Re: [openstack-dev] Should RPC consume_in_thread() be more fault tolerant?
Agree! Let someone know and keep going, unless someone wants to interrupt it or do something. (Does a mechanism to do this already exist?)

-----Original Message-----
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: Tuesday, June 25, 2013 12:21 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Should RPC consume_in_thread() be more fault tolerant?

On 06/25/2013 03:15 PM, Ray Pekowski wrote:
On Jun 25, 2013 1:09 PM, Qing He qing...@radisys.com wrote:
Basically, when the 'unexpected' happens, someone (e.g., an operator) needs to know about it and look into it to see whether it is benign or fatal. If it is masked, the system may degrade over time, unnoticed, into being unusable.

The approach implemented in the patch is to log the exception and retry at a rate of once per second. An alternative would be a log and a sys.exit() to kill the entire process. Be aware that the code affected by this patch is RPC-created dispatcher-like threads. Let's have a vote on which option is preferable.

I like it how it's implemented, *not* killing the process ...
--
Russell Bryant
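The log-and-retry option described above (log the exception, retry once per second, never kill the process) can be sketched as follows. This is an illustration of the idea, not the actual oslo-incubator patch; the `consume_once` callback and the stop hook are inventions of the sketch so it can terminate.

```python
import logging
import time

LOG = logging.getLogger(__name__)

def consume_forever(consume_once, retry_delay=1.0):
    """Keep a consumer loop alive across unexpected exceptions.

    Mirrors the log-and-retry option: each failure is logged with a
    traceback and the loop resumes after ``retry_delay`` seconds.
    ``consume_once`` may return False to stop the loop (a hook so this
    sketch can terminate; the real thread would run indefinitely).
    Returns the number of failures survived.
    """
    failures = 0
    while True:
        try:
            if consume_once() is False:
                return failures
        except Exception:
            failures += 1
            LOG.exception("Unexpected exception in consumer thread; "
                          "retrying in %.1fs", retry_delay)
            time.sleep(retry_delay)

# Demo: a consumer that dies twice before settling down.
calls = {"n": 0}
def flaky_consume():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("broker hiccup")
    return False  # stop the demo loop

survived = consume_forever(flaky_consume, retry_delay=0.01)
```

The contrast with the sys.exit() alternative is the `except Exception` clause: instead of letting the exception propagate and kill the thread (or the process), it is recorded and the consumer keeps serving.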
Re: [openstack-dev] Should RPC consume_in_thread() be more fault tolerant?
Does the log alert the operator? Something like an SNMP trap?

From: Ray Pekowski [mailto:pekow...@gmail.com]
Sent: Tuesday, June 25, 2013 12:16 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Should RPC consume_in_thread() be more fault tolerant?

On Jun 25, 2013 1:09 PM, Qing He qing...@radisys.com wrote:
Basically, when the 'unexpected' happens, someone (e.g., an operator) needs to know about it and look into it to see whether it is benign or fatal. If it is masked, the system may degrade over time, unnoticed, into being unusable.

The approach implemented in the patch is to log the exception and retry at a rate of once per second. An alternative would be a log and a sys.exit() to kill the entire process. Be aware that the code affected by this patch is RPC-created dispatcher-like threads. Let's have a vote on which option is preferable.
Re: [openstack-dev] Should RPC consume_in_thread() be more fault tolerant?
To clarify: the operator should not have to go through a long log to find the issue. Instead, he/she needs to be notified that something severe/unexpected just happened and needs to be checked out.

From: Qing He
Sent: Tuesday, June 25, 2013 1:09 PM
To: 'OpenStack Development Mailing List'
Subject: RE: [openstack-dev] Should RPC consume_in_thread() be more fault tolerant?

Does the log alert the operator? Something like an SNMP trap?

From: Ray Pekowski [mailto:pekow...@gmail.com]
Sent: Tuesday, June 25, 2013 12:16 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Should RPC consume_in_thread() be more fault tolerant?

On Jun 25, 2013 1:09 PM, Qing He qing...@radisys.com wrote:
Basically, when the 'unexpected' happens, someone (e.g., an operator) needs to know about it and look into it to see whether it is benign or fatal. If it is masked, the system may degrade over time, unnoticed, into being unusable.

The approach implemented in the patch is to log the exception and retry at a rate of once per second. An alternative would be a log and a sys.exit() to kill the entire process. Be aware that the code affected by this patch is RPC-created dispatcher-like threads. Let's have a vote on which option is preferable.
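On the question of whether a log entry by itself alerts anyone: by default it does not, but a common pattern is to attach a logging handler that forwards high-severity records to an external alerting hook (an SNMP trap sender, a pager gateway, etc.). The sketch below uses Python's standard `logging.Handler`; the `send_alert` callable is a placeholder for whatever notification channel is deployed, not part of any OpenStack API.

```python
import logging

class AlertHandler(logging.Handler):
    """Forward ERROR-and-above log records to an alerting hook.

    ``send_alert`` stands in for the operator-notification channel
    (an SNMP trap sender, pager, ...); it is a hypothetical hook,
    not an existing OpenStack interface.
    """
    def __init__(self, send_alert):
        # Only records at ERROR or above reach emit().
        super().__init__(level=logging.ERROR)
        self.send_alert = send_alert

    def emit(self, record):
        self.send_alert(self.format(record))

alerts = []
log = logging.getLogger("consumer")
log.addHandler(AlertHandler(alerts.append))

log.warning("benign hiccup")            # below threshold: no alert sent
log.error("unexpected consumer death")  # forwarded to the alert hook
```

With this wiring, the log-and-retry behavior from the patch would both record the traceback for later debugging and push an immediate notification, so the operator never has to trawl the log to discover that something went wrong.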