Re: [Openstack-operators] [nova] VM HA support in trunk
Hi Kazu, thanks for this update. Sorry I am a bit late in replying to this thread, but one of my students just ran into an issue running pacemaker-based evacuation of hosts. It seems that pacemaker 1.1.10 is not supposed to work with remote, and the 14.04 distro comes with that version. Did you get remote to work, if so how? The pull request [1] indicates that remote support was added, but its unclear how the above version difference was handled. Did you people resort to compiling the latest pcm from source or something else? Affan [1] https://github.com/ntt-sic/masakari/pull/11 On Fri, 19 Feb 2016 at 09:19 Toshikazu Ichikawa < ichikawa.toshik...@lab.ntt.co.jp> wrote: > Hi Affan, > > > > Pacemaker works fine on either a canonical distribution or RDO. > > I use our tool [1] using Pacemaker on Ubuntu without any specific issue. > > > > [1] https://github.com/ntt-sic/masakari > > > > Thanks, > > Kazu > > > > *From:* Affan Syed [mailto:affan.syed@gmail.com] > *Sent:* Tuesday, February 16, 2016 2:02 PM > *To:* Matt Fischer ; Toshikazu Ichikawa < > ichikawa.toshik...@lab.ntt.co.jp> > *Cc:* openstack-operators@lists.openstack.org > *Subject:* Re: [Openstack-operators] [nova] VM HA support in trunk > > > > Hi Kazu and Matt, > > Thanks for the pointers. I think the discussion around pacemaker and > pacemaker remote seems most promising, esp with Russel's blog post I found > after I emailed earlier [1]. > > > > Not sure how tooling would be different, but pacemaker, given its use in > the controller cluster anyways, seems a more logical choice. Any issues you > people think with a canonical distribution instead of RDO? > > > > Affan > > > > > > [1] > http://blog.russellbryant.net/2015/03/10/the-different-facets-of-openstack-ha/ > > > > On Mon, 15 Feb 2016 at 20:59 Matt Fischer wrote: > > I believe that either have your customers design their apps to handle > failures or have tools that are reactive to failures. 
> Unfortunately, like many other private cloud operators, we deal a lot
> with legacy applications that aren't scaled horizontally or fault
> tolerant, and so we've built tooling to handle customer notifications
> (reactive). When we lose a compute host, we generate a notice to customers
> and then work on evacuating their instances. For the evac portion, nova
> host-evacuate or host-evacuate-live work fairly well, although we rarely
> get a functioning floating IP after host-evacuate without other work.
>
> Getting adoption of heat or other automation tooling to educate customers
> is a long process, especially when they're used to VMware, where I think
> they get the VM HA stuff for "free".
>
> On Mon, Feb 15, 2016 at 8:25 AM, Toshikazu Ichikawa <ichikawa.toshik...@lab.ntt.co.jp> wrote:
>
> Hi Affan,
>
> I don't think any components in Liberty provide HA VM support directly.
>
> However, much work on this has been published and open-sourced here:
> https://etherpad.openstack.org/p/automatic-evacuation
> You may find ideas and solutions there.
>
> Also, discussion on this topic is ongoing at the HA meeting:
> https://wiki.openstack.org/wiki/Meetings/HATeamMeeting
>
> Thanks,
> Kazu
>
> From: Affan Syed [mailto:affan.syed@gmail.com]
> Sent: Monday, February 15, 2016 12:51 PM
> To: openstack-operators@lists.openstack.org
> Subject: [Openstack-operators] [nova] VM HA support in trunk
>
> Reposting with the correct tag, hopefully. Would really appreciate some
> pointers.
>
> -- Forwarded message -
> From: Affan Syed
> Date: Sat, 13 Feb 2016 at 15:13
> Subject: [nova] VM HA support in trunk
>
> Hi all,
>
> I have been trying to understand if we currently have some VM HA support
> as part of Liberty?
>
> To be precise, how are hosts that go down due to power failure handled,
> specifically in terms of migrating the VMs but possibly also their
> networking configs (tunnels etc.)?
> The VM migration approaches like XEN-HA or KVM clustering seem to require
> 1+1 HA. I have read in a few places about ceilometer + heat templates to
> launch VMs for an N+1 backup scenario, but these all seem like one-off
> setups.
>
> This issue seems to be very important for legacy enterprises moving their
> "pets" --- not sure if we can simply wish away that mindset!
>
> Affan

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
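The reactive tooling Matt describes above (detect a dead compute host, notify the owners, then evacuate) can be sketched roughly as follows. This is an illustrative sketch only: the host states, the instance mapping, and the `plan_evacuation()` helper are hypothetical, not part of nova or masakari; a real tool would query and drive the nova API (e.g. the equivalent of `nova host-evacuate`) rather than work on in-memory dicts.

```python
# Hypothetical sketch of the "reactive" evacuation decision step.
# None of these names come from nova or masakari.

def plan_evacuation(hosts, instances):
    """Given host states and an instance->host mapping, return the
    instances that must be evacuated from hosts reported as down."""
    down_hosts = {h for h, state in hosts.items() if state == "down"}
    return sorted(vm for vm, host in instances.items() if host in down_hosts)

if __name__ == "__main__":
    hosts = {"compute-01": "up", "compute-02": "down", "compute-03": "up"}
    instances = {"vm-a": "compute-01", "vm-b": "compute-02", "vm-c": "compute-02"}
    # A real tool would now notify the owners of these instances and run
    # the equivalent of `nova host-evacuate compute-02`.
    print(plan_evacuation(hosts, instances))
```

As the thread notes, the evacuation call itself is the easy part; re-attaching floating IPs and other networking state after the move is where extra work tends to be needed.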
Re: [Openstack-operators] Configure cinder to allow volume delete when RBD snapshots present
You could make a backup of the snapshot and then transfer it to a different
tenant. In the event that you need to restore it, you could transfer it back
to the user. I think you could do this in an automated fashion as well, but
you'll have to do some testing to make sure it does what you need it to do.

On Mon, Apr 11, 2016 at 11:18 AM, Forrest Flagg wrote:
> All,
>
> I have a working Kilo cloud running with ceph as the storage backend. I'd
> like to use RBD snapshots for backups because they're so fast, but cinder
> doesn't allow volume deletion when an RBD snapshot exists. I want to keep
> daily backups in case a user terminates an instance and we need to
> recover it, or for disaster recovery. Is there a way to mark the volumes
> as deleted when a tenant deletes them so they don't show up in OpenStack
> but still exist within ceph for backup purposes? Thanks,
>
> --
> Forrest Flagg
> Cloud System Administrator
> Advanced Computing Group
> (207) 561-3575
> raymond.fl...@maine.edu
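The "mark as deleted but keep in ceph" behavior Forrest asks about amounts to a soft delete with a retention window. A rough sketch of the bookkeeping follows; this is hypothetical illustration, not cinder's actual implementation, and the `VolumeStore` class and retention value are invented for the example (a real reaper job would run `rbd rm` / `rbd snap rm` where the comment indicates).

```python
# Hypothetical soft-delete bookkeeping: volumes a tenant deletes are hidden
# from listings, but the backing RBD image is kept until a retention window
# expires. Not cinder's actual behavior.
import time

RETENTION_SECONDS = 7 * 24 * 3600  # keep "deleted" volumes for 7 days

class VolumeStore:
    def __init__(self):
        self._volumes = {}  # name -> deleted_at timestamp (None = live)

    def create(self, name):
        self._volumes[name] = None

    def delete(self, name):
        # Soft delete: hide from the tenant, keep the RBD image around.
        self._volumes[name] = time.time()

    def visible(self):
        # What the tenant sees in OpenStack listings.
        return sorted(n for n, d in self._volumes.items() if d is None)

    def purge(self, now=None):
        # Periodic reaper: remove images past the retention window.
        now = time.time() if now is None else now
        expired = sorted(n for n, d in self._volumes.items()
                         if d is not None and now - d > RETENTION_SECONDS)
        for n in expired:
            del self._volumes[n]  # a real tool would `rbd rm` the image here
        return expired
```

The automated transfer-to-a-backup-tenant approach suggested above achieves much the same effect with stock tooling: the volume disappears from the user's view but survives for restore.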
Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA
Sean,

I think this is exactly what we are looking to change. There is a lot of
work captured via etherpads at each of the midcycles and other meetups that
is probably not sent up to each of the project groups. I think this because
I see some of the same issues being discussed by operators over and over
again. Now, there is work that Kris mentioned that they have submitted and
keep on top of, and I know there is other work from a few others who
contribute, but I'm not sure the rest of the operators have the opportunity
to get their information over to the right people.

That last point is what I am looking to help change. There is a lot the
group can help out with to make sure we capture what we get via etherpads
and turn it into blueprints, bugs, etc., so we can help follow up with
projects and track them for the entire operators group. Maybe I'm wrong
about this; maybe all the entries from the etherpads are read by projects
and fed into their pipelines.

--Joe

On Mon, Apr 11, 2016 at 11:11 AM, Sean M. Collins wrote:
> To be blunt: are we ensuring that all this work that people are capturing
> in these working groups is actually getting updated and communicated to
> the developers?
>
> As I become more involved with rolling upgrades, I will try to attend
> meetings and be available from the WG side, but I don't believe I've ever
> seen someone from the WG side come over to Neutron and say "We need XYZ,
> and here's a link to what we've captured in our repo to explain what we
> mean."
>
> But then again, I'm not on the neutron-drivers team or a core.
>
> Anyway, I updated what I've been involved with in the Mitaka cycle, when
> it comes to Neutron and upgrades (https://review.openstack.org/304181)
>
> --
> Sean M.
> Collins
[Openstack-operators] User Committee IRC Meeting
Dear Users and Operators,

This is a kind reminder for the User Committee IRC meeting that will be
hosted today, Monday 04/11/2016, at 1900 UTC in (freenode)
#openstack-meeting.

Agenda: https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee

Thank you all!
Edgar
Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA
Sean,

This is a very good concern. I can't speak for all projects, but during the
Ops Meet-ups we normally collect the feedback and send it to the PTLs or
anyone from the project team who can help us. The best answer should be
provided by the Product Working Group from the User Committee:
https://wiki.openstack.org/wiki/ProductTeam

Adding Shamail and Carol to provide more details. They are leading the
Product WG.

Thanks,
Edgar

On 4/11/16, 8:58 AM, "Sean M. Collins" wrote:
> Kris G. Lindgren wrote:
>> You mean outside of the LDT filing an RFE bug with neutron to get
>
> Sorry, I don't know what LDT is. Can you explain?
>
> As for the RFE bug and the contributions that GoDaddy has been involved
> with, my statement is not about "if" operators are contributing, because
> obviously they are. But an RFE bug and coming to the midcycle is part of
> Neutron's development process. Not a working group.
>
> --
> Sean M. Collins
Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA
LDT is the Large Deployment Team; it's a working group for large
deployments like Rackspace, CERN, NeCTAR, Yahoo, GoDaddy, and Bluebox. We
talk about issues scaling OpenStack: Nova cells, monitoring, all the stuff
that becomes hard when you have thousands of servers or hundreds of clouds.
Also, the public-cloud working group is part of the LDT working group as
well, since a large portion of us also happen to run public clouds.

Sorry, but your post came off (to me) as: working groups don't do anything
actionable, at least I have never seen it in Neutron. I was just giving
actionable work that has come from LDT, alone, in Neutron.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 4/11/16, 9:58 AM, "Sean M. Collins" wrote:
> Kris G. Lindgren wrote:
>> You mean outside of the LDT filing an RFE bug with neutron to get
>
> Sorry, I don't know what LDT is. Can you explain?
>
> As for the RFE bug and the contributions that GoDaddy has been involved
> with, my statement is not about "if" operators are contributing, because
> obviously they are. But an RFE bug and coming to the midcycle is part of
> Neutron's development process. Not a working group.
>
> --
> Sean M. Collins
Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA
Kris G. Lindgren wrote:
> You mean outside of the LDT filing an RFE bug with neutron to get

Sorry, I don't know what LDT is. Can you explain?

As for the RFE bug and the contributions that GoDaddy has been involved
with, my statement is not about "if" operators are contributing, because
obviously they are. But an RFE bug and coming to the midcycle is part of
Neutron's development process. Not a working group.

--
Sean M. Collins
Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA
You mean outside of the LDT filing an RFE bug with neutron to get
segmented/routed network support added to neutron, complete with an
etherpad of all the ways we are using that at our companies and our use
cases [1]? Or where we (GoDaddy) came to the neutron midcycle in Fort
Collins to further talk about said use case, as well as to put feelers out
for the ip-usages extension, which was committed to Neutron in the Mitaka
release [2]? These are just the things that I am aware of and have been
involved in, in neutron alone, in the past 6 months; I am sure there are
many more.

[1] https://etherpad.openstack.org/p/Network_Segmentation_Usecases &
https://bugs.launchpad.net/neutron/+bug/1458890
[2] https://github.com/openstack/neutron/commit/2f741ca5f9545c388270ddab774e9e030b006d8a
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 4/11/16, 9:11 AM, "Sean M. Collins" wrote:
> To be blunt: Are we ensuring that all this work that people are capturing
> in these working groups is actually getting updated and communicated to
> the developers?
>
> As I become more involved with rolling upgrades, I will try to attend
> meetings and be available from the WG side, but I don't believe I've ever
> seen someone from the WG side come over to Neutron and say "We need XYZ,
> and here's a link to what we've captured in our repo to explain what we
> mean."
>
> But then again, I'm not on the neutron-drivers team or a core.
>
> Anyway, I updated what I've been involved with in the Mitaka cycle, when
> it comes to Neutron and upgrades (https://review.openstack.org/304181)
>
> --
> Sean M. Collins
[Openstack-operators] Configure cinder to allow volume delete when RBD snapshots present
All,

I have a working Kilo cloud running with ceph as the storage backend. I'd
like to use RBD snapshots for backups because they're so fast, but cinder
doesn't allow volume deletion when an RBD snapshot exists. I want to keep
daily backups in case a user terminates an instance and we need to recover
it, or for disaster recovery. Is there a way to mark the volumes as deleted
when a tenant deletes them so they don't show up in OpenStack but still
exist within ceph for backup purposes? Thanks,

--
Forrest Flagg
Cloud System Administrator
Advanced Computing Group
(207) 561-3575
raymond.fl...@maine.edu
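A daily-backup scheme like the one described usually pairs snapshot creation with a keep-the-last-N pruning rule, since snapshots accumulate fast. A small sketch of the pruning logic follows; the snapshot naming convention and the `snapshots_to_prune()` helper are invented for illustration, and a real script would wrap `rbd snap create` / `rbd snap rm` around it.

```python
# Illustrative pruning logic for daily snapshots: keep only the most
# recent N snapshots per volume. Names are hypothetical.

def snapshots_to_prune(snapshots, keep=7):
    """Given snapshot names carrying ISO dates (e.g. 'backup-2016-04-11'),
    return the ones older than the `keep` most recent."""
    ordered = sorted(snapshots, reverse=True)  # ISO dates sort lexically
    return sorted(ordered[keep:])

if __name__ == "__main__":
    snaps = ["backup-2016-04-%02d" % day for day in range(1, 12)]
    # With keep=7, the 7 newest (Apr 5-11) survive and the 4 oldest
    # would be removed with the equivalent of `rbd snap rm`.
    print(snapshots_to_prune(snaps, keep=7))
```

Note that this only addresses the snapshot lifecycle; cinder's refusal to delete a volume while RBD snapshots exist (the original question) still has to be handled separately, e.g. by flattening or removing snapshots before the volume delete.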
Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA
To be blunt: Are we ensuring that all this work that people are capturing
in these working groups is actually getting updated and communicated to the
developers?

As I become more involved with rolling upgrades, I will try to attend
meetings and be available from the WG side, but I don't believe I've ever
seen someone from the WG side come over to Neutron and say "We need XYZ,
and here's a link to what we've captured in our repo to explain what we
mean."

But then again, I'm not on the neutron-drivers team or a core.

Anyway, I updated what I've been involved with in the Mitaka cycle, when it
comes to Neutron and upgrades (https://review.openstack.org/304181).

--
Sean M. Collins