Hi,
that is great to know. Is it related to the improved monitoring in ONE? I
am getting ready for trials of ONE 4.6.
We are using the shared_lvm driver, so I would like to know whether the new
LVM driver is compatible with it, or whether I should expect major issues. I
haven't had enough time to check.
Thanks, Milos
On 2014-06-27 12:18 PM, Tino Vazquez wrote:
Hi,
Handling of delete and shutdown has been greatly improved in
OpenNebula 4.4+; I would recommend upgrading as far as possible.
Best,
-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple
--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova
On 26 June 2014 16:42, Steven Timm <t...@fnal.gov> wrote:
We have also seen this behavior in OpenNebula 3.2.
It appears the failure mode occurs because onevm delete
(or shutdown or migrate) doesn't correctly verify that the virtual machine
has actually gone away. It sends the ACPI terminate signal to the virtual
machine, but if that fails, the VM keeps running; no signal is ever sent
to libvirt to kill the machine regardless. OpenNebula then deletes disk.0
out from under it, but that doesn't stop the VM: it keeps running on the
deleted file handle.
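The missing verification step could be sketched roughly as below. The helper name `graceful_then_force` and its timeout values are hypothetical, not OpenNebula code; `dom` stands for any object exposing the libvirt-style `shutdown()`, `isActive()`, and `destroy()` calls:

```python
import time

def graceful_then_force(dom, timeout=60.0, poll=1.0, sleep=time.sleep):
    """Ask the guest to shut down via ACPI, verify it actually went away,
    and fall back to a hard destroy if it is still up at the deadline."""
    dom.shutdown()              # sends the ACPI signal; the guest may ignore it
    waited = 0.0
    while waited < timeout:
        if not dom.isActive():  # the verification step onevm delete skips
            return "shutdown"
        sleep(poll)
        waited += poll
    dom.destroy()               # hard power-off: the VM cannot keep running
    return "destroyed"
```

With real libvirt bindings, `dom` would come from a domain lookup on the host's connection; the point is simply that a delete should never return before `isActive()` is false.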
On the plus side, I was once able to recover the full disk image of a VM
that shouldn't have been deleted that way, by going to the /proc file
system and dd'ing from the process's still-open file handle.
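That recovery trick is easy to reproduce on any Linux machine. The sketch below uses a plain file as a stand-in for the VM's disk image, deletes it while a descriptor is still open, and reads the bytes back through /proc (Linux-only, of course):

```python
import os

# A plain file standing in for the VM's disk image:
with open("disk.img", "w") as f:
    f.write("important data")

handle = open("disk.img", "rb")   # the qemu process would hold this open
os.unlink("disk.img")             # the image is deleted out from under it

# The bytes are still reachable through the process's open descriptor,
# just like dd'ing from /proc/<pid>/fd/<fd> on the hypervisor:
fd_path = f"/proc/{os.getpid()}/fd/{handle.fileno()}"
with open(fd_path, "rb") as still_open:
    recovered = still_open.read()
handle.close()
```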
We've written a set of utilities that check the consistency of the leases
database against what is actually running on the cloud and alert us to any
differences.
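A minimal core for such a consistency check might look like the sketch below. The function name and data shapes are illustrative, not the actual utilities; in practice the inputs would be parsed from `onevm list` and from `virsh list` on each host:

```python
def find_inconsistencies(one_vms, host_vms):
    """one_vms:  {vm_id: host}        -- where OpenNebula believes each VM runs.
       host_vms: {host: set of vm_ids} -- what each hypervisor actually reports.
       Returns (orphans, missing): VMs running where ONE doesn't expect them,
       and VMs ONE expects that no hypervisor is running."""
    orphans, missing = {}, {}
    for host, running in host_vms.items():
        for vm in running:
            if one_vms.get(vm) != host:
                orphans.setdefault(host, set()).add(vm)
    for vm, host in one_vms.items():
        if vm not in host_vms.get(host, set()):
            missing[vm] = host
    return orphans, missing
```

The duplicate-VM case from this thread shows up as an orphan on the old host: `find_inconsistencies({42: "new"}, {"old": {42}, "new": {42}})` returns `({"old": {42}}, {})`.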
Steve Timm
On Thu, 26 Jun 2014, Milos Kozak wrote:
Hi, I would like to add that I have experienced this a few times with ONE
3.8.
On 6/26/2014 9:34 AM, Robert Tanase wrote:
Hi all,
We are running an OpenNebula 4.2 system with several hosts (KVM +
network storage).
Recently we discovered, after hitting disk r/w issues on a VM, that
after a delete-recreate action the VM in question was running on two
different hosts: the old placement host and the new placement host.
We use the hook system for host failure and a cron job every 5
minutes that (re)deploys pending machines on the available running
hosts.
Checking the oned log files, we couldn't find any abnormal behavior,
and we are stuck.
Please guide us to the root cause of this issue, if possible.
--
Thank you,
Robert Tanase
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
------------------------------------------------------------------
Steven C. Timm, Ph.D (630) 840-8525
t...@fnal.gov http://home.fnal.gov/~timm/
Fermilab Scientific Computing Division, Scientific Computing Services Quad.
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing