[ovirt-devel] How much longer can Red Hat's oVirt remain covert?

2015-06-19 Thread Rizwan Ashraf
How much longer can Red Hat's oVirt remain covert? | ZDNet
www.zdnet.com

Red Hat's oVirt project has been around since 2008, when it purchased the
technology from Qumranet. Since that time it has grown into a full-fledged
hypervisor comp...


Hello friends,
Thought I'd share this good article about oVirt.
Regards,
Riz

[ovirt-devel] [VDSM] Live snapshot with ceph disks

2015-06-19 Thread Nir Soffer
Hi all,

For 3.6 we will not support live VM snapshots, but this is a must for the next
release.

It is trivial to create a disk snapshot in Ceph (using the Cinder APIs). The
snapshot is transparent to libvirt, qemu and the guest OS.
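
For reference, here is a minimal sketch of what such a Cinder-level snapshot
call looks like (the client setup and volume id are made up; force=True is
needed because the volume is attached to a running VM):

from cinderclient.v2 import client

cinder = client.Client('admin', 'secret', 'admin',
                       'http://cinder-host:5000/v2.0')

# The volume is in-use (attached to the VM), so force=True is required.
snap = cinder.volume_snapshots.create('VOLUME-UUID', force=True,
                                      name='consistent-snap-1')
print(snap.id, snap.status)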

However, we want to create a consistent snapshot, so you can revert to the disk
snapshot and get a consistent file system state.

We also want to create a complete VM snapshot, including all disks and VM
memory. Libvirt and qemu provide that when given a new disk for the active
layer, but when using a Ceph disk we don't change the active layer - we
continue to use the same disk.

Since 1.2.5, libvirt provides virDomainFSFreeze and virDomainFSThaw:
https://libvirt.org/hvsupport.html

So here are the possible flows (ignoring engine-side work such as locking VMs
and disks):

Disk snapshot
-------------

1. Engine invokes VM.freezeFileSystems
2. Vdsm invokes libvirt.virDomainFSFreeze
3. Engine creates a snapshot via Cinder
4. Engine invokes VM.thawFileSystems
5. Vdsm invokes libvirt.virDomainFSThaw
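
In engine terms this boils down to something like the sketch below (helper
objects and verb names are placeholders; the try/finally is there so the guest
is always thawed):

def snapshot_ceph_disk(vdsm, cinder, vm_id, volume_id):
    # Steps 1-2: quiesce the guest file systems
    vdsm.VM.freezeFileSystems(vm_id)
    try:
        # Step 3: Cinder snapshot of the in-use Ceph volume
        return cinder.volume_snapshots.create(volume_id, force=True)
    finally:
        # Steps 4-5: always thaw, even if the snapshot failed
        vdsm.VM.thawFileSystems(vm_id)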

VM snapshot
-----------

1. Engine invokes VM.freezeFileSystems
2. Vdsm invokes libvirt.virDomainFSFreeze
3. Engine creates a snapshot via Cinder
4. Engine invokes VM.snapshot
5. Vdsm creates the snapshot, skipping the Ceph disks
6. Engine invokes VM.thawFileSystems
7. Vdsm invokes libvirt.virDomainFSThaw
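
For step 5, one possible way (a sketch, not the final Vdsm code) to skip the
Ceph disks is to mark them snapshot='no' in the domainsnapshot XML, so
libvirt/qemu only creates new active layers for the image-based disks; the
paths, disk names and VM name below are made up:

import libvirt

SNAP_XML = """
<domainsnapshot>
  <name>vm-snap-1</name>
  <memory snapshot='external' file='/path/to/memory.dump'/>
  <disks>
    <!-- image-based disk: gets a new external active layer -->
    <disk name='vda' snapshot='external'>
      <source file='/path/to/new_active_layer'/>
    </disk>
    <!-- Ceph disk: already snapshotted via Cinder, so skip it here -->
    <disk name='vdb' snapshot='no'/>
  </disks>
</domainsnapshot>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('my-vm')   # made-up VM name
dom.snapshotCreateXML(SNAP_XML, 0)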

API changes
-----------

New verbs:
- VM.freezeFileSystems - basically invokes virDomainFSFreeze
- VM.thawFileSystems - basically invokes virDomainFSThaw
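
A minimal sketch of what the new verbs could look like on the Vdsm side,
assuming the libvirt-python bindings (>= 1.2.5) and leaving out all of Vdsm's
API plumbing; dom is the libvirt domain object of the VM:

def freezeFileSystems(dom):
    """Freeze all mounted guest file systems (requires a running
    qemu guest agent). Returns the number of frozen file systems."""
    return dom.fsFreeze()

def thawFileSystems(dom):
    """Thaw the file systems frozen by freezeFileSystems()."""
    return dom.fsThaw()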


What do you think?

Nir


[ovirt-devel] Live export/import

2015-06-19 Thread Christopher Pereira
Hi,

I would like to migrate a VM between two different oVirt installations with
minimal downtime.

Using an export domain is slow (it requires copying the images twice) and
requires stopping the VM.

It seems like the best option is to:

1. Create a snapshot
2. Transfer the backing chain images and OVF files to the destination storage
   domain
3. Stop the VM
4. Transfer the active snapshot
5. Import the VM from the destination storage domain
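
For reference, a rough sketch of what the two transfer steps could look like
with plain rsync (paths and hosts are made up, and this assumes the
destination storage domain is reachable over ssh):

import subprocess

SRC = "/rhev/data-center/mnt/SRC-SD/images/IMG-UUID/"
DST = "root@dest-host:/rhev/data-center/mnt/DST-SD/images/IMG-UUID/"

def transfer_images():
    # -a preserves ownership/permissions, -S keeps the images sparse.
    subprocess.check_call(["rsync", "-aS", SRC, DST])

transfer_images()   # step 2: copy the backing chain while the VM is running
# ... stop the VM (step 3) ...
transfer_images()   # step 4: only the small active layer changed, so this
                    # second pass - and therefore the downtime - is short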

Can you please comment and suggest other alternatives?

Thanks.



Re: [ovirt-devel] [VDSM] Live snapshot with ceph disks

2015-06-19 Thread Christopher Pereira
Hi Nir,

Regarding "3. Engine creates snapshot *via cinder*"...

What are the benefits of creating snapshots via cinder vs via libvirt?

Libvirt and qemu already offer core VM-aware storage and memory snapshot
features.
Besides, snapshot-create-as involves no VM downtime.
It would be a mistake to implement snapshotting at the Ceph layer.
At some point you would need VM-aware code (e.g. for the VM memory state) and
end up going back to the libvirt + qemu way anyway.
There seems to be qemu + libvirt support for Ceph snapshots (via rbd commands),
which probably offers some (?) VM-awareness, but what are the benefits of not
using the good old core libvirt + qemu snapshot features?
I must be missing something...
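
For comparison, a plain libvirt live disk snapshot is a single call; here is a
minimal sketch (the VM name is made up), where the QUIESCE flag makes libvirt
freeze and thaw the guest file systems itself through the guest agent:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('my-vm')   # made-up VM name

SNAP_XML = "<domainsnapshot><name>snap1</name></domainsnapshot>"

# Disk-only external snapshot, quiesced through the qemu guest agent.
dom.snapshotCreateXML(
    SNAP_XML,
    libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY |
    libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE)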

2) Not related:

It seems like oVirt has shifted its focus towards Ceph recently...

I would like to drop Gluster for Ceph if the latter supports SEEK_HOLE reads
and efficient sparse file operations. Can someone please confirm whether Ceph
supports SEEK_HOLE? I saw some related code, but would like to ask for
comments before setting up and benchmarking Ceph sparse image file operations.
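
A quick way to check is to lseek with SEEK_HOLE on a sparse test file on the
mounted file system; a minimal sketch (the mount point is made up, needs
Python >= 3.3):

import os

def supports_seek_hole(path):
    """Create a sparse file and check whether SEEK_HOLE finds the hole."""
    with open(path, "wb") as f:
        f.truncate(1024 * 1024)          # 1 MiB hole, no data written
    fd = os.open(path, os.O_RDONLY)
    try:
        # Without real hole support, the kernel reports the whole file as
        # data and SEEK_HOLE jumps to the end of the file instead of 0.
        return os.lseek(fd, 0, os.SEEK_HOLE) == 0
    finally:
        os.close(fd)
        os.remove(path)

print(supports_seek_hole("/mnt/cephfs/seek_hole_test"))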

