Hi Tim,

Thanks for your feedback!

On 9 Jun 2014, at 16:44, Tim Mackey <tmac...@gmail.com> wrote:

> Dave,
> 
> Thanks for putting this up on the wiki. A few things jumped out at me...
> 
> - Please change "Xen" to "XenProject" or "Xen Project" as appropriate.
> There's already a ton of confusion out there, and I'd like to see us
> get our terms correct from the outset where ever possible.

Sure — will do.

> - It would be good to see a UI mock up for how users would configure
> the Xen Project hypervisor option.  I think that would go a long way
> to helping with the mixed hypervisor cluster concept and how it could
> be blocked.

My comments about clusters were to emphasise that you shouldn’t expect to live 
migrate a VM between KVM and Xen, even if both are using libvirt underneath. 
Perhaps this is a misunderstanding on my part: is it possible today to mix 
hypervisors within a single CloudStack cluster? I’m not trying to change 
anything, just to point out the obvious. Maybe I got that wrong :-)

> - It would also be good to see examples of how the APIs might need to
> be changed to support this.  Minimally I'd expect to see things like
> supported disk/network/os types and that sort of thing.

I can talk about some of this more explicitly in the doc. Since Xen can use 
qemu for disks (for both PV and HVM guests), there should be no difference in 
supported disk formats between this and the existing KVM support. I’m not 
proposing to add anything to the Xen support that isn’t already supported by 
KVM, such as .vhd via tapdisk. Similarly, networking is handled by the regular 
Linux network stack, so that should all work in the same way.
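To make that concrete, here’s a rough sketch (untested, paths and bridge name made up for illustration) of what the disk and interface sections of a libvirt domain definition might look like under the libxl driver — essentially the same shape as the KVM case, with qemu serving a qcow2 volume and a plain Linux bridge for networking:

```xml
<!-- Illustrative fragment of a libvirt domain definition for the libxl
     driver; the image path and bridge name are hypothetical. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='xvda' bus='xen'/>
</disk>
<interface type='bridge'>
  <source bridge='cloudbr0'/>
</interface>
```

The only Xen-specific part is the target bus; the driver and source elements are the same ones the KVM code paths already generate.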

> - I see you have a todo to document the supported Xen Project
> hypervisor and libvirt versions, but also dependencies on libxl
> changes.  Are these critical dependencies, or if someone doesn't have
> latest upstream will things work in a reduced feature set?

In this proposal they would be critical dependencies (I’ll go and make that 
clear). It is possible to make transitional arrangements, but I didn’t want to 
overburden this proposal with backwards compatibility.

> - C6.1 talks about exposing a config setting.  Is that really
> required?  Couldn't that be set correctly based on hypervisor type?

You’re right that this would be a hypervisor-specific thing.

I’m still pondering the choice between PV and HVM. Using PV mode is convenient 
because it would allow the VMs to boot under Xen inside VirtualBox, like the 
current DevCloud. Using HVM might be more future-proof.
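For reference, the choice shows up in the domain XML’s os/boot configuration. Roughly (an illustrative sketch, not something I’ve pinned down against a particular libvirt version):

```xml
<!-- PV guest: the kernel is chosen by a bootloader running in dom0 -->
<os>
  <type>linux</type>
</os>
<bootloader>pygrub</bootloader>

<!-- HVM guest: firmware boot via qemu, closer to the KVM case -->
<os>
  <type>hvm</type>
  <loader>hvmloader</loader>
  <boot dev='hd'/>
</os>
```

The HVM form is structurally much closer to what the KVM plugin already emits, which is part of why it might be the more future-proof option.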

> - Would QCOW2 be used for the Xen Project disk type for all templates
> to keep with KVM consistency?  I'm actually thinking  about support
> for VMDK, but perhaps that's a different proposal?

That sounds like a separate proposal, but it shouldn’t conflict with this one 
(assuming the VMDK support is in qemu).

> - Since we're talking about sharing a libvirt plugin, I'm not clear on
> if the shared work is done in a new libvirt plugin which is then
> exposed to a KVM and a XenProject plugin or if the existing KVM plugin
> is refactored to encompass both.

Not totally sure what the best thing to do is — I’ll have to play with the code 
a little more.

Thanks,
Dave


> 
> -tim
> 
> 
> 
> 
> On Mon, Jun 9, 2014 at 2:35 AM, Wido den Hollander <w...@widodh.nl> wrote:
>> 
>> 
>> On 06/08/2014 11:14 PM, Dave Scott wrote:
>>> 
>>> Hi Wido,
>>> 
>>> Thanks for your mail!
>>> 
>>> On 8 Jun 2014, at 19:02, Wido den Hollander <w...@widodh.nl> wrote:
>>> 
>>>> On 06/08/2014 06:23 PM, Dave Scott wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> Following on from the earlier "[PROPOSAL] Support pure Xen as a
>>>>> hypervisor”, I’ve added a design doc to the wiki:
>>>>> 
>>>>> 
>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Allow+hosts+running+the+Xen+hypervisor+to+be+managed+via+libvirt
>>>>> 
>>>>> This design would allow people who want to manage their hypervisors
>>>>> purely through the libvirt tools to choose the Xen hypervisor.
>>>>> 
>>>>> From the code point of view, I want to maximise sharing between the KVM
>>>>> and Xen code paths, partly to make QA easier and partly to maximise the
>>>>> chance that adding a feature for “Xen” causes it to work for “KVM” and
>>>>> vice-versa. In particular this means that, if a genuinely-useful 
>>>>> capability
>>>>> is currently missing from the libvirt libxl driver, I want to implement it
>>>>> rather than work around it.
>>>>> 
>>>> 
>>>> Seems like a great route to me! You also want to support Xen+Qemu with
>>>> this way?
>>> 
>>> 
>>> Yes, it should be possible to run fully virtualised VMs with Xen + Qemu. I
>>> think we’ll be able to choose whether to run VMs as PV or HVM.
>> 
>> 
>> Ok, but those will be different code paths at some level.
>> 
>> 
>>> 
>>>> We have to be aware that there might be some storage differences between
>>>> KVM and Xen like Ceph which is not fully supported yet by Xen.
>>> 
>>> 
>>> Ceph is an interesting one. Xen itself doesn’t know anything about
>>> storage— instead the dom0 takes care of it either via a kernel driver
>>> (blkback) or userspace program (qemu or tapdisk). When I tried to make Ceph
>>> work about a year ago[1] I hit a bug in libxl (the Xen control library). The
>>> good news is the fix made it into Xen 4.4, so with luck we can get it to
>>> work.
>>> 
>> 
>> When Xen runs with Qemu as full HVM it's Qemu which takes care of the Ceph
>> storage, so in that case it's fixed.
>> 
>> I haven't got a lot of experience with PV Xen. I heard stories of Ceph being
>> integrated in blktap(2), but never tested it.
>> 
>> 
>>> 
>>>> If anything is missing in libvirt or the Java bindings we have to fix
>>>> that indeed instead of hacking around it.
>>> 
>>> 
>>> Great :)
>>> 
>>> Cheers,
>>> Dave
>>> 
>>> [1]
>>> http://xenserver.org/discuss-virtualization/virtualization-blog/entry/tech-preview-of-xenserver-libvirt-ceph.html
>>> 
>>>> 
>>>> Wido
>>>> 
>>>>> Comments appreciated!
>>>>> 
>>>>> Cheers,
>>>>> Dave
>>>>> 
>>>>> [1]
>>>>> http://mail-archives.apache.org/mod_mbox/cloudstack-users/201403.mbox/%3ccajgxtbnbmqtq81ralgh2kma7v5wjyzkr3xnyasmkc_br+uk...@mail.gmail.com%3e
>>>>> 
>>> 
>> 
