On Sat, Jul 13, 2013 at 9:49 AM, harryxiyou <[email protected]> wrote:
> On Sat, Jul 13, 2013 at 1:45 PM, Guido Trotter <[email protected]> wrote:
>> On Fri, Jul 12, 2013 at 11:10 PM, Lance Albertson <[email protected]> wrote:
>>> I'm asking this question for Harry (the Ganeti+GlusterFS GSoC student)
>>> to try to make sure it's understood correctly the first time.
>>>
>>> So we need to decide how to deal with the virtual disks that Ganeti
>>> will interact with for the KVM+GlusterFS support. On the surface it's
>>> going to use file-based storage similar to "file" and "sharedfile";
>>> however, it looks like in lib/storage/bdev.py you don't use tools such
>>> as qemu-img to create the raw file, you literally just make a file of
>>> a certain size.
>>>
>>
>> this is indeed the case for now. The glusterfs implementation should
>> be similar to sharedfile and rbd.
>
> Yeah, I think Xen VMs should also support GlusterFS. If we only
> implement KVM+GlusterFS, GlusterFS would only be usable from QEMU
> (KVM). So I think we should implement the GlusterFS variant of
> sharedfile first, which would provide back-end storage not only for
> QEMU (KVM) but also for Xen. Once this one is finished,
> KVM+GlusterFS would follow, right?
>
>>
>>> According to this blog post [1] about the new feature, we will need
>>> to add support for creating this file using qemu-img instead of just
>>> making a raw file. You can create the file-based disk storage on
>>> gluster directly with qemu-img, using a command such as:
>>>
>>>    qemu-img create gluster://server/volname/path/to/image size
>>>
>>
>> Ack, makes sense.
>
> This applies only to the KVM+GlusterFS path.
>
>>
>>> So to me this means we need a new device type in bdev.py, which could
>>> open the possibility of finally adding support for other image formats
>>> that qemu-img supports (such as qcow2). But for this project we can
>>> just stick to raw files.
>>>
>>
>> It's not clear to me whether raw files are supported or not, and
>> whether qemu-img is a strict dependency. Anyway, either way we need a
>> new class called GlusterBlockDevice that is able to create/delete
>> etc. devices on gluster. *If* this wants to (perhaps optionally)
>> depend on a common QemuImgDevice class, that is OK too. Then in the
>> future perhaps file/sharedfile could add qemu-img support.
>>
>>
>>> So my question for Ganeti devs is, should Harry make a new device type in
>>> bdev.py that specifically supports qemu-img (say QemuImgDevice)? It doesn't
>>> have to only work for this GlusterFS feature but could potentially be used
>>> for file and sharedfile as well.
>>>
>>
>> This would definitely be a good design (GlusterBlockDevice using
>> QemuImgDevice), but of course there's no need to change
>> file/sharedfile to support it right away. This can be done later
>> either by Harry (after the Summer of Code) or by someone else.
>
> I am glad to do it later on ;)
>
>>
>>> I think initially we were looking at the rbd patch as an example however
>>> that doesn't exactly align 1:1 since we aren't directly accessing a block
>>> device. We're doing all of the operations via qemu-img or qemu (kvm)
>>> directly.
>>>
>>
>> Didn't we say there would also be (optional) direct access? Or was
>> that deemed not needed/too complex?
>> Anyway it's OK to behave similarly to rbd but ignore the direct
>> access part (or make it optional), as this is the plan for rbd
>> itself as well.
>>
>
> Again, this applies to the KVM+GlusterFS path.
>
> Summary: I think we should implement "GlusterFS Ganeti Support" in
> two parts, as follows.
>
> After running "gnt-instance add -t gluster xxxx", if the hypervisor is
> QEMU (KVM), the instance would use the KVM+GlusterFS way (direct
> access). If it is Xen, it would use the GlusterFS file way (like
> "gnt-instance add -t file xxx"). I would finish the GlusterFS file
> way first, because my Ganeti installation uses Xen VMs. After that
> part is finished, I would complete the KVM+GlusterFS way for QEMU
> (KVM) VMs.
>
> Any comments?
>
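For clarity, the two access modes described above could be sketched
roughly like this (all names and the mount point here are hypothetical,
not existing Ganeti code; the real logic would live in bdev.py and the
hypervisor layer):

```python
def choose_disk_access(hypervisor, server, volume, image):
    """Pick how an instance reaches its Gluster-backed disk.

    KVM can talk to gluster directly via a gluster:// URI (QEMU's
    native gluster block driver); Xen would instead use an ordinary
    file path on a mounted GlusterFS volume.
    """
    if hypervisor == "kvm":
        # Direct access: no FUSE mount needed on the node.
        return "gluster://%s/%s/%s" % (server, volume, image)
    # File way: assume the volume is FUSE-mounted at a node-local
    # directory (hypothetical mount point shown here).
    return "/var/run/ganeti/gluster/%s/%s" % (volume, image)
```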

Looks like a good plan. Please also consider giving the user the
option of a qemu+kernel backend too.
This should be encoded as a disk parameter.
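As a rough sketch of what a QemuImgDevice-style helper might do when
creating an image (build_qemu_img_create is a hypothetical name, not
existing Ganeti code; the format would come from the disk parameter
mentioned above):

```python
def build_qemu_img_create(uri, size_mib, fmt="raw"):
    """Return the argv for creating a disk image with qemu-img.

    `uri` may be a plain path or a gluster://server/volume/image URI;
    fmt="raw" keeps parity with the current file/sharedfile behaviour,
    while qcow2 etc. become possible later.
    """
    return ["qemu-img", "create", "-f", fmt, uri, "%dM" % size_mib]
```

The returned argv would then be handed to Ganeti's command runner
(e.g. utils.RunCmd) on the node doing the creation.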

Thanks,

Guido
