Mohammed Gamal wrote:
> On Tue, Apr 20, 2010 at 12:54 AM, jvrao <jv...@linux.vnet.ibm.com> wrote:
>> Mohammed Gamal wrote:
>>> On Tue, Apr 13, 2010 at 9:08 PM, jvrao <jv...@linux.vnet.ibm.com> wrote:
>>>> jvrao wrote:
>>>>> Alexander Graf wrote:
>>>>>> On 12.04.2010, at 13:58, Jamie Lokier wrote:
>>>>>>
>>>>>>> Mohammed Gamal wrote:
>>>>>>>> On Mon, Apr 12, 2010 at 12:29 AM, Jamie Lokier <ja...@shareable.org> 
>>>>>>>> wrote:
>>>>>>>>> Javier Guerra Giraldez wrote:
>>>>>>>>>> On Sat, Apr 10, 2010 at 7:42 AM, Mohammed Gamal 
>>>>>>>>>> <m.gamal...@gmail.com> wrote:
>>>>>>>>>>> On Sat, Apr 10, 2010 at 2:12 PM, Jamie Lokier <ja...@shareable.org> 
>>>>>>>>>>> wrote:
>>>>>>>>>>>> To throw a spanner in, the most widely supported filesystem across
>>>>>>>>>>>> operating systems is probably NFS, version 2 :-)
>>>>>>>>>>> Remember that Windows usage on a VM is not some rare use case, and
>>>>>>>>>>> it'd be a little bit of a pain from a user's perspective to have to
>>>>>>>>>>> install a third party NFS client for every VM they use. Having
>>>>>>>>>>> something supported on the VM out of the box is a better option IMO.
>>>>>>>>>> I don't think virtio-CIFS has any more support out of the box (on any
>>>>>>>>>> system) than virtio-9P.
>>>>>>>>> It doesn't, but at least network-CIFS tends to work ok and is the
>>>>>>>>> method of choice for Windows VMs - when you can setup Samba on the
>>>>>>>>> host (which as previously noted you cannot always do non-disruptively
>>>>>>>>> with current Sambas).
>>>>>>>>>
>>>>>>>>> -- Jamie
>>>>>>>>>
>>>>>>>> I think having support for both 9p and CIFS would be the best option.
>>>>>>>> In that case the user will have the option to use either one,
>>>>>>>> depending on how their guests support these filesystems. In that case
>>>>>>>> I'd prefer to work on CIFS support while the 9p effort can still go
>>>>>>>> on. I don't think both efforts are mutually exclusive.
>>>>>>>>
>>>>>>>> What do the rest of you guys think?
>>>>>>> I only noted NFS because most old OSes do not support CIFS or 9P -
>>>>>>> especially all the old unixes.
>>>>>>>
>>>>>>> I don't think old versions of MS-DOS and Windows (95, 98, ME, NT4?)
>>>>>>> even support current CIFS.  They need extra server settings to work,
>>>>>>> such as setting passwords on the server to non-encrypted, and other
>>>>>>> quirks.
>>>>>>>
>>>>>>> Meanwhile Windows Vista/2008/7 work better with SMB2 than with CIFS
>>>>>>> for properly seeing symlinks and hard links.
>>>>>>>
>>>>>>> So there is no really nice out of the box file service which works
>>>>>>> easily with all guest OSes.
>>>>>>>
>>>>>>> I'm guessing that out of all the filesystems, CIFS is the most widely
>>>>>>> supported in recent OSes (released in the last 10 years).  But I'm not
>>>>>>> really sure what the state of CIFS is for non-Windows, non-Linux,
>>>>>>> non-BSD guests.
>>>>>> So what? If you want to have direct host fs access, install guest 
>>>>>> drivers. If you can't, set up networking and use CIFS or NFS or whatever.
>>>>>>
>>>>>>> I'm not sure why 9P is being pursued.  Does anything much support it,
>>>>>>> or do all OSes except quite recent Linux need a custom driver for 9P?
>>>>>>> Even Linux only got the first commit in the kernel 5 years ago, so
>>>>>>> it probably only began appearing in stable distros about 3 years
>>>>>>> ago, if at all.  Filesystem passthrough to
>>>>>>> Linux guests installed in the last couple of years is a useful
>>>>>>> feature, and I know that for many people that is their only use of
>>>>>>> KVM, but compared with CIFS' broad support it seems like quite a
>>>>>>> narrow goal.
>>>>>> The goal is to have something simple and fast. We can fine-tune 9P to
>>>>>> align with the Linux VFS structures, keeping the overhead really small
>>>>>> (and the headache small too). For Windows guests, nothing prevents us
>>>>>> from exposing yet another 9P flavor. That again would align well with
>>>>>> Windows's VFS and be slim and fast there.
>>>>>>
>>>>>> The biggest problem I see with CIFS is that it's a huge beast. There are 
>>>>>> a lot of corner cases where it just doesn't fit in. See my previous mail 
>>>>>> for more details.
>>>>>>
>>>>> As Alex mentioned, 9P was chosen for its sheer simplicity and easy
>>>>> adaptability. NFS and CIFS do not give that flexibility. As we mentioned
>>>>> in the patch series, we are already seeing better numbers with 9P.
>>>>> Looking ahead, 9P can embed KVM/QEMU knowledge to share physical
>>>>> resources like memory/cache between the host and the guest.
>>>>>
>>>>> I think looking into the Windows side of the 9P client would be a great
>>>>> option too. The current patch on the mailing list supports the .U (Unix)
>>>>> protocol, and we will introduce the .L (Linux) variant as we move
>>>>> forward.
>>>> Hi Mohammed,
>>>> Please let us know once you decide on where your interest lies.
>>>> Will be glad to have you on VirtFS (9P) though. :)
>>>>
>>>>
>>>> - JV
>>>>
>>> It seems the community is more keen on getting 9P support merged than
>>> on getting CIFS supported, and they have made good points to support
>>> their argument. I'm not sure whether work on this project could fit in
>>> as a GSoC project, or whether there is enough remaining work left to
>>> fill one. But I'd be glad to volunteer anyway :)
>> I was thinking over the weekend about what fits your schedule and your
>> interest areas. :)
>>
>> One thing I can think of is making the NFS server export a VirtFS mount
>> on the guest to the external world. This works fine today if we enable
>> the loose cache option on the 9P mount. But I think we should identify
>> the gaps and make these exports work even otherwise.
>> Please let me know if this is something that you want to sign up for. :)
>>
>> Thanks,
>> JV
>>
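For context, the setup that already works today with the loose cache mode
is roughly the following (a sketch only -- the mount tag "hostshare", the
mount point and the export options are just examples):

  # inside the guest: mount the VirtFS share with loose caching
  mount -t 9p -o trans=virtio,version=9p2000.u,cache=loose hostshare /mnt/virtfs

  # /etc/exports on the guest (fsid= is needed since there is no block device)
  /mnt/virtfs  *(rw,no_subtree_check,fsid=1)

  # re-read the exports table so the NFS server picks it up
  exportfs -ra
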
> 
> This'd be something interesting to do. I wonder if that would fit in
> the GSoC timeframe, or whether the time would be a little too short.
> So how long would you estimate something like that would take?

I think it would take ~3 person-months for someone with decent VFS/NFS
knowledge. The key is the fh-to-dentry mapping. In the loose cache mode the
client caches this information, but even in this mode we can't assume it
will stay cached forever. We need protocol amendments and client/server
side changes to implement this for the no-cache mode, which could then also
be used in the loose cache mode when we get a cache miss. A rough idea of
where that mapping would plug in on the client side is sketched below.
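
Something along these lines on the v9fs side is what I have in mind --
purely a sketch, not existing code. It assumes the NFS file handle carries
the 9P QID (path + version), and that a hypothetical protocol extension
would let us resolve a QID back to a fid/inode on a cache miss:

  /*
   * Sketch only -- not current v9fs code.  The file handle layout and the
   * QID-resolution step are assumptions for illustration.
   */
  #include <linux/exportfs.h>
  #include <linux/err.h>
  #include <linux/fs.h>

  struct v9fs_fh {
          u64 qid_path;           /* QID.path: unique file id on the server */
          u32 qid_version;        /* QID.version: acts as a generation number */
  };

  static struct dentry *v9fs_fh_to_dentry(struct super_block *sb,
                                           struct fid *fid,
                                           int fh_len, int fh_type)
  {
          struct v9fs_fh *fh = (struct v9fs_fh *)fid->raw;
          struct inode *inode;

          /* fh_len is in 4-byte words */
          if (fh_len * 4 < sizeof(*fh))
                  return ERR_PTR(-EINVAL);

          /*
           * cache=loose: the inode is normally still in the inode cache,
           * so a plain icache lookup by QID.path is enough.
           */
          inode = ilookup(sb, (unsigned long)fh->qid_path);
          if (!inode) {
                  /*
                   * Cache miss (or cache=none): this is where the proposed
                   * protocol amendment comes in -- resolve the QID on the
                   * server and instantiate a fresh inode from the result.
                   * Until then, all we can return is ESTALE.
                   */
                  return ERR_PTR(-ESTALE);
          }

          return d_obtain_alias(inode);
  }

  /* hooked up from fill_super via sb->s_export_op */
  static const struct export_operations v9fs_export_ops = {
          .fh_to_dentry = v9fs_fh_to_dentry,
  };

The export_operations hook itself is standard; the real work is the
protocol amendment behind the ESTALE branch above.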

Thanks,
JV
> 
> Regards,
> Mohammed
> 
> 