On 2019/8/9 16:21, Stefan Hajnoczi wrote:
> On Thu, Aug 08, 2019 at 10:53:16AM +0100, Dr. David Alan Gilbert wrote:
>> * Stefan Hajnoczi (stefa...@redhat.com) wrote:
>>> On Wed, Aug 07, 2019 at 04:57:15PM -0400, Vivek Goyal wrote:
>>> 2. Can MAP/UNMAP be performed directly in QEMU via a separate virtqueue?
>>
>> I think there are two things to solve here that I don't currently know
>> the answer to:
>> 2a) We'd need to get the fd to qemu for the thing to mmap;
>> we might be able to cache the fd on the qemu side for existing
>> mappings, so when a new mapping is requested for an existing file
>> qemu would already have the fd.
>>
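For 2a: once qemu has received the fd for a file (however it is handed
over), it could cache it and reuse it for later mappings of the same
file. A minimal sketch of such a cache, keyed on (st_dev, st_ino); all
names and the fixed-size table are made up for illustration, not real
QEMU code:

#include <stdint.h>
#include <stddef.h>
#include <unistd.h>

#define FD_CACHE_SIZE 256

struct fd_cache_entry {
    uint64_t dev;   /* st_dev of the backing file */
    uint64_t ino;   /* st_ino of the backing file */
    int fd;         /* cached fd, -1 if the slot is free */
};

static struct fd_cache_entry fd_cache[FD_CACHE_SIZE];

static void fd_cache_init(void)
{
    for (size_t i = 0; i < FD_CACHE_SIZE; i++) {
        fd_cache[i].fd = -1;
    }
}

/* Returns a cached fd, or -1 if qemu still needs the fd to be passed. */
static int fd_cache_lookup(uint64_t dev, uint64_t ino)
{
    for (size_t i = 0; i < FD_CACHE_SIZE; i++) {
        if (fd_cache[i].fd >= 0 &&
            fd_cache[i].dev == dev && fd_cache[i].ino == ino) {
            return fd_cache[i].fd;
        }
    }
    return -1;
}

static void fd_cache_insert(uint64_t dev, uint64_t ino, int fd)
{
    for (size_t i = 0; i < FD_CACHE_SIZE; i++) {
        if (fd_cache[i].fd < 0) {
            fd_cache[i] = (struct fd_cache_entry){ dev, ino, fd };
            return;
        }
    }
    /* Cache full: evict slot 0 for simplicity. */
    close(fd_cache[0].fd);
    fd_cache[0] = (struct fd_cache_entry){ dev, ino, fd };
}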
>> 2b) Running a device with a mix of queues handled inside QEMU and by
>> the vhost-user backend; I don't think we have anything with that mix yet.
>
> vhost-user-net works in the same way. The ctrl queue is handled by QEMU
> and the rx/tx queues by the vhost device. This is in fact how vhost was
> initially designed: the vhost device is not a full virtio device, only
> the dataplane.
Agreed.
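Just to make the mixed ownership in 2b concrete, a toy illustration (not
real QEMU code; the queue layout is only an assumption based on the
MAP/UNMAP-queue idea above):

#include <stdio.h>

enum vq_owner { OWNER_QEMU, OWNER_VHOST_USER };

/* Assumed layout for illustration only: queue 1 carries MAP/UNMAP and is
 * handled inside qemu, every other queue is handled by virtiofsd. */
static enum vq_owner vq_owner_for(unsigned int vq_idx)
{
    return vq_idx == 1 ? OWNER_QEMU : OWNER_VHOST_USER;
}

int main(void)
{
    for (unsigned int i = 0; i < 4; i++) {
        printf("vq %u -> %s\n", i,
               vq_owner_for(i) == OWNER_QEMU ? "qemu" : "vhost-user backend");
    }
    return 0;
}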
>
>>> 3. Can READ/WRITE be performed directly in QEMU via a separate virtqueue
>>> to eliminate the bad address problem?
>>
>> Are you thinking of doing all read/writes that way, or just the corner
>> cases? It doesn't seem worth it for the corner cases unless you're
>> finding them cropping up in real workloads.
>
> Send all READ/WRITE requests to QEMU instead of virtiofsd.
>
> Only handle metadata requests in virtiofsd (OPEN, RELEASE, READDIR,
> MKDIR, etc.).
>
Sorry if I'm not catching your point, but I would prefer the split the
other way around: let virtiofsd handle the READ/WRITE requests and QEMU
handle the metadata requests, since virtiofsd is good at dataplane
processing thanks to its thread pool and (maybe in the future) CPU
affinity. As you said, virtiofsd is essentially acting as a vhost-user
device, which should care less about control requests.
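Whichever side ends up owning the data path, the split boils down to
dispatching on the FUSE opcode. A rough sketch (only the opcodes and
struct fuse_in_header come from <linux/fuse.h>; the routing itself is
hypothetical):

#include <stdbool.h>
#include <linux/fuse.h>

/* READ/WRITE are the data path; OPEN, RELEASE, READDIR, MKDIR, ... are
 * treated as metadata and go to the other side. */
static bool is_data_request(const struct fuse_in_header *in)
{
    switch (in->opcode) {
    case FUSE_READ:
    case FUSE_WRITE:
        return true;
    default:
        return false;
    }
}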
If our concern is improving mmap/read/write performance, why not add a
delayed worker for unmap, which would reduce the number of unmap
operations? Maybe virtiofsd could still handle both data and metadata
requests that way.
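Something along these lines (a very rough sketch of the delayed-unmap
idea; every name here is made up, and a real version would key on the
DAX window range, handle errors, and bound how long an unmap may be
deferred):

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAX_PENDING 128

struct pending_unmap {
    void *addr;
    size_t len;
    bool in_use;
};

static struct pending_unmap pending[MAX_PENDING];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Called instead of munmap(): just record the range. */
static void delayed_unmap(void *addr, size_t len)
{
    pthread_mutex_lock(&lock);
    for (int i = 0; i < MAX_PENDING; i++) {
        if (!pending[i].in_use) {
            pending[i] = (struct pending_unmap){ addr, len, true };
            pthread_mutex_unlock(&lock);
            return;
        }
    }
    pthread_mutex_unlock(&lock);
    munmap(addr, len);          /* queue full: fall back to immediate unmap */
}

/* Called on a new MAP request: reuse a still-pending mapping if possible. */
static bool cancel_pending_unmap(void *addr, size_t len)
{
    bool reused = false;

    pthread_mutex_lock(&lock);
    for (int i = 0; i < MAX_PENDING; i++) {
        if (pending[i].in_use && pending[i].addr == addr &&
            pending[i].len == len) {
            pending[i].in_use = false;
            reused = true;
            break;
        }
    }
    pthread_mutex_unlock(&lock);
    return reused;
}

/* Worker thread: flush the queue every second. */
static void *unmap_worker(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(1);
        pthread_mutex_lock(&lock);
        for (int i = 0; i < MAX_PENDING; i++) {
            if (pending[i].in_use) {
                munmap(pending[i].addr, pending[i].len);
                pending[i].in_use = false;
            }
        }
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}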
Jun