On Sun, Aug 11, 2019 at 10:26:18AM +0800, piaojun wrote:
> On 2019/8/9 16:21, Stefan Hajnoczi wrote:
> > On Thu, Aug 08, 2019 at 10:53:16AM +0100, Dr. David Alan Gilbert wrote:
> >> * Stefan Hajnoczi (stefa...@redhat.com) wrote:
> >>> On Wed, Aug 07, 2019 at 04:57:15PM -0400, Vivek Goyal wrote:
> >>> 3. Can READ/WRITE be performed directly in QEMU via a separate virtqueue
> >>>    to eliminate the bad address problem?
> >>
> >> Are you thinking of doing all read/writes that way, or just the corner
> >> cases? It doesn't seem worth it for the corner cases unless you're
> >> finding them cropping up in real work loads.
> > 
> > Send all READ/WRITE requests to QEMU instead of virtiofsd.
> > 
> > Only handle metadata requests in virtiofsd (OPEN, RELEASE, READDIR,
> > MKDIR, etc.).
> > 
> 
> Sorry for not catching your point. I would prefer virtiofsd to handle
> READ/WRITE requests and QEMU to handle metadata requests, since
> virtiofsd is good at dataplane processing thanks to its thread pool
> and (perhaps in the future) CPU affinity. As you said, virtiofsd is
> just acting as a vhost-user device, which should care less about
> control requests.
> 
> If our concern is improving mmap/write/read performance, why not add
> a delayed worker for munmap, which could reduce the number of unmap
> operations? Maybe virtiofsd could still handle both data and metadata
> requests that way.
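
(As an aside, here is a minimal sketch of that delayed-unmap idea,
with hypothetical names: instead of munmap()ing a DAX range
immediately, queue it and let a worker reclaim it after a grace
period; a real implementation would also check the queue before
mmap()ing the same range again so a still-mapped range can be reused.)

#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

struct lazy_unmap {
    void *addr;                 /* start of the mapping to reclaim */
    size_t len;                 /* length of the mapping */
    time_t expires;             /* when the worker may munmap() it */
    struct lazy_unmap *next;
};

static struct lazy_unmap *pending;  /* ranges awaiting unmap */
static pthread_mutex_t pending_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called instead of munmap(): defer the actual unmap for a while. */
void unmap_later(void *addr, size_t len)
{
    struct lazy_unmap *e = malloc(sizeof(*e));

    if (!e) {
        munmap(addr, len);      /* fall back to an immediate unmap */
        return;
    }
    e->addr = addr;
    e->len = len;
    e->expires = time(NULL) + 5;    /* arbitrary grace period */
    pthread_mutex_lock(&pending_lock);
    e->next = pending;
    pending = e;
    pthread_mutex_unlock(&pending_lock);
}

/* Worker thread: batch-reclaims expired ranges once per second. */
void *unmap_worker(void *arg)
{
    for (;;) {
        sleep(1);
        pthread_mutex_lock(&pending_lock);
        struct lazy_unmap **pp = &pending;
        while (*pp) {
            struct lazy_unmap *e = *pp;
            if (time(NULL) >= e->expires) {
                *pp = e->next;
                munmap(e->addr, e->len);
                free(e);
            } else {
                pp = &e->next;
            }
        }
        pthread_mutex_unlock(&pending_lock);
    }
    return NULL;
}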

Doing READ/WRITE in QEMU solves the problem that vhost-user slaves only
have access to guest RAM regions.  If a guest transfers other memory,
like an address in the DAX Window, to/from the vhost-user device, then
virtqueue buffer address translation fails.
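
For illustration, here is roughly how the slave's translation works;
the types and names below are hypothetical, not the actual
libvhost-user code:

/* Hypothetical sketch of guest-address translation in a vhost-user
 * slave.  The regions come from VHOST_USER_SET_MEM_TABLE and cover
 * guest RAM only, so a DAX window address matches none of them. */
#include <stddef.h>
#include <stdint.h>

struct mem_region {
    uint64_t guest_phys_addr;   /* region start in guest physical space */
    uint64_t size;              /* region length in bytes */
    void    *mmap_addr;         /* where the slave mmap()ed the region */
};

static struct mem_region regions[8];
static unsigned int nregions;

static void *gpa_to_va(uint64_t gpa)
{
    for (unsigned int i = 0; i < nregions; i++) {
        struct mem_region *r = &regions[i];

        if (gpa >= r->guest_phys_addr &&
            gpa - r->guest_phys_addr < r->size) {
            return (char *)r->mmap_addr + (gpa - r->guest_phys_addr);
        }
    }
    return NULL;    /* e.g. a DAX window address: translation fails */
}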

Dave added a code path that bounces such accesses through the QEMU
process using the VHOST_USER_SLAVE_FS_IO slave fd request, but it would
be simpler, faster, and cleaner to do I/O in QEMU in the first place.
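
Roughly, the slave side of that bounce path looks like the sketch
below.  The struct fields and the send_slave_fs_io() helper are
illustrative only, not the actual VHOST_USER_SLAVE_FS_IO wire format:

/* Hypothetical sketch of the bounce path: when translation fails, ask
 * QEMU (which can resolve DAX window addresses) to do the copy on our
 * behalf over the slave channel. */
#include <stdint.h>
#include <unistd.h>

#define FS_IO_READ 0x1          /* illustrative direction flag */

struct fs_io_request {
    uint64_t fd_offset;         /* offset in the backing file */
    uint64_t guest_addr;        /* untranslatable guest address */
    uint64_t len;               /* number of bytes to transfer */
    uint32_t flags;             /* FS_IO_READ or a write equivalent */
};

/* Hypothetical helper: sends the request to QEMU over the slave fd. */
ssize_t send_slave_fs_io(const struct fs_io_request *req, int fd);

static ssize_t fuse_read(int fd, uint64_t guest_addr, uint64_t off,
                         uint64_t len)
{
    void *va = gpa_to_va(guest_addr);   /* from the sketch above */

    if (va) {
        return pread(fd, va, len, off); /* fast path: plain guest RAM */
    }

    /* Slow path: bounce the transfer through the QEMU process. */
    struct fs_io_request req = {
        .fd_offset  = off,
        .guest_addr = guest_addr,
        .len        = len,
        .flags      = FS_IO_READ,
    };
    return send_slave_fs_io(&req, fd);
}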

What I don't like about moving READ/WRITE into QEMU is that we need to
use even more virtqueues for multiqueue operation :).

Stefan
