Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-06-16 Thread Dr. David Alan Gilbert
* Stefan Hajnoczi (stefa...@redhat.com) wrote:
> On Wed, Jun 16, 2021 at 01:36:10PM +0100, Dr. David Alan Gilbert wrote:
> > * Stefan Hajnoczi (stefa...@redhat.com) wrote:
> > > On Thu, Jun 10, 2021 at 04:29:42PM +0100, Dr. David Alan Gilbert wrote:
> > > > * Dr. David Alan Gilbert (dgilb...@redhat.com) wrote:
> > > > > * Stefan Hajnoczi (stefa...@redhat.com) wrote:
> > > > +uint64_t addr; /* In the bus address of the device */
> > > 
> > > Please check the spec for preferred terminology. "bus address" isn't
> > > used in the spec, so there's probably another term for it.
> > 
> > I'm not seeing anything useful in the virtio spec; it mostly talks about
> > guest physical addresses; it does say 'bus addresses' in the definition
> > of 'VIRTIO_F_ACCESS_PLATFORM'.
> 
> I meant the docs/interop/vhost-user.rst spec.

I think they use the phrase 'guest address' so I've changed that to:

uint64_t guest_addr; 

   Elsewhere in the vhost-user.rst it says:

   When the ``VIRTIO_F_IOMMU_PLATFORM`` feature has not been negotiated:

   * Guest addresses map to the vhost memory region containing that guest
     address.

   When the ``VIRTIO_F_IOMMU_PLATFORM`` feature has been negotiated:

   * Guest addresses are also called I/O virtual addresses (IOVAs).  They are
     translated to user addresses via the IOTLB.
> Stefan
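
(Aside, for backend authors: when the memory *is* shared, libvhost-user can
already turn such a guest address into a local pointer; the interesting case
in this thread is when that lookup fails, e.g. for a DAX window address
outside any mapped region. A minimal sketch using the real vu_gpa_to_va()
API, error handling elided:)

/* Sketch: translate a guest address into a pointer in this process
 * using the mapped vhost memory regions. vu_gpa_to_va() is the real
 * libvhost-user call; returning NULL is precisely the case where a
 * VHOST_USER_SLAVE_MEM_ACCESS round trip would be needed. */
#include "libvhost-user.h"

static void *resolve_guest_addr(VuDev *dev, uint64_t guest_addr, uint64_t len)
{
    uint64_t plen = len;
    void *p = vu_gpa_to_va(dev, &plen, guest_addr);

    if (!p || plen < len) {
        return NULL; /* unmapped, or the range straddles a region boundary */
    }
    return p;
}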


-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-06-16 Thread Stefan Hajnoczi
On Wed, Jun 16, 2021 at 01:36:10PM +0100, Dr. David Alan Gilbert wrote:
> * Stefan Hajnoczi (stefa...@redhat.com) wrote:
> > On Thu, Jun 10, 2021 at 04:29:42PM +0100, Dr. David Alan Gilbert wrote:
> > > * Dr. David Alan Gilbert (dgilb...@redhat.com) wrote:
> > > > * Stefan Hajnoczi (stefa...@redhat.com) wrote:
> > > +uint64_t addr; /* In the bus address of the device */
> > 
> > Please check the spec for preferred terminology. "bus address" isn't
> > used in the spec, so there's probably another term for it.
> 
> I'm not seeing anything useful in the virtio spec; it mostly talks about
> guest physical addresses; it does say 'bus addresses' in the definition
> of 'VIRTIO_F_ACCESS_PLATFORM'.

I meant the docs/interop/vhost-user.rst spec.

Stefan




Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-06-16 Thread Dr. David Alan Gilbert
* Stefan Hajnoczi (stefa...@redhat.com) wrote:
> On Thu, Jun 10, 2021 at 04:29:42PM +0100, Dr. David Alan Gilbert wrote:
> > * Dr. David Alan Gilbert (dgilb...@redhat.com) wrote:
> > > * Stefan Hajnoczi (stefa...@redhat.com) wrote:
> > 
> > 
> > 
> > > > Instead I was thinking about VHOST_USER_DMA_READ/WRITE messages
> > > > containing the address (a device IOVA, it could just be a guest physical
> > > > memory address in most cases) and the length. The WRITE message would
> > > > also contain the data that the vhost-user device wishes to write. The
> > > > READ message reply would contain the data that the device read from
> > > > QEMU.
> > > > 
> > > > QEMU would implement this using QEMU's address_space_read/write() APIs.
> > > > 
> > > > So basically just a new vhost-user protocol message to do a memcpy(),
> > > > but with guest addresses and vIOMMU support :).
> > > 
> > > This doesn't actually feel that hard - ignoring vIOMMU for a minute
> > > which I know very little about - I'd have to think where the data
> > > actually flows, probably the slave fd.
> > > 
> > > > The vhost-user device will need to do bounce buffering so using these
> > > > new messages is slower than zero-copy I/O to shared guest RAM.
> > > 
> > > I guess the theory is it's only in the weird corner cases anyway.
> 
> The feature is also useful if DMA isolation is desirable (i.e.
> security/reliability are more important than performance). Once this new
> vhost-user protocol feature is available it will be possible to run
> vhost-user devices without shared memory or with limited shared memory
> (e.g. just the vring).

I don't see it ever being efficient, so that case is going to be pretty
limited.

> > The direction I'm going is something like the following;
> > the idea is that the master will have to handle the requests on a
> > separate thread, to avoid any problems with side effects from the memory
> > accesses; the slave will then have to park the requests somewhere and
> > handle them later.
> > 
> > 
> > From 07aacff77c50c8a2b588b2513f2dfcfb8f5aa9df Mon Sep 17 00:00:00 2001
> > From: "Dr. David Alan Gilbert" 
> > Date: Thu, 10 Jun 2021 15:34:04 +0100
> > Subject: [PATCH] WIP: vhost-user: DMA type interface
> > 
> > A DMA type interface where the slave can ask for a stream of bytes
> > to be read/written to the guest's memory by the master.
> > The interface is asynchronous, since a request may have side effects
> > inside the guest.
> > 
> > Signed-off-by: Dr. David Alan Gilbert 
> > ---
> >  docs/interop/vhost-user.rst   | 33 +++
> >  hw/virtio/vhost-user.c|  4 +++
> >  subprojects/libvhost-user/libvhost-user.h | 24 +
> >  3 files changed, 61 insertions(+)
> 
> Use of the word "RAM" in this patch is a little unclear since we need
> these new messages precisely when it's not ordinary guest RAM :-). Maybe
> referring to the address space is more general.

Yeh, I'll try and spot those.

> > diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
> > index 9ebd05e2bf..b9b5322147 100644
> > --- a/docs/interop/vhost-user.rst
> > +++ b/docs/interop/vhost-user.rst
> > @@ -1347,6 +1347,15 @@ Master message types
> >query the backend for its device status as defined in the Virtio
> >specification.
> >  
> > +``VHOST_USER_MEM_DATA``
> > +  :id: 41
> > +  :equivalent ioctl: N/A
> > +  :slave payload: N/A
> > +  :master payload: ``struct VhostUserMemReply``
> > +
> > +  This message is an asynchronous response to a 
> > ``VHOST_USER_SLAVE_MEM_ACCESS``
> > +  message.  Where the request was for the master to read data, this
> > +  message will be followed by the data that was read.
> 
> Please explain why this message is asynchronous. Implementors will need
> to understand the gotchas around deadlocks, etc.

I've added:
  Making this a separate asynchronous response message (rather than just a reply
  to the ``VHOST_USER_SLAVE_MEM_ACCESS``) makes it much easier for the master
  to deal with any side effects the access may have, and in particular avoid
  deadlocks they might cause if an access triggers another vhost_user message.
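
Roughly the shape this forces on an implementation (an untested sketch with
invented names, not the patch code): the message loop only parks the request,
and a worker thread performs the access and sends the eventual
``VHOST_USER_MEM_DATA``, so an access that itself generates vhost-user
traffic cannot deadlock the loop that must service that traffic.

/* Untested sketch, invented names (not the QEMU code): why the reply
 * must be asynchronous. The message loop never performs the access
 * itself; a worker does, so side effects that emit further vhost-user
 * messages can still be serviced by the loop. */
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct MemAccessReq {
    uint64_t guest_addr;
    uint64_t size;
    bool is_write;
    void *data;                  /* write payload, or read destination */
    struct MemAccessReq *next;
} MemAccessReq;

/* Called from the vhost-user message loop: just park the request. */
static void handle_slave_mem_access(MemAccessReq *req)
{
    enqueue_for_worker(req);     /* hypothetical: wakes the worker */
}

/* Worker thread: perform the access, then reply on the main fd. */
static void *mem_access_worker(void *opaque)
{
    for (;;) {
        MemAccessReq *req = dequeue_request();  /* hypothetical: blocks */
        perform_address_space_access(req);      /* may have side effects */
        send_mem_data_reply(req);               /* VHOST_USER_MEM_DATA */
        free(req);
    }
    return NULL;
}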

> >  
> >  Slave message types
> >  ---
> > @@ -1469,6 +1478,30 @@ Slave message types
> >The ``VHOST_USER_FS_FLAG_MAP_W`` flag must be set in the ``flags`` field 
> > to
> >write to the file from RAM.
> >  
> > +``VHOST_USER_SLAVE_MEM_ACCESS``
> > +  :id: 9
> > +  :equivalent ioctl: N/A
> > +  :slave payload: ``struct VhostUserMemAccess``
> > +  :master payload: N/A
> > +
> > +  Requests that the master perform a range of memory accesses on behalf
> > +  of the slave that the slave can't perform itself.
> > +
> > +  The ``VHOST_USER_MEM_FLAG_TO_MASTER`` flag must be set in the ``flags``
> > +  field for the slave to write data into the RAM of the master.   In this
> > +  case the data to write follows the ``VhostUserMemAccess`` on the fd.
> > +  The ``VHOST_USER_MEM_FLAG_FROM_MASTER`` flag must be set in the ``flags``
> > +  field for the slave to read data from the RAM of the master.

Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-06-10 Thread Stefan Hajnoczi
On Thu, Jun 10, 2021 at 04:29:42PM +0100, Dr. David Alan Gilbert wrote:
> * Dr. David Alan Gilbert (dgilb...@redhat.com) wrote:
> > * Stefan Hajnoczi (stefa...@redhat.com) wrote:
> 
> 
> 
> > > Instead I was thinking about VHOST_USER_DMA_READ/WRITE messages
> > > containing the address (a device IOVA, it could just be a guest physical
> > > memory address in most cases) and the length. The WRITE message would
> > > also contain the data that the vhost-user device wishes to write. The
> > > READ message reply would contain the data that the device read from
> > > QEMU.
> > > 
> > > QEMU would implement this using QEMU's address_space_read/write() APIs.
> > > 
> > > So basically just a new vhost-user protocol message to do a memcpy(),
> > > but with guest addresses and vIOMMU support :).
> > 
> > This doesn't actually feel that hard - ignoring vIOMMU for a minute
> > which I know very little about - I'd have to think where the data
> > actually flows, probably the slave fd.
> > 
> > > The vhost-user device will need to do bounce buffering so using these
> > > new messages is slower than zero-copy I/O to shared guest RAM.
> > 
> > I guess the theory is it's only in the weird corner cases anyway.

The feature is also useful if DMA isolation is desirable (i.e.
security/reliability are more important than performance). Once this new
vhost-user protocol feature is available it will be possible to run
vhost-user devices without shared memory or with limited shared memory
(e.g. just the vring).

> The direction I'm going is something like the following;
> the idea is that the master will have to handle the requests on a
> separate thread, to avoid any problems with side effects from the memory
> accesses; the slave will then have to park the requests somewhere and
> handle them later.
> 
> 
> From 07aacff77c50c8a2b588b2513f2dfcfb8f5aa9df Mon Sep 17 00:00:00 2001
> From: "Dr. David Alan Gilbert" 
> Date: Thu, 10 Jun 2021 15:34:04 +0100
> Subject: [PATCH] WIP: vhost-user: DMA type interface
> 
> A DMA type interface where the slave can ask for a stream of bytes
> to be read/written to the guest's memory by the master.
> The interface is asynchronous, since a request may have side effects
> inside the guest.
> 
> Signed-off-by: Dr. David Alan Gilbert 
> ---
>  docs/interop/vhost-user.rst   | 33 +++
>  hw/virtio/vhost-user.c|  4 +++
>  subprojects/libvhost-user/libvhost-user.h | 24 +
>  3 files changed, 61 insertions(+)

Use of the word "RAM" in this patch is a little unclear since we need
these new messages precisely when it's not ordinary guest RAM :-). Maybe
referring to the address space is more general.

> diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
> index 9ebd05e2bf..b9b5322147 100644
> --- a/docs/interop/vhost-user.rst
> +++ b/docs/interop/vhost-user.rst
> @@ -1347,6 +1347,15 @@ Master message types
>query the backend for its device status as defined in the Virtio
>specification.
>  
> +``VHOST_USER_MEM_DATA``
> +  :id: 41
> +  :equivalent ioctl: N/A
> +  :slave payload: N/A
> +  :master payload: ``struct VhostUserMemReply``
> +
> +  This message is an asynchronous response to a 
> ``VHOST_USER_SLAVE_MEM_ACCESS``
> +  message.  Where the request was for the master to read data, this
> +  message will be followed by the data that was read.

Please explain why this message is asynchronous. Implementors will need
to understand the gotchas around deadlocks, etc.

>  
>  Slave message types
>  ---
> @@ -1469,6 +1478,30 @@ Slave message types
>The ``VHOST_USER_FS_FLAG_MAP_W`` flag must be set in the ``flags`` field to
>write to the file from RAM.
>  
> +``VHOST_USER_SLAVE_MEM_ACCESS``
> +  :id: 9
> +  :equivalent ioctl: N/A
> +  :slave payload: ``struct VhostUserMemAccess``
> +  :master payload: N/A
> +
> +  Requests that the master perform a range of memory accesses on behalf
> +  of the slave that the slave can't perform itself.
> +
> +  The ``VHOST_USER_MEM_FLAG_TO_MASTER`` flag must be set in the ``flags``
> +  field for the slave to write data into the RAM of the master.   In this
> +  case the data to write follows the ``VhostUserMemAccess`` on the fd.
> +  The ``VHOST_USER_MEM_FLAG_FROM_MASTER`` flag must be set in the ``flags``
> +  field for the slave to read data from the RAM of the master.
> +
> +  When the master has completed the access it replies on the main fd with
> +  a ``VHOST_USER_MEM_DATA`` message.
> +
> +  The master is allowed to complete part of the request and reply stating
> +  the amount completed, leaving it to the slave to resend further components.
> +  This may happen to limit memory allocations in the master or to simplify
> +  the implementation.
> +
> +
>  .. _reply_ack:
>  
>  VHOST_USER_PROTOCOL_F_REPLY_ACK
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index 39a0e55cca..a3fefc4c1d 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c

Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-06-10 Thread Dr. David Alan Gilbert
* Dr. David Alan Gilbert (dgilb...@redhat.com) wrote:
> * Stefan Hajnoczi (stefa...@redhat.com) wrote:



> > Instead I was thinking about VHOST_USER_DMA_READ/WRITE messages
> > containing the address (a device IOVA, it could just be a guest physical
> > memory address in most cases) and the length. The WRITE message would
> > also contain the data that the vhost-user device wishes to write. The
> > READ message reply would contain the data that the device read from
> > QEMU.
> > 
> > QEMU would implement this using QEMU's address_space_read/write() APIs.
> > 
> > So basically just a new vhost-user protocol message to do a memcpy(),
> > but with guest addresses and vIOMMU support :).
> 
> This doesn't actually feel that hard - ignoring vIOMMU for a minute
> which I know very little about - I'd have to think where the data
> actually flows, probably the slave fd.
> 
> > The vhost-user device will need to do bounce buffering so using these
> > new messages is slower than zero-copy I/O to shared guest RAM.
> 
> I guess the theory is it's only in the weird corner cases anyway.

The direction I'm going is something like the following;
the idea is that the master will have to handle the requests on a
separate thread, to avoid any problems with side effects from the memory
accesses; the slave will then have to park the requests somewhere and
handle them later.


From 07aacff77c50c8a2b588b2513f2dfcfb8f5aa9df Mon Sep 17 00:00:00 2001
From: "Dr. David Alan Gilbert" 
Date: Thu, 10 Jun 2021 15:34:04 +0100
Subject: [PATCH] WIP: vhost-user: DMA type interface

A DMA type interface where the slave can ask for a stream of bytes
to be read/written to the guest's memory by the master.
The interface is asynchronous, since a request may have side effects
inside the guest.

Signed-off-by: Dr. David Alan Gilbert 
---
 docs/interop/vhost-user.rst   | 33 +++
 hw/virtio/vhost-user.c|  4 +++
 subprojects/libvhost-user/libvhost-user.h | 24 +
 3 files changed, 61 insertions(+)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 9ebd05e2bf..b9b5322147 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -1347,6 +1347,15 @@ Master message types
   query the backend for its device status as defined in the Virtio
   specification.
 
+``VHOST_USER_MEM_DATA``
+  :id: 41
+  :equivalent ioctl: N/A
+  :slave payload: N/A
+  :master payload: ``struct VhostUserMemReply``
+
+  This message is an asynchronous response to a ``VHOST_USER_SLAVE_MEM_ACCESS``
+  message.  Where the request was for the master to read data, this
+  message will be followed by the data that was read.
 
 Slave message types
 ---
@@ -1469,6 +1478,30 @@ Slave message types
   The ``VHOST_USER_FS_FLAG_MAP_W`` flag must be set in the ``flags`` field to
   write to the file from RAM.
 
+``VHOST_USER_SLAVE_MEM_ACCESS``
+  :id: 9
+  :equivalent ioctl: N/A
+  :slave payload: ``struct VhostUserMemAccess``
+  :master payload: N/A
+
+  Requests that the master perform a range of memory accesses on behalf
+  of the slave that the slave can't perform itself.
+
+  The ``VHOST_USER_MEM_FLAG_TO_MASTER`` flag must be set in the ``flags``
+  field for the slave to write data into the RAM of the master.   In this
+  case the data to write follows the ``VhostUserMemAccess`` on the fd.
+  The ``VHOST_USER_MEM_FLAG_FROM_MASTER`` flag must be set in the ``flags``
+  field for the slave to read data from the RAM of the master.
+
+  When the master has completed the access it replies on the main fd with
+  a ``VHOST_USER_MEM_DATA`` message.
+
+  The master is allowed to complete part of the request and reply stating
+  the amount completed, leaving it to the slave to resend further components.
+  This may happen to limit memory allocations in the master or to simplify
+  the implementation.
+
+
 .. _reply_ack:
 
 VHOST_USER_PROTOCOL_F_REPLY_ACK
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 39a0e55cca..a3fefc4c1d 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -126,6 +126,9 @@ typedef enum VhostUserRequest {
 VHOST_USER_GET_MAX_MEM_SLOTS = 36,
 VHOST_USER_ADD_MEM_REG = 37,
 VHOST_USER_REM_MEM_REG = 38,
+VHOST_USER_SET_STATUS = 39,
+VHOST_USER_GET_STATUS = 40,
+VHOST_USER_MEM_DATA = 41,
 VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -139,6 +142,7 @@ typedef enum VhostUserSlaveRequest {
 VHOST_USER_SLAVE_FS_MAP = 6,
 VHOST_USER_SLAVE_FS_UNMAP = 7,
 VHOST_USER_SLAVE_FS_IO = 8,
+VHOST_USER_SLAVE_MEM_ACCESS = 9,
 VHOST_USER_SLAVE_MAX
 }  VhostUserSlaveRequest;
 
diff --git a/subprojects/libvhost-user/libvhost-user.h b/subprojects/libvhost-user/libvhost-user.h
index eee611a2f6..b5444f4f6f 100644
--- a/subprojects/libvhost-user/libvhost-user.h
+++ b/subprojects/libvhost-user/libvhost-user.h
@@ -109,6 +109,9 @@ typedef enum VhostUserRequest {
 VHOST_USER_GET_MAX_MEM_SLOTS = 36,
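
(The archived diff is truncated at this point; going by the message
descriptions above, the declarations it adds would look roughly like the
following reconstruction, not the verbatim patch:)

/* Rough reconstruction, inferred from the message descriptions above;
 * not the verbatim patch. The 'id' field is an assumption: some way of
 * matching the asynchronous VHOST_USER_MEM_DATA reply to its request
 * is needed. */
#include <stdint.h>

#define VHOST_USER_MEM_FLAG_TO_MASTER   (1u << 0) /* slave writes master's space */
#define VHOST_USER_MEM_FLAG_FROM_MASTER (1u << 1) /* slave reads master's space */

typedef struct VhostUserMemAccess {
    uint32_t id;          /* echoed back in VHOST_USER_MEM_DATA (assumed) */
    uint32_t flags;       /* TO_MASTER or FROM_MASTER */
    uint64_t guest_addr;  /* guest address; an IOVA when a vIOMMU is in use */
    uint64_t size;        /* bytes to transfer; write data follows on the fd */
} VhostUserMemAccess;

typedef struct VhostUserMemReply {
    uint32_t id;          /* id copied from the request (assumed) */
    uint32_t error;       /* 0 on success */
    uint64_t size;        /* bytes actually completed; on a partial
                             completion the slave resends the remainder */
} VhostUserMemReply;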
 

Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-05-27 Thread Dr. David Alan Gilbert
* Stefan Hajnoczi (stefa...@redhat.com) wrote:
> On Mon, May 10, 2021 at 11:23:24AM -0400, Vivek Goyal wrote:
> > On Mon, May 10, 2021 at 10:05:09AM +0100, Stefan Hajnoczi wrote:
> > > On Thu, May 06, 2021 at 12:02:23PM -0400, Vivek Goyal wrote:
> > > > On Thu, May 06, 2021 at 04:37:04PM +0100, Stefan Hajnoczi wrote:
> > > > > On Wed, Apr 28, 2021 at 12:01:00PM +0100, Dr. David Alan Gilbert 
> > > > > (git) wrote:
> > > > > > From: Vivek Goyal 
> > > > > > 
> > > > > > If qemu guest asked to drop CAP_FSETID upon write, send that info
> > > > > > to qemu in SLAVE_FS_IO message so that qemu can drop capability
> > > > > > before WRITE. This is to make sure that any setuid bit is killed
> > > > > > on fd (if there is one set).
> > > > > > 
> > > > > > Signed-off-by: Vivek Goyal 
> > > > > 
> > > > > I'm not sure if the QEMU FSETID patches make sense. QEMU shouldn't be
> > > > > running with FSETID because QEMU is untrusted. FSETID would allow QEMU
> > > > > to create setgid files, thereby potentially allowing an attacker to gain
> > > > > any GID.
> > > > 
> > > > Sure, it's not recommended to run QEMU as root, but we don't block that
> > > > either and I do regularly test with qemu running as root.
> > > > 
> > > > > 
> > > > > I think it's better not to implement QEMU FSETID functionality at all
> > > > > and to handle it another way.
> > > > 
> > > > One way could be that virtiofsd tries to clear the setuid bit after I/O
> > > > has finished. But that would be a non-atomic operation, and it is fraught
> > > > with peril as it requires virtiofsd to know exactly what the kernel will
> > > > do if this write has been done with CAP_FSETID dropped.
> > > > 
> > > > > In the worst case I/O requests should just
> > > > > fail, it seems like a rare case anyway:
> > > > 
> > > > Is there a way for virtiofsd to know if qemu is running with CAP_FSETID
> > > > or not? If there is one, it might be reasonable to error out. If we
> > > > don't know, then we can't fail all the operations.
> > > > 
> > > > > I/O to a setuid/setgid file with
> > > > > a memory buffer that is not mapped in virtiofsd.
> > > > 
> > > > With DAX it is easily triggerable. A user has to append to a setuid file
> > > > in virtiofs and this path will trigger.
> > > > 
> > > > I am fine with not supporting this patch but will also need a reasonable
> > > > alternative solution.
> > > 
> > > One way to avoid this problem is by introducing DMA read/write functions
> > > into the vhost-user protocol that can be used by all device types, not
> > > just virtio-fs.
> > > 
> > > Today virtio-fs uses the IO slave request when it cannot access a region
> > > of guest memory. It sends the file descriptor to QEMU and QEMU performs
> > > the pread(2)/pwrite(2) on behalf of virtiofsd.
> > > 
> > > I mentioned in the past that this solution is over-specialized. It
> > > doesn't solve the larger problem that vhost-user processes do not have
> > > full access to the guest memory space (e.g. DAX window).
> > > 
> > > Instead of sending file I/O requests over to QEMU, the vhost-user
> > > protocol should offer DMA read/write requests so any vhost-user process
> > > can access the guest memory space where vhost's shared memory mechanism
> > > is insufficient.
> > > 
> > > Here is how it would work:
> > > 
> > > 1. Drop the IO slave request, replace it with DMA read/write slave
> > >requests.
> > > 
> > >Note that these new requests can also be used in environments where
> > >maximum vIOMMU isolation is needed for security reasons and sharing
> > >all of guest RAM with the vhost-user process is considered
> > >unacceptable.
> > > 
> > > 2. When virtqueue buffer mapping fails, send DMA read/write slave
> > >requests to transfer the data from/to QEMU. virtiofsd calls
> > >pread(2)/pwrite(2) itself with virtiofsd's Linux capabilities.
> > 
> > Can you elaborate a bit more on how these new DMA read/write vhost-user
> > commands can be implemented? I am assuming it's not real DMA, just some
> > sort of emulation of DMA. Effectively we have two processes and one
> > process needs to read/write to/from address space of other process.
> > 
> > We were also wondering if we can make use of the process_vm_readv()
> > and process_vm_writev() syscalls to achieve this. But this at least
> > requires virtiofsd to be more privileged than qemu and also virtiofsd
> > needs to know where the DAX mapping window is. We briefly discussed this here.
> > 
> > https://lore.kernel.org/qemu-devel/20210421200746.gh1579...@redhat.com/
> 
> I wasn't thinking of directly allowing QEMU virtual memory access via
> process_vm_readv/writev(). That would be more efficient but requires
> privileges and also exposes internals of QEMU's virtual memory layout
> and vIOMMU translation to the vhost-user process.
> 
> Instead I was thinking about VHOST_USER_DMA_READ/WRITE messages
> containing the address (a device IOVA, it could just be a guest physical
> memory address in most cases) and the length.

Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-05-10 Thread Stefan Hajnoczi
On Mon, May 10, 2021 at 11:23:24AM -0400, Vivek Goyal wrote:
> On Mon, May 10, 2021 at 10:05:09AM +0100, Stefan Hajnoczi wrote:
> > On Thu, May 06, 2021 at 12:02:23PM -0400, Vivek Goyal wrote:
> > > On Thu, May 06, 2021 at 04:37:04PM +0100, Stefan Hajnoczi wrote:
> > > > On Wed, Apr 28, 2021 at 12:01:00PM +0100, Dr. David Alan Gilbert (git) 
> > > > wrote:
> > > > > From: Vivek Goyal 
> > > > > 
> > > > > If the qemu guest asked to drop CAP_FSETID upon write, send that info
> > > > > to qemu in the SLAVE_FS_IO message so that qemu can drop the capability
> > > > > before the WRITE. This is to make sure that any setuid bit is killed
> > > > > on the fd (if there is one set).
> > > > > 
> > > > > Signed-off-by: Vivek Goyal 
> > > > 
> > > > I'm not sure if the QEMU FSETID patches make sense. QEMU shouldn't be
> > > > running with FSETID because QEMU is untrusted. FSETID would allow QEMU
> > > > to create setgid files, thereby potentially allowing an attacker to gain
> > > > any GID.
> > > 
> > > Sure, it's not recommended to run QEMU as root, but we don't block that
> > > either and I do regularly test with qemu running as root.
> > > 
> > > > 
> > > > I think it's better not to implement QEMU FSETID functionality at all
> > > > and to handle it another way.
> > > 
> > > One way could be that virtiofsd tries to clear the setuid bit after I/O
> > > has finished. But that would be a non-atomic operation, and it is fraught
> > > with peril as it requires virtiofsd to know exactly what the kernel will
> > > do if this write has been done with CAP_FSETID dropped.
> > > 
> > > > In the worst case I/O requests should just
> > > > fail, it seems like a rare case anyway:
> > > 
> > > Is there a way for virtiofsd to know if qemu is running with CAP_FSETID
> > > or not? If there is one, it might be reasonable to error out. If we
> > > don't know, then we can't fail all the operations.
> > > 
> > > > I/O to a setuid/setgid file with
> > > > a memory buffer that is not mapped in virtiofsd.
> > > 
> > > With DAX it is easily triggerable. A user has to append to a setuid file
> > > in virtiofs and this path will trigger.
> > > 
> > > I am fine with not supporting this patch but will also need a reasonable
> > > alternative solution.
> > 
> > One way to avoid this problem is by introducing DMA read/write functions
> > into the vhost-user protocol that can be used by all device types, not
> > just virtio-fs.
> > 
> > Today virtio-fs uses the IO slave request when it cannot access a region
> > of guest memory. It sends the file descriptor to QEMU and QEMU performs
> > the pread(2)/pwrite(2) on behalf of virtiofsd.
> > 
> > I mentioned in the past that this solution is over-specialized. It
> > doesn't solve the larger problem that vhost-user processes do not have
> > full access to the guest memory space (e.g. DAX window).
> > 
> > Instead of sending file I/O requests over to QEMU, the vhost-user
> > protocol should offer DMA read/write requests so any vhost-user process
> > can access the guest memory space where vhost's shared memory mechanism
> > is insufficient.
> > 
> > Here is how it would work:
> > 
> > 1. Drop the IO slave request, replace it with DMA read/write slave
> >requests.
> > 
> >Note that these new requests can also be used in environments where
> >maximum vIOMMU isolation is needed for security reasons and sharing
> >all of guest RAM with the vhost-user process is considered
> >unacceptable.
> > 
> > 2. When virtqueue buffer mapping fails, send DMA read/write slave
> >requests to transfer the data from/to QEMU. virtiofsd calls
> >pread(2)/pwrite(2) itself with virtiofsd's Linux capabilities.
> 
> Can you elaborate a bit more on how these new DMA read/write vhost-user
> commands can be implemented? I am assuming it's not real DMA, just some
> sort of emulation of DMA. Effectively we have two processes and one
> process needs to read/write to/from address space of other process.
> 
> We were also wondering if we can make use of the process_vm_readv()
> and process_vm_writev() syscalls to achieve this. But this at least
> requires virtiofsd to be more privileged than qemu and also virtiofsd
> needs to know where the DAX mapping window is. We briefly discussed this here.
> 
> https://lore.kernel.org/qemu-devel/20210421200746.gh1579...@redhat.com/

I wasn't thinking of directly allowing QEMU virtual memory access via
process_vm_readv/writev(). That would be more efficient but requires
privileges and also exposes internals of QEMU's virtual memory layout
and vIOMMU translation to the vhost-user process.

Instead I was thinking about VHOST_USER_DMA_READ/WRITE messages
containing the address (a device IOVA, it could just be a guest physical
memory address in most cases) and the length. The WRITE message would
also contain the data that the vhost-user device wishes to write. The
READ message reply would contain the data that the device read from
QEMU.

QEMU would implement this using QEMU's address_space_read/write() APIs.

So basically just a new vhost-user protocol message to do a memcpy(), but
with guest addresses and vIOMMU support :).
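
As a sketch of the master side (just to show the shape;
address_space_read/write(), address_space_memory and MEMTXATTRS_UNSPECIFIED
are QEMU's real APIs, the surrounding function is invented):

/* Sketch: servicing a DMA read/write request in QEMU with the existing
 * address-space APIs. With a vIOMMU the device's IOMMU address space
 * would be used instead of address_space_memory, giving IOVA
 * translation for free. */
#include "qemu/osdep.h"
#include "exec/memory.h"

static int handle_dma_request(uint64_t addr, void *buf, uint64_t len,
                              bool is_write)
{
    MemTxResult res;

    if (is_write) {
        /* VHOST_USER_DMA_WRITE: payload arrived with the message */
        res = address_space_write(&address_space_memory, addr,
                                  MEMTXATTRS_UNSPECIFIED, buf, len);
    } else {
        /* VHOST_USER_DMA_READ: data is sent back in the reply */
        res = address_space_read(&address_space_memory, addr,
                                 MEMTXATTRS_UNSPECIFIED, buf, len);
    }
    return res == MEMTX_OK ? 0 : -1;
}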

Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-05-10 Thread Vivek Goyal
On Mon, May 10, 2021 at 10:05:09AM +0100, Stefan Hajnoczi wrote:
> On Thu, May 06, 2021 at 12:02:23PM -0400, Vivek Goyal wrote:
> > On Thu, May 06, 2021 at 04:37:04PM +0100, Stefan Hajnoczi wrote:
> > > On Wed, Apr 28, 2021 at 12:01:00PM +0100, Dr. David Alan Gilbert (git) 
> > > wrote:
> > > > From: Vivek Goyal 
> > > > 
> > > > If the qemu guest asked to drop CAP_FSETID upon write, send that info
> > > > to qemu in the SLAVE_FS_IO message so that qemu can drop the capability
> > > > before the WRITE. This is to make sure that any setuid bit is killed
> > > > on the fd (if there is one set).
> > > > 
> > > > Signed-off-by: Vivek Goyal 
> > > 
> > > I'm not sure if the QEMU FSETID patches make sense. QEMU shouldn't be
> > > running with FSETID because QEMU is untrusted. FSETID would allow QEMU
> > > to create setgid files, thereby potentially allowing an attacker to gain
> > > any GID.
> > 
> > Sure, it's not recommended to run QEMU as root, but we don't block that
> > either and I do regularly test with qemu running as root.
> > 
> > > 
> > > I think it's better not to implement QEMU FSETID functionality at all
> > > and to handle it another way.
> > 
> > One way could be that virtiofsd tries to clear the setuid bit after I/O
> > has finished. But that would be a non-atomic operation, and it is fraught
> > with peril as it requires virtiofsd to know exactly what the kernel will
> > do if this write has been done with CAP_FSETID dropped.
> > 
> > > In the worst case I/O requests should just
> > > fail, it seems like a rare case anyway:
> > 
> > Is there a way for virtiofsd to know if qemu is running with CAP_FSETID
> > or not? If there is one, it might be reasonable to error out. If we
> > don't know, then we can't fail all the operations.
> > 
> > > I/O to a setuid/setgid file with
> > > a memory buffer that is not mapped in virtiofsd.
> > 
> > With DAX it is easily triggerable. A user has to append to a setuid file
> > in virtiofs and this path will trigger.
> > 
> > I am fine with not supporting this patch but will also need a reasonable
> > alternative solution.
> 
> One way to avoid this problem is by introducing DMA read/write functions
> into the vhost-user protocol that can be used by all device types, not
> just virtio-fs.
> 
> Today virtio-fs uses the IO slave request when it cannot access a region
> of guest memory. It sends the file descriptor to QEMU and QEMU performs
> the pread(2)/pwrite(2) on behalf of virtiofsd.
> 
> I mentioned in the past that this solution is over-specialized. It
> doesn't solve the larger problem that vhost-user processes do not have
> full access to the guest memory space (e.g. DAX window).
> 
> Instead of sending file I/O requests over to QEMU, the vhost-user
> protocol should offer DMA read/write requests so any vhost-user process
> can access the guest memory space where vhost's shared memory mechanism
> is insufficient.
> 
> Here is how it would work:
> 
> 1. Drop the IO slave request, replace it with DMA read/write slave
>requests.
> 
>Note that these new requests can also be used in environments where
>maximum vIOMMU isolation is needed for security reasons and sharing
>all of guest RAM with the vhost-user process is considered
>unacceptable.
> 
> 2. When virtqueue buffer mapping fails, send DMA read/write slave
>requests to transfer the data from/to QEMU. virtiofsd calls
>pread(2)/pwrite(2) itself with virtiofsd's Linux capabilities.

Can you elaborate a bit more on how these new DMA read/write vhost-user
commands can be implemented? I am assuming it's not real DMA, just some
sort of emulation of DMA. Effectively we have two processes and one
process needs to read/write to/from address space of other process.

We were also wondering if we can make use of the process_vm_readv()
and process_vm_writev() syscalls to achieve this. But this at least
requires virtiofsd to be more privileged than qemu and also virtiofsd
needs to know where the DAX mapping window is. We briefly discussed this here.

https://lore.kernel.org/qemu-devel/20210421200746.gh1579...@redhat.com/
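
For reference, this is roughly what a process_vm_readv() based read looks
like; it is a real syscall and this compiles standalone, but it needs
CAP_SYS_PTRACE or ptrace permission over the qemu process, which is exactly
the privilege problem above:

/* Standalone sketch: copy bytes out of another process's address
 * space. Needs ptrace permission over the target (CAP_SYS_PTRACE, or
 * same uid with a permissive yama ptrace_scope). */
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/uio.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <pid> <hex-addr> <len>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atol(argv[1]);
    unsigned long addr = strtoul(argv[2], NULL, 16);
    size_t len = strtoul(argv[3], NULL, 0);
    char *buf = malloc(len);

    struct iovec local  = { .iov_base = buf, .iov_len = len };
    struct iovec remote = { .iov_base = (void *)addr, .iov_len = len };

    ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
    if (n < 0) {
        perror("process_vm_readv");
        return 1;
    }
    fwrite(buf, 1, (size_t)n, stdout);
    free(buf);
    return 0;
}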

Vivek




Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-05-10 Thread Stefan Hajnoczi
On Thu, May 06, 2021 at 12:02:23PM -0400, Vivek Goyal wrote:
> On Thu, May 06, 2021 at 04:37:04PM +0100, Stefan Hajnoczi wrote:
> > On Wed, Apr 28, 2021 at 12:01:00PM +0100, Dr. David Alan Gilbert (git) 
> > wrote:
> > > From: Vivek Goyal 
> > > 
> > > If the qemu guest asked to drop CAP_FSETID upon write, send that info
> > > to qemu in the SLAVE_FS_IO message so that qemu can drop the capability
> > > before the WRITE. This is to make sure that any setuid bit is killed
> > > on the fd (if there is one set).
> > > 
> > > Signed-off-by: Vivek Goyal 
> > 
> > I'm not sure if the QEMU FSETID patches make sense. QEMU shouldn't be
> > running with FSETID because QEMU is untrusted. FSETID would allow QEMU
> > to create setgid files, thereby potentially allowing an attacker to gain
> > any GID.
> 
> Sure, it's not recommended to run QEMU as root, but we don't block that
> either and I do regularly test with qemu running as root.
> 
> > 
> > I think it's better not to implement QEMU FSETID functionality at all
> > and to handle it another way.
> 
> One way could be that virtiofsd tries to clear the setuid bit after I/O
> has finished. But that would be a non-atomic operation, and it is fraught
> with peril as it requires virtiofsd to know exactly what the kernel will
> do if this write has been done with CAP_FSETID dropped.
> 
> > In the worst case I/O requests should just
> > fail, it seems like a rare case anyway:
> 
> Is there a way for virtiofsd to know if qemu is running with CAP_FSETID
> or not? If there is one, it might be reasonable to error out. If we
> don't know, then we can't fail all the operations.
> 
> > I/O to a setuid/setgid file with
> > a memory buffer that is not mapped in virtiofsd.
> 
> With DAX it is easily triggerable. A user has to append to a setuid file
> in virtiofs and this path will trigger.
> 
> I am fine with not supporting this patch but will also need a reasonable
> alternative solution.

One way to avoid this problem is by introducing DMA read/write functions
into the vhost-user protocol that can be used by all device types, not
just virtio-fs.

Today virtio-fs uses the IO slave request when it cannot access a region
of guest memory. It sends the file descriptor to QEMU and QEMU performs
the pread(2)/pwrite(2) on behalf of virtiofsd.

I mentioned in the past that this solution is over-specialized. It
doesn't solve the larger problem that vhost-user processes do not have
full access to the guest memory space (e.g. DAX window).

Instead of sending file I/O requests over to QEMU, the vhost-user
protocol should offer DMA read/write requests so any vhost-user process
can access the guest memory space where vhost's shared memory mechanism
is insufficient.

Here is how it would work:

1. Drop the IO slave request, replace it with DMA read/write slave
   requests.

   Note that these new requests can also be used in environments where
   maximum vIOMMU isolation is needed for security reasons and sharing
   all of guest RAM with the vhost-user process is considered
   unacceptable.

2. When virtqueue buffer mapping fails, send DMA read/write slave
   requests to transfer the data from/to QEMU. virtiofsd calls
   pread(2)/pwrite(2) itself with virtiofsd's Linux capabilities.
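
In backend terms, step 2 is a fallback around the existing mapping path;
something like this sketch (vu_gpa_to_va() is the real libvhost-user lookup,
dma_read_from_master() is an invented stand-in for the proposed
VHOST_USER_DMA_READ message):

/* Sketch of step 2: try the shared-memory mapping first, fall back to
 * the proposed DMA request only when the address is not mapped (e.g. a
 * DAX window address). dma_read_from_master() is hypothetical. */
#include <string.h>
#include "libvhost-user.h"

static int read_guest(VuDev *dev, uint64_t guest_addr, void *buf, uint64_t len)
{
    uint64_t plen = len;
    void *p = vu_gpa_to_va(dev, &plen, guest_addr);

    if (p && plen == len) {
        memcpy(buf, p, len);   /* fast path: memory is shared */
        return 0;
    }
    /* slow path: bounce the data through the master */
    return dma_read_from_master(dev, guest_addr, buf, len);
}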

Stefan




Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-05-06 Thread Vivek Goyal
On Thu, May 06, 2021 at 04:37:04PM +0100, Stefan Hajnoczi wrote:
> On Wed, Apr 28, 2021 at 12:01:00PM +0100, Dr. David Alan Gilbert (git) wrote:
> > From: Vivek Goyal 
> > 
> > If the qemu guest asked to drop CAP_FSETID upon write, send that info
> > to qemu in the SLAVE_FS_IO message so that qemu can drop the capability
> > before the WRITE. This is to make sure that any setuid bit is killed
> > on the fd (if there is one set).
> > 
> > Signed-off-by: Vivek Goyal 
> 
> I'm not sure if the QEMU FSETID patches make sense. QEMU shouldn't be
> running with FSETID because QEMU is untrusted. FSETID would allow QEMU
> to create setgid files, thereby potentially allowing an attacker to gain
> any GID.

Sure, it's not recommended to run QEMU as root, but we don't block that
either and I do regularly test with qemu running as root.

> 
> I think it's better not to implement QEMU FSETID functionality at all
> and to handle it another way.

One way could be that virtiofsd tries to clear the setuid bit after I/O
has finished. But that would be a non-atomic operation, and it is fraught
with peril as it requires virtiofsd to know exactly what the kernel will
do if this write has been done with CAP_FSETID dropped.

> In the worst case I/O requests should just
> fail, it seems like a rare case anyway:

Is there a way for virtiofsd to know if qemu is running with CAP_FSETID
or not? If there is one, it might be reasonable to error out. If we
don't know, then we can't fail all the operations.

> I/O to a setuid/setgid file with
> a memory buffer that is not mapped in virtiofsd.

With DAX it is easily triggerable. A user has to append to a setuid file
in virtiofs and this path will trigger.

I am fine with not supporting this patch but will also need a reasonable
alternative solution.
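
(For context, the drop-before-write that the patch asks qemu to do amounts
to the following, done per thread; a sketch using libcap-ng, not the patch
code:)

/* Sketch (not the patch code): do the write with CAP_FSETID dropped so
 * the kernel itself clears setuid/setgid on the file, then restore it.
 * capset(2) is per-thread, so this only affects the calling thread. */
#include <cap-ng.h>
#include <linux/capability.h>
#include <unistd.h>

static ssize_t pwrite_killing_setuid(int fd, const void *buf,
                                     size_t count, off_t offset)
{
    ssize_t ret;

    capng_get_caps_process();
    capng_update(CAPNG_DROP, CAPNG_EFFECTIVE, CAP_FSETID);
    capng_apply(CAPNG_SELECT_CAPS);         /* drop for this write */

    ret = pwrite(fd, buf, count, offset);   /* kernel clears suid/sgid */

    capng_update(CAPNG_ADD, CAPNG_EFFECTIVE, CAP_FSETID);
    capng_apply(CAPNG_SELECT_CAPS);         /* restore */
    return ret;
}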

Vivek




Re: [PATCH v3 26/26] virtiofsd: Ask qemu to drop CAP_FSETID if client asked for it

2021-05-06 Thread Stefan Hajnoczi
On Wed, Apr 28, 2021 at 12:01:00PM +0100, Dr. David Alan Gilbert (git) wrote:
> From: Vivek Goyal 
> 
> If the qemu guest asked to drop CAP_FSETID upon write, send that info
> to qemu in the SLAVE_FS_IO message so that qemu can drop the capability
> before the WRITE. This is to make sure that any setuid bit is killed
> on the fd (if there is one set).
> 
> Signed-off-by: Vivek Goyal 

I'm not sure if the QEMU FSETID patches make sense. QEMU shouldn't be
running with FSETID because QEMU is untrusted. FSETID would allow QEMU
to create setgid files, thereby potentially allowing an attacker to gain
any GID.

I think it's better not to implement QEMU FSETID functionality at all
and to handle it another way. In the worst case I/O requests should just
fail, it seems like a rare case anyway: I/O to a setuid/setgid file with
a memory buffer that is not mapped in virtiofsd.

Stefan

