On Mon, Jun 03, 2019 at 09:16:11AM +0800, Lu Baolu wrote:
> The Thunderbolt vulnerabilities are public and are now known
> as Thunderclap [1] [3]. This patch series aims to mitigate
> those concerns.
> 
> An external PCI device is a PCI peripheral device connected
> to the system through an external bus, such as Thunderbolt.
> What makes it different is that it can't be trusted to the
> same degree as the devices built into the system. Generally,
> a trusted PCIe device will DMA into the designated buffers
> and not overrun or otherwise write outside the specified
> bounds. But it's different for an external device.
> 
> The minimum IOMMU mapping granularity is one page (4k), so
> for DMA transfers smaller than that a malicious PCIe device
> can access the whole page of memory even if parts of it do
> not belong to the driver in question. This opens the door to
> DMA attacks. For more information about DMA attacks
> imposed by an untrusted PCI/PCIe device, please refer to [2].
> 
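To make the granularity point concrete, here is a tiny userspace
illustration (the addresses and the IOMMU_PAGE_SIZE/IOMMU_PAGE_MASK
constants are made up for the example, this is not patch code): a
256-byte mapping still forces a whole 4 KiB window, all of which the
device can reach.

#include <stdio.h>

#define IOMMU_PAGE_SIZE 4096UL
#define IOMMU_PAGE_MASK (~(IOMMU_PAGE_SIZE - 1))

int main(void)
{
	unsigned long buf = 0x12345100;	/* hypothetical 256-byte DMA buffer */
	unsigned long len = 256;

	/* The IOMMU mapping has to cover whole pages around the buffer. */
	unsigned long map_start = buf & IOMMU_PAGE_MASK;
	unsigned long map_end = (buf + len + IOMMU_PAGE_SIZE - 1) &
				IOMMU_PAGE_MASK;

	printf("driver asked for %lu bytes, device can reach %lu bytes\n",
	       len, map_end - map_start);	/* prints 256 vs 4096 */
	return 0;
}
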
> This series implements bounce buffers for untrusted external
> devices. Transfers are confined to isolated pages so that the
> IOMMU window does not cover memory outside of what the driver
> expects. Previously (v3 and before), we proposed an
> optimisation to only copy the head and tail of the buffer if
> it spans multiple pages, and to directly map the pages in the
> middle. Figure 1 gives an overview of that solution.
> 
>                                 swiotlb             System
>                 IOVA          bounce page           Memory
>              .---------.      .---------.        .---------.
>              |         |      |         |        |         |
>              |         |      |         |        |         |
> buffer_start .---------.      .---------.        .---------.
>              |         |----->|         |*******>|         |
>              |         |      |         | swiotlb|         |
>              |         |      |         | mapping|         |
>  IOMMU Page  '---------'      '---------'        '---------'
>   Boundary   |         |                         |         |
>              |         |                         |         |
>              |         |                         |         |
>              |         |------------------------>|         |
>              |         |    IOMMU mapping        |         |
>              |         |                         |         |
>  IOMMU Page  .---------.                         .---------.
>   Boundary   |         |                         |         |
>              |         |                         |         |
>              |         |------------------------>|         |
>              |         |     IOMMU mapping       |         |
>              |         |                         |         |
>              |         |                         |         |
>  IOMMU Page  .---------.      .---------.        .---------.
>   Boundary   |         |      |         |        |         |
>              |         |      |         |        |         |
>              |         |----->|         |*******>|         |
>   buffer_end '---------'      '---------' swiotlb'---------'
>              |         |      |         | mapping|         |
>              |         |      |         |        |         |
>              '---------'      '---------'        '---------'
>           Figure 1: Overview of the IOMMU bounce page solution (v3)
> 
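In case it helps other reviewers follow the figure, this is roughly
the v3 split as a standalone toy, not the actual patch code;
map_buffer_v3() and the example addresses are invented. Partial head
and tail pages are bounced through swiotlb, the whole pages in the
middle are mapped directly:

#include <stdio.h>

#define IOMMU_PAGE_SIZE 4096UL
#define IOMMU_PAGE_MASK (~(IOMMU_PAGE_SIZE - 1))

static void map_buffer_v3(unsigned long start, unsigned long len)
{
	unsigned long end = start + len;
	/* first and last page boundaries that fall inside the buffer */
	unsigned long first_full = (start + IOMMU_PAGE_SIZE - 1) &
				   IOMMU_PAGE_MASK;
	unsigned long last_full = end & IOMMU_PAGE_MASK;

	if (first_full >= last_full) {
		/* no whole page in the middle: bounce everything */
		printf("bounce [%#lx, %#lx)\n", start, end);
		return;
	}
	if (start < first_full)			/* partial head page */
		printf("bounce [%#lx, %#lx)\n", start, first_full);
	printf("direct [%#lx, %#lx)\n", first_full, last_full);
	if (last_full < end)			/* partial tail page */
		printf("bounce [%#lx, %#lx)\n", last_full, end);
}

int main(void)
{
	/* a buffer that starts and ends in the middle of a page */
	map_buffer_v3(0x10000200, 3 * IOMMU_PAGE_SIZE);
	return 0;
}

That directly mapped middle is, as I read it, what ties the scheme to
strict invalidation, which the next paragraph gets into.
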
> As Robin Murphy pointed out, this ties us to using strict mode for
> TLB maintenance, which may not be an overall win depending on the
> balance between invalidation bandwidth vs. memcpy bandwidth. If we
> use standard SWIOTLB logic to always copy the whole thing, we should
> be able to release the bounce pages via the flush queue to allow
> 'safe' lazy unmaps. Since v4, this series therefore uses the
> standard SWIOTLB logic.
> 
>                                 swiotlb             System
>                 IOVA          bounce page           Memory
> buffer_start .---------.      .---------.        .---------.
>              |         |      |         |        |         |
>              |         |      |         |        |         |
>              |         |      |         |        .---------.physical
>              |         |----->|         | ------>|         |_start  
>              |         |iommu |         | swiotlb|         |
>              |         | map  |         |   map  |         |
>  IOMMU Page  .---------.      .---------.        '---------'

The prior picture had 'buffer_start' at an offset in the page. I am
assuming you meant that here as well?

Meaning it starts at the same offset as 'physical_start' in the right
side box?

>   Boundary   |         |      |         |        |         |
>              |         |      |         |        |         |
>              |         |----->|         |        |         |
>              |         |iommu |         |        |         |
>              |         | map  |         |        |         |
>              |         |      |         |        |         |
>  IOMMU Page  .---------.      .---------.        .---------.
>   Boundary   |         |      |         |        |         |
>              |         |----->|         |        |         |
>              |         |iommu |         |        |         |
>              |         | map  |         |        |         |
>              |         |      |         |        |         |
>  IOMMU Page  |         |      |         |        |         |
>   Boundary   .---------.      .---------.        .---------.
>              |         |      |         |------->|         |
>   buffer_end '---------'      '---------' swiotlb|         |
>              |         |----->|         |   map  |         |
>              |         |iommu |         |        |         |
>              |         | map  |         |        '---------' physical
>              |         |      |         |        |         | _end    
>              '---------'      '---------'        '---------'
>           Figure 2: Overview of the simplified IOMMU bounce page solution (v4)
> 
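And the v4 scheme from Figure 2, again as a toy userspace model
rather than the real intel-iommu/swiotlb code (swiotlb_slot[],
bounce_map() and the sizes are invented for the example): the whole
buffer is copied into page-aligned bounce pages, keeping the in-page
offset, and only those pages ever get IOMMU-mapped for the device.

#include <stdio.h>
#include <string.h>

#define IOMMU_PAGE_SIZE 4096UL

/* two page-aligned pages standing in for a swiotlb slot */
static unsigned char swiotlb_slot[2 * IOMMU_PAGE_SIZE]
		__attribute__((aligned(4096)));

/* "map": copy the whole buffer into the slot, keeping the in-page offset */
static void *bounce_map(const void *buf, size_t len)
{
	size_t offset = (unsigned long)buf & (IOMMU_PAGE_SIZE - 1);

	memcpy(swiotlb_slot + offset, buf, len);
	return swiotlb_slot + offset;	/* this is what gets IOMMU-mapped */
}

int main(void)
{
	char data[256] = "payload that would be handed to the device";
	void *dma_buf = bounce_map(data, sizeof(data));

	/* the device only ever sees the bounce pages, never data's page */
	printf("bounced %zu bytes to %p (original at %p)\n",
	       sizeof(data), dma_buf, (void *)data);
	return 0;
}

Since the original pages are never exposed, the IOVA covering the
bounce pages can be released lazily via the flush queue, which is the
'safe' lazy unmap mentioned above.
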
> The implementation of bounce buffers for untrusted devices
> will cause a little performance overhead, but we didn't see
> any user experience problems. The users could use the kernel

What kind of devices did you test it with?

Thank you for making this awesome cover letter btw!