On Thu, Aug 19, 2021 at 06:17:40PM +0000, Michael Kelley wrote:
> > +#define storvsc_dma_map(dev, page, offset, size, dir) \
> > + dma_map_page(dev, page, offset, size, dir)
> > +
> > +#define storvsc_dma_unmap(dev, dma_range, dir) \
> > + dma_unmap_page(dev, dma_range.dma,
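The pattern the quoted macros wrap can be sketched in plain userspace C: map a page fragment, record the returned DMA address and length in a range struct, and drive the later unmap from that record. The real code calls dma_map_page()/dma_unmap_page(); the struct and function names below are illustrative assumptions, not the kernel's.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative mock: each mapped fragment records its bus address and
 * length so the unmap path can be driven from the saved range.
 * Names are assumptions; the real code uses the DMA API. */
struct dma_range {
	uint64_t dma;          /* bus address returned by the map */
	uint32_t mapping_size; /* length needed again at unmap time */
};

/* Pretend identity mapping: bus address = page base + offset. */
static uint64_t mock_dma_map_page(uint64_t page_base, uint64_t offset,
				  uint32_t size, struct dma_range *range)
{
	range->dma = page_base + offset;
	range->mapping_size = size;
	return range->dma;
}
```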
On Thu, Aug 19, 2021 at 06:14:51PM +0000, Michael Kelley wrote:
> > + if (!pfns)
> > + return NULL;
> > +
> > + for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
> > + pfns[i] = virt_to_hvpfn(buf + i * HV_HYP_PAGE_SIZE)
> > + + (ms_hyperv.shared_gpa_boundary >>
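The arithmetic in the quoted loop can be sketched as follows: for each hypervisor page in the buffer, take its guest PFN and add the shared GPA boundary (expressed as a PFN offset) so the result refers to the unencrypted alias above the boundary. HV_HYP_PAGE_SHIFT is 12 in the kernel headers; the boundary and address values in the test are invented for illustration, and virt_to_hvpfn() is replaced by plain address arithmetic.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define HV_HYP_PAGE_SHIFT 12
#define HV_HYP_PAGE_SIZE  (1ULL << HV_HYP_PAGE_SHIFT)

/* Sketch of what the quoted loop computes: per-page guest PFN plus
 * the shared GPA boundary shifted down to a PFN offset. */
static uint64_t *shared_pfn_array(uint64_t gpa, uint64_t size,
				  uint64_t shared_gpa_boundary)
{
	uint64_t i, npages = size / HV_HYP_PAGE_SIZE;
	uint64_t *pfns = calloc(npages, sizeof(*pfns));

	if (!pfns)
		return NULL;

	for (i = 0; i < npages; i++)
		pfns[i] = ((gpa + i * HV_HYP_PAGE_SIZE) >> HV_HYP_PAGE_SHIFT)
			  + (shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
	return pfns;
}
```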
On Thu, Aug 19, 2021 at 06:11:30PM +0000, Michael Kelley wrote:
> This function is manipulating page tables in the guest VM. It is not involved
> in communicating with Hyper-V, or passing PFNs to Hyper-V. The pfn array
> contains guest PFNs, not Hyper-V PFNs. So it should use PAGE_SIZE
> instead
On 8/19/21 11:33 AM, Tom Lendacky wrote:
There was some talk about this on the mailing list where TDX and SEV may
need to be differentiated, so we wanted to reserve a range of values per
technology. I guess I can remove them until they are actually needed.
In TDX we also have a similar require
On 8/19/21 4:55 AM, Christoph Hellwig wrote:
> On Fri, Aug 13, 2021 at 11:59:23AM -0500, Tom Lendacky wrote:
>> +static inline bool prot_guest_has(unsigned int attr)
>
> No real need to have this inline. In fact I'd suggest we have the
> prototype in a common header so that everyone must implem
On 8/19/21 4:52 AM, Christoph Hellwig wrote:
> On Fri, Aug 13, 2021 at 11:59:22AM -0500, Tom Lendacky wrote:
>> While the name suggests this is intended mainly for guests, it will
>> also be used for host memory encryption checks in place of sme_active().
>
> Which suggests that the name is not goo
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
Subject line tag should be "scsi: storvsc:"
> In an Isolation VM, all memory shared with the host needs to be marked
> visible to the host via hvcall. vmbus_establish_gpadl() has already
> done this for the storvsc rx/tx ring buffer. The page buffer used by vm
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
The Subject line tag should be "hv_netvsc:".
> In an Isolation VM, all memory shared with the host needs to be marked
> visible to the host via hvcall. vmbus_establish_gpadl() has already
> done this for the netvsc rx/tx ring buffer. The page buffer used by vm
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> Hyper-V Isolation VMs require bounce buffer support to copy
> data from/to encrypted memory, so enable swiotlb force
> mode to use the swiotlb bounce buffer for DMA transactions.
>
> In Isolation VM with AMD SEV, the bounce buffer needs to
On Thu, Aug 19, 2021 at 10:52:53AM +0100, Christoph Hellwig wrote:
> Which suggests that the name is not good to start with. Maybe protected
> hardware, system or platform might be a better choice?
Yah, coming up with a proper name here hasn't been easy.
prot_guest_has() is not the first variant.
On 8/19/21 4:46 AM, Christoph Hellwig wrote:
> On Fri, Aug 13, 2021 at 11:59:21AM -0500, Tom Lendacky wrote:
>> +#define PATTR_MEM_ENCRYPT       0 /* Encrypted memory */
>> +#define PATTR_HOST_MEM_ENCRYPT  1 /* Host encrypted memory */
>> +#define PATTR_GUEST_MEM_ENC
On Thu, Aug 19, 2021 at 6:03 PM Robin Murphy wrote:
>
> On 2021-08-17 02:38, David Stevens wrote:
> > From: David Stevens
> >
> > For devices which set min_align_mask, swiotlb preserves the offset of
> > the original physical address within that mask. Since __iommu_dma_map
> > accounts for non-
On 8/19/2021 6:02 PM, Christoph Hellwig wrote:
> On Thu, Aug 19, 2021 at 05:59:02PM +0800, Tianyu Lan wrote:
>> On 8/19/2021 4:49 PM, Christoph Hellwig wrote:
>>> On Mon, Aug 16, 2021 at 10:50:26PM +0800, Tianyu Lan wrote:
>>>> Hi Christoph:
>>>>     Sorry to bother you. Please double check with these two p
On Thu, Aug 19, 2021 at 05:59:02PM +0800, Tianyu Lan wrote:
>
>
> On 8/19/2021 4:49 PM, Christoph Hellwig wrote:
>> On Mon, Aug 16, 2021 at 10:50:26PM +0800, Tianyu Lan wrote:
>>> Hi Christoph:
>>>     Sorry to bother you. Please double check with these two patches
>>> " [PATCH V3 10/13] x86/Swio
On 8/19/2021 4:49 PM, Christoph Hellwig wrote:
> On Mon, Aug 16, 2021 at 10:50:26PM +0800, Tianyu Lan wrote:
>> Hi Christoph:
>>     Sorry to bother you. Please double check with these two patches
>> " [PATCH V3 10/13] x86/Swiotlb: Add Swiotlb bounce buffer remap function
>> for HV IVM" and "[PATCH V3 09
On Fri, Aug 13, 2021 at 11:59:23AM -0500, Tom Lendacky wrote:
> +static inline bool prot_guest_has(unsigned int attr)
No real need to have this inline. In fact I'd suggest we have the
prototype in a common header so that everyone must implement it out
of line.
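What the review asks for can be sketched in a few lines: a single prototype in a common header, with an out-of-line definition supplied by each implementation. The PATTR_* names come from the quoted patch; the values and the mock policy below (only guest memory encryption active) are assumptions for illustration.

```c
#include <assert.h>
#include <stdbool.h>

/* Attribute names from the patch; values assumed for illustration. */
#define PATTR_MEM_ENCRYPT       0 /* Encrypted memory */
#define PATTR_HOST_MEM_ENCRYPT  1 /* Host encrypted memory */
#define PATTR_GUEST_MEM_ENCRYPT 2 /* Guest encrypted memory */

/* The suggested shape: one prototype in a common header ... */
bool prot_guest_has(unsigned int attr);

/* ... and an out-of-line definition each implementation must provide.
 * This mock pretends only guest memory encryption is active. */
bool prot_guest_has(unsigned int attr)
{
	return attr == PATTR_GUEST_MEM_ENCRYPT;
}
```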
On Fri, Aug 13, 2021 at 11:59:22AM -0500, Tom Lendacky wrote:
> While the name suggests this is intended mainly for guests, it will
> also be used for host memory encryption checks in place of sme_active().
Which suggests that the name is not good to start with. Maybe protected
hardware, system or
On Fri, Aug 13, 2021 at 11:59:21AM -0500, Tom Lendacky wrote:
> +#define PATTR_MEM_ENCRYPT       0 /* Encrypted memory */
> +#define PATTR_HOST_MEM_ENCRYPT  1 /* Host encrypted memory */
> +#define PATTR_GUEST_MEM_ENCRYPT 2 /* Guest encrypted m
On 2021-08-17 02:38, David Stevens wrote:
> From: David Stevens
>
> For devices which set min_align_mask, swiotlb preserves the offset of
> the original physical address within that mask. Since __iommu_dma_map
> accounts for non-aligned addresses, passing a non-aligned swiotlb
> address with the swiotlb al
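The property under discussion can be sketched with plain arithmetic: for a device that sets min_align_mask, the bounce address swiotlb picks preserves the original physical address's offset within that mask, so the address handed back is generally not page-aligned. The helper name and all values below are invented for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: keep the low bits covered by min_align_mask from the
 * original physical address when placing it in a bounce slot.
 * slot_base is assumed aligned to (min_align_mask + 1). */
static uint64_t bounce_address(uint64_t orig_phys, uint64_t slot_base,
			       uint64_t min_align_mask)
{
	return slot_base | (orig_phys & min_align_mask);
}
```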
On 2021-08-17 02:38, David Stevens wrote:
> From: David Stevens
>
> Fold the _swiotlb helper functions into the respective _page functions,
> since recent fixes have moved all logic from the _page functions to the
> _swiotlb functions.
>
> Reviewed-by: Robin Murphy
> Signed-off-by: David Stevens
> Reviewed
On 2021-08-17 02:38, David Stevens wrote:
> From: David Stevens
>
> Calling the iommu_dma_sync_*_for_cpu functions during unmap can cause
> two copies out of the swiotlb buffer. Do the arch sync directly in
> __iommu_dma_unmap_swiotlb instead to avoid this. This makes the call to
> iommu_dma_sync_sg_for_cp
On 2021-08-17 02:38, David Stevens wrote:
> From: David Stevens
>
> When calling arch_sync_dma, we need to pass it the memory that's
> actually being used for dma. When using swiotlb bounce buffers, this is
> the bounce buffer. Move arch_sync_dma into the __iommu_dma_map_swiotlb
> helper, so it can use the
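The rule the patch enforces can be mocked in a few lines: cache maintenance (arch_sync_dma) must target whichever address the device actually DMAs to, which is the bounce slot when swiotlb bounces and the original buffer otherwise. All names and addresses below are invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Records the last physical address handed to cache maintenance. */
static uint64_t last_synced;

static void mock_arch_sync_dma(uint64_t phys)
{
	last_synced = phys;
}

/* Map helper: sync the memory the device will actually use, i.e. the
 * bounce slot when bouncing, the original buffer otherwise. */
static uint64_t map_with_swiotlb(uint64_t orig_phys, bool use_bounce,
				 uint64_t bounce_phys)
{
	uint64_t target = use_bounce ? bounce_phys : orig_phys;

	mock_arch_sync_dma(target);
	return target;
}
```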
On Mon, Aug 16, 2021 at 10:50:26PM +0800, Tianyu Lan wrote:
> Hi Christoph:
> Sorry to bother you. Please double check with these two patches
> " [PATCH V3 10/13] x86/Swiotlb: Add Swiotlb bounce buffer remap function
> for HV IVM" and "[PATCH V3 09/13] DMA: Add dma_map_decrypted/dma_
> unmap_
On Wed, Aug 18, 2021 at 09:48:43PM +0800, Lu Baolu wrote:
> Andy Shevchenko (1):
> iommu/vt-d: Drop the kernel doc annotation
>
> Liu Yi L (2):
> iommu/vt-d: Use pasid_pte_is_present() helper function
> iommu/vt-d: Add present bit check in pasid entry setup helpers
>
> Lu Baolu (5):
> iom
24 matches