Hi Christoph,
     Thanks a lot for your review. There are a few reasons.
     1) Vmbus drivers don't use the DMA API now.
     2) The Hyper-V vmbus channel ring buffer already plays the bounce buffer role for most vmbus drivers. Only two kinds of packets, from netvsc and storvsc, are not covered.
     3) In an AMD SEV-SNP based Hyper-V guest, the physical address used to access shared memory must be the bounce buffer's physical address plus a shared memory boundary (e.g., bit 48) reported via a Hyper-V CPUID leaf. This is called the virtual top of memory (vTom) in the AMD spec and works as a watermark. So the driver needs to ioremap/memremap the associated physical address above the shared memory boundary before accessing it. swiotlb_bounce() uses the low-end physical address to access the bounce buffer, and that doesn't work in this scenario. If I've gotten something wrong, please correct me.

Thanks.


On 3/1/2021 2:54 PM, Christoph Hellwig wrote:
> This should be handled by the DMA mapping layer, just like for native
> SEV support.
