On 3/1/2022 7:53 PM, Christoph Hellwig wrote:
On Fri, Feb 25, 2022 at 10:28:54PM +0800, Tianyu Lan wrote:
> One more perspective is that one device may have multiple queues and
> each queue should have an independent swiotlb bounce buffer to avoid
> spin lock overhead. The number of queues is only available in the
> device driver. This me...
On 2/23/2022 12:00 AM, Christoph Hellwig wrote:
On Tue, Feb 22, 2022 at 11:07:19PM +0800, Tianyu Lan wrote:
> Thanks for your comment. That means we need to expose an
> swiotlb_device_init() interface to allocate a bounce buffer and
> initialize the io tlb mem entry. The current DMA API
> rmem_swiotlb_device_init() only works for platforms with a device
> tr...
On 2/22/2022 4:05 PM, Christoph Hellwig wrote:
On Mon, Feb 21, 2022 at 11:14:58PM +0800, Tianyu Lan wrote:
> Sorry. The boot failure is not related to these patches and the issue
> has been fixed in the latest upstream code.
>
> There is a performance bottleneck due to the io tlb mem spin lock
> during performance testing. All devices' io queues us...
On Mon, Feb 14, 2022 at 07:28:40PM +0800, Tianyu Lan wrote:
> On 2/14/2022 4:19 PM, Christoph Hellwig wrote:
>> Adding a function to set the flag doesn't really change much. As Robin
>> pointed out last time, you should find a way to just call
>> swiotlb_init_with_tbl directly with the memory allocated the way you
>> like it. Or, given that we have quite a few of these trusted hypervisor
>> schemes, maybe add an argument...
From: Tianyu Lan
Hyper-V Isolation VMs and AMD SEV VMs use a swiotlb bounce buffer to
share memory with the hypervisor. The current swiotlb bounce buffer is
only allocated between 0 and ARCH_LOW_ADDRESS_LIMIT, which defaults to
0xffffffffUL. Isolation VMs and AMD SEV VMs need at most a 1G bounce
buffer. This will...