On Tue, Jun 01, 2021 at 04:22:25PM -0600, Alex Williamson wrote:
> On Tue, 1 Jun 2021 07:01:57 +
> "Tian, Kevin" wrote:
> >
> > I summarized five open issues here, about:
> >
> > 1) Finalizing the name to replace /dev/ioasid;
> > 2) Whether one device is allowed to bind to multiple IOASID fd's;
On Tue, Jun 01, 2021 at 09:57:12AM -0300, Jason Gunthorpe wrote:
> On Tue, Jun 01, 2021 at 02:03:33PM +1000, David Gibson wrote:
> > On Thu, May 27, 2021 at 03:48:47PM -0300, Jason Gunthorpe wrote:
> > > On Thu, May 27, 2021 at 02:58:30PM +1000, David Gibson wrote:
> > > > On Tue, May 25, 2021 at
On Thu, Jun 03, 2021 at 06:49:20AM +, Tian, Kevin wrote:
> > From: David Gibson
> > Sent: Thursday, June 3, 2021 1:09 PM
> [...]
> > > > In this way the SW mode is the same as a HW mode with an infinite
> > > > cache.
> > > >
> > > > The collapsed shadow page table is really just a cache.
> >
On Fri, Jun 04, 2021 at 09:30:54AM -0300, Jason Gunthorpe wrote:
> On Fri, Jun 04, 2021 at 12:44:28PM +0200, Enrico Weigelt, metux IT consult
> wrote:
> > On 02.06.21 19:24, Jason Gunthorpe wrote:
> >
> > Hi,
> >
> > >> If I understand this correctly, /dev/ioasid is a kind of "common
> >
On Thu, Jun 03, 2021 at 08:52:24AM -0300, Jason Gunthorpe wrote:
> On Thu, Jun 03, 2021 at 03:13:44PM +1000, David Gibson wrote:
>
> > > We can still consider it a single "address space" from the IOMMU
> > > perspective. What has happened is that the address table is not just a
> > > 64 bit IOVA,
From: Nadav Amit
AMD's IOMMU can flush any range efficiently (i.e., in a single flush).
This is in contrast, for instance, to Intel IOMMUs, which have a limit on
the number of pages that can be flushed in a single flush. In addition,
AMD's IOMMU does not care about the page size, so changes of the
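The contrast Nadav describes can be modeled with a toy calculation (purely illustrative; the per-flush page cap below is a made-up number, not Intel's real limit):

```c
#include <assert.h>

#define HYP_PAGE_CAP 16UL /* hypothetical per-flush page cap, for illustration */

/* Range-capable IOMMU: any range is invalidated with one flush command. */
static unsigned long flushes_single_range(unsigned long npages)
{
	(void)npages;
	return 1UL;
}

/* Page-capped IOMMU: ceil(npages / cap) flush commands are needed. */
static unsigned long flushes_page_capped(unsigned long npages)
{
	return (npages + HYP_PAGE_CAP - 1) / HYP_PAGE_CAP;
}
```

Under this model a 4 MiB (1024-page) invalidation costs one command on the range-capable design and many on the capped one, which is the efficiency gap being described.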
From: Nadav Amit
On virtual machines, software must flush the IOTLB after each page table
entry update.
The iommu_map_sg() code iterates through the given scatter-gather list
and invokes iommu_map() for each element in the scatter-gather list,
which calls into the vendor IOMMU driver through
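The loop structure described here can be sketched as a user-space model (names are illustrative stand-ins for the iommu_map_sg()/iommu_map() relationship, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* One scatter-gather element: a physical chunk to be mapped. */
struct sg_elem {
	unsigned long phys; /* physical address of this chunk */
	size_t len;         /* chunk length in bytes */
};

/* Stand-in for the per-element call into the vendor IOMMU driver. */
static size_t map_one(unsigned long iova, unsigned long phys, size_t len)
{
	(void)iova;
	(void)phys;
	return len; /* pretend every mapping succeeds */
}

/* Walk the list, issuing one map call per element at a running IOVA offset. */
static size_t map_sg(unsigned long iova, const struct sg_elem *sg, int nents)
{
	size_t mapped = 0;
	int i;

	for (i = 0; i < nents; i++)
		mapped += map_one(iova + mapped, sg[i].phys, sg[i].len);
	return mapped;
}
```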
From: Robin Murphy
The Mediatek driver is not the only one which might want a basic
address-based gathering behaviour, so although it's arguably simple
enough to open-code, let's factor it out for the sake of cleanliness.
Let's also take this opportunity to document the intent of these
helpers
From: Nadav Amit
Refactor iommu_iotlb_gather_add_page() and factor out the logic that
detects whether the IOTLB gather range and a new range are disjoint. To be
used by the next patch that implements different gathering logic for
AMD.
Cc: Joerg Roedel
Cc: Will Deacon
Cc: Jiajun Cao
Cc: Robin
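A minimal user-space sketch of the disjointness check being factored out might look like this (struct and function names are hypothetical, not the kernel's exact ones; adjacent ranges are treated as mergeable rather than disjoint, matching the gathering intent):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of a gather structure accumulating one contiguous flush range. */
struct iotlb_gather_model {
	unsigned long start;
	unsigned long end; /* inclusive */
};

/* True when [start, end] can NOT be merged into the gathered range:
 * it ends before the gathered range begins, or begins after it ends,
 * with no adjacency. Adjacent ranges still merge into one flush. */
static bool gather_is_disjoint(const struct iotlb_gather_model *g,
			       unsigned long start, unsigned long end)
{
	return end + 1 < g->start || start > g->end + 1;
}
```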
From: Nadav Amit
Do not use the flush-queue in virtualized environments, where the NpCache
capability of the IOMMU is set. This is required to reduce
virtualization overheads.
This change follows a similar change to Intel's VT-d, and a detailed
explanation of the rationale is given in commit
From: Nadav Amit
A recent patch attempted to enable selective page flushes on AMD IOMMU but
neglected to adapt amd_iommu_iotlb_sync() to use the selective flushes.
Adapt amd_iommu_iotlb_sync() to use selective flushes and change
amd_iommu_unmap() to collect the flushes. As a defensive measure, to
From: Nadav Amit
The previous patch, commit 268aa4548277 ("iommu/amd: Page-specific
invalidations for more than one page") was supposed to enable
page-selective IOTLB flushes on AMD.
Besides the bug that was already fixed by commit a017c567915f
("iommu/amd: Fix wrong parentheses on
On 2021/6/8 3:41 AM, Alex Williamson wrote:
On Mon, 7 Jun 2021 16:08:02 -0300
Jason Gunthorpe wrote:
On Mon, Jun 07, 2021 at 12:59:46PM -0600, Alex Williamson wrote:
It is up to qemu if it wants to proceed or not. There is no issue with
allowing the use of no-snoop and blocking wbinvd, other
On 2021/6/3 1:21 AM, Jason Gunthorpe wrote:
On Wed, Jun 02, 2021 at 04:54:26PM +0800, Jason Wang wrote:
On 2021/6/2 1:31 AM, Jason Gunthorpe wrote:
On Tue, Jun 01, 2021 at 04:47:15PM +0800, Jason Wang wrote:
We can open up to ~0U file descriptors; I don't see why we need to restrict
it in uAPI.
There
On 2021/6/7 20:19, Liu, Yi L wrote:
>> From: Shenming Lu
>> Sent: Friday, June 4, 2021 10:03 AM
>>
>> On 2021/6/4 2:19, Jacob Pan wrote:
>>> Hi Shenming,
>>>
>>> On Wed, 2 Jun 2021 12:50:26 +0800, Shenming Lu
>>
>>> wrote:
>>>
On 2021/6/2 1:33, Jason Gunthorpe wrote:
> On Tue, Jun 01,
On 2021/6/7 10:14 PM, Jason Gunthorpe wrote:
On Mon, Jun 07, 2021 at 11:18:33AM +0800, Jason Wang wrote:
Note that the fact that no drivers call these things doesn't mean it was not
supported by the spec.
Of course it does. If the spec doesn't define exactly when the driver
should call the cache flushes for
On Mon, 7 Jun 2021 20:03:53 -0300
Jason Gunthorpe wrote:
> On Mon, Jun 07, 2021 at 01:41:28PM -0600, Alex Williamson wrote:
>
> > > Compatibility is important, but when I look in the kernel code I see
> > > very few places that call wbinvd(). Basically all DRM for something
> > > relevant to
On Mon, Jun 07, 2021 at 01:41:28PM -0600, Alex Williamson wrote:
> > Compatibility is important, but when I look in the kernel code I see
> > very few places that call wbinvd(). Basically all DRM for something
> > relevant to qemu.
> >
> > That tells me that the vast majority of PCI devices do
On Mon, 7 Jun 2021 16:08:02 -0300
Jason Gunthorpe wrote:
> On Mon, Jun 07, 2021 at 12:59:46PM -0600, Alex Williamson wrote:
>
> > > It is up to qemu if it wants to proceed or not. There is no issue with
> > > allowing the use of no-snoop and blocking wbinvd, other than some
> > > drivers may
On Mon, Jun 07, 2021 at 12:59:46PM -0600, Alex Williamson wrote:
> > It is up to qemu if it wants to proceed or not. There is no issue with
> > allowing the use of no-snoop and blocking wbinvd, other than some
> > drivers may malfunction. If the user is certain they don't have
> > malfunctioning
On Mon, 7 Jun 2021 15:18:58 -0300
Jason Gunthorpe wrote:
> On Mon, Jun 07, 2021 at 09:41:48AM -0600, Alex Williamson wrote:
> > You're calling this an admin knob, which to me suggests a global module
> > option, so are you trying to implement both an administrator and a user
> > policy? ie. the
On Mon, Jun 07, 2021 at 09:41:48AM -0600, Alex Williamson wrote:
> You're calling this an admin knob, which to me suggests a global module
> option, so are you trying to implement both an administrator and a user
> policy? ie. the user can create scenarios where access to wbinvd might
> be
On Mon, Jun 07, 2021 at 03:30:21PM +0200, Enrico Weigelt, metux IT consult
wrote:
> On 02.06.21 19:21, Jason Gunthorpe wrote:
>
> Hi,
>
> > Not really, once one thing in an application uses a large number of FDs the
> > entire application is affected. If any open() can return 'very big
> > number'
On Mon, Jun 07, 2021 at 08:51:42AM +0200, Paolo Bonzini wrote:
> On 07/06/21 05:25, Tian, Kevin wrote:
> > Per Intel SDM wbinvd is a privileged instruction. A process on the
> > host has no privilege to execute it.
>
> (Half of) the point of the kernel is to do privileged tasks on the
>
On Sat, Jun 05, 2021 at 08:22:27AM +0200, Paolo Bonzini wrote:
> On 04/06/21 19:22, Jason Gunthorpe wrote:
> > 4) The KVM interface is the very simple enable/disable WBINVD.
> > Possessing a FD that can do IOMMU_EXECUTE_WBINVD is required
> > to enable WBINVD at KVM.
>
> The KVM
On Fri, Jun 04, 2021 at 11:10:53PM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Friday, June 4, 2021 8:09 PM
> >
> > On Fri, Jun 04, 2021 at 06:37:26AM +, Tian, Kevin wrote:
> > > > From: Jason Gunthorpe
> > > > Sent: Thursday, June 3, 2021 9:05 PM
> > > >
> > > > > >
> > >
On Fri, 4 Jun 2021 20:01:08 -0300
Jason Gunthorpe wrote:
> On Fri, Jun 04, 2021 at 03:29:18PM -0600, Alex Williamson wrote:
> > On Fri, 4 Jun 2021 14:22:07 -0300
> > Jason Gunthorpe wrote:
> >
> > > On Fri, Jun 04, 2021 at 06:10:51PM +0200, Paolo Bonzini wrote:
> > > > On 04/06/21 18:03,
On 6/7/2021 2:50 PM, Christoph Hellwig wrote:
On Sun, May 30, 2021 at 11:06:27AM -0400, Tianyu Lan wrote:
+ if (hv_isolation_type_snp()) {
+ pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
+ GFP_KERNEL);
+ for
On 6/7/2021 2:46 PM, Christoph Hellwig wrote:
On Sun, May 30, 2021 at 11:06:28AM -0400, Tianyu Lan wrote:
+ for (i = 0; i < request->hvpg_count; i++)
+ dma_unmap_page(>device,
+
On 6/7/2021 2:43 PM, Christoph Hellwig wrote:
On Sun, May 30, 2021 at 11:06:25AM -0400, Tianyu Lan wrote:
From: Tianyu Lan
For a Hyper-V isolation VM with AMD SEV SNP, the bounce buffer (shared memory)
needs to be accessed via an extra address space (e.g. addresses above bit 39).
Hyper-V code may
On Mon, Jun 07, 2021 at 11:18:33AM +0800, Jason Wang wrote:
> Note that the fact that no drivers call these things doesn't mean it was not
> supported by the spec.
Of course it does. If the spec doesn't define exactly when the driver
should call the cache flushes for no-snoop transactions then the
protocol
On 02.06.21 19:21, Jason Gunthorpe wrote:
Hi,
Not really, once one thing in an application uses a large number of FDs the
entire application is affected. If any open() can return 'very big
number' then nothing in the process is allowed to ever use select.
isn't that a bug in select() ?
--mtx
--
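For context on the select() limitation behind this exchange: select() operates on a fixed-size bitmap of FD_SETSIZE descriptors, so a process holding even one fd at or above that constant cannot pass it to select() at all, while poll()/epoll take fd lists and have no such cap. A small sketch using only POSIX-defined behavior:

```c
#include <assert.h>
#include <stdbool.h>
#include <sys/select.h>

/* FD_SET()/FD_ISSET() are only defined for 0 <= fd < FD_SETSIZE
 * (typically 1024), so a "very big number" fd rules select() out. */
static bool fd_usable_with_select(int fd)
{
	return fd >= 0 && fd < FD_SETSIZE;
}
```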
On Mon, Jun 07, 2021 at 02:49:05PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Compiling the recent dma-iommu changes under 32-bit x86 triggers this
> compile warning:
>
> drivers/iommu/dma-iommu.c:249:5: warning: format ‘%llx’ expects argument of
> type ‘long long unsigned int’, but
On 2021/6/5 3:04, Bjorn Helgaas wrote:
[+cc John, who tested 6bf6c24720d3]
On Fri, May 21, 2021 at 03:03:24AM +, Wang Xingang wrote:
From: Xingang Wang
When booting with devicetree, pci_request_acs() is called after the
enumeration and initialization of PCI devices, thus ACS is
On Fri, Jun 04, 2021 at 06:35:17PM +0100, Robin Murphy wrote:
> For the sake of justifying this as "fix" rather than "cleanup", you may as
> well use the flush queue commit cited in the patch log - I maintain there's
> nothing technically wrong with that commit itself, but it is the point at
>
From: Joerg Roedel
Compiling the recent dma-iommu changes under 32-bit x86 triggers this
compile warning:
drivers/iommu/dma-iommu.c:249:5: warning: format ‘%llx’ expects argument of
type ‘long long unsigned int’, but argument 3 has type ‘phys_addr_t’ {aka
‘unsigned int’} [-Wformat=]
The
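The warning quoted above arises because phys_addr_t is only 32 bits wide on that configuration while '%llx' expects unsigned long long. In-kernel code typically prints phys_addr_t through the '%pa' printk extension; a portable user-space illustration of the explicit-cast alternative (the typedef below merely models the 32-bit case, it is not the kernel type):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical model of phys_addr_t on 32-bit x86 without PAE. */
typedef unsigned int phys_addr_model_t;

/* The widening cast makes the argument match %llx on every
 * configuration, whether the underlying type is 32- or 64-bit. */
static void format_phys(char *buf, size_t n, phys_addr_model_t p)
{
	snprintf(buf, n, "%llx", (unsigned long long)p);
}
```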
On 2021/6/4 23:36, Joerg Roedel wrote:
On Fri, May 21, 2021 at 03:03:24AM +, Wang Xingang wrote:
From: Xingang Wang
When booting with devicetree, pci_request_acs() is called after the
enumeration and initialization of PCI devices, thus ACS is not
enabled. ACS should be enabled
> From: Shenming Lu
> Sent: Friday, June 4, 2021 10:03 AM
>
> On 2021/6/4 2:19, Jacob Pan wrote:
> > Hi Shenming,
> >
> > On Wed, 2 Jun 2021 12:50:26 +0800, Shenming Lu
>
> > wrote:
> >
> >> On 2021/6/2 1:33, Jason Gunthorpe wrote:
> >>> On Tue, Jun 01, 2021 at 08:30:35PM +0800, Lu Baolu wrote:
On 2021-06-07 03:42, chenxiang wrote:
From: Xiang Chen
When the driver of the last device in the group is removed (rmmod), cached
iovas are not used, and it is better to free them to save memory. Also export
the functions free_rcache_cached_iovas() and iommu_domain_to_iova().
How common is it to use a
Hi Olof and Arnd,
Tegra memory controller driver changes with necessary dependency from Thierry
(which you will also get from him):
1. Dmitry's power domain work on Tegra MC drivers,
2. Necessary clock and regulator dependencies for Dmitry's work.
Hi Thierry and Will,
This is the pull for you
Hi Christoph:
Thanks for your review.
On 6/7/2021 2:41 PM, Christoph Hellwig wrote:
On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
+ if (ms_hyperv.ghcb_base) {
+ rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
+
+ ghcb_va =
On 07/06/21 05:25, Tian, Kevin wrote:
Per Intel SDM wbinvd is a privileged instruction. A process on the
host has no privilege to execute it.
(Half of) the point of the kernel is to do privileged tasks on the
processes' behalf. There are good reasons why a process that uses VFIO
(without
On Sun, May 30, 2021 at 11:06:27AM -0400, Tianyu Lan wrote:
> + if (hv_isolation_type_snp()) {
> + pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
> +GFP_KERNEL);
> + for (i = 0; i < buf_size / HV_HYP_PAGE_SIZE; i++)
> +
On Sun, May 30, 2021 at 11:06:28AM -0400, Tianyu Lan wrote:
> + for (i = 0; i < request->hvpg_count; i++)
> + dma_unmap_page(>device,
> +
> request->dma_range[i].dma,
> +
Honestly, we really need to do away with the concept of hypervisor-
specific swiotlb allocations and just add a hypervisor hook to remap the
"main" buffer. That should remove a lot of code and confusion not just
for Xen but also any future addition like hyperv.
On Sun, May 30, 2021 at 11:06:25AM -0400, Tianyu Lan wrote:
> From: Tianyu Lan
>
> For a Hyper-V isolation VM with AMD SEV SNP, the bounce buffer (shared memory)
> needs to be accessed via an extra address space (e.g. addresses above bit 39).
> Hyper-V code may remap extra address space outside of swiotlb.
On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
> + if (ms_hyperv.ghcb_base) {
> + rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
> +
> + ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
> + if (!ghcb_va)
> + return -ENOMEM;