On Mon, Sep 22, 2014 at 11:30:23AM +0800, Jason Wang wrote:
On 09/20/2014 06:00 PM, Paolo Bonzini wrote:
Il 19/09/2014 09:10, Jason Wang ha scritto:
-if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) {
+if (vq->urgent || !vhost_has_feature(vq,
On Fri, Sep 19, 2014 at 05:12:15PM +0200, Thomas Huth wrote:
Hi Frank,
On Fri, 19 Sep 2014 13:54:34 +0200
frank.blasc...@de.ibm.com wrote:
From: Frank Blaschka frank.blasc...@de.ibm.com
This patch implements the s390 pci instructions in qemu. This allows
attaching qemu pci
On 09/19/2014 10:38 PM, Alexander Graf wrote:
On 19.09.14 20:51, Christian Borntraeger wrote:
On 09/19/2014 04:19 PM, Jason J. Herne wrote:
From: Jason J. Herne jjhe...@us.ibm.com
Enable KVM_SET_CLOCK and KVM_GET_CLOCK Ioctls on S390 for managing guest TOD
clock value.
Just some
Il 20/09/2014 01:44, Radim Krčmář ha scritto:
This patch removes the redundant variable, by making init_rmode_tss()
return 0 on success, -errno on failure.
Which is going to propagate all the way to userspace through ioctl ...
is this change of A[PB]I acceptable?
Otherwise, -EFAULT seems
Il 20/09/2014 12:47, Tang Chen ha scritto:
Since different architectures need different handling, we will add some arch
specific
code later. The code may need to make cpu requests outside kvm_main.c, so
make it
non-static and rename it to kvm_make_all_cpus_request().
Signed-off-by: Tang
Il 20/09/2014 12:47, Tang Chen ha scritto:
@@ -4534,8 +4539,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu)
}
if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm))
- vmcs_write64(APIC_ACCESS_ADDR,
-
Il 20/09/2014 12:47, Tang Chen ha scritto:
We are handling L1 and L2 share one apic access page situation when
migrating
apic access page. We should do some handling when migration happens in the
following situations:
1) when L0 is running: Update L1's vmcs in the next L0->L1 entry and
Il 20/09/2014 12:47, Tang Chen ha scritto:
@@ -3624,6 +3624,11 @@ static bool svm_has_secondary_apic_access(struct
kvm_vcpu *vcpu)
return false;
}
+static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
+{
+ return;
+}
+
static int svm_vm_has_apicv(struct
Il 22/09/2014 11:33, Paolo Bonzini ha scritto:
Something's wrong in the way you're generating the patches, because
you're adding these hunks twice.
Nevermind, that was my mistake.
Paolo
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to
Il 20/09/2014 12:47, Tang Chen ha scritto:
We want to migrate the apic access page pinned by the guest (L1 and L2) to make
memory
hotplug available.
There are two situations that need to be handled for the apic access page used by
the L2 vm:
1. L1 prepares a separate apic access page for L2.
L2 pins a
On 09/22/2014 02:55 PM, Michael S. Tsirkin wrote:
On Mon, Sep 22, 2014 at 11:30:23AM +0800, Jason Wang wrote:
On 09/20/2014 06:00 PM, Paolo Bonzini wrote:
Il 19/09/2014 09:10, Jason Wang ha scritto:
-if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) {
+if (vq->urgent ||
Il 22/09/2014 04:31, Tiejun Chen ha scritto:
s/drity/dirty and s/vmsc01/vmcs01
Signed-off-by: Tiejun Chen tiejun.c...@intel.com
---
arch/x86/kvm/mmu.c | 2 +-
arch/x86/kvm/vmx.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c
Il 20/09/2014 01:03, David Matlack ha scritto:
vcpu ioctls can hang the calling thread if issued while a vcpu is
running. If we know ioctl is going to be rejected as invalid anyway,
we can fail before trying to take the vcpu mutex.
This patch does not change functionality, it just makes
Il 11/09/2014 19:03, Chris Webb ha scritto:
Paolo Bonzini pbonz...@redhat.com wrote:
This is a hypercall that should have kicked VCPU 3 (see rcx).
Can you please apply this patch and gather a trace of the host
(using trace-cmd -e kvm qemu-kvm arguments)?
Sure, no problem. I've built the
On x86_64, kernel text mappings are mapped read-only with CONFIG_DEBUG_RODATA.
In that case, KVM will fail to patch VMCALL instructions to VMMCALL
as required on AMD processors.
The failure mode is currently a divide-by-zero exception, which obviously
is a KVM bug that has to be fixed. However,
On Mon, Sep 22, 2014 at 05:55:23PM +0800, Jason Wang wrote:
On 09/22/2014 02:55 PM, Michael S. Tsirkin wrote:
On Mon, Sep 22, 2014 at 11:30:23AM +0800, Jason Wang wrote:
On 09/20/2014 06:00 PM, Paolo Bonzini wrote:
Il 19/09/2014 09:10, Jason Wang ha scritto:
- if
Linus,
The following changes since commit 02a68d0503fa470abff8852e10b1890df5730a08:
powerpc/kvm/cma: Fix panic introduces by signed shift operation (2014-09-03
10:34:07 +0200)
are available in the git repository at:
git://git.kernel.org/pub/scm/virt/kvm/kvm.git tags/for-linus
for you to
Hi, all
I start a VM with virtio-serial (default ports number: 31), and found
that virtio-blk performance degradation happened, about 25%, this
problem can be reproduced 100%.
without virtio-serial:
4k-read-random 1186 IOPS
with virtio-serial:
4k-read-random 871 IOPS
On 09/19/2014 05:46 PM, H. Peter Anvin wrote:
On 09/19/2014 01:46 PM, Andy Lutomirski wrote:
However, it sounds to me that at least for KVM, it is very easy just to
emulate the RDRAND instruction. The hypervisor would report to the guest
that RDRAND is supported in CPUID and then emulate the
On 09/19/2014 02:42 PM, Andy Lutomirski wrote:
On Fri, Sep 19, 2014 at 11:30 AM, Christopher Covington
c...@codeaurora.org wrote:
On 09/17/2014 10:50 PM, Andy Lutomirski wrote:
Hi all-
I would like to standardize on a very simple protocol by which a guest
OS can obtain an RNG seed early in
On 09/22/2014 12:50 PM, Paolo Bonzini wrote:
Il 20/09/2014 01:03, David Matlack ha scritto:
vcpu ioctls can hang the calling thread if issued while a vcpu is
running. If we know ioctl is going to be rejected as invalid anyway,
we can fail before trying to take the vcpu mutex.
This patch does
unsubscribe kvm
On 09/18/2014 11:04 AM, David Hildenbrand wrote:
This patch should fix the bug reported in https://lkml.org/lkml/2014/9/11/249.
We have to initialize at least the atomic_flags and the cmd_flags when
allocating storage for the requests.
Otherwise blk_mq_timeout_check() might dereference
On 2014-09-22 08:15, Christian Borntraeger wrote:
On 09/18/2014 11:04 AM, David Hildenbrand wrote:
This patch should fix the bug reported in https://lkml.org/lkml/2014/9/11/249.
We have to initialize at least the atomic_flags and the cmd_flags when
allocating storage for the requests.
On 09/22/2014 06:31 AM, Christopher Covington wrote:
On 09/19/2014 05:46 PM, H. Peter Anvin wrote:
On 09/19/2014 01:46 PM, Andy Lutomirski wrote:
However, it sounds to me that at least for KVM, it is very easy just to
emulate the RDRAND instruction. The hypervisor would report to the guest
On 09/22/2014 07:17 AM, H. Peter Anvin wrote:
It could, but how would you enumerate that? A new RDRAND-CPL-0 CPUID
bit pretty much would be required.
Note that there are two things that differ: the CPL 0-ness and the
performance/exhaustibility attributes.
-hpa
Il 22/09/2014 15:45, Christian Borntraeger ha scritto:
We now have an extra condition check for every valid ioctl, to make an error
case go faster.
I know, the extra check is just a 1 or 2 cycles if branch prediction is
right, but still.
I applied the patch because the delay could be
-#define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
-#define VTTBR_BADDR_MASK (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
Actually, after some more thinking, why don't we just make the upper
limit of this mask 48-bit always or even 64-bit. That's a physical mask
for checking whether the pgd
On Mon, Sep 22, 2014 at 04:56:58PM +0100, Joel Schopp wrote:
The TCR_EL2.PS setting should be done based on the ID_AA64MMFR0_EL1
but you can do this in __do_hyp_init (it looks like this function
handles VTCR_EL2.PS already, not sure why it doesn't do it for TCR_EL2 as
well).
So IMO you
On Tue, Sep 09, 2014 at 12:41:27PM -0300, Marcelo Tosatti wrote:
On Tue, Jul 22, 2014 at 05:59:42AM +0800, Xiao Guangrong wrote:
On Jul 10, 2014, at 3:12 AM, mtosa...@redhat.com wrote:
Skip pinned shadow pages when selecting pages to zap.
It seems there is no way to prevent
On Tue, Sep 09, 2014 at 12:28:11PM -0300, Marcelo Tosatti wrote:
On Mon, Jul 21, 2014 at 04:14:24PM +0300, Gleb Natapov wrote:
On Wed, Jul 09, 2014 at 04:12:53PM -0300, mtosa...@redhat.com wrote:
Reload remote vcpus MMU from GET_DIRTY_LOG codepath, before
deleting a pinned spte.
1. We were calling clear_flush_young_notify in unmap_one, but we are
within an mmu notifier invalidate range scope. The spte exists no more
(due to range_start) and the accessed bit info has already been
propagated (due to kvm_pfn_set_accessed). Simply call
clear_flush_young.
2. We
On 09/22, Paolo Bonzini wrote:
Il 22/09/2014 15:45, Christian Borntraeger ha scritto:
We now have an extra condition check for every valid ioctl, to make an
error case go faster.
I know, the extra check is just a 1 or 2 cycles if branch prediction is
right, but still.
I applied the
On Thu, Sep 18, 2014 at 06:24:57PM -0300, Marcelo Tosatti wrote:
Initialization of L2 guest with -cpu host, on L1 guest with -cpu host
triggers:
(qemu) KVM: entry failed, hardware error 0x7
...
nested_vmx_run: VMCS MSR_{LOAD,STORE} unsupported
Nested VMX MSR load/store support is not
Paolo Bonzini pbonz...@redhat.com wrote:
Il 11/09/2014 19:03, Chris Webb ha scritto:
Paolo Bonzini pbonz...@redhat.com wrote:
This is a hypercall that should have kicked VCPU 3 (see rcx).
Can you please apply this patch and gather a trace of the host
(using trace-cmd -e kvm qemu-kvm
Il 22/09/2014 21:08, Chris Webb ha scritto:
Do you by chance have CONFIG_DEBUG_RODATA set? In that case, the fix is
simply not to set it.
Absolutely right: my host and guest kernels do have CONFIG_DEBUG_RODATA set!
Your patch to use alternatives for VMCALL vs VMMCALL definitely fixed
Il 22/09/2014 21:01, Marcelo Tosatti ha scritto:
On Thu, Sep 18, 2014 at 06:24:57PM -0300, Marcelo Tosatti wrote:
Initialization of L2 guest with -cpu host, on L1 guest with -cpu host
triggers:
(qemu) KVM: entry failed, hardware error 0x7
...
nested_vmx_run: VMCS MSR_{LOAD,STORE}
On 09/22/2014 04:31 PM, Paolo Bonzini wrote:
Il 22/09/2014 15:45, Christian Borntraeger ha scritto:
We now have an extra condition check for every valid ioctl, to make an error
case go faster.
I know, the extra check is just a 1 or 2 cycles if branch prediction is
right, but still.
I
Il 22/09/2014 21:20, Christian Borntraeger ha scritto:
while using trinity to fuzz KVM, we noticed long stalls on invalid ioctls.
Lets bail out early on invalid ioctls. or similar?
Okay. David, can you explain how you found it so that I can make up my
mind?
Gleb and Marcelo, a fourth and
On 09/22, Christian Borntraeger wrote:
On 09/22/2014 04:31 PM, Paolo Bonzini wrote:
Il 22/09/2014 15:45, Christian Borntraeger ha scritto:
We now have an extra condition check for every valid ioctl, to make an
error case go faster.
I know, the extra check is just a 1 or 2 cycles if
On Mon, Sep 22, 2014 at 01:17:48PM +0200, Paolo Bonzini wrote:
On x86_64, kernel text mappings are mapped read-only with CONFIG_DEBUG_RODATA.
Hmm, that depends on DEBUG_KERNEL.
I think you're actually talking about distro kernels which enable
CONFIG_DEBUG_RODATA, right?
--
Regards/Gruss,
On Fri, Sep 19, 2014 at 04:03:25PM -0700, David Matlack wrote:
vcpu ioctls can hang the calling thread if issued while a vcpu is
running.
There is a mutex per-vcpu, so that's expected, OK...
If we know ioctl is going to be rejected as invalid anyway,
we can fail before trying to take the
On 09/22/2014 03:57 PM, Andres Lagar-Cavilla wrote:
1. We were calling clear_flush_young_notify in unmap_one, but we
are within an mmu notifier invalidate range scope. The spte exists
no more (due to range_start) and the accessed bit info has
On Fri, 2014-09-19 at 13:54 +0200, frank.blasc...@de.ibm.com wrote:
This set of patches implements a vfio based solution for pci
pass-through on the s390 platform. The kernel stuff is pretty
much straight forward, but qemu needs more work.
Most interesting patch is:
vfio: make vfio run on
On Thu, Sep 18, 2014 at 11:08 PM, Paolo Bonzini pbonz...@redhat.com wrote:
Il 19/09/2014 05:58, Andres Lagar-Cavilla ha scritto:
Paolo, should I recut including the recent Reviewed-by's?
No, I'll add them myself.
Paolo, is this patch waiting for something? Is Gleb's Reviewed-by enough?
Il 22/09/2014 22:08, Marcelo Tosatti ha scritto:
This patch does not change functionality, it just makes invalid ioctls
fail faster.
Should not be executing vcpu ioctls without interrupting KVM_RUN in the
first place.
This is not entirely true, there are a couple of asynchronous ioctls
Commit c77dcac KVM: Move more code under CONFIG_HAVE_KVM_IRQFD added
functionality that depends on definitions in ioapic.h when
__KVM_HAVE_IOAPIC is defined.
At the same time, 0ba0951 KVM: EVENTFD: remove inclusion of irq.h
removed the inclusion of irq.h unconditionally, which happened to
include
Il 22/09/2014 22:49, Andres Lagar-Cavilla ha scritto:
Paolo, should I recut including the recent Reviewed-by's?
No, I'll add them myself.
Paolo, is this patch waiting for something? Is Gleb's Reviewed-by enough?
It's waiting for an Acked-by on the mm/ changes.
Paolo
On Mon, 22 Sep 2014 23:32:36 +0200 Paolo Bonzini pbonz...@redhat.com wrote:
Il 22/09/2014 22:49, Andres Lagar-Cavilla ha scritto:
Paolo, should I recut including the recent Reviewed-by's?
No, I'll add them myself.
Paolo, is this patch waiting for something? Is Gleb's Reviewed-by
Il 22/09/2014 22:26, Andres Lagar-Cavilla ha scritto:
+ __entry->gfn = gfn;
+ __entry->hva = ((gfn - slot->base_gfn) >>
This must be <<.
+ PAGE_SHIFT) + slot->userspace_addr;
+ /*
+ * No
On Mon, Sep 22, 2014 at 2:48 PM, Paolo Bonzini pbonz...@redhat.com wrote:
Il 22/09/2014 22:26, Andres Lagar-Cavilla ha scritto:
+ __entry->gfn = gfn;
+ __entry->hva = ((gfn - slot->base_gfn) >>
This must be <<.
Correct, thanks.
+
On 22.09.14 22:47, Alex Williamson wrote:
On Fri, 2014-09-19 at 13:54 +0200, frank.blasc...@de.ibm.com wrote:
This set of patches implements a vfio based solution for pci
pass-through on the s390 platform. The kernel stuff is pretty
much straight forward, but qemu needs more work.
Most
On Tue, 2014-09-23 at 00:08 +0200, Alexander Graf wrote:
On 22.09.14 22:47, Alex Williamson wrote:
On Fri, 2014-09-19 at 13:54 +0200, frank.blasc...@de.ibm.com wrote:
This set of patches implements a vfio based solution for pci
pass-through on the s390 platform. The kernel stuff is pretty
On 09/22, Marcelo Tosatti wrote:
On Fri, Sep 19, 2014 at 04:03:25PM -0700, David Matlack wrote:
vcpu ioctls can hang the calling thread if issued while a vcpu is
running.
There is a mutex per-vcpu, so that's expected, OK...
If we know ioctl is going to be rejected as invalid anyway,
On Mon, Sep 22, 2014 at 11:29:16PM +0200, Paolo Bonzini wrote:
Il 22/09/2014 22:08, Marcelo Tosatti ha scritto:
This patch does not change functionality, it just makes invalid ioctls
fail faster.
Should not be executing vcpu ioctls without interrupting KVM_RUN in the
first place.
Not really, no.
Sent from my tablet, pardon any formatting problems.
On Sep 22, 2014, at 06:31, Christopher Covington c...@codeaurora.org wrote:
On 09/19/2014 05:46 PM, H. Peter Anvin wrote:
On 09/19/2014 01:46 PM, Andy Lutomirski wrote:
However, it sounds to me that at least for KVM, it
On Mon, Sep 22, 2014 at 03:58:16PM -0700, David Matlack wrote:
Should not be executing vcpu ioctls without interrupting KVM_RUN in the
first place.
This patch is trying to be nice to code that isn't aware it's
probing kvm file descriptors. We saw long hangs with some generic
process
This patch adds support for ARMv7 dirty page logging. Some functions of dirty
page logging have been split into generic and arch specific implementations,
details below. Dirty page logging is one of several features required for
live migration; live migration has been tested for ARMv7.
Testing:
-
Add support to declare architecture specific TLB flush function, for now ARMv7.
Signed-off-by: Mario Smarduch m.smard...@samsung.com
---
include/linux/kvm_host.h |1 +
virt/kvm/Kconfig |3 +++
virt/kvm/kvm_main.c |4
3 files changed, 8 insertions(+)
diff --git
Add support for generic implementation of dirty log read function. For now
x86_64 and ARMv7 share generic dirty log read. Other architectures call
their architecture specific functions.
Signed-off-by: Mario Smarduch m.smard...@samsung.com
---
arch/arm/kvm/Kconfig |1 +
This patch adds ARMv7 architecture TLB Flush function.
Signed-off-by: Mario Smarduch m.smard...@samsung.com
---
arch/arm/include/asm/kvm_asm.h |1 +
arch/arm/include/asm/kvm_host.h | 12
arch/arm/kvm/Kconfig|1 +
arch/arm/kvm/interrupts.S | 12
This patch adds support for initial write protection of VM memslots. This patch
series assumes that huge PUDs will not be used in 2nd stage tables, which is
always valid on ARMv7.
Signed-off-by: Mario Smarduch m.smard...@samsung.com
---
arch/arm/include/asm/kvm_host.h |2 +
This patch adds support to track VM dirty pages between dirty log reads. Pages
that have been dirtied since the last log read are write protected again, in
preparation for the next dirty log read. In addition the ARMv7 dirty log read
function is pushed up to the generic layer.
Signed-off-by: Mario Smarduch
This patch adds support for handling 2nd stage page faults during migration,
it disables faulting in huge pages, and dissolves huge pages to page tables.
In case migration is canceled huge pages may be used again.
Signed-off-by: Mario Smarduch m.smard...@samsung.com
---
arch/arm/kvm/mmu.c |
On Mon, 09/22 21:23, Zhang Haoyu wrote:
Amit,
It's related to the big number of ioeventfds used in virtio-serial-pci. With
virtio-serial-pci's ioeventfd=off, the performance is not affected whether or
not the guest initializes it.
In my test, there are 12 fds to poll in qemu_poll_ns
Avoid open coded calculations for bank MSRs by using well-defined
macros that hide the index of higher bank MSRs.
No semantic changes.
Signed-off-by: Chen Yucong sla...@gmail.com
---
arch/x86/kvm/x86.c |8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git
This enables PAPR defined feature called Dynamic DMA windows (DDW).
Each Partitionable Endpoint (IOMMU group) has a separate DMA window on
a PCI bus where devices are allowed to perform DMA. By default there is
1 or 2GB window allocated at the host boot time and these windows are
used when an
At the moment the iommu_table struct has a set_bypass() which enables/
disables DMA bypass on IODA2 PHB. This is exposed to POWERPC IOMMU code
which calls this callback when external IOMMU users such as VFIO are
about to get over a PHB.
Since the set_bypass() is not really an iommu_table function
This makes use of the it_page_size from the iommu_table struct
as page size can differ.
This replaces missing IOMMU_PAGE_SHIFT macro in commented debug code
as recently introduced IOMMU_PAGE_XXX macros do not include
IOMMU_PAGE_SHIFT.
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
At the moment writing new TCE value to the IOMMU table fails with EBUSY
if there is a valid entry already. However PAPR specification allows
the guest to write new TCE value without clearing it first.
Another problem this patch is addressing is the use of pool locks for
external IOMMU users such
This moves locked pages accounting to helpers.
Later they will be reused for Dynamic DMA windows (DDW).
While we are here, update the comment explaining why RLIMIT_MEMLOCK
might be required to be bigger than the guest RAM.
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
Normally a bitmap from the iommu_table is used to track what TCE entry
is in use. Since we are going to use iommu_table without its locks and
do xchg() instead, it becomes essential not to put bits which are not
implied in the direction flag.
Signed-off-by: Alexey Kardashevskiy a...@ozlabs.ru
---
The previous patch introduced iommu_table_ops::exchange() callback
which effectively disabled VFIO on pseries. This implements exchange()
for pseries/lpar so VFIO can work in nested guests.
Since the exchange() callback returns an old TCE, it has to call H_GET_TCE
for every TCE being put to the
SPAPR defines an interface to create additional DMA windows dynamically.
Dynamically means that the window is not allocated before the guest
even started, the guest can request it later. In practice, existing linux
guests check for the capability and if it is there, they create and map
a DMA
This adds a iommu_table_ops struct and puts pointer to it into
the iommu_table struct. This moves tce_build/tce_free/tce_get/tce_flush
callbacks from ppc_md to the new struct where they really belong to.
This adds an extra @ops parameter to iommu_init_table() to make sure
that we do not leave any
This adds missing locks in iommu_take_ownership()/
iommu_release_ownership().
This marks all pages busy in iommu_table::it_map in order to catch
errors if there is an attempt to use this table while ownership over it
is taken.
This only clears TCE content if there is no page marked busy in
This defines and implements VFIO IOMMU API which lets the userspace
create and remove DMA windows.
This updates VFIO_IOMMU_SPAPR_TCE_GET_INFO to return the number of
available windows and page mask.
This adds VFIO_IOMMU_SPAPR_TCE_CREATE and VFIO_IOMMU_SPAPR_TCE_REMOVE
to allow the user space to
At the moment pnv_pci_ioda_tce_invalidate() gets the PE pointer via
container_of(tbl). Since we are going to have to add Dynamic DMA windows
and that means having 2 IOMMU tables per PE, this is not going to work.
This implements pnv_pci_ioda(1|2)_tce_invalidate as a pnv_ioda_pe callback.
This
Modern IBM POWERPC systems support multiple IOMMU tables per PE
so we need a more reliable way (compared to container_of()) to get
a PE pointer from the iommu_table struct pointer used in IOMMU functions.
At the moment IOMMU group data points to an iommu_table struct. This
introduces a
This checks that the TCE table page size is not bigger than the size of
the page we just pinned and whose physical address we are going to put into
the table. Otherwise the hardware gets unwanted access to physical memory
between the end of the actual page and the end of the aligned-up TCE page.
Hi Marcelo,
Sorry for the delay.
On Sep 9, 2014, at 11:41 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Tue, Jul 22, 2014 at 05:59:42AM +0800, Xiao Guangrong wrote:
On Jul 10, 2014, at 3:12 AM, mtosa...@redhat.com wrote:
Skip pinned shadow pages when selecting pages to zap.
It