The patch removes the eeh-related statistics for the eeh device since
they are maintained by the corresponding eeh PE. Also, the
flags used to trace the state of the eeh device and PE have been
reworked slightly.
Signed-off-by: Gavin Shan
---
arch/powerpc/include/asm/eeh.h |
The idea comes from Benjamin Herrenschmidt. The eeh cache helps
fetch the pci device for a given I/O address. Since
the eeh cache serves eeh, it's reasonable for the eeh cache
to trace the eeh device instead of the pci device.
The patch makes the eeh cache trace the eeh device. Also, the major
eeh
While the EEH module is installed, PCI devices are checked one by one
to see if they support eeh. That is done based on OF nodes or on the
PCI device referred to by "struct pci_dev". In order to distinguish
the cases, the global variable "eeh_probe_mode" is introduced.
The patch implements the support for eeh probe mod
The patch reworks the current implementation so that eeh errors
will be handled based on the PE instead of the eeh device.
Signed-off-by: Gavin Shan
---
arch/powerpc/include/asm/eeh.h |1 +
arch/powerpc/include/asm/eeh_event.h|2 +-
arch/powerpc/platforms/pseries/eeh_dr
Once an eeh error is found, an eeh event is created and put into
the global linked list. Meanwhile, a kernel thread is
started to process it. The handler for the kernel thread originally
was eeh device sensitive.
The patch reworks the handler of the kernel thread so that it's PE
sensi
The patch implements reset based on the PE instead of the eeh device. Also,
the functions used to retrieve the reset type, either hot or fundamental
reset, have been reworked slightly. More specifically, they are
implemented based on the eeh device traverse function.
Signed-off-by: Gavin Shan
---
arc
The patch introduces the function to traverse the devices of the
specified PE and its child PEs. Also, the restore on device bars
is implemented based on the traverse function.
Signed-off-by: Gavin Shan
---
arch/powerpc/include/asm/eeh.h |3 +
arch/powerpc/include/asm/ppc-pci.h
There are 2 conditions that trigger EEH error detection: an invalid value
returned from reading I/O space or config space. In either case, the function
eeh_dn_check_failure is called to initialize an EEH event and put
it into the pool for further processing.
The patch changes the function slightly so
The patch refactors the original implementation in order to enable
I/O and do log retrieval based on PE.
Signed-off-by: Gavin Shan
---
arch/powerpc/include/asm/ppc-pci.h |4 +-
arch/powerpc/platforms/pseries/eeh.c | 44 ++---
2 files changed, 21 insertions(+),
Originally, all the EEH options were implemented based on the OF node.
That explicitly breaks the rule that the operation target
is the PE instead of the device. Therefore, the patch makes all the
operations based on the PE instead of the device.
Unfortunately, the backend for config space has to be kept as
The EEH initialization functions have been postponed until slab/slub
are ready. So we use slab/slub to allocate the memory chunks for
newly created EEH devices. That saves lots of memory.
The patch also does cleanup to replace "kmalloc" with "kzalloc" so
that we needn't clear the allocated mem
Since we've introduced a dedicated struct to trace individual PEs,
it's reasonable to trace their state through the dedicated struct
instead of using "eeh_dev" any more.
The patch implements the state tracing based on the PE. It's notable
that the PE state will be applied to the specified PE as well as
The original implementation builds EEH events based on the EEH device.
We already have a dedicated struct to depict the PE. It's reasonable to
build EEH events based on the PE.
Signed-off-by: Gavin Shan
---
arch/powerpc/include/asm/eeh_event.h |4 +-
arch/powerpc/platforms/pseries/eeh_event.c | 29 ++
During PCI hotplug and EEH recovery, the PE hierarchy might be
changed due to PCI topology changes. At a later point when the
PCI device is added, the PE will be created dynamically again.
The patch introduces a new function to remove EEH devices from the
associated PE. That also can cause that
The patch creates PEs and associates the newly created PEs with
their parent/sibling as well as EEH devices. It becomes more
straightforward to trace EEH errors and recover them accordingly.
Once the EEH functionality on one PCI IOA has been enabled, we
try to create a PE against it. If there's existin
The patch implements searching for a PE based on the following
requirements:
* Search PE according to PE address, which is the traditional
PE address composed of the PCI bus/device/function
number, or the unified PE address assigned by firmware or
platform.
* Search parent PE according to the giv
For one particular PE, it's only meaningful in the ancestor PHB
domain. Therefore, each PHB should have its own PE hierarchy tree
to trace those PEs created against the PHB.
The patch creates PEs for the PHBs and puts those PEs into the
global linked list traced by "eeh_phb_pe". The linked list of PEs
As defined in PAPR 2.4, a Partitionable Endpoint (PE) is an I/O subtree
that can be treated as a unit for the purposes of partitioning and error
recovery. Therefore, the eeh core should be aware of PEs. With the eeh_pe
struct, we can support PEs explicitly. Furthermore, it makes all the stuff
data centrali
The patch introduces a global mutex for EEH so that the core data
structures can be protected by it. Also, 2 inline functions
are exported for that: eeh_lock() and eeh_unlock().
Signed-off-by: Gavin Shan
---
arch/powerpc/include/asm/eeh.h | 15 +++
arch/powerpc/platforms/pser
The patch adds more logs to the EEH initialization functions for
debugging purposes. Also, the machine type ("pSeries") is checked
in the platform initialization to ensure it's the correct platform
to invoke it on.
Signed-off-by: Gavin Shan
---
arch/powerpc/platforms/pseries/eeh_dev.c |2 ++
arc
Currently, we have 3 phases for EEH initialization on the pSeries platform
using builtin functions: platform initialization, EEH device creation,
and EEH subsystem enablement. All of them are done no later than
ppc_md.setup_arch. That means that the slab/slub isn't ready yet, so
we have to allocate mem
The series of patches addresses explicit PE support as well as probe type
support. For explicit PE support, the struct eeh_pe has been introduced.
While designing the struct, the following factors have been taken into
account:
* For one particular PE, it might be composed of a single PCI device,
or mu
Hi Linus !
Here are a few fixes for 3.6 that were piling up while I was away or
busy (I was mostly MIA a week or two before San Diego). Some fixes from
Anton fixing up issues with our relatively new DSCR control feature,
and a few other fixes that are either regressions or bugs nasty enough
to war
On Wed, Sep 05, 2012 at 03:26:59PM +1000, Benjamin Herrenschmidt wrote:
> On Fri, 2012-08-24 at 13:01 +0530, Ananth N Mavinakayanahalli wrote:
> > From: Ananth N Mavinakayanahalli
> >
> > This is the port of uprobes to powerpc. Usage is similar to x86.
>
> Guys, can you do a minimum of build tes
On Mon, 2012-08-06 at 16:27 +0300, Mihai Caraman wrote:
> Critical exception handler on 64-bit booke uses user-visible SPRG3 as scratch.
> Restore VDSO information in SPRG3 on exception prolog.
Breaks the build on !BOOKE because of :
> diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel
On Tue, 2012-09-04 at 22:25 -0700, Christian Kujau wrote:
> I sure hope that other people will benefit from this as well. I can't
> stand the thought that you guys are always putting out fixes for this
> ol'
> PowerBook of mine :-\
Well, for some strange reason I didn't observe the problem on a
On Fri, 2012-08-24 at 13:01 +0530, Ananth N Mavinakayanahalli wrote:
> From: Ananth N Mavinakayanahalli
>
> This is the port of uprobes to powerpc. Usage is similar to x86.
Guys, can you do a minimum of build testing ?
This one breaks due to uprobe_get_swbp_addr() being defined in both
asm and
On 05/09/12 15:17, Benjamin Herrenschmidt wrote:
On Tue, 2012-09-04 at 22:57 -0600, Alex Williamson wrote:
Do we need an extra region info field, or is it sufficient that we
define a region to be mmap'able with getpagesize() pages when the MMAP
flag is set and simply offset the region within th
On Wed, 5 Sep 2012 at 11:08, Benjamin Herrenschmidt wrote:
> Try this:
>
> powerpc: Don't use __put_user() in patch_instruction
Perfect! With this patch applied, the machine boots again.
Tested-by: Christian Kujau
I sure hope that other people will benefit from this as well. I can't
stand t
On Tue, 2012-09-04 at 22:57 -0600, Alex Williamson wrote:
> Do we need an extra region info field, or is it sufficient that we
> define a region to be mmap'able with getpagesize() pages when the MMAP
> flag is set and simply offset the region within the device fd? ex.
Alexey ? You mentioned you
On Wed, 2012-09-05 at 11:16 +1000, Benjamin Herrenschmidt wrote:
> > > It's still bad in more ways that I care to explain...
> >
> > Well it is right before pci_reassigndev_resource_alignment() which is
> > common and does the same thing.
> >
> > > The main one is that you do the "fixup" in a ve
We have been observing hangs, both of KVM guest vcpu tasks and more
generally, where a process that is woken doesn't properly wake up and
continue to run, but instead sticks in TASK_WAKING state. This
happens because the update of rq->wake_list in ttwu_queue_remote()
is not ordered with the update
On Wed, 2012-08-22 at 16:42 -0500, Kent Yoder wrote:
> On Wed, Aug 22, 2012 at 04:17:43PM -0500, Ashley Lai wrote:
> > This patch adds a new device driver to support IBM virtual TPM
> > (vTPM) for PPC64. IBM vTPM is supported through the adjunct
> > partition with firmware release 740 or higher.
> -Original Message-
> From: David Laight [mailto:david.lai...@aculab.com]
> Sent: Tuesday, September 04, 2012 10:51 PM
> To: Xie Shaohui-B21989; jgar...@pobox.com; linux-...@vger.kernel.org
> Cc: linuxppc-dev@lists.ozlabs.org; linux-ker...@vger.kernel.org; Bhartiya
> Anju-B07263
> Subject:
On Thu, 2012-08-02 at 09:10 +0200, Jiri Kosina wrote:
> Directly comparing current->personality against PER_LINUX32 doesn't work
> in cases when any of the personality flags stored in the top three bytes
> are used.
>
> Directly forcefully setting personality to PER_LINUX32 or PER_LINUX
> discards
On Mon, 2012-07-23 at 11:13 +0200, Uwe Kleine-König wrote:
> This prepares *of_device_id.data becoming const. Without this change
> the following warning would occur:
>
> drivers/macintosh/mediabay.c: In function 'media_bay_attach':
> drivers/macintosh/mediabay.c:589:11: warning: assig
> @@ -195,14 +195,13 @@ static void done(struct fsl_ep *ep, struct fsl_req
> *req, int status)
> dma_pool_free(udc->td_pool, curr_td, curr_td->td_dma);
> }
>
> - if (req->mapped) {
> + if (req->req.dma != DMA_ADDR_INVALID) {
> dma_unmap_single(ep->udc->g
>
> Because the fsl_udc_core driver shares one 'status_req' object for the
> complete ep0 control transfer, it is not possible to prime the final
> STATUS phase immediately after the IN transaction. E.g. ch9getstatus()
> executed:
>
> | req = udc->status_req;
> | ...
> | list_add_tail(&req->qu
On Tue, 2012-08-28 at 13:35 -0700, Mark Brown wrote:
> Rather than requiring platforms to select the generic clock API to make
> it available make the API available as a user selectable option unless the
> user either selects HAVE_CUSTOM_CLK (if they have their own implementation)
> or selects COMM
At 09/05/2012 07:16 AM, Andrew Morton Wrote:
> On Mon, 03 Sep 2012 13:51:10 +0800
> Wen Congyang wrote:
>
+static void release_firmware_map_entry(struct kobject *kobj)
+{
+ struct firmware_map_entry *entry = to_memmap_entry(kobj);
+ struct page *page;
+
+ page = v
The upcoming VFIO support requires a way to know which
entry in the TCE map is not empty in order to do cleanup
at QEMU exit/crash. This patch adds such functionality
to POWERNV platform code.
Signed-off-by: Alexey Kardashevskiy
---
arch/powerpc/platforms/powernv/pci.c |6 ++
1 file chan
On Tue, Sep 4, 2012 at 5:28 AM, Liu Qiang-B32616 wrote:
>> Will this engine be coordinating with another to handle memory copies?
>> The dma mapping code for async_tx/raid is broken when dma mapping
>> requests overlap or cross dma device boundaries [1].
>>
>> [1]: http://marc.info/?l=linux-arm-k
> > It's still bad in more ways that I care to explain...
>
> Well it is right before pci_reassigndev_resource_alignment() which is
> common and does the same thing.
>
> > The main one is that you do the "fixup" in a very wrong place anyway and
> > it might cause cases of overlapping BARs.
>
>
On Tue, 2012-09-04 at 02:32 -0700, Christian Kujau wrote:
> On Tue, 4 Sep 2012 at 16:51, Michael Ellerman wrote:
> > My guess would be we're calling that quite early and the __put_user()
> > check is getting confused and failing. That means we'll have left some
> > code unpatched, which then fails.
On 05/09/12 05:45, Benjamin Herrenschmidt wrote:
On Tue, 2012-09-04 at 17:36 +1000, Alexey Kardashevskiy wrote:
VFIO adds a separate memory region for every BAR and tries
to mmap() it to provide direct BAR mapping to the guest.
If it succeeds, QEMU registers this address with kvm_set_phys_mem()
On Wed, Sep 05, 2012 at 05:41:42AM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2012-09-04 at 17:35 +1000, Alexey Kardashevskiy wrote:
> > The upcoming VFIO support requires a way to know which
> > entry in the TCE map is not empty in order to do cleanup
> > at QEMU exit/crash. This patch adds su
On Wed, 2012-09-05 at 10:19 +1000, Alexey Kardashevskiy wrote:
> >> +static unsigned long pnv_tce_get(struct iommu_table *tbl, long
> index)
> >> +{
> >> +return ((u64 *)tbl->it_base)[index - tbl->it_offset] &
> IOMMU_PAGE_MASK;
> >> +}
> >
> > Why the masking here ?
>
>
> Oops. No reason. Wi
On 05/09/12 05:41, Benjamin Herrenschmidt wrote:
On Tue, 2012-09-04 at 17:35 +1000, Alexey Kardashevskiy wrote:
The upcoming VFIO support requires a way to know which
entry in the TCE map is not empty in order to do cleanup
at QEMU exit/crash. This patch adds such functionality
to POWERNV platfo
On Mon, 03 Sep 2012 13:51:10 +0800
Wen Congyang wrote:
> >> +static void release_firmware_map_entry(struct kobject *kobj)
> >> +{
> >> + struct firmware_map_entry *entry = to_memmap_entry(kobj);
> >> + struct page *page;
> >> +
> >> + page = virt_to_page(entry);
> >> + if (PageSlab(page) || P
Because the fsl_udc_core driver shares one 'status_req' object for the
complete ep0 control transfer, it is not possible to prime the final
STATUS phase immediately after the IN transaction. E.g. ch9getstatus()
executed:
| req = udc->status_req;
| ...
| list_add_tail(&req->queue, &ep->queue);
| i
The 'mapped' flag in 'struct fsl_req' is redundant with checking
for 'req.dma != DMA_ADDR_INVALID' and it was also set to a wrong value
(see the 2nd hunk of the patch).
Replacing it in the way described above saves 60 bytes:
function old new delta
fsl_ud
On Tue, 2012-09-04 at 10:27 -0400, Steven Rostedt wrote:
> As a test case, continuing on error is fine, but I would not recommend
> that as a fix. If it fails, but still does the patch, that could be
> harmful, and confusing of a result.
>
> Need to figure out why put_user is failing.
It's proba
On Tue, 2012-09-04 at 17:36 +1000, Alexey Kardashevskiy wrote:
> VFIO adds a separate memory region for every BAR and tries
> to mmap() it to provide direct BAR mapping to the guest.
> If it succeedes, QEMU registers this address with kvm_set_phys_mem().
> However it is not always possible because
On Tue, 2012-09-04 at 17:35 +1000, Alexey Kardashevskiy wrote:
> The upcoming VFIO support requires a way to know which
> entry in the TCE map is not empty in order to do cleanup
> at QEMU exit/crash. This patch adds such functionality
> to POWERNV platform code.
>
> Signed-off-by: Alexey Kardashe
On Tue, 2012-09-04 at 16:51 +1000, Michael Ellerman wrote:
>
> My guess would be we're calling that quite early and the __put_user()
> check is getting confused and failing. That means we'll have left some
> code unpatched, which then fails.
>
> Can you try with the patch applied, but instead of
> + /* Read command completed register */
> + done_mask = ioread32(hcr_base + CC);
> +
> + if (host_priv->quirks & SATA_FSL_QUIRK_V2_ERRATA) {
> + if (unlikely(hstatus & INT_ON_DATA_LENGTH_MISMATCH)) {
> + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {
> +
On Tue, 2012-09-04 at 16:51 +1000, Michael Ellerman wrote:
> My guess would be we're calling that quite early and the __put_user()
> check is getting confused and failing. That means we'll have left some
> code unpatched, which then fails.
>
> Can you try with the patch applied, but instead of re
> -Original Message-
> From: dan.j.willi...@gmail.com [mailto:dan.j.willi...@gmail.com] On
> Behalf Of Dan Williams
> Sent: Sunday, September 02, 2012 4:41 PM
> To: Liu Qiang-B32616
> Cc: linux-cry...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
> ker...@vger.kernel.org; vinod.k..
> -Original Message-
> From: dan.j.willi...@gmail.com [mailto:dan.j.willi...@gmail.com] On
> Behalf Of Dan Williams
> Sent: Sunday, September 02, 2012 4:12 PM
> To: Liu Qiang-B32616
> Cc: linux-cry...@vger.kernel.org; herb...@gondor.hengli.com.au;
> da...@davemloft.net; linux-ker...@vger.ke
> The freescale V2 SATA controller checks if the received data length
> matches the programmed length 'ttl', if not, it assumes that this is
> an error.
...
Can you tell us exactly what "The freescale V2 SATA controller" is,
and what versions of what devices contain it?
Thanks,
Clive
The freescale V2 SATA controller checks if the received data length matches
the programmed length 'ttl'; if not, it assumes that this is an error.
In ATAPI, the 'ttl' is based on the max allocation length and not the actual
data transfer length, so the controller will raise a 'DLM' (Data Length Mismatch)
error
Hi Wen,
2012/09/04 12:46, Wen Congyang wrote:
Hi, isimatu-san
At 09/01/2012 05:30 AM, Andrew Morton Wrote:
On Tue, 28 Aug 2012 18:00:20 +0800
we...@cn.fujitsu.com wrote:
From: Yasuaki Ishimatsu
There is a possibility that get_page_bootmem() is called on the same page many
times. So when ge
On 09/04/2012 10:42 AM, Artem Bityutskiy wrote:
> Aiaiai! :-) [1] [2]
>
> I've build-tested this using aiaiai and it reports that this change breaks
> the build:
>
> dedekind@blue:~/git/maintaining$ ./verify ../l2-mtd/ mpc5121_nfc <
> ~/tmp/julia2.mbox
> Tested the patch(es) on top of the foll
On Tue, 2012-09-04 at 11:44 +0200, Julia Lawall wrote:
> > I've been bitten by the same issue recently, also cause by one of these
> > cocci devm patches. devm_clk_get is only available if the generic
> > clk_get/clk_put implementation is used. Not all architectures do this and
> > some implement t
On Tue, 4 Sep 2012, Lars-Peter Clausen wrote:
> On 09/04/2012 10:42 AM, Artem Bityutskiy wrote:
> > Aiaiai! :-) [1] [2]
> >
> > I've build-tested this using aiaiai and it reports that this change breaks
> > the build:
> >
> > dedekind@blue:~/git/maintaining$ ./verify ../l2-mtd/ mpc5121_nfc <
> >
On Tue, 4 Sep 2012 at 16:51, Michael Ellerman wrote:
> My guess would be we're calling that quite early and the __put_user()
> check is getting confused and failing. That means we'll have left some
> code unpatched, which then fails.
>
> Can you try with the patch applied, but instead of returning
From: "Aneesh Kumar K.V"
ASM_VSID_SCRAMBLE can leave non-zero bits in the high 28 bits of the result
for a 256MB segment (40 bits for a 1T segment). Properly mask them before using
the values in slbmte.
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/slb_low.S | 1
From: "Aneesh Kumar K.V"
Increase max addressable range to 64TB. This is not tested on
real hardware yet.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/mmu-hash64.h| 42 --
arch/powerpc/include/asm/pgtable-ppc64-4k.h |2 +-
arch/powerpc/inc
From: "Aneesh Kumar K.V"
This updates the proto-VSID and VSID scramble related information
to be more generic by using names instead of the current values.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/mmu-hash64.h | 40 ++---
arch/powerpc/mm/mmu_context
From: "Aneesh Kumar K.V"
With larger VSIDs we need to track more bits of the ESID in the SLB cache
for SLB invalidation.
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/paca.h |2 +-
arch/powerpc/mm/slb_low.S |8
2 files changed, 5 insertio
From: "Aneesh Kumar K.V"
The slice array size and slice mask size depend on PGTABLE_RANGE. We
can't directly include pgtable.h in these headers because there is
a circular dependency. So add compile-time checks for these values.
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/p
From: "Aneesh Kumar K.V"
This patch makes the high psizes mask an unsigned char array
so that we can have more than 16TB. Currently we support up to
64TB.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/mmu-hash64.h |6 +-
arch/powerpc/include/asm/page_64.h|6 +-
arch
From: "Aneesh Kumar K.V"
As we keep increasing PGTABLE_RANGE we need not increase the virtual
map area for the kernel.
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/pgtable-ppc64.h |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arc
From: "Aneesh Kumar K.V"
Rename the variable to better reflect the values. No functional change
in this patch.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/kvm_book3s.h |2 +-
arch/powerpc/include/asm/machdep.h |6 +--
arch/powerpc/include/asm/mmu-hash64.h |
From: "Aneesh Kumar K.V"
This patch simplifies hpte_decode for easy switching from virtual address to
virtual page number in a later patch.
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/hash_native_64.c | 49 ++
1 file chang
From: "Aneesh Kumar K.V"
This patch converts different functions to take a virtual page number
instead of a virtual address. The virtual page number is the virtual
address shifted right by VPN_SHIFT (12) bits. This enables us to have an
address range of up to 76 bits.
Signed-off-by: Aneesh Kumar K.V
---
arch
From: "Aneesh Kumar K.V"
Don't open code the same
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/platforms/cell/beat_htab.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/cell/beat_htab.c
b/arch/powerpc/platforms/ce
From: "Aneesh Kumar K.V"
To clarify the meaning for future readers, replace the open coded
19 with CONTEXT_BITS
Reviewed-by: Paul Mackerras
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/mmu_context_hash64.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powe
Hi,
This patchset includes patches for supporting 64TB with ppc64. I haven't booted
this on hardware with 64TB memory yet. But they boot fine on real hardware with
less memory. Changes extend VSID bits to 38 bits for a 256MB segment
and 26 bits for 1TB segments.
Changes from V6:
* rebase to lates
Aiaiai! :-) [1] [2]
I've build-tested this using aiaiai and it reports that this change breaks the
build:
dedekind@blue:~/git/maintaining$ ./verify ../l2-mtd/ mpc5121_nfc <
~/tmp/julia2.mbox
Tested the patch(es) on top of the following commits:
ba64756 Quick fixes - applied by aiaiai
651c6fa J
VFIO adds a separate memory region for every BAR and tries
to mmap() it to provide direct BAR mapping to the guest.
If it succeeds, QEMU registers this address with kvm_set_phys_mem().
However it is not always possible because such a BAR should
be host page size aligned. In this case VFIO uses "sl
From: Paul Mackerras
TODO: ask Paul to make a proper message.
This is the fix for a host kernel compiled with a page size
other than 4K (TCE page size). In the case of a 64K page size,
the host used to lose address bits in hpte_rpn().
The patch fixes it.
Signed-off-by: Alexey Kardashevskiy
---
Cc: David Gibson
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Signed-off-by: Alexey Kardashevskiy
---
arch/powerpc/include/asm/iommu.h|3 +
drivers/iommu/Kconfig |8 +
drivers/vfio/Kconfig|6 +
drivers/vfio/Makefile |1 +
drivers