Hi Gavin,

On 2021/4/23 9:35, Gavin Shan wrote:
> Hi Keqian,
>
> On 4/22/21 5:41 PM, Keqian Zhu wrote:
>> On 2021/4/22 10:12, Gavin Shan wrote:
>>> On 4/21/21 4:28 PM, Keqian Zhu wrote:
>>>> On 2021/4/21 14:38, Gavin Shan wrote:
>>>>> On 4/16/21 12:03 AM, Keqian Zhu wrote:
>>>>>
>>>>> [...]
On 2021/4/22 16:00, Santosh Shukla wrote:
> On Thu, Apr 22, 2021 at 1:07 PM Tarun Gupta (SW-GPU) wrote:
>>
>> On 4/22/2021 12:20 PM, Marc Zyngier wrote:
>>> External email: Use caution opening links or attachments
>>>
>>> On Thu, 22 Apr 2021 03:02:00 +0100,
>>> Gavin Shan wrote:
Hi Marc,

On 4/22/21 4:50 PM, Marc Zyngier wrote:
> On Thu, 22 Apr 2021 03:02:00 +0100,
> Gavin Shan wrote:
>> On 4/21/21 9:59 PM, Marc Zyngier wrote:
>>> On Wed, 21 Apr 2021 07:17:44 +0100,
>>> Keqian Zhu wrote:
>>>> On 2021/4/21 14:20, Gavin Shan wrote:
>>>>> On 4/21/21 12:59 PM, Keqian Zhu wrote:
>>>>>> On 2020/10/22
Hi Keqian,

On 4/22/21 5:41 PM, Keqian Zhu wrote:
> On 2021/4/22 10:12, Gavin Shan wrote:
>> On 4/21/21 4:28 PM, Keqian Zhu wrote:
>>> On 2021/4/21 14:38, Gavin Shan wrote:
>>>> On 4/16/21 12:03 AM, Keqian Zhu wrote:
>>>>
>>>> [...]

Yeah, sorry that I missed that part. It is something associated with Santosh's
patch.
Hi Marc,

On 4/22/21 4:51 PM, Marc Zyngier wrote:
> On Thu, 22 Apr 2021 03:25:23 +0100,
> Gavin Shan wrote:
>> On 4/21/21 4:36 PM, Keqian Zhu wrote:
>>> On 2021/4/21 15:52, Gavin Shan wrote:
>>>> On 4/16/21 12:03 AM, Keqian Zhu wrote:
>>>>> The MMIO region of a device may be huge (GB level), so try to use
>>>>> block
On 4/16/21 12:03 AM, Keqian Zhu wrote:
> The MMIO region of a device may be huge (GB level), so try to use
> block mapping in stage2 to speed up both map and unmap.
>
> Compared to normal memory mapping, we should consider two more
> points when trying block mapping for an MMIO region:
>
> 1. For normal memory
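The basic eligibility rule for a stage-2 block mapping can be sketched in userspace C. This is an illustration only, not KVM's actual helper: the function name, block-size macros, and signature below are made up for the sketch; the real check lives in KVM/arm64's stage-2 fault path.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SZ_2M (2ULL << 20)
#define SZ_1G (1ULL << 30)

/* A block mapping can only cover [ipa, ipa + block) when the guest IPA and
 * the backing physical address share the block alignment and the region
 * still has at least one full block left. Illustrative only. */
static bool can_use_block_mapping(uint64_t ipa, uint64_t pa,
                                  uint64_t remaining, uint64_t block)
{
    return (ipa & (block - 1)) == 0 &&
           (pa  & (block - 1)) == 0 &&
           remaining >= block;
}
```

For a GB-scale BAR this is what turns thousands of 4K stage-2 entries into a handful of 2M/1G blocks, which is where the map and unmap speedup quoted above comes from.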
On 4/16/21 12:03 AM, Keqian Zhu wrote:
> The MMIO regions may be unmapped for many reasons and can be remapped
> by the stage2 fault path. Mapping MMIO regions at creation time is only
> a minor optimization and makes these two mapping paths hard to keep in
> sync. Remove the mapping code while keeping the useful sanity
On Wed, Apr 14, 2021 at 12:22:56PM +0100, Shameer Kolothum wrote:
> Hi,
>
> This is an attempt to revive this series originally posted by
> Julien Grall[1]. The main motive to work on this now is because
> of the requirement to have Pinned KVM VMIDs and the RFC discussion
> for the same basically
On Thu, Apr 22, 2021 at 04:17:27PM +0100, Alexandru Elisei wrote:
> Hi Drew,
>
> On 4/20/21 5:51 PM, Andrew Jones wrote:
> > Hi Alex,
> >
> > On Tue, Apr 20, 2021 at 05:13:37PM +0100, Alexandru Elisei wrote:
> >> This is an RFC because it's not exactly clear to me that this is the best
> >>
On 2021/4/22 15:29, Mike Rapoport wrote:
> On Thu, Apr 22, 2021 at 03:00:20PM +0800, Kefeng Wang wrote:
>> On 2021/4/21 14:51, Mike Rapoport wrote:
>>> From: Mike Rapoport
>>>
>>> Hi,
>>>
>>> These patches aim to remove CONFIG_HOLES_IN_ZONE and essentially hardwire
>>> pfn_valid_within() to 1.
>>>
>>> The idea is to mark
Hi Drew,
On 4/20/21 5:51 PM, Andrew Jones wrote:
> Hi Alex,
>
> On Tue, Apr 20, 2021 at 05:13:37PM +0100, Alexandru Elisei wrote:
>> This is an RFC because it's not exactly clear to me that this is the best
>> approach. I'm also open to using a different name for the new option, maybe
>>
Hi Eric,

I have validated v14 of the patch series from the branch
"jean_sva_current_2stage_v14".
Verified nested translations with an NVMe PCI device assigned to a QEMU 5.2 guest.
Had to revert the patch "mm: notify remote TLBs when dirtying a PTE".

Tested-by: Sumit Gupta
On 2021/4/21 14:51, Mike Rapoport wrote:
> From: Mike Rapoport
>
> Hi,
>
> These patches aim to remove CONFIG_HOLES_IN_ZONE and essentially hardwire
> pfn_valid_within() to 1.
>
> The idea is to mark NOMAP pages as reserved in the memory map and restore
> the intended semantics of pfn_valid() to designate
On 4/22/2021 12:20 PM, Marc Zyngier wrote:
> On Thu, 22 Apr 2021 03:02:00 +0100,
> Gavin Shan wrote:
>> Hi Marc,
>>
>> On 4/21/21 9:59 PM, Marc Zyngier wrote:
>>> On Wed, 21 Apr 2021 07:17:44 +0100,
>>> Keqian Zhu wrote:
>>>> On 2021/4/21 14:20, Gavin
On Thu, 15 Apr 2021 14:17:25 +0100, Alexandru Elisei wrote:
> pmu__generate_fdt_nodes() checks if the host has support for PMU in a guest
> and prints a warning if that's not the case. However, this check is too
> late because the function is called after the VCPU has been created, and
> VCPU
On Wed, 14 Apr 2021 14:44:04 +0100, Marc Zyngier wrote:
> This small series builds on top of the work that was started with [1].
>
> It recently became apparent that KVM/arm64 is the last bit of the
> kernel that still uses perf_num_counters().
>
> As I went ahead to address this, it became
On Wed, Apr 14, 2021 at 02:44:09PM +0100, Marc Zyngier wrote:
> perf_pmu_name() and perf_num_counters() are unused. Drop them.
>
> Signed-off-by: Marc Zyngier
> ---
> include/linux/perf_event.h | 2 --
> kernel/events/core.c       | 5 -----
> 2 files changed, 7 deletions(-)
Acked-by: Will
On Wed, Apr 14, 2021 at 02:44:06PM +0100, Marc Zyngier wrote:
> perf_pmu_name() and perf_num_counters() are now unused. Drop them.
>
> Signed-off-by: Marc Zyngier
> ---
> drivers/perf/arm_pmu.c | 30 --
> 1 file changed, 30 deletions(-)
Nice! This was some of the
On Wed, Apr 14, 2021 at 02:44:05PM +0100, Marc Zyngier wrote:
> KVM/arm64 is the sole user of perf_num_counters(), and really
> could do without it. Stop using the obsolete API by relying on
> the existing probing code.
>
> Signed-off-by: Marc Zyngier
> ---
> arch/arm64/kvm/perf.c | 7
On Wed, Apr 21, 2021 at 02:56:16PM +0100, Marc Zyngier wrote:
> On Wed, 21 Apr 2021 14:49:01 +0100,
> Arnd Bergmann wrote:
> >
> > From: Arnd Bergmann
> >
> > The perf_num_counters() function is only defined when CONFIG_PERF_EVENTS
> > is enabled:
> >
> > arch/arm64/kvm/perf.c: In function
On 22.04.21 08:19, Mike Rapoport wrote:
> From: Mike Rapoport
>
> The intended semantics of pfn_valid() is to verify whether there is a
> struct page for the pfn in question and nothing else.
>
> Yet, on arm64 it is used to distinguish memory areas that are mapped in the
> linear map vs those that require
On Thu, Apr 22, 2021 at 1:07 PM Tarun Gupta (SW-GPU) wrote:
>
> On 4/22/2021 12:20 PM, Marc Zyngier wrote:
> > On Thu, 22 Apr 2021 03:02:00 +0100,
> > Gavin Shan wrote:
> >>
> >> Hi Marc,
> >>
> >> On 4/21/21 9:59 PM, Marc
Hi Gavin,

On 2021/4/22 10:12, Gavin Shan wrote:
> Hi Keqian,
>
> On 4/21/21 4:28 PM, Keqian Zhu wrote:
>> On 2021/4/21 14:38, Gavin Shan wrote:
>>> On 4/16/21 12:03 AM, Keqian Zhu wrote:
>>>> The MMIO regions may be unmapped for many reasons and can be remapped
>>>> by stage2 fault path. Map
On Thu, Apr 22, 2021 at 03:00:20PM +0800, Kefeng Wang wrote:
>
> On 2021/4/21 14:51, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Hi,
> >
> > These patches aim to remove CONFIG_HOLES_IN_ZONE and essentially hardwire
> > pfn_valid_within() to 1.
> >
> > The idea is to mark NOMAP pages
On Thu, 22 Apr 2021 03:25:23 +0100,
Gavin Shan wrote:
>
> Hi Keqian,
>
> On 4/21/21 4:36 PM, Keqian Zhu wrote:
> > On 2021/4/21 15:52, Gavin Shan wrote:
> >> On 4/16/21 12:03 AM, Keqian Zhu wrote:
> >>> The MMIO region of a device maybe huge (GB level), try to use
> >>> block mapping in stage2
On Thu, 22 Apr 2021 03:02:00 +0100,
Gavin Shan wrote:
>
> Hi Marc,
>
> On 4/21/21 9:59 PM, Marc Zyngier wrote:
> > On Wed, 21 Apr 2021 07:17:44 +0100,
> > Keqian Zhu wrote:
> >> On 2021/4/21 14:20, Gavin Shan wrote:
> >>> On 4/21/21 12:59 PM, Keqian Zhu wrote:
> On 2020/10/22 0:16,
From: Mike Rapoport

The intended semantics of pfn_valid() is to verify whether there is a
struct page for the pfn in question and nothing else.

Yet, on arm64 it is used to distinguish memory areas that are mapped in the
linear map vs those that require ioremap() to access them.

Introduce a
From: Mike Rapoport

The arm64 version of pfn_valid() differs from the generic one for two
reasons:

* Parts of the memory map are freed during boot. This makes it necessary to
  verify that there is actual physical memory that corresponds to a pfn,
  which is done by querying memblock.
*
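The gap between "does a struct page exist?" and "is it in the linear map?" can be modelled in plain C. Everything below (the region table, the helper names, the addresses) is a toy illustration of the semantics under discussion, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model: a region may be memory (it has a memory map entry) and may
 * additionally be NOMAP (absent from the linear map). */
struct region { uint64_t start, end; bool nomap; };

static struct region regions[] = {
    { 0x40000000, 0x80000000, false },  /* normal RAM */
    { 0x80000000, 0x80200000, true  },  /* NOMAP firmware region */
};

static bool is_memory(uint64_t addr)
{
    for (unsigned i = 0; i < 2; i++)
        if (addr >= regions[i].start && addr < regions[i].end)
            return true;
    return false;
}

static bool is_map_memory(uint64_t addr)
{
    for (unsigned i = 0; i < 2; i++)
        if (addr >= regions[i].start && addr < regions[i].end)
            return !regions[i].nomap;
    return false;
}

#define PAGE_SHIFT 12

/* The conflated behaviour answers "is it in the linear map?"... */
static bool old_pfn_valid(uint64_t pfn) { return is_map_memory(pfn << PAGE_SHIFT); }

/* ...while the intended semantics only asks "does a struct page exist?". */
static bool new_pfn_valid(uint64_t pfn) { return is_memory(pfn << PAGE_SHIFT); }
```

The two definitions disagree exactly on NOMAP pfns, which is why the series gives those pages real, reserved memory map entries first.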
From: Mike Rapoport

The struct pages representing a reserved memory region are initialized
using the reserve_bootmem_range() function. This function is called for each
reserved region just before the memory is freed from memblock to the buddy
page allocator.

The struct pages for MEMBLOCK_NOMAP
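A minimal sketch of the "mark every page of a reserved region" step described above. A toy flag array stands in for the memory map here, and the function name is invented for the sketch; the kernel path operates on real struct pages instead.

```c
#include <assert.h>
#include <stdbool.h>

#define NPAGES 8
static bool page_reserved[NPAGES];  /* stand-in for per-page reserved bits */

/* Walk each pfn of [start_pfn, end_pfn) and mark its page reserved,
 * mirroring what the reserve path does to real memory map entries. */
static void reserve_region_pages(unsigned start_pfn, unsigned end_pfn)
{
    for (unsigned pfn = start_pfn; pfn < end_pfn; pfn++)
        page_reserved[pfn] = true;
}
```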
From: Mike Rapoport

Add a comment describing the semantics of pfn_valid() that clarifies that
pfn_valid() only checks for availability of a memory map entry (i.e. struct
page) for a PFN rather than availability of usable memory backing that PFN.

The most "generic" version of pfn_valid() used by
From: Mike Rapoport

Hi,

These patches aim to remove CONFIG_HOLES_IN_ZONE and essentially hardwire
pfn_valid_within() to 1.

The idea is to mark NOMAP pages as reserved in the memory map and restore
the intended semantics of pfn_valid() to designate availability of struct
page for a pfn.

With
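For context, the macro the cover letter wants to hardwire already collapses to a constant whenever CONFIG_HOLES_IN_ZONE is unset; removing the config option keeps only the second branch. Sketched here as compilable C (the two-branch shape matches the pre-series definition in include/linux/mmzone.h):

```c
#include <assert.h>

#ifdef CONFIG_HOLES_IN_ZONE
#define pfn_valid_within(pfn) pfn_valid(pfn)
#else
/* No holes within a MAX_ORDER-aligned block: every pfn inside it is valid. */
#define pfn_valid_within(pfn) (1)
#endif
```

Compiled without CONFIG_HOLES_IN_ZONE (as in this userspace sketch), every call is the constant 1, which is exactly what the series makes unconditional.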