On Tue, Mar 23, 2021 at 12:08:56PM +, Robin Murphy wrote:
> On 2021-03-23 07:34, Yang Yingliang wrote:
> > When copying more than 128 bytes, src/dst are incremented after
> > each ldp/stp instruction, which costs extra cycles.
> > To improve this, increment src/dst only after every
> > 64 bytes loaded or stored.
>
> This
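The scheduling idea in the patch — moving data in 64-byte chunks and paying for the pointer increments once per chunk rather than once per ldp/stp pair — can be sketched in plain C (an illustration only; the real change is in the arm64 assembly memcpy):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative chunked copy: memcpy(dst, src, 64) stands in for the
 * four ldp/stp pairs, and src/dst advance once per 64-byte chunk
 * instead of after every load/store pair. */
static void copy64_chunks(unsigned char *dst, const unsigned char *src,
                          size_t n)
{
    while (n >= 64) {
        memcpy(dst, src, 64);
        src += 64;              /* single increment per chunk */
        dst += 64;
        n -= 64;
    }
    memcpy(dst, src, n);        /* copy the tail */
}
```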
On Sat, 20 Mar 2021 03:58:48 +0530, Bhaskar Chowdhury wrote:
> s/acurate/accurate/
Applied to arm64 (for-next/fixes), thanks!
[1/1] arm64: cpuinfo: Fix a typo
https://git.kernel.org/arm64/c/d1296f1265f7
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
On Tue, 16 Feb 2021 10:03:50 -0500, Pavel Tatashin wrote:
> v3: - Sync with linux-next where arch_get_mappable_range() was
> introduced.
> v2: - Added test-by Tyler Hicks
> - Addressed comments from Anshuman Khandual: moved check under
> IS_ENABLED(CONFIG_RANDOMIZE_BASE),
On Tue, 16 Mar 2021 12:50:41 -0600, Tom Saeger wrote:
> In commit 94bccc340710 ("iscsi_ibft: make ISCSI_IBFT depend on ACPI instead
> of ISCSI_IBFT_FIND") Kconfig was disentangled to make ISCSI_IBFT selection
> not depend on x86.
>
> Update arm64 acpi documentation, changing IBFT support status
On Fri, 19 Mar 2021 16:50:54 -0400, Pavel Tatashin wrote:
> The ppos points to a position in the old kernel memory (and in case of
> arm64 in the crash kernel since elfcorehdr is passed as a segment). The
> function should update the ppos by the amount that was read. This bug is
> not exposed by
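The contract at issue — a read handler must advance *ppos by the number of bytes it actually read, so sequential reads make progress — can be modelled generically (a sketch, not the vmcore code itself):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Generic positional read: copy up to 'count' bytes from a backing
 * buffer starting at *ppos, and advance *ppos by the amount read. */
static size_t read_at(char *dst, size_t count, size_t *ppos,
                      const char *src, size_t src_len)
{
    size_t avail, n;

    if (*ppos >= src_len)
        return 0;
    avail = src_len - *ppos;
    n = count < avail ? count : avail;
    memcpy(dst, src + *ppos, n);
    *ppos += n;     /* without this step, callers re-read the same data */
    return n;
}
```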
On Fri, 19 Mar 2021 18:41:06 +, Mark Rutland wrote:
> We recently converted arm64 to use arch_stack_walk() in commit:
>
> 5fc57df2f6fd ("arm64: stacktrace: Convert to ARCH_STACKWALK")
>
> The core stacktrace code expects that (when tracing the current task)
> arch_stack_walk() starts a
PTE_S2_MEMATTR(MT_S2_ ## attr); \
Given that this isn't used outside of pgtable.c, I wonder if we should move
it in there, as it's a pretty low-level thing to do now that it takes the
'has_fwb' parameter.
But regardless,
Acked-by: Will Deacon
Will
> avoid programming errors.
>
> Suggested-by: Will Deacon
> Signed-off-by: Quentin Perret
> ---
> arch/arm64/include/asm/kvm_pgtable.h | 2 ++
> arch/arm64/kvm/hyp/pgtable.c | 3 +++
> 2 files changed, 5 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kv
---
> arch/arm64/include/asm/kvm_pgtable.h | 20 +
> arch/arm64/kvm/hyp/pgtable.c | 126 ++-
> 2 files changed, 122 insertions(+), 24 deletions(-)
Acked-by: Will Deacon
Will
> 5 files changed, 24 insertions(+), 1 deletion(-)
> create mode 100644 arch/arm64/kvm/hyp/nvhe/cache.S
Acked-by: Will Deacon
Will
_regs.c | 19 +++
> 5 files changed, 59 insertions(+)
> create mode 100644 arch/arm64/include/asm/kvm_cpufeature.h
Acked-by: Will Deacon
Will
[+Lorenzo]
On Tue, Mar 16, 2021 at 12:50:41PM -0600, Tom Saeger wrote:
> In commit 94bccc340710 ("iscsi_ibft: make ISCSI_IBFT depend on ACPI instead
> of ISCSI_IBFT_FIND") Kconfig was disentangled to make ISCSI_IBFT selection
> not depend on x86.
>
> Update arm64 acpi documentation, changing
On Wed, Mar 17, 2021 at 02:17:13PM +, Quentin Perret wrote:
> In order to further configure stage-2 page-tables, pass flags to the
> init function using a new enum.
>
> The first of these flags allows FWB to be disabled even if the hardware
> supports it as we will need to do so for the host
On Mon, Mar 15, 2021 at 04:56:21PM +, Quentin Perret wrote:
> On Monday 15 Mar 2021 at 16:33:23 (+), Will Deacon wrote:
> > On Mon, Mar 15, 2021 at 02:35:14PM +, Quentin Perret wrote:
> > > We will need to do cache maintenance at EL2 soon, so compile a copy of
On Mon, Mar 15, 2021 at 04:53:18PM +, Quentin Perret wrote:
> On Monday 15 Mar 2021 at 16:36:19 (+), Will Deacon wrote:
> > On Mon, Mar 15, 2021 at 02:35:29PM +, Quentin Perret wrote:
> > > As the host stage 2 will be identity mapped, all the .hyp memory regions
On Mon, Mar 15, 2021 at 02:35:29PM +, Quentin Perret wrote:
> As the host stage 2 will be identity mapped, all the .hyp memory regions
> and/or memory pages donated to protected guests will have to be marked
> invalid in the host stage 2 page-table. At the same time, the hypervisor
> will need a
On Mon, Mar 15, 2021 at 02:35:14PM +, Quentin Perret wrote:
> We will need to do cache maintenance at EL2 soon, so compile a copy of
> __flush_dcache_area at EL2, and provide a copy of arm64_ftr_reg_ctrel0
> as it is needed by the read_ctr macro.
>
> Signed-off-by: Quentin Perret
> ---
>
> arch/arm64/kvm/sys_regs.c | 2 ++
> 2 files changed, 4 insertions(+)
Acked-by: Will Deacon
Will
h | 29 +
> arch/arm64/kvm/hyp/pgtable.c | 89 ++--
> 2 files changed, 114 insertions(+), 4 deletions(-)
Acked-by: Will Deacon
Will
On Sun, 14 Mar 2021 20:26:50 -0500, Alex Elder wrote:
> The last line of ip_fast_csum() calls csum_fold(), forcing the
> type of the argument passed to be u32. But csum_fold() takes a
> __wsum argument (which is __u32 __bitwise for arm64). As long
> as we're forcing the cast, cast it to the
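For reference, csum_fold() collapses a 32-bit ones'-complement accumulator into 16 bits and inverts it; a portable sketch of the generic behaviour (not the arm64 implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Fold a 32-bit checksum accumulator into 16 bits: add the two
 * halves, absorb the carry, then take the ones' complement. */
static uint16_t fold_csum(uint32_t sum)
{
    sum = (sum & 0xffff) + (sum >> 16); /* fold high half into low */
    sum = (sum & 0xffff) + (sum >> 16); /* absorb any carry-out */
    return (uint16_t)~sum;
}
```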
Rob Herring (1):
arm64: perf: Fix 64-bit event counter read truncation
Wei Yongjun (1):
perf/arm_dmc620_pmu: Fix error return code in dmc620_pmu_device_probe()
Will Deacon (2):
arm64: cpufeatures: Fix handling of CONFIG_CMDLINE for idreg overrides
arm64: Drop support for CMDLINE_EXTEND
arch/ar
On Fri, 12 Mar 2021 08:04:21 +, Wei Yongjun wrote:
> Fix to return negative error code -ENOMEM from the error handling
> case instead of 0, as done elsewhere in this function.
Applied to arm64 (for-next/fixes), thanks!
[1/1] perf/arm_dmc620_pmu: Fix error return code in
On Fri, Mar 12, 2021 at 10:13:26AM +, Quentin Perret wrote:
> On Friday 12 Mar 2021 at 09:32:06 (+), Will Deacon wrote:
> > I'm not saying to use the VMID directly, just that allocating half of the
> > pte feels a bit OTT given that the state of things after th
On Fri, Mar 12, 2021 at 05:32:13AM +, Quentin Perret wrote:
> On Thursday 11 Mar 2021 at 19:04:07 (+), Will Deacon wrote:
> > On Wed, Mar 10, 2021 at 05:57:47PM +, Quentin Perret wrote:
> > > + for (level = pgt->start_level; level < KVM_P
On Fri, Mar 12, 2021 at 06:23:00AM +, Quentin Perret wrote:
> On Thursday 11 Mar 2021 at 18:38:36 (+), Will Deacon wrote:
> > On Wed, Mar 10, 2021 at 05:57:45PM +, Quentin Perret wrote:
> > > As the host stage 2 will be identity mapped, all the .hyp memory regions
On Fri, Mar 12, 2021 at 06:34:09AM +, Quentin Perret wrote:
> On Thursday 11 Mar 2021 at 19:36:39 (+), Will Deacon wrote:
> > On Wed, Mar 10, 2021 at 05:57:30PM +, Quentin Perret wrote:
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
On Wed, Mar 10, 2021 at 05:57:30PM +, Quentin Perret wrote:
> Introduce the infrastructure in KVM to enable copying CPU feature
> registers into EL2-owned data-structures, to allow reading sanitised
> values directly at EL2 in nVHE.
>
> Given that only a subset of these features are being read
> arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 2 +
> arch/arm64/kvm/hyp/nvhe/hyp-main.c| 9
> arch/arm64/kvm/hyp/nvhe/mem_protect.c | 33 +
> 5 files changed, 91 insertions(+)
Acked-by: Will Deacon
Will
arch/arm64/kvm/hyp/nvhe/switch.c | 7 +-
> arch/arm64/kvm/hyp/nvhe/tlb.c | 4 +-
> 12 files changed, 319 insertions(+), 7 deletions(-)
> create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> create mode 100644 arch/arm64/kvm/hyp/nvhe/mem_protect.c
I like this a lot more now, thanks:
Acked-by: Will Deacon
Will
On Wed, Mar 10, 2021 at 05:57:47PM +, Quentin Perret wrote:
> Since the host stage 2 will be identity mapped, and since it will own
> most of memory, it would be preferable for performance to try and use large
> block mappings whenever that is possible. To ease this, introduce a new
> helper in
u64 size,
> .arg = &map_data,
> };
>
> - ret = stage2_map_set_prot_attr(prot, &map_data);
> + ret = stage2_set_prot_attr(prot, &map_data.attr);
> if (ret)
> return ret;
(nit: this is now different to hyp_map_set_prot_attr() -- can we do the same
thing there, please?)
With that:
Acked-by: Will Deacon
Will
On Wed, Mar 10, 2021 at 05:57:45PM +, Quentin Perret wrote:
> As the host stage 2 will be identity mapped, all the .hyp memory regions
> and/or memory pages donated to protected guests will have to be marked
> invalid in the host stage 2 page-table. At the same time, the hypervisor
> will need a
Why do you exclude bit 1 from this range?
> it entirely by ensuring to cache the anchor's child upfront.
>
> Suggested-by: Will Deacon
> Signed-off-by: Quentin Perret
> ---
> arch/arm64/kvm/hyp/pgtable.c | 26 --
> 1 file changed, 16 insertions(+),
---
> arch/arm64/kvm/mmu.c | 43 ++--
> 3 files changed, 120 insertions(+), 12 deletions(-)
Acked-by: Will Deacon
Will
arch/arm64/include/asm/assembler.h | 14 +++---
> 1 file changed, 11 insertions(+), 3 deletions(-)
Acked-by: Will Deacon
Will
the hypervisor side of things. In other words, this only implements the
> new hypercalls, but does not make use of them from the host yet. The
> host-side changes will follow in a subsequent patch.
>
> Credits to Will for __pkvm_init_switch_pgd.
>
> Co-authored-by: Will Deacon
> S
create mode 100644 arch/arm64/kvm/hyp/nvhe/page_alloc.c
Eventually, we can replace the refcount with refcount_t, but for now this
looks pretty good:
Acked-by: Will Deacon
Will
> + struct list_head *next)
> +{
> + return true;
> +}
> +
> +bool __list_del_entry_valid(struct list_head *entry)
> +{
> + return true;
> +}
> +#endif
This isn't any worse than disabling DEBUG_LIST for the EL2 object, so as
an initial implementation:
Acked-by: Will Deacon
but we really should have the debug list checks on (probably
unconditionally) for the EL2 code in my opinion.
Will
insertions(+), 42 deletions(-)
Thanks, looks good to me now:
Acked-by: Will Deacon
Will
On Thu, Mar 11, 2021 at 01:22:53PM +0530, Anshuman Khandual wrote:
> On 3/8/21 2:25 PM, Mike Rapoport wrote:
> > On Mon, Mar 08, 2021 at 08:57:53AM +0530, Anshuman Khandual wrote:
> >> Platforms like arm and arm64 have redefined pfn_valid() because their early
> >> memory sections might have
Hi Claire,
On Tue, Feb 09, 2021 at 02:21:30PM +0800, Claire Chang wrote:
> Introduce the new compatible string, restricted-dma-pool, for restricted
> DMA. One can specify the address and length of the restricted DMA memory
> region by restricted-dma-pool in the reserved-memory node.
>
>
On Tue, 9 Mar 2021 17:44:12 -0700, Rob Herring wrote:
> Commit 0fdf1bb75953 ("arm64: perf: Avoid PMXEV* indirection") changed
> armv8pmu_read_evcntr() to return a u32 instead of u64. The result is
> silent truncation of the event counter when using 64-bit counters. Given
> the offending commit
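The failure mode is ordinary integer truncation: once the read path returns u32, the upper half of a 64-bit counter is silently dropped. A generic demonstration (hypothetical function names, not the driver code):

```c
#include <assert.h>
#include <stdint.h>

/* With a u32 return type, bits 63:32 of the counter vanish. */
static uint32_t read_evcntr_u32(uint64_t counter)
{
    return counter;     /* implicit truncation */
}

static uint64_t read_evcntr_u64(uint64_t counter)
{
    return counter;     /* full 64-bit value preserved */
}
```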
On Wed, 10 Mar 2021 11:23:10 +0530, Anshuman Khandual wrote:
> As per ARM ARM DDI 0487G.a, when FEAT_LPA2 is implemented, ID_AA64MMFR0_EL1
> might contain a range of values to describe supported translation granules
> (4K and 16K pages sizes in particular) instead of just enabled or disabled
>
On Tue, Mar 09, 2021 at 07:32:16PM -0500, Mark Salter wrote:
> I ran into an early boot soft lockup on a Qualcomm Amberwing using a v5.11
> kernel configured for 52-bit VA. This turned into a panic with a v5.12-rc2
> kernel.
>
> The problem is that when we fall back to 48-bit VA, idmap_t0sz is
On Mon, 8 Mar 2021 17:10:23 +0100, Andrey Konovalov wrote:
> When CONFIG_DEBUG_VIRTUAL is enabled, the default page_to_virt() macro
> implementation from include/linux/mm.h is used. That definition doesn't
> account for KASAN tags, which leads to no tags on page_alloc allocations.
>
> Provide an
On Mon, Mar 08, 2021 at 02:42:00PM +, Marc Zyngier wrote:
> On Fri, 05 Mar 2021 14:36:09 +,
> Anshuman Khandual wrote:
> > - switch (cpuid_feature_extract_unsigned_field(mmfr0, tgran_2)) {
> > - default:
> > - case 1:
> > + tgran_2 = cpuid_feature_extract_unsigned_field(mmfr0,
; HAVE_SETUP_PER_CPU_AREA.
>
> Signed-off-by: Pingfan Liu
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Greg Kroah-Hartman
> Cc: "Rafael J. Wysocki"
> Cc: Atish Patra
> Cc: linux-kernel@vger.kernel.org
> To: linux-arm-ker...@lists.infradead.org
> ---
>
On Tue, Mar 09, 2021 at 09:46:43AM +0530, Viresh Kumar wrote:
> On 08-03-21, 14:52, Will Deacon wrote:
> > On Mon, Mar 01, 2021 at 12:21:17PM +0530, Viresh Kumar wrote:
> > > +EXPORT_SYMBOL_GPL(topology_set_scale_freq_source);
> >
> > I don't get why you nee
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Ard Biesheuvel
> Cc: Mark Rutland
> Cc: James Morse
> Cc: Robin Murphy
> Cc: Jérôme Glisse
> Cc: Dan Williams
> Cc: David Hildenbrand
> Cc: Mike Rapoport
> Cc: Veronika Kabatova
> Cc: linux-arm-ker...@l
On Mon, Jan 25, 2021 at 10:10:43PM +0800, Yanan Wang wrote:
> With a guest translation fault, we don't really need the memcache pages
> when only installing a new entry to the existing page table or replacing
> the table entry with a block entry. And with a guest permission fault,
> we also don't
On Mon, Jan 25, 2021 at 10:10:44PM +0800, Yanan Wang wrote:
> After dirty-logging is stopped for a VM configured with huge mappings,
> KVM will recover the table mappings back to block mappings. As we only
> replace the existing page tables with a block entry and the cacheability
> has not been
85 ++--
> include/linux/arch_topology.h | 14 +++-
> 4 files changed, 134 insertions(+), 80 deletions(-)
For the arm64 bits:
Acked-by: Will Deacon
However...
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index de8587cc119e..8f62db
On Wed, 3 Mar 2021 13:49:25 +, Will Deacon wrote:
> This is version two of the series I previously posted here:
>
> https://lore.kernel.org/r/20210225125921.13147-1-w...@kernel.org
>
> The main change since v1 is that, rather than "fix" the FDT code to
> fol
On Mon, 1 Mar 2021 10:36:32 +0530, Anshuman Khandual wrote:
> There is already an ARCH_WANT_HUGE_PMD_SHARE which is being selected for
> applicable configurations. Hence just drop the other redundant entry.
Applied to arm64 (for-next/fixes), thanks!
[1/1] arm64/mm: Drop redundant
On Mon, 1 Mar 2021 16:55:14 +0530, Anshuman Khandual wrote:
> Currently without THP being enabled, MAX_ORDER via FORCE_MAX_ZONEORDER gets
> reduced to 11, which falls below HUGETLB_PAGE_ORDER for certain 16K and 64K
> page size configurations. This is problematic and throws up the following
>
sues/1317
> Reported-by: Nathan Chancellor
> Suggested-by: Marc Zyngier
> Suggested-by: Ard Biesheuvel
> Signed-off-by: Sami Tolvanen
> ---
> arch/arm64/kvm/hyp/entry.S | 6 --
> 1 file changed, 4 insertions(+), 2 deletions(-)
Acked-by: Will Deacon
Will
On Mon, Mar 08, 2021 at 01:38:07PM +, Quentin Perret wrote:
> On Monday 08 Mar 2021 at 12:46:07 (+), Will Deacon wrote:
> > > > > +static int host_stage2_idmap(u64 addr)
> > > > > +{
> > > > > + enum kvm_pgtable_prot prot = KV
ptable
> >> range of values (depending on whether the field is signed or unsigned) now
> >> represented with ID_AA64MMFR0_TGRAN_SUPPORTED_[MIN..MAX] pair. While here,
> >> also fix similar situations in EFI stub and KVM as well.
> >>
> >> Cc: Catalin M
" for features of this kind, which are
> not needed on some architectures.
>
> Cc: Mel Gorman
> Cc: Andy Lutomirski
> Cc: Catalin Marinas
> Cc: Will Deacon
> Signed-off-by: Barry Song
> ---
> Documentation/features/arch-support.txt| 1 +
> Document
On Mon, Mar 08, 2021 at 09:22:29AM +, Quentin Perret wrote:
> On Friday 05 Mar 2021 at 19:29:06 (+), Will Deacon wrote:
> > On Tue, Mar 02, 2021 at 02:59:59PM +, Quentin Perret wrote:
> > > +static __always_inline void __load_host_stage2(void)
> > > +{
>
On Tue, Mar 02, 2021 at 02:59:59PM +, Quentin Perret wrote:
> When KVM runs in protected nVHE mode, make use of a stage 2 page-table
> to give the hypervisor some control over the host memory accesses. The
> host stage 2 is created lazily using large block mappings if possible,
> and will
T_NONE in MMIO range.
> + */
> + if (!find_mem_range(start, &r1) || !find_mem_range(end, &r2))
> + return -EINVAL;
> + if (r1.start != r2.start)
> + return -EINVAL;
Feels like this should be in a helper to determine whether or not a range is
solely covered by memory.
Either way:
Acked-by: Will Deacon
Will
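The helper Will is asking for — a predicate saying whether a [start, end] pair falls inside a single memory range — might look like this (hypothetical types and a stubbed lookup; the real find_mem_range() walks the kernel's memblock-derived ranges):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct mem_range {
    uint64_t start;
    uint64_t end;
};

/* Stub lookup: one fixed region for illustration. Returns true and
 * fills *r when addr falls inside a known memory range. */
static bool find_mem_range(uint64_t addr, struct mem_range *r)
{
    const struct mem_range mem = { 0x40000000ULL, 0x80000000ULL };

    if (addr < mem.start || addr >= mem.end)
        return false;
    *r = mem;
    return true;
}

/* The suggested helper: true iff both endpoints resolve to the same
 * memory range, i.e. the span is solely covered by memory. */
static bool range_is_memory(uint64_t start, uint64_t end)
{
    struct mem_range r1, r2;

    if (!find_mem_range(start, &r1) || !find_mem_range(end, &r2))
        return false;
    return r1.start == r2.start;
}
```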
On Fri, Mar 05, 2021 at 09:52:12AM +, Quentin Perret wrote:
> On Thursday 04 Mar 2021 at 20:00:45 (+), Will Deacon wrote:
> > On Tue, Mar 02, 2021 at 02:59:56PM +, Quentin Perret wrote:
> > > Once we start unmapping portions of memory from the host stage 2
we currently do not have a use-case for it.
>
> Signed-off-by: Quentin Perret
> ---
> arch/arm64/kvm/perf.c | 3 ++-
> arch/arm64/kvm/pmu.c | 8
> 2 files changed, 6 insertions(+), 5 deletions(-)
Acked-by: Will Deacon
Will
On Fri, Mar 05, 2021 at 03:03:36PM +, Quentin Perret wrote:
> On Friday 05 Mar 2021 at 14:39:42 (+), Will Deacon wrote:
> > On Tue, Mar 02, 2021 at 02:59:58PM +, Quentin Perret wrote:
> > > + /* Reduce the kvm_mem_range to a granule size */
> > > + ret = __
On Tue, Mar 02, 2021 at 02:59:58PM +, Quentin Perret wrote:
> Add a new map function to the KVM page-table library that allows block
> identity-mappings to be created greedily. This will be useful for creating
> the host stage 2 page-table lazily, as it will own most of memory and
> will always be
> Suggested-by: Nathan Chancellor
> Suggested-by: David Laight
> Suggested-by: Will Deacon
I'm still reasonably opposed to this patch, so please don't add my
"Suggested-by" here as, if I were to suggest anything, it would be not
to apply this patch :)
I still don't see why SLS
> Signed-off-by: Quentin Perret
> ---
> arch/arm64/kernel/vmlinux.lds.S | 22 +-
> 1 file changed, 9 insertions(+), 13 deletions(-)
With the typo fixed:
Acked-by: Will Deacon
Will
On Tue, Mar 02, 2021 at 02:59:57PM +, Quentin Perret wrote:
> In order to ease its re-use in other code paths, refactor
> stage2_map_set_prot_attr() to not depend on a stage2_map_data struct.
> No functional change intended.
>
> Signed-off-by: Quentin Perret
> ---
>
On Tue, Mar 02, 2021 at 02:59:56PM +, Quentin Perret wrote:
> Once we start unmapping portions of memory from the host stage 2 (such
> as e.g. the hypervisor memory sections, or pages that belong to
> protected guests), we will need a way to track page ownership. And
> given that all mappings
otal_pages();
> +
> /* Allow 1 GiB for private mappings */
> res += __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
>
> return res;
> }
> +
> +static inline unsigned long host_s2_mem_pgtable_pages(void)
> +{
> + return __hyp_pgtable_total_pages() + 16;
Is this 16 due to the possibility of a concatenated pgd? If so, please add
a comment to that effect.
With that:
Acked-by: Will Deacon
Will
eserved_mem.c | 19 +++
> 1 file changed, 19 insertions(+)
Acked-by: Will Deacon
Will
ertions(+), 5 deletions(-)
Acked-by: Will Deacon
Will
if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
> + return true;
> +
> + if (!__get_fault_info(esr, &vcpu->arch.fault))
> + return false;
> +
> return true;
Just return __get_fault_info(esr, &vcpu->arch.fault); here.
With that:
Acked-by: Will Deacon
Will
big header in v2, thanks!
Acked-by: Will Deacon
Will
On Tue, Mar 02, 2021 at 02:59:46PM +, Quentin Perret wrote:
> Previous commits have introduced infrastructure to enable the EL2 code
> to manage its own stage 1 mappings. However, this was preliminary work,
> and none of it is currently in use.
>
> Put all of this together by elevating the
; the hypervisor side of things. In other words, this only implements the
> new hypercalls, but does not make use of them from the host yet. The
> host-side changes will follow in a subsequent patch.
>
> Credits to Will for __pkvm_init_switch_pgd.
>
> Co-authored-by: Will Deacon
On Tue, Mar 02, 2021 at 02:59:42PM +, Quentin Perret wrote:
> When memory protection is enabled, the hyp code will require a basic
> form of memory management in order to allocate and free memory pages at
> EL2. This is needed for various use-cases, including the creation of hyp
> mappings or
create mode 100644 arch/arm64/kvm/hyp/include/nvhe/early_alloc.h
> create mode 100644 arch/arm64/kvm/hyp/include/nvhe/memory.h
> create mode 100644 arch/arm64/kvm/hyp/nvhe/early_alloc.c
Acked-by: Will Deacon
Will
arch/arm64/kernel/vmlinux.lds.S | 52 ---
> arch/arm64/kvm/arm.c | 14 -
> arch/arm64/kvm/hyp/nvhe/hyp.lds.S | 1 +
> 4 files changed, 49 insertions(+), 19 deletions(-)
Acked-by: Will Deacon
Will
:Convert a physical address into a virtual address as
> + * accessible in the current context.
s/as accessible/mapped/
With those changes:
Acked-by: Will Deacon
Will
> 1 file changed, 18 insertions(+), 12 deletions(-)
Acked-by: Will Deacon
Will
On Thu, Mar 04, 2021 at 09:12:31AM +0100, David Hildenbrand wrote:
> On 04.03.21 04:31, Anshuman Khandual wrote:
> > On 3/4/21 2:54 AM, Will Deacon wrote:
> > > On Wed, Mar 03, 2021 at 07:04:33PM +, Catalin Marinas wrote:
> > > > On Thu, Feb 11, 2021 at 01:35
On Wed, Mar 03, 2021 at 04:30:21PM -0600, Rob Herring wrote:
> On Wed, Mar 3, 2021 at 7:50 AM Will Deacon wrote:
> >
> > The built-in kernel commandline (CONFIG_CMDLINE) can be configured in
> > three different ways:
> >
> > 1. CMDLINE_FORCE: Use CONFIG_CMDLI
@@ -223,7 +223,7 @@ static inline int __kvm_pgtable_visit(struct
> > > kvm_pgtable_walk_data *data,
> > > > goto out;
> > > >
> > > > if (!table) {
> > > > - data->addr += kvm_granule_size(level);
> > > > + data->addr = ALIGN(data->addr, kvm_granule_size(level));
> >
> > What if previous data->addr is already aligned with
> > kvm_granule_size(level)?
> > Hence a deadloop? Am I missing anything else?
>
> Indeed, well spotted. I'll revert to your original suggestion
> if everybody agrees...
Heh, yeah, at least one of us is awake.
For the original patch, with the updated (including typo fix) commit
message:
Acked-by: Will Deacon
If that still counts for anything!
Will
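The deadloop Jia spotted is mechanical: when data->addr is already granule-aligned, ALIGN() maps it to itself and the walker never advances, whereas the original += form (or aligning addr + 1) always makes progress. A minimal model (generic C, not the kernel walker):

```c
#include <assert.h>
#include <stdint.h>

#define GRANULE        0x1000ULL
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Buggy step: an already-aligned address aligns to itself, so the
 * walk loops forever on aligned input. */
static uint64_t next_addr_buggy(uint64_t addr)
{
    return ALIGN_UP(addr, GRANULE);
}

/* Progressing step: aligning past the current byte guarantees the
 * result is strictly greater than addr. */
static uint64_t next_addr_fixed(uint64_t addr)
{
    return ALIGN_UP(addr + 1, GRANULE);
}
```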
On Wed, Mar 03, 2021 at 07:04:33PM +, Catalin Marinas wrote:
> On Thu, Feb 11, 2021 at 01:35:56PM +0100, David Hildenbrand wrote:
> > On 11.02.21 13:10, Anshuman Khandual wrote:
> > > On 2/11/21 5:23 PM, Will Deacon wrote:
> > > > ... and dropped. These pa
struct
> kvm_pgtable_walk_data *data,
> goto out;
>
> if (!table) {
> - data->addr += kvm_granule_size(level);
> + data->addr = ALIGN(data->addr, kvm_granule_size(level));
> goto out;
> }
If Jia is happy with it, please feel free to add:
Acked-by: Will Deacon
Will
[+Marc]
On Tue, Mar 02, 2021 at 02:55:43PM +, Ashish Kalra wrote:
> On Fri, Feb 26, 2021 at 09:44:41AM -0800, Sean Christopherson wrote:
> > On Fri, Feb 26, 2021, Ashish Kalra wrote:
> > > On Thu, Feb 25, 2021 at 02:59:27PM -0800, Steve Rutherford wrote:
> > > > On Thu, Feb 25, 2021 at 12:20
On Tue, Mar 02, 2021 at 05:25:20PM +, Christophe Leroy wrote:
> This patch adds an option to prepend text to the command
> line instead of appending it.
>
> Signed-off-by: Christophe Leroy
> ---
> include/linux/cmdline.h | 5 -
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
>
On Wed, Mar 03, 2021 at 06:57:09PM +0100, Christophe Leroy wrote:
> Le 03/03/2021 à 18:46, Will Deacon a écrit :
> > On Wed, Mar 03, 2021 at 06:38:16PM +0100, Christophe Leroy wrote:
> > > Le 03/03/2021 à 18:28, Will Deacon a écrit :
> > > > On Tue, Mar 02, 2021 a
On Tue, Mar 02, 2021 at 05:25:17PM +, Christophe Leroy wrote:
> This code provides architectures with a way to build command line
> based on what is built in the kernel and what is handed over by the
> bootloader, based on selected compile-time options.
>
> Signed-off-by: Christophe Leroy
>
On Tue, Mar 02, 2021 at 05:25:22PM +, Christophe Leroy wrote:
> Most architectures have similar boot command line manipulation
> options. This patch adds the definition in init/Kconfig, gated by
> CONFIG_HAVE_CMDLINE that the architectures can select to use them.
>
> In order to use this, a
On Wed, Mar 03, 2021 at 06:38:16PM +0100, Christophe Leroy wrote:
> Le 03/03/2021 à 18:28, Will Deacon a écrit :
> > On Tue, Mar 02, 2021 at 05:25:17PM +, Christophe Leroy wrote:
> > > This code provides architectures with a way to build command line
> > > based on
On Tue, Mar 02, 2021 at 05:25:17PM +, Christophe Leroy wrote:
> This code provides architectures with a way to build command line
> based on what is built in the kernel and what is handed over by the
> bootloader, based on selected compile-time options.
>
> Signed-off-by: Christophe Leroy
>
Cc: Frank Rowand
Cc: Arnd Bergmann
Cc: Palmer Dabbelt
Cc: Greg Kroah-Hartman
Cc: Catalin Marinas
Cc:
Cc:
Cc:
Will Deacon (2):
arm64: cpufeatures: Fix handling of CONFIG_CMDLINE for idreg overrides
arm64: Drop support for CMDLINE_EXTEND
arch/arm64/Kconfig | 6 -
arch/arm64
Cc: Max Uvarov
Cc: Rob Herring
Cc: Ard Biesheuvel
Cc: Marc Zyngier
Cc: Doug Anderson
Cc: Tyler Hicks
Cc: Frank Rowand
Cc: Catalin Marinas
Link:
https://lore.kernel.org/r/CAL_JsqJX=TCCs7=gg486r9tn4nyscmtclnfqjf9crskkpq-...@mail.gmail.com
Signed-off-by: Will Deacon
---
arch/arm64/Kconfig
and following the same logic as that used by the EFI stub.
Reviewed-by: Marc Zyngier
Fixes: 33200303553d ("arm64: cpufeature: Add an early command-line cpufeature
override facility")
Signed-off-by: Will Deacon
---
arch/arm64/kernel/idreg-override.c | 44 +---
On Tue, Mar 02, 2021 at 05:33:35PM -0500, Steven Rostedt wrote:
> On Tue, 2 Mar 2021 17:30:58 -0500
> Steven Rostedt wrote:
>
> > I just realized that I received this patch twice, and thought it was the
> > same patch! Chen was three days ahead of you, so he gets the credit ;-)
> >
> >
On Wed, Mar 03, 2021 at 09:54:25AM +, Marc Zyngier wrote:
> Hi Jia,
>
> On Wed, 03 Mar 2021 02:42:25 +,
> Jia He wrote:
> >
> > If the start addr is not aligned with the granule size of that level,
> > the loop step size should be adjusted to the boundary instead of a simple
> >
On Wed, Mar 03, 2021 at 10:42:25AM +0800, Jia He wrote:
> If the start addr is not aligned with the granule size of that level,
> the loop step size should be adjusted to the boundary instead of a simple
> kvm_granule_size(level) increment. Otherwise, some mmu entries might miss
> the chance to be walked