On Mon, Jan 6, 2020 at 12:19 PM Stefano Stabellini
wrote:
>
> On Thu, 2 Jan 2020, Pavel Tatashin wrote:
> > The arm and arm64 versions of hypercall.h are missing the include
> > guards. This is needed because C inlines for privcmd_call are going to
> > be added to the file
counterparts, it
is a good idea to check that PAN is in the correct state on every
enable/disable call.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 39
arch/arm64/kernel/entry.S | 27 ++-
arch/arm64/lib
.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/cacheflush.h | 19 +++
arch/arm64/mm/cache.S | 27 +--
arch/arm64/mm/flush.c | 1 +
3 files changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/arm64/include/asm
__flush_icache_range().
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/cacheflush.h | 5 +
arch/arm64/mm/cache.S | 14 --
arch/arm64/mm/flush.c | 2 +-
3 files changed, 2 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/cacheflush.h
The arm and arm64 versions of hypercall.h are missing the include
guards. This is needed because C inlines for privcmd_call are going to
be added to the files.
Signed-off-by: Pavel Tatashin
Reviewed-by: Julien Grall
---
arch/arm/include/asm/xen/hypercall.h | 4
arch/arm64/include/asm
tions to have a C wrapper.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 22
arch/arm64/include/asm/cacheflush.h | 39 +---
arch/arm64/mm/cache.S | 36 ++---
arch/arm64/mm/fl
een.com
v3:
https://lore.kernel.org/lkml/20191127184453.229321-1-pasha.tatas...@soleen.com
v2:
https://lore.kernel.org/lkml/20191122022406.590141-1-pasha.tatas...@soleen.com
v1:
https://lore.kernel.org/lkml/20191121184805.414758-1-pasha.tatas...@soleen.com
Pavel Tatashin (6):
arm/arm64/xen: hyper
privcmd_call requires enabling access to userspace for the
duration of the hypercall.
Currently, this is done via assembly macros. Change it to C
inlines instead.
Signed-off-by: Pavel Tatashin
Acked-by: Stefano Stabellini
Reviewed-by: Julien Grall
---
arch/arm/include/asm/xen/hypercall.h
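The conversion described in this patch amounts to bracketing the hypercall with an enable/disable pair in ordinary C control flow instead of duplicated assembly. A minimal user-space sketch of that shape — all names here (`uaccess_enable`, `do_hypercall`, the `uaccess_open` flag) are hypothetical stand-ins for the kernel symbols, not the real implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the per-CPU uaccess state the real macros toggle. */
static bool uaccess_open;

static inline void uaccess_enable(void)  { uaccess_open = true;  }
static inline void uaccess_disable(void) { uaccess_open = false; }

/* Stand-in for the actual HVC/SMC trap into the hypervisor. */
static long do_hypercall(unsigned long op)
{
	assert(uaccess_open);	/* the hypercall touches user memory */
	return (long)op;
}

/* The C-inline shape of privcmd_call: open the window, trap, close it. */
static inline long privcmd_call(unsigned long op)
{
	long ret;

	uaccess_enable();
	ret = do_hypercall(op);
	uaccess_disable();
	return ret;
}
```

The real kernel code uses uaccess_ttbr0_enable()/uaccess_ttbr0_disable(); the point is only that, as a C inline, the bracketing logic is visible to the compiler and shared between arm and arm64 instead of being open-coded per architecture.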
> > /*
> > - * Whenever we re-enter userspace, the domains should always be
> > + * Whenever we re-enter kernel, the domains should always be
>
> This feels unrelated to the rest of the patch and probably wants an
> explanation, so I think it wants to be in a separate patch.
I wi
On Mon, Dec 16, 2019 at 3:41 PM Julien Grall wrote:
>
> Hello,
>
> On 04/12/2019 23:20, Pavel Tatashin wrote:
> > privcmd_call requires enabling access to userspace for the
> > duration of the hypercall.
> >
> > Currently, this is done via assembly macros
The arm and arm64 versions of hypercall.h are missing the include
guards. This is needed because C inlines for privcmd_call are going to
be added to the files.
Also fix a comment.
Signed-off-by: Pavel Tatashin
---
arch/arm/include/asm/assembler.h | 2 +-
arch/arm/include/asm/xen
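Include guards matter here because a header that defines `static inline` functions must survive being pulled in twice. A single-file demonstration of the guard pattern being added — the guard name and the stub function are hypothetical, chosen only to show the mechanism:

```c
#include <assert.h>

/* First "inclusion": the guard is not yet defined, so the body is kept. */
#ifndef _ASM_XEN_HYPERCALL_H_DEMO
#define _ASM_XEN_HYPERCALL_H_DEMO
static inline int privcmd_call_stub(void) { return 42; }
#endif

/* Second "inclusion": the guard is now defined, so the duplicate
 * definition below is skipped and the file still compiles. */
#ifndef _ASM_XEN_HYPERCALL_H_DEMO
static inline int privcmd_call_stub(void) { return -1; } /* never compiled */
#endif
```

Without the guard, the second definition would be a redefinition error the moment two headers both include hypercall.h.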
vious discussions:
v3:
https://lore.kernel.org/lkml/20191127184453.229321-1-pasha.tatas...@soleen.com
v2:
https://lore.kernel.org/lkml/20191122022406.590141-1-pasha.tatas...@soleen.com
v1:
https://lore.kernel.org/lkml/20191121184805.414758-1-pasha.tatas...@soleen.com
Pavel Tatashin (6):
arm/arm6
tions to have a C wrapper.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 22 -
arch/arm64/include/asm/cacheflush.h | 35 ---
arch/arm64/mm/cache.S | 36 ++--
arch/arm64/mm/fl
__flush_icache_range().
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/cacheflush.h | 5 +
arch/arm64/mm/cache.S | 14 --
arch/arm64/mm/flush.c | 2 +-
3 files changed, 2 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/cacheflush.h
privcmd_call requires enabling access to userspace for the
duration of the hypercall.
Currently, this is done via assembly macros. Change it to C
inlines instead.
Signed-off-by: Pavel Tatashin
Acked-by: Stefano Stabellini
---
arch/arm/include/asm/xen/hypercall.h | 6 ++
arch/arm/xen
.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/cacheflush.h | 19 +++
arch/arm64/mm/cache.S | 27 +--
arch/arm64/mm/flush.c | 1 +
3 files changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/arm64/include/asm
counterparts, it
is a good idea to check that PAN is in the correct state on every
enable/disable call.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 39
arch/arm64/kernel/entry.S | 27 ++-
arch/arm64/lib
On Thu, Nov 28, 2019 at 9:51 AM Mark Rutland wrote:
>
> On Wed, Nov 27, 2019 at 01:44:52PM -0500, Pavel Tatashin wrote:
> > We currently duplicate the logic to enable/disable uaccess via TTBR0,
> > with C functions and assembly macros. This is a maintenance burden
> >
On Fri, Nov 29, 2019 at 10:05 AM Julien Grall wrote:
>
> Hi,
>
> On 27/11/2019 18:44, Pavel Tatashin wrote:
> > diff --git a/arch/arm64/include/asm/xen/hypercall.h
> > b/arch/arm64/include/asm/xen/hypercall.h
> > index 3522cbaed316..1a74fb28607f 100644
> &
On Fri, Nov 29, 2019 at 10:10 AM Andrew Cooper
wrote:
>
> On 29/11/2019 15:05, Julien Grall wrote:
> > Hi,
> >
> > On 27/11/2019 18:44, Pavel Tatashin wrote:
> >> diff --git a/arch/arm64/include/asm/xen/hypercall.h
> >> b/arch/arm64/include/a
Sorry, forgot to set the subject prefix correctly. It should be: [PATCH v3 0/3].
On Wed, Nov 27, 2019 at 1:44 PM Pavel Tatashin
wrote:
>
> Changelog
> v3:
> - Added Acked-by from Stefano Stabellini
> - Addressed comments from Mark Rutland
> v2:
>
privcmd_call requires to enable access to userspace for the
duration of the hypercall.
Currently, this is done via assembly macros. Change it to C
inlines instead.
Signed-off-by: Pavel Tatashin
Acked-by: Stefano Stabellini
---
arch/arm/include/asm/assembler.h | 2 +-
arch/arm/include
counterparts, it
is a good idea to check that PAN is in the correct state on every
enable/disable call.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 39
arch/arm64/kernel/entry.S | 27 ++-
arch/arm64/lib
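The hardening this patch proposes — verifying that PAN is in the expected state on every enable/disable — can be sketched in user space. Here `pan_on` and the assert-style checks stand in for the real PSTATE.PAN reads and WARN_ON; every name is a hypothetical analogue, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>

static bool pan_on = true;	/* PAN is normally set while in the kernel */

static inline void uaccess_enable_checked(void)
{
	assert(pan_on);		/* WARN-style check: PAN must still be set */
	pan_on = false;		/* clearing PAN opens the uaccess window */
}

static inline void uaccess_disable_checked(void)
{
	assert(!pan_on);	/* WARN-style check: window must be open */
	pan_on = true;		/* restore PAN on the way out */
}
```

The check catches unbalanced enable/enable or disable/disable sequences, which is exactly the class of bug that is easy to introduce when the logic lives in duplicated assembly macros.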
m ASM macros to C inlines.
These patches apply against linux-next. I boot tested ARM64, and
compile tested ARM change
Pavel Tatashin (3):
arm/arm64/xen: use C inlines for privcmd_call
arm64: remove uaccess_ttbr0 asm macros from cache functions
arm64: remove the rest of asm-uaccess.h
arc
tions to have a C wrapper.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 22 ---
arch/arm64/include/asm/cacheflush.h | 39 --
arch/arm64/mm/cache.S | 41 +---
arch/arm64/mm/fl
On Wed, Nov 27, 2019 at 12:01 PM Mark Rutland wrote:
>
> On Wed, Nov 27, 2019 at 11:09:35AM -0500, Pavel Tatashin wrote:
> > On Wed, Nov 27, 2019 at 11:03 AM Mark Rutland wrote:
> > >
> > > On Wed, Nov 27, 2019 at 10:31:54AM -0500, Pavel Tatashin wrote:
> >
On Wed, Nov 27, 2019 at 11:03 AM Mark Rutland wrote:
>
> On Wed, Nov 27, 2019 at 10:31:54AM -0500, Pavel Tatashin wrote:
> > On Wed, Nov 27, 2019 at 10:12 AM Mark Rutland wrote:
> > >
> > > On Thu, Nov 21, 2019 at 09:24:06PM -0500, Pavel Tatashin wrote:
> >
On Wed, Nov 27, 2019 at 10:12 AM Mark Rutland wrote:
>
> On Thu, Nov 21, 2019 at 09:24:06PM -0500, Pavel Tatashin wrote:
> > The __uaccess_ttbr0_disable and __uaccess_ttbr0_enable
> > are the last two macros defined in asm-uaccess.h.
> >
> > Replace them with C wrap
Hi Mark,
Thank you for reviewing this work.
> A commit message should provide rationale, rather than just a
> description of the patch. Something like:
>
> | We currently duplicate the logic to enable/disable uaccess via TTBR0,
> | with C functions and assembly macros. This is a maintenance burd
for
this purpose to thread_info.
3. Keep as is, and do not add extra overhead for this hardening.
Thank you,
Pasha
On Thu, Nov 21, 2019 at 9:24 PM Pavel Tatashin
wrote:
>
> Changelog
> v2:
> - Addressed Russell King's concern by not adding
> uaccess_* to ARM.
> > That may be, but be very careful that you only use them in ARMv7-only
> > code. Using them elsewhere is unsafe as the domain register is used
> > for other purposes, and merely blatting over it (as your
> > uaccess_enable and uaccess_disable functions do) is unsafe.
>
> In fact, I'll turn that
ested ARM changes.
Pavel Tatashin (3):
arm/arm64/xen: use C inlines for privcmd_call
arm64: remove uaccess_ttbr0 asm macros from cache functions
arm64: remove the rest of asm-uaccess.h
arch/arm/include/asm/assembler.h | 2 +-
arch/arm/include/asm/xen/hypercall.h | 10 +
arch/ar
privcmd_call requires enabling access to userspace for the
duration of the hypercall.
Currently, this is done via assembly macros. Change it to C
inlines instead.
Signed-off-by: Pavel Tatashin
---
arch/arm/include/asm/assembler.h | 2 +-
arch/arm/include/asm/uaccess.h | 32
The __uaccess_ttbr0_disable and __uaccess_ttbr0_enable
are the last two macros defined in asm-uaccess.h.
Replace them with C wrappers and call C functions from
kernel_entry and kernel_exit.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 38
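Replacing the asm macros with C wrappers called from kernel_entry/kernel_exit reduces the entry path to a pair of small hooks. A user-space analogue of that structure — `user_access_allowed`, `kernel_entry`, and `kernel_exit` are hypothetical stand-ins for the real entry assembly and TTBR0 switch:

```c
#include <assert.h>
#include <stdbool.h>

static bool user_access_allowed = true;	/* state on entry from EL0 */

/* C replacements for the __uaccess_ttbr0_* asm macros. */
static inline void uaccess_ttbr0_disable(void) { user_access_allowed = false; }
static inline void uaccess_ttbr0_enable(void)  { user_access_allowed = true;  }

/* kernel_entry/kernel_exit now just call the C wrappers instead of
 * open-coding the TTBR0 manipulation in assembly. */
static void kernel_entry(void) { uaccess_ttbr0_disable(); }
static void kernel_exit(void)  { uaccess_ttbr0_enable();  }
```

Once the last callers go through these wrappers, the asm-uaccess.h macros have no remaining users and the header can be removed, which is what the series does.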
privcmd_call requires enabling access to userspace for the
duration of the hypercall.
Currently, this is done via assembly macros. Change it to C
inlines instead.
Signed-off-by: Pavel Tatashin
---
arch/arm/include/asm/assembler.h | 2 +-
arch/arm/include/asm/xen/hypercall.h | 10
> This is not related to arm64 or to the changes in the description,
> but the change itself is OK. Whether you keep it in this patch
> or choose to split it out, feel free to add
>
> Acked-by: Max Filippov # for xtensa bits
Sorry, this was an accidental change. I will remove it from the next
version.
Convert the remaining uaccess_* calls from ASM macros to C inlines.
These patches apply against linux-next. I boot tested ARM64, and
compile tested ARM changes.
Pavel Tatashin (3):
arm/arm64/xen: use C inlines for privcmd_call
arm64: remove uaccess_ttbr0 asm macros from cache functions
Replace uaccess_ttbr0_disable/uaccess_ttbr0_enable with
inline variants, and remove the asm macros.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 22
arch/arm64/include/asm/cacheflush.h | 38 +---
arch/arm64/mm/cache.S
The __uaccess_ttbr0_disable and __uaccess_ttbr0_enable
are the last two macros defined in asm-uaccess.h.
Replace them with C wrappers and call C functions from
kernel_entry and kernel_exit.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 38
> > +#ifdef CONFIG_CPU_SW_DOMAIN_PAN
> > +static __always_inline void uaccess_enable(void)
> > +{
> > + unsigned long val = DACR_UACCESS_ENABLE;
> > +
> > + asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> > + isb();
> > +}
> > +
> > +static __always_inline void uaccess_disable(void)
> > +{
> > +	unsigned long val = DACR_UACCESS_DISABLE;
> > +
> > +	asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (val));
> > +	isb();
> > +}
Replace uaccess_ttbr0_disable/uaccess_ttbr0_enable with
inline variants, and remove the asm macros.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/asm-uaccess.h | 22
arch/arm64/include/asm/cacheflush.h | 38 +---
arch/arm64/mm/cache.S
itialization is enabled, the
rest of the struct pages will be initialized later in boot, once
page_alloc_init_late() is called.
xen_after_bootmem() walks the page-table pages and marks them pinned.
Signed-off-by: Pavel Tatashin
---
arch/x86/include/asm/x86_init.h | 2 ++
arch/x86/kernel/
ise it would be all
zero.
CONFIG_DEBUG_VM_PGFLAGS=y
Verifies that we do not access struct page flags while memory is still
poisoned (struct pages are not initialized yet).
Pavel Tatashin (1):
xen, mm: Allow deferred page initialization for xen pv domains
arch/x86/include/asm/x86_init.h |
Hi Juergen,
Thank you for taking a look at this patch, I will address your
comments, and send out an updated patch.
>> extern void default_banner(void);
>>
>> +static inline void paravirt_after_bootmem(void)
>> +{
>> + pv_init_ops.after_bootmem();
>> +}
>> +
>
> Putting this in the paravirt
not initialized yet).
Pavel Tatashin (1):
xen, mm: Allow deferred page initialization for xen pv domains
arch/x86/include/asm/paravirt.h | 9 +
arch/x86/include/asm/paravirt_types.h | 3 +++
arch/x86/kernel/paravirt.c | 1 +
arch/x86/mm/init_32.c | 1
tion is enabled, the
rest of the struct pages will be initialized later in boot, once
page_alloc_init_late() is called.
xen_after_bootmem() is Xen's implementation of pv_init_ops.after_bootmem();
it walks the page tables and marks every page as pinned.
Signed-off-by: Pavel Tatashin
---
Reviewed-by: Pavel Tatashin
This is unique to Xen, so this particular issue won't affect other
configurations. I am going to investigate whether there is a way to re-enable
deferred page initialization on Xen guests.
Pavel
On 02/16/2018 03:40 PM, Andrew Morton wrote:
On Fri, 16 Feb 2018
On 02/16/2018 09:02 AM, Juergen Gross wrote:
On 16/02/18 14:59, Michal Hocko wrote:
[CC Pavel]
On Fri 16-02-18 14:37:26, Juergen Gross wrote:
Commit f7f99100d8d95dbcf09e0216a143211e79418b9f ("mm: stop zeroing
memory during allocation in vmemmap") broke Xen pv domains in some
configurations, as
Boris / Pavel,
Bisection result is:
a4a3ede2132ae0863e2d43e06f9b5697c51a7a3b is the first bad commit
commit a4a3ede2132ae0863e2d43e06f9b5697c51a7a3b
Author: Pavel Tatashin
Date: Wed Nov 15 17:36:31 2017 -0800
mm: zero reserved and unavailable struct pages
Some memory is reserved but unavaila