Re: [PATCH 09/11] Don't need to map the shadow of KASan's shadow memory

2017-10-22 Thread Liuwenliang (Lamb)
On 10/19/2017 at 20:56, Russell King - ARM Linux wrote:
>On Wed, Oct 11, 2017 at 04:22:25PM +0800, Abbott Liu wrote:
>>  Because KASan's shadow memory doesn't need to be tracked, remove the
>> mapping code from kasan_init.
>
>Is there a reason why this isn't part of the earlier patch that introduced the 
>code below?
Thanks for your reviews.
I'm going to change it in the new version.


Re: [PATCH 03/11] arm: Kconfig: enable KASan

2017-10-22 Thread Liuwenliang (Lamb)
On 10/22/2017 01:22 AM, Russell King - ARM Linux wrote:
>On Wed, Oct 11, 2017 at 12:15:44PM -0700, Florian Fainelli wrote:
>> On 10/11/2017 01:22 AM, Abbott Liu wrote:
>> > From: Andrey Ryabinin 
>> > 
>> > This patch enables the kernel address sanitizer for arm.
>> > 
>> > Cc: Andrey Ryabinin 
>> > Signed-off-by: Abbott Liu 
>> 
>> This needs to be the last patch in the series, otherwise you allow
>> people between patch 3 and 11 to have varying degrees of experience with
>> this patch series depending on their system type (LPAE or not, etc.)
>
>As the series stands, if patches 1-3 are applied, and KASAN is enabled,
>there are various constants that end up being undefined, and the kernel
>build will fail.  That is, of course, not acceptable.
>
>KASAN must not be available until support for it is functionally
>complete.

Thanks to Florian Fainelli and Russell King for the review.
I'm going to change this in the next version.



Re: [PATCH 04/11] Define the virtual space of KASan's shadow region

2017-10-22 Thread Liuwenliang (Lamb)
On Tue, Oct 19, 2017 at 20:41:17 +0000, Russell King - ARM Linux wrote:
>On Mon, Oct 16, 2017 at 11:42:05AM +0000, Liuwenliang (Lamb) wrote:
>> On 10/16/2017 07:03 PM, Abbott Liu wrote:
>> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support
>> >`movw r1, #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))'
>> >in ARM mode
>> >arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support
>> >`movt r1, #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))'
>> >in ARM mode
>> 
>> Thanks for the build test. This error can be solved by the following code:
>> --- a/arch/arm/kernel/entry-armv.S
>> +++ b/arch/arm/kernel/entry-armv.S
>> @@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
>> get_thread_info tsk
>> ldr r0, [tsk, #TI_ADDR_LIMIT]
>>  #ifdef CONFIG_KASAN
>> -   movw r1, #:lower16:TASK_SIZE
>> -   movt r1, #:upper16:TASK_SIZE
>> + ldr r1, =TASK_SIZE
>>  #else
>> mov r1, #TASK_SIZE
>>  #endif
>
>We can surely do better than this with macros and condition support -
>we can build-time test in the assembler whether TASK_SIZE can fit in a
>normal "mov", whether we can use the movw/movt instructions, or fall
>back to ldr if necessary.  I'd rather we avoided "ldr" here where
>possible.

Thanks for your review.
I don't know why we need to avoid "ldr". Using "ldr" may cost some
performance, but the impact is very limited, and as we know a KASan kernel
is already slower than a normal one. We usually don't ship a KASan kernel
in a product; we only use one when we want to debug a memory corruption
problem in the lab (not in a commercial product), precisely because of that
lower performance.

So I think we can accept the performance impact of using "ldr" here.




On Tue, Oct 19, 2017 at 20:44:17 +0000, Russell King - ARM Linux wrote:
>On Tue, Oct 17, 2017 at 11:27:19AM +0000, Liuwenliang (Lamb) wrote:
>> ---c0a3b198:   b6e00000    .word   0xb6e00000   // TASK_SIZE: 0xb6e00000
>
>It's probably going to be better all round to round TASK_SIZE down
>to something that fits in an 8-bit rotated constant anyway (like
>we already guarantee) which would mean this patch is not necessary.

Thanks for your review.
If we enable CONFIG_KASAN, we need to steal 130MB (0xb6e00000 ~ 0xbf000000)
from user space.
If we instead stole 130MB as (0xb6000000 ~ 0xbe200000), 14MB of user space
would be wasted. I think it is better to use "ldr", whose impact on the
system is very limited, than to waste 14MB of user space by changing
TASK_SIZE from 0xb6e00000 to 0xb6000000.
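For reference, the 130MB figure follows directly from KASan's 1/8 shadow
scaling. A small sketch of the arithmetic (the 0xbf000000 start of the
tracked region and the shadow offset are my inference from the addresses
quoted in this thread, not something stated in the patches):

```python
# KASan maps every 8 bytes of tracked address space onto 1 shadow byte.
# Assumed layout (inferred from the addresses quoted in this thread):
#   tracked kernel space: 0xbf000000 .. 0xffffffff (modules + lowmem + vmalloc)
#   shadow region:        0xb6e00000 .. 0xbf000000 (stolen from user space)
SHADOW_SCALE_SHIFT = 3

tracked_start = 0xBF000000
tracked_end = 0x100000000          # top of the 32-bit address space

shadow_size = (tracked_end - tracked_start) >> SHADOW_SCALE_SHIFT
print(hex(shadow_size), shadow_size // (1024 * 1024))   # 0x8200000 -> 130 MB

# Placing the shadow region directly below the tracked region gives the
# TASK_SIZE quoted in the thread:
shadow_start = tracked_start - shadow_size
print(hex(shadow_start))            # 0xb6e00000, i.e. the new TASK_SIZE

# The offset used by kasan_mem_to_shadow(addr) = (addr >> 3) + offset:
shadow_offset = shadow_start - (tracked_start >> SHADOW_SCALE_SHIFT)
print(hex(shadow_offset))           # 0x9f000000
```

Note that 0x9f000000 is exactly (0xC0000000-0x01000000)-(1<<29), the second
half of the expression in the assembler error quoted earlier in this thread.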


If TASK_SIZE is an 8-bit rotated constant, the assembler converts
"ldr rx, =TASK_SIZE" into "mov rx, #TASK_SIZE".
If TASK_SIZE is not an 8-bit rotated constant, the assembler converts
"ldr rx, =TASK_SIZE" into "ldr rx, [pc, #offset]", loading the constant
from a literal pool.
So we can use ldr to replace mov. Here is the code, which I have tested:

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index f9efea3..00a1833 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -187,12 +187,7 @@ ENDPROC(__und_invalid)

get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
-#ifdef CONFIG_KASAN
-   movw r1, #:lower16:TASK_SIZE
-   movt r1, #:upper16:TASK_SIZE
-#else
-   mov r1, #TASK_SIZE
-#endif
+ ldr r1, =TASK_SIZE
str r1, [tsk, #TI_ADDR_LIMIT]
str r0, [sp, #SVC_ADDR_LIMIT]

@@ -446,7 +441,8 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region.  Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time.  The rest is done out of line.
-	cmp	r4, #TASK_SIZE
+	ldr	r0, =TASK_SIZE
+	cmp	r4, r0
 	blhs	kuser_cmpxchg64_fixup
 #endif
 #endif
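For the curious, the "8-bit rotated constant" rule this thread keeps
referring to is easy to check programmatically. A quick sketch (not kernel
code) showing why 0xbf000000 fits an ARM mov immediate while the KASAN
TASK_SIZE 0xb6e00000 does not:

```python
def arm_imm_encodable(value):
    """True if `value` fits ARM's immediate encoding: an 8-bit constant
    rotated right by an even amount within a 32-bit word."""
    value &= 0xFFFFFFFF
    for rot in range(0, 32, 2):
        # Rotating the value left by `rot` undoes a rotate-right by `rot`;
        # if what remains fits in 8 bits, the assembler can encode it.
        rotated = ((value << rot) | (value >> (32 - rot))) & 0xFFFFFFFF
        if rotated < 0x100:
            return True
    return False

print(arm_imm_encodable(0xBF000000))  # True:  0xbf rotated right by 8
print(arm_imm_encodable(0xB6E00000))  # False: spans 11 significant bits
print(arm_imm_encodable(0xB6000000))  # True:  a rounded-down TASK_SIZE
```

This is why rounding TASK_SIZE down, as Russell suggests, would let a plain
"mov" work again.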




Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory

2017-10-19 Thread Liuwenliang (Lamb)
On 2017.10.12 7:43AM  Dmitry Osipenko [mailto:dig...@gmail.com] wrote:
>Shouldn't all __pgprot's contain L_PTE_MT_WRITETHROUGH ?
>
>[...]
>
>--
>Dmitry

Thanks for your review. I'm sorry that my reply is so late.

I don't think L_PTE_MT_WRITETHROUGH is needed for every ARM SoC, so KASan's
mapping can use PAGE_KERNEL (which is initialized per SoC) or
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY).

I don't think the mapping table flags in kasan_early_init need to be
changed, for the following reasons:
1) PAGE_KERNEL can't be used in kasan_early_init because pgprot_kernel,
   which is used to define PAGE_KERNEL, has not been initialized yet.
2) All of the KASan shadow mappings are created again in the kasan_init
   function.

All I am saying is: only the mapping table flags in the kasan_init function
need to be changed to PAGE_KERNEL or
__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY).

Here is the code, which I have already tested:
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -124,7 +124,7 @@ pte_t * __meminit kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
 		void *p = kasan_alloc_block(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
-		entry = pfn_pte(virt_to_pfn(p), __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN));
+		entry = pfn_pte(virt_to_pfn(p), __pgprot(pgprot_val(PAGE_KERNEL)));
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 	return pte;
@@ -253,7 +254,7 @@ void __init kasan_init(void)
 		set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
 			   &kasan_zero_pte[i], pfn_pte(
 				virt_to_pfn(kasan_zero_page),
-				__pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN | L_PTE_RDONLY)));
+				__pgprot(pgprot_val(PAGE_KERNEL) | L_PTE_RDONLY)));
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 	cpu_set_ttbr0(orig_ttbr0);
 	flush_cache_all();




Re: [PATCH 01/11] Initialize the mapping of KASan shadow memory

2017-10-17 Thread Liuwenliang (Lamb)
2017.10.12  05:42 AM  Russell King - ARM Linux [mailto:li...@armlinux.org.uk] 
wrote:

>> Please don't make this "exclusive" just conditionally call 
>> kasan_early_init(), remove the call to start_kernel from 
>> kasan_early_init and keep the call to start_kernel here.
>iow:
>
>#ifdef CONFIG_KASAN
>   bl  kasan_early_init
>#endif
>   b   start_kernel
>
>This has the advantage that we don't leave any stack frame from
>kasan_early_init() on the init task stack.

Thanks for your review. I tested your suggestion and it works well.
I agree with you that it is better to use the following code:
#ifdef CONFIG_KASAN
bl  kasan_early_init
#endif
b   start_kernel

than:
#ifdef CONFIG_KASAN
bl  kasan_early_init
#else
b   start_kernel
#endif






Re: [PATCH 04/11] Define the virtual space of KASan's shadow region

2017-10-17 Thread Liuwenliang (Lamb)
On 10/17/2017 8:45 PM, Abbott Liu wrote:
>What I said was
>
>'if the value of TASK_SIZE fits its 12-bit immediate field'
>
>and your value of TASK_SIZE is 0xb6e00000, which cannot be decomposed in the
>right way.
>
>If you build with KASAN disabled, it will generate a mov instruction instead.

Thanks for your explanation, I understand now. I have tested this, and the
result proves that what you said is right.

Here is the test log:
c010e9e0 <__irq_svc>:
c010e9e0:   e24dd04c    sub     sp, sp, #76     ; 0x4c
c010e9e4:   e31d0004    tst     sp, #4
c010e9e8:   024dd004    subeq   sp, sp, #4
c010e9ec:   e88d1ffe    stm     sp, {r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip}
c010e9f0:   e8900038    ldm     r0, {r3, r4, r5}
c010e9f4:   e28d7030    add     r7, sp, #48     ; 0x30
c010e9f8:   e3e06000    mvn     r6, #0
c010e9fc:   e28d204c    add     r2, sp, #76     ; 0x4c
c010ea00:   02822004    addeq   r2, r2, #4
c010ea04:   e52d3004    push    {r3}            ; (str r3, [sp, #-4]!)
c010ea08:   e1a0300e    mov     r3, lr
c010ea0c:   e887007c    stm     r7, {r2, r3, r4, r5, r6}
c010ea10:   e1a0972d    lsr     r9, sp, #14
c010ea14:   e1a09709    lsl     r9, r9, #14
c010ea18:   e5990008    ldr     r0, [r9, #8]
c010ea1c:   e3a014bf    mov     r1, #-1090519040    ; 0xbf000000   // ldr r1, =0xbf000000


Re: [PATCH 00/11] KASan for arm

2017-10-17 Thread Liuwenliang (Lamb)
On 10/17/2017 7:40 PM, Abbott Liu wrote:
>On Wed, Oct 11, 2017 at 03:10:56PM -0700, Laura Abbott wrote:
>The decompressor does not link with the standard C library, so it
>needs to provide implementations of standard C library functionality
>where required.  That means, if we have any memset() users, we need
>to provide the memset() function.
>
>The undef is there to avoid the optimisation we have in asm/string.h
>for __memzero, because we don't want to use __memzero in the
>decompressor.
>
>Whether memset() is required depends on which compression method is
>being used - LZO and LZ4 appear to make direct references to it, but
>the inflate (gzip) decompressor code does not.
>
>What this means is that all supported kernel compression options need
>to be tested.

Thanks for your review. I am sorry that I am so late in replying to your email.
I will test all of the arm kernel compression options.




Re: [PATCH 04/11] Define the virtual space of KASan's shadow region

2017-10-17 Thread Liuwenliang (Lamb)
On 10/17/2017 12:40 AM, Abbott Liu wrote:
> Ard Biesheuvel [ard.biesheu...@linaro.org] wrote
>This is unnecessary:
>
>ldr r1, =TASK_SIZE
>
>will be converted to a mov instruction by the assembler if the value of 
>TASK_SIZE fits its 12-bit immediate field.
>
>So please remove the whole #ifdef, and just use ldr r1, =xxx

Thanks for your review. 

The assembler on my computer doesn't convert "ldr r1, =xxx" into a mov
instruction. Here is the objdump of vmlinux:

c0a3b100 <__irq_svc>:
c0a3b100:   e24dd04c    sub     sp, sp, #76     ; 0x4c
c0a3b104:   e31d0004    tst     sp, #4
c0a3b108:   024dd004    subeq   sp, sp, #4
c0a3b10c:   e88d1ffe    stm     sp, {r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip}
c0a3b110:   e8900038    ldm     r0, {r3, r4, r5}
c0a3b114:   e28d7030    add     r7, sp, #48     ; 0x30
c0a3b118:   e3e06000    mvn     r6, #0
c0a3b11c:   e28d204c    add     r2, sp, #76     ; 0x4c
c0a3b120:   02822004    addeq   r2, r2, #4
c0a3b124:   e52d3004    push    {r3}            ; (str r3, [sp, #-4]!)
c0a3b128:   e1a0300e    mov     r3, lr
c0a3b12c:   e887007c    stm     r7, {r2, r3, r4, r5, r6}
c0a3b130:   e1a0972d    lsr     r9, sp, #14
c0a3b134:   e1a09709    lsl     r9, r9, #14
c0a3b138:   e5990008    ldr     r0, [r9, #8]
---c0a3b13c:   e59f1054    ldr     r1, [pc, #84]   ; c0a3b198 <__irq_svc+0x98>   // ldr r1, =TASK_SIZE
c0a3b140:   e5891008    str     r1, [r9, #8]
c0a3b144:   e58d004c    str     r0, [sp, #76]   ; 0x4c
c0a3b148:   ee130f10    mrc     15, 0, r0, cr3, cr0, {0}
c0a3b14c:   e58d0048    str     r0, [sp, #72]   ; 0x48
c0a3b150:   e3a00051    mov     r0, #81 ; 0x51
c0a3b154:   ee030f10    mcr     15, 0, r0, cr3, cr0, {0}
---c0a3b158:   e59f103c    ldr     r1, [pc, #60]   ; c0a3b19c <__irq_svc+0x9c>   // the original __irq_svc also uses the same instruction
c0a3b15c:   e1a0000d    mov     r0, sp
c0a3b160:   e28fe000    add     lr, pc, #0
c0a3b164:   e591f000    ldr     pc, [r1]
c0a3b168:   e5998004    ldr     r8, [r9, #4]
c0a3b16c:   e5990000    ldr     r0, [r9]
c0a3b170:   e3380000    teq     r8, #0
c0a3b174:   13a00000    movne   r0, #0
c0a3b178:   e3100002    tst     r0, #2
c0a3b17c:   1b000007    blne    c0a3b1a0
c0a3b180:   e59d104c    ldr     r1, [sp, #76]   ; 0x4c
c0a3b184:   e59d0048    ldr     r0, [sp, #72]   ; 0x48
c0a3b188:   ee030f10    mcr     15, 0, r0, cr3, cr0, {0}
c0a3b18c:   e5891008    str     r1, [r9, #8]
c0a3b190:   e16ff005    msr     SPSR_fsxc, r5
c0a3b194:   e8ddffff    ldm     sp, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip, sp, lr, pc}^
---c0a3b198:   b6e00000    .word   0xb6e00000   // TASK_SIZE: 0xb6e00000
c0a3b19c:   c0f0    .word   0xc0f0



Even if "ldr r1, =TASK_SIZE" isn't converted to a mov instruction by some
assemblers, I still think it is better to remove the whole #ifdef, because
the performance impact of ldr is very limited.



Re: [PATCH 00/11] KASan for arm

2017-10-16 Thread Liuwenliang (Lamb)
On 10/16/2017 07:57 PM, Abbott Liu wrote:
>Nice!
>
>When I build-tested KASAN on x86 and arm64, I ran into a lot of build-time
>regressions (mostly warnings but also some errors), so I'd like to give it
>a spin in my randconfig tree before this gets merged. Can you point me
>to a git URL that I can pull into my testing tree?
>
>I could of course apply the patches from email, but I expect that there
>will be updated versions of the series, so it's easier if I can just pull
>the latest version.
>
>  Arnd

I'm sorry, I don't have a git server. These patches are based on:
1. git remote -v
origin  git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git (fetch)
origin  git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git (push)

2. the commit is:
commit 46c1e79fee417f151547aa46fae04ab06cb666f4
Merge: ec846ec b130a69
Author: Linus Torvalds 
Date:   Wed Sep 13 12:24:20 2017 -0700

Merge branch 'perf-urgent-for-linus' of 
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf fixes from Ingo Molnar:
 "A handful of tooling fixes"

* 'perf-urgent-for-linus' of 
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf stat: Wait for the correct child
  perf tools: Support running perf binaries with a dash in their name
  perf config: Check not only section->from_system_config but also item's
  perf ui progress: Fix progress update
  perf ui progress: Make sure we always define step value
  perf tools: Open perf.data with O_CLOEXEC flag
  tools lib api: Fix make DEBUG=1 build
  perf tests: Fix compile when libunwind's unwind.h is available
  tools include linux: Guard against redefinition of some macros

I'm sorry that I didn't base them on a stable version.

3. config: arch/arm/configs/vexpress_defconfig

4. gcc version: gcc version 6.1.0



Re: [PATCH 04/11] Define the virtual space of KASan's shadow region

2017-10-16 Thread Liuwenliang (Lamb)
On 10/16/2017 07:03 PM, Abbott Liu wrote:
>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support
>`movw r1, #:lower16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))'
>in ARM mode
>arch/arm/kernel/entry-armv.S:348: Error: selected processor does not support
>`movt r1, #:upper16:((((0xC0000000-0x01000000)>>3)+((0xC0000000-0x01000000)-(1<<29))))'
>in ARM mode

Thanks for the build test. This error can be solved by the following code:
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -188,8 +188,7 @@ ENDPROC(__und_invalid)
get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
 #ifdef CONFIG_KASAN
-   movw r1, #:lower16:TASK_SIZE
-   movt r1, #:upper16:TASK_SIZE
+ ldr r1, =TASK_SIZE
 #else
mov r1, #TASK_SIZE
 #endif
@@ -446,7 +445,12 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region.  Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time.  The rest is done out of line.
+#ifdef CONFIG_KASAN
+	ldr	r0, =TASK_SIZE
+	cmp	r4, r0
+#else
 	cmp	r4, #TASK_SIZE
+#endif
 	blhs	kuser_cmpxchg64_fixup
 #endif
 #endif

movt/movw can only be used with the ARMv6T2 and later (ARMv7) instruction
sets, but ldr can be used on ARMv4*, ARMv5T*, ARMv6* and ARMv7. Performance
may drop slightly by using ldr, but I think the impact is very limited.



Re: [PATCH 06/11] change memory_is_poisoned_16 for aligned error

2017-10-12 Thread Liuwenliang (Lamb)
>> - I don't understand why this is necessary.  memory_is_poisoned_16()
>>   already handles unaligned addresses?
>>
>> - If it's needed on ARM then presumably it will be needed on other
>>   architectures, so CONFIG_ARM is insufficiently general.
>>
>> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
>>   it would be better to generalize/fix it in some fashion rather than
>>   creating a new variant of the function.


>Yes, I think it will be better to fix the current function rather then
>have 2 slightly different copies with ifdef's.
>Will something along these lines work for arm? 16-byte accesses are
>not too common, so it should not be a performance problem. And
>probably modern compilers can turn 2 1-byte checks into a 2-byte check
>where safe (x86).

>static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>{
>u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>
>if (shadow_addr[0] || shadow_addr[1])
>return true;
>/* Unaligned 16-bytes access maps into 3 shadow bytes. */
>if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>return memory_is_poisoned_1(addr + 15);
>return false;
>}

Thanks to Andrew Morton and Dmitry Vyukov for the review.
If the parameter addr = 0xc0000008, then in the function:

static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
	// shadow_addr = (u16 *)(KASAN_OFFSET + 0x18000001 (= 0xc0000008 >> 3))
	// is not aligned to 2 bytes.
	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);

	/* Unaligned 16-bytes access maps into 3 shadow bytes. */
	if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
		return *shadow_addr || memory_is_poisoned_1(addr + 15);
	// Here the load of *shadow_addr goes wrong on arm, especially while
	// the kernel has not finished booting, because the unaligned access
	// raises a data abort exception, and the exception handling is not
	// yet initialized at that point in the boot.
	return *shadow_addr;
}

I also think it is better to fix this problem.



Re: [PATCH 00/11] KASan for arm

2017-10-11 Thread Liuwenliang (Lamb)
On 10/12/2017 12:10 AM, Abbott Liu wrote:
>On 10/11/2017 12:50 PM, Florian Fainelli wrote:
>> On 10/11/2017 12:13 PM, Florian Fainelli wrote:
>>> Hi Abbott,
>>>
>>> On 10/11/2017 01:22 AM, Abbott Liu wrote:
 Hi,all:
These patches add arch specific code for kernel address sanitizer
 (see Documentation/kasan.txt).

1/8 of kernel addresses reserved for shadow memory. There was no
 big enough hole for this, so virtual addresses for shadow were
 stolen from user space.

At early boot stage the whole shadow region populated with just
 one physical page (kasan_zero_page). Later, this page reused
 as readonly zero shadow for some memory that KASan currently
 don't track (vmalloc).

   After mapping the physical memory, pages for shadow memory are
 allocated and mapped.

   KASan's stack instrumentation significantly increases stack's
 consumption, so CONFIG_KASAN doubles THREAD_SIZE.

   Functions like memset/memmove/memcpy do a lot of memory accesses.
 If bad pointer passed to one of these function it is important
 to catch this. Compiler's instrumentation cannot do this since
 these functions are written in assembly.

   KASan replaces memory functions with manually instrumented variants.
 Original functions declared as weak symbols so strong definitions
 in mm/kasan/kasan.c could replace them. Original functions have aliases
 with '__' prefix in name, so we could call non-instrumented variant
 if needed.

   Some files built without kasan instrumentation (e.g. mm/slub.c).
 Original mem* function replaced (via #define) with prefixed variants
 to disable memory access checks for such files.

   On the arm LPAE architecture, the mapping table of KASan shadow memory (if
 PAGE_OFFSET is 0xc0000000, the KASan shadow memory's virtual space is
 0xb6e00000~0xbf000000) can't be filled in the do_translation_fault function,
 because KASan instrumentation may cause the do_translation_fault function
 itself to access KASan shadow memory. Such an access from inside
 do_translation_fault may cause an infinite loop, so the mapping table of
 KASan shadow memory needs to be copied in the pgd_alloc function.


 Most of the code comes from:
 https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe.
>>>
>>> Thanks for putting these patches together, I can't get a kernel to build
>>> with ARM_LPAE=y or ARM_LPAE=n that does not result in the following:
>>>
>>>   AS  arch/arm/kernel/entry-common.o
>>> arch/arm/kernel/entry-common.S: Assembler messages:
>>> arch/arm/kernel/entry-common.S:53: Error: invalid constant
>>> (b6e00000) after fixup
>>> arch/arm/kernel/entry-common.S:118: Error: invalid constant
>>> (b6e00000) after fixup
>>> scripts/Makefile.build:412: recipe for target
>>> 'arch/arm/kernel/entry-common.o' failed
>>> make[3]: *** [arch/arm/kernel/entry-common.o] Error 1
>>> Makefile:1019: recipe for target 'arch/arm/kernel' failed
>>> make[2]: *** [arch/arm/kernel] Error 2
>>> make[2]: *** Waiting for unfinished jobs
>>>
>>> This is coming from the increase in TASK_SIZE it seems.
>>>
>>> This is on top of v4.14-rc4-84-gff5abbe799e2
>>
>> Seems like we can use the following to get through that build failure:
>>
>> diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
>> index 99c908226065..0de1160d136e 100644
>> --- a/arch/arm/kernel/entry-common.S
>> +++ b/arch/arm/kernel/entry-common.S
>> @@ -50,7 +50,13 @@ ret_fast_syscall:
>>   UNWIND(.cantunwind)
>> disable_irq_notrace @ disable interrupts
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> +   movwr1, #:lower16:TASK_SIZE
>> +   movtr1, #:upper16:TASK_SIZE
>> +   cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blneaddr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS]@ re-check for syscall
>> tracing
>> tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
>> @@ -115,7 +121,13 @@ ret_slow_syscall:
>> disable_irq_notrace @ disable interrupts
>>  ENTRY(ret_to_user_from_irq)
>> ldr r2, [tsk, #TI_ADDR_LIMIT]
>> +#ifdef CONFIG_KASAN
>> +   movwr1, #:lower16:TASK_SIZE
>> +   movtr1, #:upper16:TASK_SIZE
>> +   cmp r2, r1
>> +#else
>> cmp r2, #TASK_SIZE
>> +#endif
>> blneaddr_limit_check_failed
>> ldr r1, [tsk, #TI_FLAGS]
>> tst r1, #_TIF_WORK_MASK
>>
>>
>>
>> but then we will see another set of build failures with the decompressor
>> code:
>>
>> WARNING: modpost: Found 2 section mismatch(es).
>> To see full details build your kernel with:
>> 'make CONFIG_DEBUG_SECTION_MISMATCH=y'
>>   KSYM.tmp_kallsyms1.o
>>   KSYM.tmp_kallsyms2.o
>>   LD  vmlinux
>>   SORTEX  vmlinux
>>