Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-25 Thread Will Deacon
On Mon, Aug 24, 2015 at 06:47:36PM +0100, Russell King - ARM Linux wrote:
> On Mon, Aug 24, 2015 at 05:15:22PM +0300, Andrey Ryabinin wrote:
> > Yes, ~130MB (3G/1G split) should work. The 512MB shadow is optional.
> > The only advantage of the 512MB shadow is better handling of
> > user-memory-access bugs (accesses to user memory made without the
> > copy_from_user/copy_to_user/strlen_user etc. API).
> 
> No need for that to be handled by KASan.  I have patches in linux-next,
> now acked by Will, which prevent the kernel from accessing userspace with
> zero memory footprint.  No need for remapping; we have a way to quickly
> turn off access to userspace-mapped pages on non-LPAE 32-bit CPUs.
> (LPAE is not supported yet - Catalin will be working on that using the
> hooks I'm providing once he returns.)

Hey, I only acked the "Efficiency cleanups" series so far! The PAN emulation
is still on my list.

Will


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-24 Thread Russell King - ARM Linux
On Mon, Aug 24, 2015 at 05:15:22PM +0300, Andrey Ryabinin wrote:
> Yes, ~130MB (3G/1G split) should work. The 512MB shadow is optional.
> The only advantage of the 512MB shadow is better handling of
> user-memory-access bugs (accesses to user memory made without the
> copy_from_user/copy_to_user/strlen_user etc. API).

No need for that to be handled by KASan.  I have patches in linux-next,
now acked by Will, which prevent the kernel from accessing userspace with
zero memory footprint.  No need for remapping; we have a way to quickly
turn off access to userspace-mapped pages on non-LPAE 32-bit CPUs.
(LPAE is not supported yet - Catalin will be working on that using the
hooks I'm providing once he returns.)

This isn't a debugging thing, it's a security-hardening thing.  Some
use-after-free bugs are potentially exploitable from userspace.  See
the recent Black Hat conference paper.

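For illustration, a minimal sketch of the idea (assumed names and domain
numbers - this is not the actual linux-next patch): on non-LPAE ARM32,
user pages get their own domain in the Domain Access Control Register,
so the kernel can revoke access to all of userspace by rewriting two
bits of one CP15 register, granting it back only inside the uaccess
helpers:

        /* Hedged sketch, not the real arch/arm implementation. */
        #define DOMAIN_USER     1       /* assumed domain number */
        #define DOMAIN_NOACCESS 0       /* any access faults */
        #define DOMAIN_CLIENT   1       /* access checked against PTEs */

        static inline void set_user_domain(unsigned int type)
        {
                unsigned int dacr;

                asm volatile("mrc p15, 0, %0, c3, c0, 0" : "=r" (dacr));
                dacr &= ~(3U << (2 * DOMAIN_USER)); /* clear the 2-bit field */
                dacr |= type << (2 * DOMAIN_USER);
                asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (dacr));
        }

copy_from_user() and friends would bracket each access with
set_user_domain(DOMAIN_CLIENT) ... set_user_domain(DOMAIN_NOACCESS);
a stray dereference of a user pointer anywhere else faults, at zero
memory cost - which is what makes this cheaper than covering user
addresses with a KASan shadow.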
-- 
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-24 Thread Andrey Ryabinin
2015-08-24 19:16 GMT+03:00 Vladimir Murzin :
> On 24/08/15 17:00, Andrey Ryabinin wrote:
>> 2015-08-24 18:44 GMT+03:00 Vladimir Murzin :
>>>
>>> Another option would be having "sparse" shadow memory based on page
>>> extension. I did play with that some time ago based on ideas from the
>>> original v1 KASan support for x86/arm - that is how 614be38 "irqchip:
>>> gic-v3: Fix out of bounds access to cpu_logical_map" was caught.
>>> It doesn't require any VA reservations, only some contiguous memory for
>>> the page_ext itself, which serves as an indirection level for the
>>> 0-order shadow pages.
>>
>> We won't be able to use inline instrumentation (I could live with that),
>> and most importantly, we won't be able to use stack instrumentation.
>> GCC needs to know the shadow address for inline and/or stack
>> instrumentation to generate correct code.
>
> It's definitely a trade-off ;)
>
> Just for my understanding: is that stack instrumentation controlled
> via -asan-stack?
>

Yup.


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-24 Thread Vladimir Murzin
On 24/08/15 17:00, Andrey Ryabinin wrote:
> 2015-08-24 18:44 GMT+03:00 Vladimir Murzin :
>>
>> Another option would be having "sparse" shadow memory based on page
>> extension. I did play with that some time ago based on ideas from the
>> original v1 KASan support for x86/arm - that is how 614be38 "irqchip:
>> gic-v3: Fix out of bounds access to cpu_logical_map" was caught.
>> It doesn't require any VA reservations, only some contiguous memory for
>> the page_ext itself, which serves as an indirection level for the
>> 0-order shadow pages.
> 
> We won't be able to use inline instrumentation (I could live with that),
> and most importantly, we won't be able to use stack instrumentation.
> GCC needs to know the shadow address for inline and/or stack
> instrumentation to generate correct code.

It's definitely a trade-off ;)

Just for my understanding: is that stack instrumentation controlled
via -asan-stack?

Thanks
Vladimir

> 
>> In theory such a design can be reused by other 32-bit arches and, I
>> think, nommu too. Additionally, the shadow pages might be movable with
>> the help of the driver-page migration patch series [1].
>> The cost is obvious - a performance drop - although I didn't bother
>> measuring it.
>>
>> [1] https://lwn.net/Articles/650917/
>>
>> Cheers
>> Vladimir
>>
> 



Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-24 Thread Andrey Ryabinin
2015-08-24 18:44 GMT+03:00 Vladimir Murzin :
>
> Another option would be having "sparse" shadow memory based on page
> extension. I did play with that some time ago based on ideas from the
> original v1 KASan support for x86/arm - that is how 614be38 "irqchip:
> gic-v3: Fix out of bounds access to cpu_logical_map" was caught.
> It doesn't require any VA reservations, only some contiguous memory for
> the page_ext itself, which serves as an indirection level for the
> 0-order shadow pages.

We won't be able to use inline instrumentation (I could live with that),
and most importantly, we won't be able to use stack instrumentation.
GCC needs to know the shadow address for inline and/or stack
instrumentation to generate correct code.

> In theory such a design can be reused by other 32-bit arches and, I
> think, nommu too. Additionally, the shadow pages might be movable with
> the help of the driver-page migration patch series [1].
> The cost is obvious - a performance drop - although I didn't bother
> measuring it.
>
> [1] https://lwn.net/Articles/650917/
>
> Cheers
> Vladimir
>
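Concretely, this is roughly the check GCC emits in inline mode before an
8-byte load, sketched here in C (illustrative only; stack instrumentation
is switched with --param asan-stack=1 in GCC and poisons redzones around
locals using the same direct shadow arithmetic):

        #include <linux/types.h>

        /* placeholder value - the real offset is fixed at build time */
        #define KASAN_SHADOW_OFFSET 0x1f000000UL

        void __asan_report_load8_noabort(unsigned long addr);

        static u64 load8(const void *addr)
        {
                /* KASAN_SHADOW_OFFSET must be a compile-time constant;
                 * a runtime-resolved sparse shadow cannot provide one. */
                s8 *shadow = (s8 *)(((unsigned long)addr >> 3)
                                    + KASAN_SHADOW_OFFSET);

                if (*shadow)
                        __asan_report_load8_noabort((unsigned long)addr);
                return *(const u64 *)addr;      /* the original access */
        }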


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-24 Thread Vladimir Murzin
On 24/08/15 15:15, Andrey Ryabinin wrote:
> 2015-08-24 16:45 GMT+03:00 Linus Walleij :
>> On Mon, Aug 24, 2015 at 3:15 PM, Russell King - ARM Linux
>>  wrote:
>>> On Tue, Jul 21, 2015 at 11:27:56PM +0200, Linus Walleij wrote:
>>>> On Tue, Jul 21, 2015 at 4:27 PM, Andrey Ryabinin
>>>> wrote:
>>>>
>>>>> I used vexpress. Anyway, it doesn't matter now, since I have an update
>>>>> with a lot of stuff fixed, and it works on hardware.
>>>>> I still need to do some work on it and tomorrow, probably, I will share.
>>>>
>>>> Ah awesome. I have a stash of ARM boards so I can test it on a
>>>> range of hardware once you feel it's ready.
>>>>
>>>> Sorry for pulling stuff out of your hands, people are excited about
>>>> KASan ARM32 as it turns out.
>>>
>>> People may be excited about it because it's a new feature, but we really
>>> need to consider whether gobbling up 512MB of userspace for it is a good
>>> idea or not.  There are programs around which like to map large amounts
>>> of memory into their process space, and the more we steal from them, the
>>> more likely these programs are to fail.
>>
>> I looked at some different approaches over the last weeks for this
>> when playing around with KASan.
>>
>> It seems since KASan was developed on 64bit systems, this was
>> not much of an issue for them as they could take their shadow
>> memory from the vmalloc space.
>>
>> I think it is possible to actually just steal as much memory as is
>> needed to cover the kernel, and not 1/8 of the entire addressable
>> 32bit space. So instead of covering all from 0x0-0xffffffff,
>> at least just MODULES_VADDR thru 0xffffffff should be enough.
>> So if that is 0xbf000000-0xffffffff in most cases, 0x41000000
>> bytes, then 1/8 of that, 0x8200000, i.e. 130MB, should be enough.
>> (Andrey needs to say if this is possible.)
>>
> 
> Yes, ~130MB (3G/1G split) should work. The 512MB shadow is optional.
> The only advantage of the 512MB shadow is better handling of
> user-memory-access bugs (accesses to user memory made without the
> copy_from_user/copy_to_user/strlen_user etc. API).
> With a 512MB shadow we could avoid mapping anything in the shadow for
> user addresses, so such a bug would be guaranteed to crash the kernel.
> With a 130MB shadow, the behavior will depend on the memory layout of
> the current process.
> So, I think it's fine to keep the shadow only for kernel addresses.

Another option would be having "sparse" shadow memory based on page
extension. I did play with that some time ago based on ideas from the
original v1 KASan support for x86/arm - that is how 614be38 "irqchip:
gic-v3: Fix out of bounds access to cpu_logical_map" was caught.
It doesn't require any VA reservations, only some contiguous memory for
the page_ext itself, which serves as an indirection level for the
0-order shadow pages.
In theory such a design can be reused by other 32-bit arches and, I
think, nommu too. Additionally, the shadow pages might be movable with
the help of the driver-page migration patch series [1].
The cost is obvious - a performance drop - although I didn't bother
measuring it.

[1] https://lwn.net/Articles/650917/

Cheers
Vladimir

> 
>> That will probably miss some use cases I'm not familiar with, where
>> the kernel is actually executing something below 0xbf000000...
>>
>> I looked at taking memory from vmalloc instead, but ran into
>> problems since this is subject to the highmem split and KASan
>> needs to have its address offset at compile time. On
>> Ux500 I managed to remove all the static maps and steal memory
>> from the top of the vmalloc area instead of the beginning, but
>> that is probably not generally feasible.
>>
>> I suspect you have better ideas than what I can come up
>> with though.
>>
>> Yours,
>> Linus Walleij
> 

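As a footnote to the sparse-shadow idea above, here is the shape the
lookup could take (hypothetical names - a sketch of the indirection,
not the actual experiment):

        #include <linux/mm.h>
        #include <linux/page_ext.h>

        /* hypothetical per-page extension holding the shadow pointer */
        struct kasan_page_ext {
                void *shadow;   /* 0-order shadow page, or NULL */
        };

        /* hypothetical: fetch our extension for a page */
        struct kasan_page_ext *kasan_ext_of(struct page *page);

        static void *kasan_shadow_of(const void *addr)
        {
                struct page *page = virt_to_page(addr);
                struct kasan_page_ext *ext = kasan_ext_of(page);
                unsigned long off = ((unsigned long)addr & ~PAGE_MASK) >> 3;

                /* a real version would share one shadow page among the
                 * 8 data pages it can describe */
                return ext->shadow ? (char *)ext->shadow + off : NULL;
        }

Every shadow access goes through a helper like this, so only outline
instrumentation (compiler-inserted function calls) can use the scheme -
which is the trade-off Andrey points out above.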


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-24 Thread Andrey Ryabinin
2015-08-24 16:45 GMT+03:00 Linus Walleij :
> On Mon, Aug 24, 2015 at 3:15 PM, Russell King - ARM Linux
>  wrote:
>> On Tue, Jul 21, 2015 at 11:27:56PM +0200, Linus Walleij wrote:
>>> On Tue, Jul 21, 2015 at 4:27 PM, Andrey Ryabinin  
>>> wrote:
>>>
>>> > I used vexpress. Anyway, it doesn't matter now, since I have an update
>>> > with a lot of stuff fixed, and it works on hardware.
>>> > I still need to do some work on it and tomorrow, probably, I will share.
>>>
>>> Ah awesome. I have a stash of ARM boards so I can test it on a
>>> range of hardware once you feel it's ready.
>>>
>>> Sorry for pulling stuff out of your hands, people are excited about
>>> KASan ARM32 as it turns out.
>>
>> People may be excited about it because it's a new feature, but we really
>> need to consider whether gobbling up 512MB of userspace for it is a good
>> idea or not.  There are programs around which like to map large amounts
>> of memory into their process space, and the more we steal from them, the
>> more likely these programs are to fail.
>
> I looked at some different approaches over the last weeks for this
> when playing around with KASan.
>
> It seems since KASan was developed on 64bit systems, this was
> not much of an issue for them as they could take their shadow
> memory from the vmalloc space.
>
> I think it is possible to actually just steal as much memory as is
> needed to cover the kernel, and not 1/8 of the entire addressable
> 32bit space. So instead of covering all from 0x0-0xffffffff,
> at least just MODULES_VADDR thru 0xffffffff should be enough.
> So if that is 0xbf000000-0xffffffff in most cases, 0x41000000
> bytes, then 1/8 of that, 0x8200000, i.e. 130MB, should be enough.
> (Andrey needs to say if this is possible.)
>

Yes, ~130MB (3G/1G split) should work. The 512MB shadow is optional.
The only advantage of the 512MB shadow is better handling of
user-memory-access bugs (accesses to user memory made without the
copy_from_user/copy_to_user/strlen_user etc. API).
With a 512MB shadow we could avoid mapping anything in the shadow for
user addresses, so such a bug would be guaranteed to crash the kernel.
With a 130MB shadow, the behavior will depend on the memory layout of
the current process.
So, I think it's fine to keep the shadow only for kernel addresses.

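For reference, the arithmetic behind the two layouts (a sketch, assuming
the usual 3G/1G split with MODULES_VADDR = 0xbf000000):

        #define KASAN_SHADOW_SCALE_SHIFT 3   /* 1 shadow byte per 8 bytes */

        /* Full shadow covers 0x0-0xffffffff: stray user-space accesses
         * always hit unmapped shadow and crash loudly. */
        #define FULL_SHADOW_SIZE   (0x100000000ULL >> KASAN_SHADOW_SCALE_SHIFT)
        /* = 0x20000000 = 512MB */

        /* Kernel-only shadow covers MODULES_VADDR..0xffffffff. */
        #define KERNEL_SPAN        (0xffffffffUL - 0xbf000000UL + 1)
        /* = 0x41000000 */
        #define KERNEL_SHADOW_SIZE (KERNEL_SPAN >> KASAN_SHADOW_SCALE_SHIFT)
        /* = 0x08200000, i.e. ~130MB */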
> That will probably miss some use cases I'm not familiar with, where
> the kernel is actually executing something below 0xbf000000...
>
> I looked at taking memory from vmalloc instead, but ran into
> problems since this is subject to the highmem split and KASan
> needs to have its address offset at compile time. On
> Ux500 I managed to remove all the static maps and steal memory
> from the top of the vmalloc area instead of the beginning, but
> that is probably not generally feasible.
>
> I suspect you have better ideas than what I can come up
> with though.
>
> Yours,
> Linus Walleij


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-24 Thread Linus Walleij
On Mon, Aug 24, 2015 at 3:15 PM, Russell King - ARM Linux
 wrote:
> On Tue, Jul 21, 2015 at 11:27:56PM +0200, Linus Walleij wrote:
>> On Tue, Jul 21, 2015 at 4:27 PM, Andrey Ryabinin  
>> wrote:
>>
>> > I used vexpress. Anyway, it doesn't matter now, since I have an update
>> > with a lot of stuff fixed, and it works on hardware.
>> > I still need to do some work on it and tomorrow, probably, I will share.
>>
>> Ah awesome. I have a stash of ARM boards so I can test it on a
>> range of hardware once you feel it's ready.
>>
>> Sorry for pulling stuff out of your hands, people are excited about
>> KASan ARM32 as it turns out.
>
> People may be excited about it because it's a new feature, but we really
> need to consider whether gobbling up 512MB of userspace for it is a good
> idea or not.  There are programs around which like to map large amounts
> of memory into their process space, and the more we steal from them, the
> more likely these programs are to fail.

I looked at some different approaches over the last weeks for this
when playing around with KASan.

It seems since KASan was developed on 64bit systems, this was
not much of an issue for them as they could take their shadow
memory from the vmalloc space.

I think it is possible to actually just steal as much memory as is
needed to cover the kernel, and not 1/8 of the entire addressable
32bit space. So instead of covering all from 0x0-0xffffffff,
at least just MODULES_VADDR thru 0xffffffff should be enough.
So if that is 0xbf000000-0xffffffff in most cases, 0x41000000
bytes, then 1/8 of that, 0x8200000, i.e. 130MB, should be enough.
(Andrey needs to say if this is possible.)

That will probably miss some use cases I'm not familiar with, where
the kernel is actually executing something below 0xbf000000...

I looked at taking memory from vmalloc instead, but ran into
problems since this is subject to the highmem split and KASan
needs to have its address offset at compile time. On
Ux500 I managed to remove all the static maps and steal memory
from the top of the vmalloc area instead of the beginning, but
that is probably not generally feasible.

I suspect you have better ideas than what I can come up
with though.

Yours,
Linus Walleij


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-24 Thread Russell King - ARM Linux
On Tue, Jul 21, 2015 at 11:27:56PM +0200, Linus Walleij wrote:
> On Tue, Jul 21, 2015 at 4:27 PM, Andrey Ryabinin  
> wrote:
> 
> > I used vexpress. Anyway, it doesn't matter now, since I have an update
> > with a lot of stuff fixed, and it works on hardware.
> > I still need to do some work on it and tomorrow, probably, I will share.
> 
> Ah awesome. I have a stash of ARM boards so I can test it on a
> range of hardware once you feel it's ready.
> 
> Sorry for pulling stuff out of your hands, people are excited about
> KASan ARM32 as it turns out.

People may be excited about it because it's a new feature, but we really
need to consider whether gobbling up 512MB of userspace for it is a good
idea or not.  There are programs around which like to map large amounts
of memory into their process space, and the more we steal from them, the
more likely these programs are to fail.

The other thing which I'm not happy about is having a 16K allocation per
thread - the 16K allocation for the PGD is already prone to invoking the
OOM killer after memory fragmentation has set in; we don't need another
16K allocation.  We're going from one 16K allocation per process to that
_and_ one 16K allocation per thread.

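For context, the fragility is about allocation order rather than total
size - a sketch of the arithmetic (illustrative, not from the patch set):

        #include <linux/mm.h>   /* get_order() */

        static int thread_alloc_order(void)
        {
                /* 16K with 4K pages is an order-2 allocation: four
                 * physically contiguous pages.  Under fragmentation,
                 * free order-2 blocks vanish long before memory is
                 * exhausted, so every extra 16K-per-thread allocation
                 * is another OOM-killer opportunity. */
                return get_order(16 * 1024);    /* == 2 */
        }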
-- 
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-24 Thread Linus Walleij
On Wed, Aug 19, 2015 at 4:51 PM, Andrey Ryabinin  wrote:
> On 08/19/2015 03:14 PM, Linus Walleij wrote:

>> Integrator/AP (ARMv5):
>>
>> This one is mounted with an ARMv5 ARM926 tile. It boots nicely
>> (but takes forever) with KASan and runs all test cases (!) just like
>> the other platforms, but before reaching userspace this happens:
>
> THREAD_SIZE is hardcoded in the act_mm macro.
>
> This hack should help:
>
> diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
> index c671f34..b1765f2 100644
> --- a/arch/arm/mm/proc-macros.S
> +++ b/arch/arm/mm/proc-macros.S
> @@ -32,6 +32,9 @@
> .macro  act_mm, rd
> bic \rd, sp, #8128
> bic \rd, \rd, #63
> +#ifdef CONFIG_KASAN
> +   bic \rd, \rd, #8192
> +#endif
> ldr \rd, [\rd, #TI_TASK]
> ldr \rd, [\rd, #TSK_ACTIVE_MM]
> .endm

Yes, this works, thanks! I now get to userspace.
Tested-by: Linus Walleij 

I have compiled Trinity and am running some stress tests on different
boards. The ARMv7 boards seem to die from random nasty stuff from the
syscalls or the OOM killer rather than from any KASan-detected bugs,
but I'll keep hammering at it a bit.

I have some odd patch I'll pass along.

Yours,
Linus Walleij


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-19 Thread Andrey Ryabinin
On 08/19/2015 03:14 PM, Linus Walleij wrote:
> On Wed, Jul 22, 2015 at 7:54 PM, Andrey Ryabinin  
> wrote:
> 
>> So here is the updated version:
>> git://github.com/aryabinin/linux.git kasan/arm_v0_1
>>
>> The code is still ugly in some places and it probably has some bugs.
>> Lightly tested on exynos 5410/5420.
> 
> I compiled this for various ARM platforms and tested that they boot.
> I used GCC version 4.9.3 20150113 (prerelease) (Linaro).
> 
> I get these compilation warnings no matter what I compile,
> and I chose to ignore them:
> 
> WARNING: vmlinux.o(.meminit.text+0x2c):
> Section mismatch in reference from the function kasan_pte_populate()
> to the function
> .init.text:kasan_alloc_block.constprop.7()
> The function __meminit kasan_pte_populate() references
> a function __init kasan_alloc_block.constprop.7().
> If kasan_alloc_block.constprop.7 is only used by kasan_pte_populate then
> annotate kasan_alloc_block.constprop.7 with a matching annotation.
> 
> WARNING: vmlinux.o(.meminit.text+0x98):
> Section mismatch in reference from the function kasan_pmd_populate()
> to the function
> .init.text:kasan_alloc_block.constprop.7()
> The function __meminit kasan_pmd_populate() references
> a function __init kasan_alloc_block.constprop.7().
> If kasan_alloc_block.constprop.7 is only used by kasan_pmd_populate then
> annotate kasan_alloc_block.constprop.7 with a matching annotation.
> 
> These KASan outline tests run fine:
> 
> kasan test: kmalloc_oob_right out-of-bounds to right
> kasan test: kmalloc_oob_left out-of-bounds to left
> kasan test: kmalloc_node_oob_right kmalloc_node(): out-of-bounds to right
> kasan test: kmalloc_large_oob_rigth kmalloc large allocation:
> out-of-bounds to right
> kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
> kasan test: kmalloc_oob_krealloc_less out-of-bounds after krealloc less
> kasan test: kmalloc_oob_16 kmalloc out-of-bounds for 16-bytes access
> kasan test: kmalloc_oob_in_memset out-of-bounds in memset
> kasan test: kmalloc_uaf use-after-free
> kasan test: kmalloc_uaf_memset use-after-free in memset
> kasan test: kmalloc_uaf2 use-after-free after another kmalloc
> kasan test: kmem_cache_oob out-of-bounds in kmem_cache_alloc
> 
> These two tests seem not to trigger KASan BUG()s, and it seems to
> be like that on all hardware, so I guess these are the kind of tests
> that require GCC 5.0:
> 
> kasan test: kasan_stack_oob out-of-bounds on stack
> kasan test: kasan_global_oob out-of-bounds global variable
> 
> 
> Hardware test targets:
> 
> Ux500 (ARMv7):
> 
> On Ux500 I get a really slow boot (as expected), and after
> enabling the test cases they produce KASan warnings
> as expected.
> 
> MSM APQ8060 (ARMv7):
> 
> Also a real slow boot and the expected KASan warnings when
> running the tests.
> 
> Integrator/AP (ARMv5):
> 
> This one is mounted with an ARMv5 ARM926 tile. It boots nicely
> (but takes forever) with KASan and runs all test cases (!) just like
> the other platforms, but before reaching userspace this happens:
> 

THREAD_SIZE is hardcoded in the act_mm macro.

This hack should help:

diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
index c671f34..b1765f2 100644
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -32,6 +32,9 @@
.macro  act_mm, rd
bic \rd, sp, #8128
bic \rd, \rd, #63
+#ifdef CONFIG_KASAN
+   bic \rd, \rd, #8192
+#endif
ldr \rd, [\rd, #TI_TASK]
ldr \rd, [\rd, #TSK_ACTIVE_MM]
.endm
---
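For the record, the arithmetic behind the hack (the C form below is an
illustration, not kernel code): the two existing bic instructions clear
bits 12..6 (#8128 == 0x1fc0) and bits 5..0 (#63), i.e. sp & ~0x1fff, in
two steps because ARM cannot encode 0x1fff as a single immediate - that
finds thread_info at the 8K-aligned stack base. With KASan, THREAD_SIZE
doubles to 16K (the extra per-thread allocation Russell objects to
elsewhere in this thread), so bit 13 must be cleared as well, which is
what the added bic #8192 does:

        /* equivalent mask computation in C (sizes assumed from above) */
        #define THREAD_SIZE  (IS_ENABLED(CONFIG_KASAN) ? 16384 : 8192)

        static struct thread_info *thread_info_of(unsigned long sp)
        {
                return (struct thread_info *)(sp & ~(THREAD_SIZE - 1UL));
        }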




> 
> I then tested on the Footbridge, another ARMv4 system, the oldest I have,
> SA110-based. This passes decompression and then you may *think* it hangs.
> But it doesn't. It just takes a few minutes to boot with KASan
> instrumentation, then all tests run fine also on this hardware.
> The crash logs scroll by on the physical console.
> 
> They keep scrolling forever however, and are still scrolling as I
> write this. I suspect some real memory usage bugs to be causing it,
> as it is exercising some ages old code that didn't see much scrutiny
> in recent years.
> 

I would suspect some KASan bug here.

BTW, we probably need to introduce a one-shot mode in KASan to prevent
such report spam; I mean print only the first report and ignore the
rest. The first report is usually the most important one; subsequent
reports are usually just noise.

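A sketch of such a one-shot gate (assumed names - not existing KASan
code):

        #include <linux/atomic.h>
        #include <linux/types.h>

        static atomic_t kasan_reported = ATOMIC_INIT(0);

        static bool kasan_report_allowed(void)
        {
                /* let exactly one report through; drop the rest */
                return atomic_cmpxchg(&kasan_reported, 0, 1) == 0;
        }

kasan_report() would bail out early when this returns false, keeping
only the first - usually the most informative - splat.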

> 
> Yours,
> Linus Walleij
> 


Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-19 Thread Linus Walleij
On Wed, Jul 22, 2015 at 7:54 PM, Andrey Ryabinin  wrote:

> So here is the updated version:
> git://github.com/aryabinin/linux.git kasan/arm_v0_1
>
> The code is still ugly in some places and it probably has some bugs.
> Lightly tested on exynos 5410/5420.

I compiled this for various ARM platforms and tested that they boot.
I used GCC version 4.9.3 20150113 (prerelease) (Linaro).

I get these compilation warnings no matter what I compile,
and I chose to ignore them:

WARNING: vmlinux.o(.meminit.text+0x2c):
Section mismatch in reference from the function kasan_pte_populate()
to the function
.init.text:kasan_alloc_block.constprop.7()
The function __meminit kasan_pte_populate() references
a function __init kasan_alloc_block.constprop.7().
If kasan_alloc_block.constprop.7 is only used by kasan_pte_populate then
annotate kasan_alloc_block.constprop.7 with a matching annotation.

WARNING: vmlinux.o(.meminit.text+0x98):
Section mismatch in reference from the function kasan_pmd_populate()
to the function
.init.text:kasan_alloc_block.constprop.7()
The function __meminit kasan_pmd_populate() references
a function __init kasan_alloc_block.constprop.7().
If kasan_alloc_block.constprop.7 is only used by kasan_pmd_populate then
annotate kasan_alloc_block.constprop.7 with a matching annotation.

These KASan outline tests run fine:

kasan test: kmalloc_oob_right out-of-bounds to right
kasan test: kmalloc_oob_left out-of-bounds to left
kasan test: kmalloc_node_oob_right kmalloc_node(): out-of-bounds to right
kasan test: kmalloc_large_oob_rigth kmalloc large allocation:
out-of-bounds to right
kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
kasan test: kmalloc_oob_krealloc_less out-of-bounds after krealloc less
kasan test: kmalloc_oob_16 kmalloc out-of-bounds for 16-bytes access
kasan test: kmalloc_oob_in_memset out-of-bounds in memset
kasan test: kmalloc_uaf use-after-free
kasan test: kmalloc_uaf_memset use-after-free in memset
kasan test: kmalloc_uaf2 use-after-free after another kmalloc
kasan test: kmem_cache_oob out-of-bounds in kmem_cache_alloc

These two tests seem not to trigger KASan BUG()s, and it seems to
be like that on all hardware, so I guess these are the kind of tests
that require GCC 5.0:

kasan test: kasan_stack_oob out-of-bounds on stack
kasan test: kasan_global_oob out-of-bounds global variable


Hardware test targets:

Ux500 (ARMv7):

On Ux500 I get a really slow boot (as expected), and after
enabling the test cases they produce KASan warnings
as expected.

MSM APQ8060 (ARMv7):

Also a real slow boot and the expected KASan warnings when
running the tests.

Integrator/AP (ARMv5):

This one is mounted with an ARMv5 ARM926 tile. It boots nicely
(but takes forever) with KASan and runs all test cases (!) just like
the other platforms, but before reaching userspace this happens:

Unable to handle kernel paging request at virtual address 00021144
pgd = c5a74000
[00021144] *pgd=
Internal error: Oops: 5 [#1] PREEMPT ARM
Modules linked in:
CPU: 0 PID: 24 Comm: modprobe Tainted: GB
4.2.0-rc2-77613-g11c2df68e4a8 #1
Hardware name: ARM Integrator/AP (Device Tree)
task: c69f8cc0 ti: c5a68000 task.ti: c5a68000
PC is at v4wbi_flush_user_tlb_range+0x10/0x4c
LR is at move_page_tables+0x320/0x46c
pc : [<c00182d0>]    lr : [<c00ce9d0>]    psr: 6013
sp : c5a6bd78  ip : c5a7  fp : 9f00
r10: 9eaab000  r9 : ffaab000  r8 : 0093d34f
r7 : c5a782ac  r6 : c5a68000  r5 : 9effe000  r4 : c0900044
r3 : 00021000  r2 : c5a4c000  r1 : 9f00  r0 : 9effe000
Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
Control: 0005317f  Table: 05a74000  DAC: 0015
Process modprobe (pid: 24, stack limit = 0xc5a68190)
Stack: (0xc5a6bd78 to 0xc5a6c000)
bd60:   9f00 2000
bd80:  9f00 c5a4c000   c5a4c020 c5a68000 c5a68004
bda0: c5a4c000 9effe000  c5a4c000 c59a6540 c5a4c004 c5a70034 c59a65cc
bdc0: 9effe000  2000 c00f4558 2000   9f00
bde0: 9eaa9000 c5a7 9eaab000 c5a4c04c  c5a4c000  
be00:    0f4a4438 b0a5485b 8881364f c0910aec 
be20:   c69f8cc0 0f4a4438 c0910aec 0018 9f00 c59a68c0
be40: c69f8cc0 c592db40 c59a6540 c592db50 9f00 c59a68c0 c69f8cc0 00c0
be60: c5a6be68 c015694c c59a6540 0080 c5a4aa80 c59a65d8 c5e9bc00 c592db6c
be80: c6a0da00 c592db40 0001  c59a6540 c00f4f10  2000
bea0: 9f00 c5a4c000 c69f8cc0 c00f5008 0017 c5a6bec4  0008
bec0: 9efff000 c00b5160 c7e937a0 c00a4684  c7e937a0 9efff000 c09089a8
bee0: c09089b0 c59a6540 c0908590 fff8 c59a65d4 c5a68000 c5a68004 c00f4b28
bf00: c59a65c8 0001 0018 c69f8cc0 c5a4ab60 0018 c6a38000 c59a6540
bf20: 0001 c59a65e8 c59a65cc c00f6fec c090476c  c59a65c8 038f
bf40: c5a4c028 c59a65c0 c59a65f0 c5a4c004  c69f8ec0  c6a38000
bf60: c5a11a00 c5a4ab60    

Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-19 Thread Linus Walleij
On Wed, Jul 22, 2015 at 7:54 PM, Andrey Ryabinin a.ryabi...@samsung.com wrote:

 So here is updated version:
 git://github.com/aryabinin/linux.git kasan/arm_v0_1

 The code is still ugly in some places and it probably have some bugs.
 Lightly tested on exynos 5410/5420.

I compiled this for various ARM platforms and tested to boot.
I used GCC version 4.9.3 20150113 (prerelease) (Linaro).

I get these compilation warnings no matter what I compile,
I chose to ignore them:

WARNING: vmlinux.o(.meminit.text+0x2c):
Section mismatch in reference from the function kasan_pte_populate()
to the function
.init.text:kasan_alloc_block.constprop.7()
The function __meminit kasan_pte_populate() references
a function __init kasan_alloc_block.constprop.7().
If kasan_alloc_block.constprop.7 is only used by kasan_pte_populate then
annotate kasan_alloc_block.constprop.7 with a matching annotation.

WARNING: vmlinux.o(.meminit.text+0x98):
Section mismatch in reference from the function kasan_pmd_populate()
to the function
.init.text:kasan_alloc_block.constprop.7()
The function __meminit kasan_pmd_populate() references
a function __init kasan_alloc_block.constprop.7().
If kasan_alloc_block.constprop.7 is only used by kasan_pmd_populate then
annotate kasan_alloc_block.constprop.7 with a matching annotation.

These KASan outline tests run fine:

kasan test: kmalloc_oob_right out-of-bounds to right
kasan test: kmalloc_oob_left out-of-bounds to left
kasan test: kmalloc_node_oob_right kmalloc_node(): out-of-bounds to right
kasan test: kmalloc_large_oob_rigth kmalloc large allocation:
out-of-bounds to right
kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
kasan test: kmalloc_oob_krealloc_less out-of-bounds after krealloc less
kasan test: kmalloc_oob_16 kmalloc out-of-bounds for 16-bytes access
kasan test: kmalloc_oob_in_memset out-of-bounds in memset
kasan test: kmalloc_uaf use-after-free
kasan test: kmalloc_uaf_memset use-after-free in memset
kasan test: kmalloc_uaf2 use-after-free after another kmalloc
kasan test: kmem_cache_oob out-of-bounds in kmem_cache_alloc

These two tests seems to not trigger KASan BUG()s, and seemse to
be like so on all hardware, so I guess it is this kind of test
that requires GCC 5.0:

kasan test: kasan_stack_oob out-of-bounds on stack
kasan test: kasan_global_oob out-of-bounds global variable


Hardware test targets:

Ux500 (ARMv7):

On Ux500 I get a real slow boot (as exepected) and after
enabling the test cases produce KASan warnings
expectedly.

MSM APQ8060 (ARMv7):

Also a real slow boot and the expected KASan warnings when
running the tests.

Integrator/AP (ARMv5):

This one mounted with an ARMv5 ARM926 tile. It boots nicely
(but takes forever) with KASan and run all test cases (!) just like
for the other platforms but before reaching userspace this happens:

Unable to handle kernel paging request at virtual address 00021144
pgd = c5a74000
[00021144] *pgd=
Internal error: Oops: 5 [#1] PREEMPT ARM
Modules linked in:
CPU: 0 PID: 24 Comm: modprobe Tainted: GB
4.2.0-rc2-77613-g11c2df68e4a8 #1
Hardware name: ARM Integrator/AP (Device Tree)
task: c69f8cc0 ti: c5a68000 task.ti: c5a68000
PC is at v4wbi_flush_user_tlb_range+0x10/0x4c
LR is at move_page_tables+0x320/0x46c
pc : [c00182d0]lr : [c00ce9d0]psr: 6013
sp : c5a6bd78  ip : c5a7  fp : 9f00
r10: 9eaab000  r9 : ffaab000  r8 : 0093d34f
r7 : c5a782ac  r6 : c5a68000  r5 : 9effe000  r4 : c0900044
r3 : 00021000  r2 : c5a4c000  r1 : 9f00  r0 : 9effe000
Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
Control: 0005317f  Table: 05a74000  DAC: 0015
Process modprobe (pid: 24, stack limit = 0xc5a68190)
Stack: (0xc5a6bd78 to 0xc5a6c000)
bd60:   9f00 2000
bd80:  9f00 c5a4c000   c5a4c020 c5a68000 c5a68004
bda0: c5a4c000 9effe000  c5a4c000 c59a6540 c5a4c004 c5a70034 c59a65cc
bdc0: 9effe000  2000 c00f4558 2000   9f00
bde0: 9eaa9000 c5a7 9eaab000 c5a4c04c  c5a4c000  
be00:    0f4a4438 b0a5485b 8881364f c0910aec 
be20:   c69f8cc0 0f4a4438 c0910aec 0018 9f00 c59a68c0
be40: c69f8cc0 c592db40 c59a6540 c592db50 9f00 c59a68c0 c69f8cc0 00c0
be60: c5a6be68 c015694c c59a6540 0080 c5a4aa80 c59a65d8 c5e9bc00 c592db6c
be80: c6a0da00 c592db40 0001  c59a6540 c00f4f10  2000
bea0: 9f00 c5a4c000 c69f8cc0 c00f5008 0017 c5a6bec4  0008
bec0: 9efff000 c00b5160 c7e937a0 c00a4684  c7e937a0 9efff000 c09089a8
bee0: c09089b0 c59a6540 c0908590 fff8 c59a65d4 c5a68000 c5a68004 c00f4b28
bf00: c59a65c8 0001 0018 c69f8cc0 c5a4ab60 0018 c6a38000 c59a6540
bf20: 0001 c59a65e8 c59a65cc c00f6fec c090476c  c59a65c8 038f
bf40: c5a4c028 c59a65c0 c59a65f0 c5a4c004  c69f8ec0  c6a38000
bf60: c5a11a00 

Re: [PATCH v2 5/5] arm64: add KASan support

2015-08-19 Thread Andrey Ryabinin
On 08/19/2015 03:14 PM, Linus Walleij wrote:
 On Wed, Jul 22, 2015 at 7:54 PM, Andrey Ryabinin a.ryabi...@samsung.com 
 wrote:
 
 So here is updated version:
 git://github.com/aryabinin/linux.git kasan/arm_v0_1

 The code is still ugly in some places and it probably have some bugs.
 Lightly tested on exynos 5410/5420.
 
 I compiled this for various ARM platforms and tested to boot.
 I used GCC version 4.9.3 20150113 (prerelease) (Linaro).
 
 I get these compilation warnings no matter what I compile,
 I chose to ignore them:
 
 WARNING: vmlinux.o(.meminit.text+0x2c):
 Section mismatch in reference from the function kasan_pte_populate()
 to the function
 .init.text:kasan_alloc_block.constprop.7()
 The function __meminit kasan_pte_populate() references
 a function __init kasan_alloc_block.constprop.7().
 If kasan_alloc_block.constprop.7 is only used by kasan_pte_populate then
 annotate kasan_alloc_block.constprop.7 with a matching annotation.
 
 WARNING: vmlinux.o(.meminit.text+0x98):
 Section mismatch in reference from the function kasan_pmd_populate()
 to the function
 .init.text:kasan_alloc_block.constprop.7()
 The function __meminit kasan_pmd_populate() references
 a function __init kasan_alloc_block.constprop.7().
 If kasan_alloc_block.constprop.7 is only used by kasan_pmd_populate then
 annotate kasan_alloc_block.constprop.7 with a matching annotation.
 
 These KASan outline tests run fine:
 
 kasan test: kmalloc_oob_right out-of-bounds to right
 kasan test: kmalloc_oob_left out-of-bounds to left
 kasan test: kmalloc_node_oob_right kmalloc_node(): out-of-bounds to right
 kasan test: kmalloc_large_oob_rigth kmalloc large allocation:
 out-of-bounds to right
 kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
 kasan test: kmalloc_oob_krealloc_less out-of-bounds after krealloc less
 kasan test: kmalloc_oob_16 kmalloc out-of-bounds for 16-bytes access
 kasan test: kmalloc_oob_in_memset out-of-bounds in memset
 kasan test: kmalloc_uaf use-after-free
 kasan test: kmalloc_uaf_memset use-after-free in memset
 kasan test: kmalloc_uaf2 use-after-free after another kmalloc
 kasan test: kmem_cache_oob out-of-bounds in kmem_cache_alloc
 
 These two tests seems to not trigger KASan BUG()s, and seemse to
 be like so on all hardware, so I guess it is this kind of test
 that requires GCC 5.0:
 
 kasan test: kasan_stack_oob out-of-bounds on stack
 kasan test: kasan_global_oob out-of-bounds global variable
 
 
 Hardware test targets:
 
 Ux500 (ARMv7):
 
 On Ux500 I get a real slow boot (as exepected) and after
 enabling the test cases produce KASan warnings
 expectedly.
 
 MSM APQ8060 (ARMv7):
 
 Also a real slow boot and the expected KASan warnings when
 running the tests.
 
 Integrator/AP (ARMv5):
 
 This one mounted with an ARMv5 ARM926 tile. It boots nicely
 (but takes forever) with KASan and run all test cases (!) just like
 for the other platforms but before reaching userspace this happens:
 

THREAD_SIZE hardcoded in act_mm macro.

This hack should help:

diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
index c671f34..b1765f2 100644
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -32,6 +32,9 @@
.macro  act_mm, rd
bic \rd, sp, #8128
bic \rd, \rd, #63
+#ifdef CONFIG_KASAN
+   bic \rd, \rd, #8192
+#endif
ldr \rd, [\rd, #TI_TASK]
ldr \rd, [\rd, #TSK_ACTIVE_MM]
.endm
---
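
For context, a sketch of the mask arithmetic the hack relies on (my
illustration, not part of the patch): act_mm derives the thread_info base
by clearing the low bits of sp, split across two bic instructions because
an ARM immediate can't encode the whole mask at once. The extra bic assumes
KASan doubles THREAD_SIZE from 8 KB to 16 KB:

/*
 * thread_info = sp & ~(THREAD_SIZE - 1).
 * 8128 (0x1fc0) clears bits 6..12 and 63 (0x3f) clears bits 0..5, which
 * together align sp down to an 8 KB THREAD_SIZE.  With 16 KB stacks,
 * bit 13 (8192) must be cleared as well -- hence "bic \rd, \rd, #8192".
 */
static inline unsigned long thread_info_base(unsigned long sp,
					     unsigned long thread_size)
{
	return sp & ~(thread_size - 1);
}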




 
 I then tested on the Footbridge, another ARMv4 system, the oldest I have,
 SA110-based. This passes decompression and then you may *think* it hangs.
 But it doesn't. It just takes a few minutes to boot with KASan
 instrumentation, then all tests run fine also on this hardware.
 The crash logs scroll by on the physical console.
 
 They keep scrolling forever, however, and are still scrolling as I
 write this. I suspect some real memory-usage bugs are causing it,
 as it is exercising some ages-old code that didn't see much scrutiny
 in recent years.
 

I would suspect some kasan bug here.

BTW, we probably need to introduce a one-shot mode in KASan to prevent such
report spam, i.e. print only the first report and ignore the rest.
The first report is usually the most important one; subsequent reports are
often just noise.
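
A minimal sketch of what such a one-shot mode could look like (untested,
and the names here are made up for illustration):

#include <linux/atomic.h>

static atomic_t kasan_report_seen = ATOMIC_INIT(0);

/* Call at the top of the report path: true only for the first report. */
static bool kasan_report_allowed(void)
{
	return atomic_cmpxchg(&kasan_report_seen, 0, 1) == 0;
}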


 
 Yours,
 Linus Walleij
 


Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-22 Thread Andrey Ryabinin
On 07/22/2015 12:27 AM, Linus Walleij wrote:
> On Tue, Jul 21, 2015 at 4:27 PM, Andrey Ryabinin  
> wrote:
> 
>> I used vexpress. Anyway, it doesn't matter now, since I have an update
>> with a lot of stuff fixed, and it works on hardware.
>> I still need to do some work on it and tomorrow, probably, I will share.
> 
> Ah awesome. I have a stash of ARM boards so I can test it on a
> range of hardware once you feel it's ready.
> 

So here is the updated version:
git://github.com/aryabinin/linux.git kasan/arm_v0_1

The code is still ugly in some places and it probably has some bugs.
Lightly tested on exynos 5410/5420.


> Sorry for pulling stuff out of your hands, people are excited about
> KASan ARM32 as it turns out.
> 
> Yours,
> Linus Walleij
> 

Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-21 Thread Linus Walleij
On Tue, Jul 21, 2015 at 4:27 PM, Andrey Ryabinin  wrote:

> I used vexpress. Anyway, it doesn't matter now, since I have an update
> with a lot of stuff fixed, and it works on hardware.
> I still need to do some work on it and tomorrow, probably, I will share.

Ah awesome. I have a stash of ARM boards so I can test it on a
range of hardware once you feel it's ready.

Sorry for pulling stuff out of your hands, people are excited about
KASan ARM32 as it turns out.

Yours,
Linus Walleij


Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-21 Thread Andrey Ryabinin
On 07/21/2015 01:36 PM, Linus Walleij wrote:
> On Wed, Jun 17, 2015 at 11:32 PM, Andrey Ryabinin
>  wrote:
>> 2015-06-13 18:25 GMT+03:00 Linus Walleij :
>>>
>>> On Fri, Jun 12, 2015 at 8:14 PM, Andrey Ryabinin  
>>> wrote:
 2015-06-11 16:39 GMT+03:00 Linus Walleij :
> On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin  
> wrote:
>
>> This patch adds arch specific code for kernel address sanitizer
>> (see Documentation/kasan.txt).
>
> I looked closer at this again ... I am trying to get KASan up for
> ARM(32) with some tricks and hacks.
>

 I have some patches for that. They still need some polishing, but works 
 for me.
 I could share after I get back to office on Tuesday.
>>>
>>> OK! I'd be happy to test!
>>>
>>
>> I've pushed it here : git://github.com/aryabinin/linux.git kasan/arm_v0
>>
>> It's far from ready. Firstly I've tried it only in qemu and it works.
> 
> Hm what QEMU model are you using? I tried to test it with Versatile
> (the most common) and it kinda boots and hangs:
> 

I used vexpress. Anyway, it doesn't matter now, since I have an update
with a lot of stuff fixed, and it works on hardware.
I still need to do some work on it and tomorrow, probably, I will share.






Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-21 Thread Linus Walleij
On Wed, Jun 17, 2015 at 11:32 PM, Andrey Ryabinin
 wrote:
> 2015-06-13 18:25 GMT+03:00 Linus Walleij :
>>
>> On Fri, Jun 12, 2015 at 8:14 PM, Andrey Ryabinin  
>> wrote:
>> > 2015-06-11 16:39 GMT+03:00 Linus Walleij :
>> >> On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin  
>> >> wrote:
>> >>
>> >>> This patch adds arch specific code for kernel address sanitizer
>> >>> (see Documentation/kasan.txt).
>> >>
>> >> I looked closer at this again ... I am trying to get KASan up for
>> >> ARM(32) with some tricks and hacks.
>> >>
>> >
>> > I have some patches for that. They still need some polishing, but works 
>> > for me.
>> > I could share after I get back to office on Tuesday.
>>
>> OK! I'd be happy to test!
>>
>
> I've pushed it here : git://github.com/aryabinin/linux.git kasan/arm_v0
>
> It's far from ready. Firstly I've tried it only in qemu and it works.

Hm what QEMU model are you using? I tried to test it with Versatile
(the most common) and it kinda boots and hangs:

Memory: 106116K/131072K available (3067K kernel code, 166K rwdata,
864K rodata, 3072K init, 130K bss, 24956K reserved, 0K cma-reserved)
Virtual kernel memory layout:
vector  : 0xffff0000 - 0xffff1000   (   4 kB)
fixmap  : 0xffc00000 - 0xfff00000   (3072 kB)
kasan   : 0x9f000000 - 0xbf000000   ( 512 MB)
vmalloc : 0xc8800000 - 0xff000000   ( 872 MB)
(...)

Looks correct, no highmem on this beast.

Then I get this.

Unable to handle kernel NULL pointer dereference at virtual address 00000130
pgd = c5ea8000
[00000130] *pgd=00000000
Internal error: Oops: 5 [#1] ARM
Modules linked in:
CPU: 0 PID: 19 Comm: modprobe Not tainted 4.1.0-rc8+ #7
Hardware name: ARM-Versatile (Device Tree Support)
task: c5e0b5a0 ti: c5ea0000 task.ti: c5ea0000
PC is at v4wbi_flush_user_tlb_range+0x10/0x4c
LR is at move_page_tables+0x218/0x308
pc : [<c001e870>]    lr : [<c008f230>]    psr: 20000153
sp : c5ea7df0  ip : c5e8c000  fp : ff8ec000
r10: 00bf334f  r9 : c5ead3b0  r8 : 9e8ec000
r7 : 00000001  r6 : 00002000  r5 : 9f000000  r4 : 9effe000
r3 : 00000000  r2 : c5e8a000  r1 : 9f000000  r0 : 9effe000
Flags: nzCv  IRQs on  FIQs off  Mode SVC_32  ISA ARM  Segment user
Control: 00093177  Table: 05ea8000  DAC: 00000015
Process modprobe (pid: 19, stack limit = 0xc5ea0190)
Stack: (0xc5ea7df0 to 0xc5ea8000)
7de0:  9f00 c5e8a000 
7e00:   c5e8a000 9effe000 00c0 c5e8a000 c68c5700 9e8ec000
7e20: c5e8c034 9effe000 9e8ea000 9f00 2000 c00a9384 2000 
7e40: c5e8c000   c5e8a000    
7e60:        
7e80: c5e870e0 5e5e8830 c0701808 0013 c5ea7ec0 c5eac000 c5e870e0 c5e26140
7ea0: c68c5700  c5ea7ec0 c5eac000 c5e870e0 c6a2d8c0 0001 c00e4530
7ec0: c68c5700 0080 c5df2480 9f00  c5ea 000a c00a99ec
7ee0: 0017 c5ea7ef4    c7f10e60 c0377bc0 c0bf3f99
7f00: 000f   c00a9c28 9efff000 c06fa2c8 c68c5700 c06f9eb8
7f20: 0001 fff8 c68c5700  0001 c00a9770 c5e0b5a0 0013
7f40: c5e87ee0 c06ef0c8 c6a6 c00aabd0 c5e8c034   c5e0b790
7f60: c06e8190 c5ddcc60 c5d4c300 003f    
7f80:  c00aad00    c0030f48 c5d4c300 c0030e54
7fa0:    c0014960    
7fc0:        
7fe0:     0013   
[<c001e870>] (v4wbi_flush_user_tlb_range) from [<>] (  (null))
Code: e592c020 e3cd3d7f e3c3303f e593300c (e5933130)
---[ end trace b3c4eba35670ba77 ]---

Yours,
Linus Walleij


Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-17 Thread Andrey Ryabinin
On 07/16/2015 07:03 PM, Catalin Marinas wrote:
> On Thu, Jul 16, 2015 at 06:30:11PM +0300, Andrey Ryabinin wrote:
>>
>> I think this may work, if pud_none(*pud) is replaced with !pud_val(*pud).
>> We can't use pud_none() because with 2-level page tables it's always false, so
>> we will never go down to the pmd level, where swapper_pg_dir is populated.
> 
> The reason I used "do ... while" vs "while" or "for" is so that it gets
> down to the pmd level. The iteration over pgd is always done in the top
> loop via pgd_addr_end while the loops for missing levels (nopud, nopmd)
> are always a single iteration whether we check for pud_none or not. But
> when the level is present, we avoid looping when !pud_none().
> 

Right, dunno what I was thinking.
It seems to work. Lightly tested with every possible CONFIG_PGTABLE_LEVELS.


Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-16 Thread Catalin Marinas
On Thu, Jul 16, 2015 at 06:30:11PM +0300, Andrey Ryabinin wrote:
> On 07/15/2015 07:37 PM, Catalin Marinas wrote:
> > Ok, so simply taking the call out of the loop won't work unless we
> > conditionally define these functions (wouldn't be too bad since we have
> > some #if CONFIG_PGTABLE_LEVELS already introduced by this patch but it
> > would be nicer without).
> > 
> > Anyway, I think we can keep the current iterations but exit early if
> > !pud_none() because it means we already populated it (reworked to match
> > other such patterns throughout the kernel with pgd_populate called from
> > the pud function; and untested):
> > 
> > void kasan_early_pmd_populate(pud_t *pud, unsigned long addr, unsigned long 
> > end)
> > {
> > pmd_t *pmd;
> > unsigned long next;
> > 
> > if (pud_none(*pud))
> > pud_populate(&init_mm, pud, kasan_zero_pmd);
> > 
> > pmd = pmd_offset(pud, addr);
> > do {
> > next = pmd_addr_end(addr, end);
> > kasan_early_pte_populate(pmd, addr, next);
> > } while (pmd++, addr = next, addr != end && pmd_none(*pmd));
> > }
> > 
> > void kasan_early_pud_populate(pgd_t *pgd, unsigned long addr, unsigned long 
> > end)
> > {
> > pud_t *pud;
> > unsigned long next;
> > 
> > if (pgd_none(*pgd))
> > pgd_populate(&init_mm, pgd, kasan_zero_pud);
> > 
> > pud = pud_offset(pgd, addr);
> > do {
> > next = pud_addr_end(addr, end);
> > kasan_early_pmd_populate(pud, addr, next);
> > } while (pud++, addr = next, addr != end && pud_none(*pud));
> > }
> > 
> > Given that we check pud_none() after the first iterations, it covers the
> > lower levels if needed.
> 
> I think this may work, if pud_none(*pud) is replaced with !pud_val(*pud).
> We can't use pud_none() because with 2-level page tables it's always false, so
> we will never go down to the pmd level, where swapper_pg_dir is populated.

The reason I used "do ... while" vs "while" or "for" is so that it gets
down to the pmd level. The iteration over pgd is always done in the top
loop via pgd_addr_end while the loops for missing levels (nopud, nopmd)
are always a single iteration whether we check for pud_none or not. But
when the level is present, we avoid looping when !pud_none().

> But you gave me another idea how we could use p?d_none() and avoid rewriting 
> table entries:
> 
> 
> void kasan_early_pmd_populate(unsigned long start, unsigned long end, pte_t 
> *pte)
> {
>   unsigned long addr = start;
>   long next;
> 
>   do {
>   pgd_t *pgd = pgd_offset_k(addr);
>   pud_t *pud = pud_offset(pgd, addr);
>   pmd_t *pmd = pmd_offset(pud, addr);
> 
>   if (!pmd_none(*pmd))
>   break;
> 
> >   pmd_populate_kernel(&init_mm, pmd, pte);
> >   next = pgd_addr_end(addr, end);
> >   next = pud_addr_end(addr, next);
> >   next = pmd_addr_end(addr, next);
>   } while(addr = next, addr != end);
> }
> 
> void kasan_early_pud_populate(unsigned long start, unsigned long end, pmd_t 
> *pmd)
> {
>   unsigned long addr = start;
>   long next;
> 
>   do {
>   pgd_t *pgd = pgd_offset_k(addr);
>   pud_t *pud = pud_offset(pgd, addr);
> 
>   if (!pud_none(*pud))
>   break;
> 
> >   pud_populate(&init_mm, pud, pmd);
>   next = pud_addr_end(addr, pgd_addr_end(addr, end));
>   } while(addr = next, addr != end);
> }
> 
> 
> void kasan_early_pgd_populate(...)
> {
>   //something similar to above
>   
> }
> 
> static void __init kasan_map_early_shadow(void)
> {
>   kasan_early_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, 
> kasan_zero_pud);
>   kasan_early_pud_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, 
> kasan_zero_pmd);
>   kasan_early_pmd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, 
> kasan_zero_pte);
>   kasan_early_pte_populate();
> }

While this would probably work, you still need #ifdef's since
kasan_zero_pud is not defined with 2 and 3 levels. That's what I
initially thought we should do but since you didn't like the #ifdef's, I
came up with another proposal.

So, I still prefer my suggestion above unless you find a problem with
it.
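
(For reference, the #ifdef's in question already have the shape used by
the patch itself:

#if CONFIG_PGTABLE_LEVELS > 3
pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss;
#endif
#if CONFIG_PGTABLE_LEVELS > 2
pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss;
#endif
pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss;

so the alternative would extend this pattern rather than remove it.)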

-- 
Catalin


Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-16 Thread Andrey Ryabinin
On 07/15/2015 07:37 PM, Catalin Marinas wrote:
> Ok, so simply taking the call out of the loop won't work unless we
> conditionally define these functions (wouldn't be too bad since we have
> some #if CONFIG_PGTABLE_LEVELS already introduced by this patch but it
> would be nicer without).
> 
> Anyway, I think we can keep the current iterations but exit early if
> !pud_none() because it means we already populated it (reworked to match
> other such patterns throughout the kernel with pgd_populate called from
> the pud function; and untested):
> 
> void kasan_early_pmd_populate(pud_t *pud, unsigned long addr, unsigned long 
> end)
> {
>   pmd_t *pmd;
>   unsigned long next;
> 
>   if (pud_none(*pud))
>   pud_populate(&init_mm, pud, kasan_zero_pmd);
> 
>   pmd = pmd_offset(pud, addr);
>   do {
>   next = pmd_addr_end(addr, end);
>   kasan_early_pte_populate(pmd, addr, next);
>   } while (pmd++, addr = next, addr != end && pmd_none(*pmd));
> }
> 
> void kasan_early_pud_populate(pgd_t *pgd, unsigned long addr, unsigned long 
> end)
> {
>   pud_t *pud;
>   unsigned long next;
> 
>   if (pgd_none(*pgd))
>   pgd_populate(&init_mm, pgd, kasan_zero_pud);
> 
>   pud = pud_offset(pgd, addr);
>   do {
>   next = pud_addr_end(addr, end);
>   kasan_early_pmd_populate(pud, addr, next);
>   } while (pud++, addr = next, addr != end && pud_none(*pud));
> }
> 
> Given that we check pud_none() after the first iterations, it covers the
> lower levels if needed.
> 

I think this may work, if pud_none(*pud) is replaced with !pud_val(*pud).
We can't use pud_none() because with 2-level page tables it's always false, so
we will never go down to the pmd level, where swapper_pg_dir is populated.

But you gave me another idea how we could use p?d_none() and avoid rewriting 
table entries:


void kasan_early_pmd_populate(unsigned long start, unsigned long end, pte_t 
*pte)
{
unsigned long addr = start;
long next;

do {
pgd_t *pgd = pgd_offset_k(addr);
pud_t *pud = pud_offset(pgd, addr);
pmd_t *pmd = pmd_offset(pud, addr);

if (!pmd_none(*pmd))
break;

pmd_populate_kernel(&init_mm, pmd, pte);
next = pgd_addr_end(addr, end);
next = pud_addr_end(addr, next);
next = pmd_addr_end(addr, next);
} while(addr = next, addr != end);
}

void kasan_early_pud_populate(unsigned long start, unsigned long end, pmd_t 
*pmd)
{
unsigned long addr = start;
long next;

do {
pgd_t *pgd = pgd_offset_k(addr);
pud_t *pud = pud_offset(pgd, addr);

if (!pud_none(*pud))
break;

pud_populate(&init_mm, pud, pmd);
next = pud_addr_end(addr, pgd_addr_end(addr, end));
} while(addr = next, addr != end);
}


void kasan_early_pgd_populate(...)
{
//something similar to above

}

static void __init kasan_map_early_shadow(void)
{
kasan_early_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, 
kasan_zero_pud);
kasan_early_pud_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, 
kasan_zero_pmd);
kasan_early_pmd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, 
kasan_zero_pte);
kasan_early_pte_populate();
}







Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-15 Thread Catalin Marinas
On Wed, Jul 15, 2015 at 11:55:20AM +0300, Andrey Ryabinin wrote:
> On 07/14/2015 06:04 PM, Catalin Marinas wrote:
> > On Fri, Jul 10, 2015 at 08:11:03PM +0300, Andrey Ryabinin wrote:
> >>>   kasan_early_pte_populate();
> >>>   kasan_early_pmd_populate(..., pte);
> >>>   kasan_early_pud_populate(..., pmd);
> >>>   kasan_early_pgd_populate(..., pud);
> >>>
> >>> (or in reverse order)
> >>
> >> Unless I'm missing something, this will work only with 4-level
> >> page tables. We could do this without repopulation by using
> >> CONFIG_PGTABLE_LEVELS ifdefs.
> > 
> > Or you could move kasan_early_*_populate outside the loop. You already
> > do this for the pte at the beginning of the kasan_map_early_shadow()
> > function (and it probably makes more sense to create a separate
> > kasan_early_pte_populate).
> 
> Ok, let's try to implement that.
> And for example, let's consider CONFIG_PGTABLE_LEVELS=3 case:
> 
>  * pgd_populate() is nop, so kasan_early_pgd_populate() won't do anything.
> 
>  * pud_populate() in kasan_early_pud_populate() actually will setup pgd 
> entries in swapper_pg_dir,
>so pud_populate() should be called for the whole shadow range: 
> [KASAN_SHADOW_START, KASAN_SHADOW_END]
>   IOW: kasan_early_pud_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, 
> kasan_zero_pmd);
>   
>   We will need to slightly change kasan_early_pud_populate() 
> implementation for that
>   (Current implementation implies that [start, end) addresses belong to 
> one pgd)
> 
>   void kasan_early_pud_populate(unsigned long start, unsigned long end, 
> pmd_t *pmd)
>   {
>   unsigned long addr;
>   long next;
> 
>   for (addr = start; addr < end; addr = next) {
>   pud_t *pud = pud_offset(pgd_offset_k(addr), addr);
>   pud_populate(&init_mm, pud, pmd);
>   next = pud_addr_end(addr, pgd_addr_end(addr, end));
>   }
>   }
> 
>   But, wait! In 4-level page tables case this will be the same 
> repopulation as we had before!

Ok, so simply taking the call out of the loop won't work unless we
conditionally define these functions (wouldn't be too bad since we have
some #if CONFIG_PGTABLE_LEVELS already introduced by this patch but it
would be nicer without).

Anyway, I think we can keep the current iterations but exit early if
!pud_none() because it means we already populated it (reworked to match
other such patterns throughout the kernel with pgd_populate called from
the pud function; and untested):

void kasan_early_pmd_populate(pud_t *pud, unsigned long addr, unsigned long end)
{
pmd_t *pmd;
unsigned long next;

if (pud_none(*pud))
pud_populate(&init_mm, pud, kasan_zero_pmd);

pmd = pmd_offset(pud, addr);
do {
next = pmd_addr_end(addr, end);
kasan_early_pte_populate(pmd, addr, next);
} while (pmd++, addr = next, addr != end && pmd_none(*pmd));
}

void kasan_early_pud_populate(pgd_t *pgd, unsigned long addr, unsigned long end)
{
pud_t *pud;
unsigned long next;

if (pgd_none(*pgd))
pgd_populate(&init_mm, pgd, kasan_zero_pud);

pud = pud_offset(pgd, addr);
do {
next = pud_addr_end(addr, end);
kasan_early_pmd_populate(pud, addr, next);
} while (pud++, addr = next, addr != end && pud_none(*pud));
}

Given that we check pud_none() after the first iterations, it covers the
lower levels if needed.
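
For completeness, a possible caller for the two helpers above (equally
untested; just a sketch of how they would be driven):

static void __init kasan_map_early_shadow(void)
{
	unsigned long addr = KASAN_SHADOW_START;
	unsigned long next;
	pgd_t *pgd = pgd_offset_k(addr);

	do {
		next = pgd_addr_end(addr, KASAN_SHADOW_END);
		kasan_early_pud_populate(pgd, addr, next);
	} while (pgd++, addr = next, addr != KASAN_SHADOW_END);
}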

-- 
Catalin


Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-15 Thread Andrey Ryabinin
On 07/14/2015 06:04 PM, Catalin Marinas wrote:
> On Fri, Jul 10, 2015 at 08:11:03PM +0300, Andrey Ryabinin wrote:
 +#if CONFIG_PGTABLE_LEVELS > 3
 +pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss;
 +#endif
 +#if CONFIG_PGTABLE_LEVELS > 2
 +pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss;
 +#endif
 +pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss;
 +
 +static void __init kasan_early_pmd_populate(unsigned long start,
 +  unsigned long end, pud_t *pud)
 +{
 +  unsigned long addr;
 +  unsigned long next;
 +  pmd_t *pmd;
 +
 +  pmd = pmd_offset(pud, start);
 +  for (addr = start; addr < end; addr = next, pmd++) {
 +  pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
 +  next = pmd_addr_end(addr, end);
 +  }
 +}
 +
 +static void __init kasan_early_pud_populate(unsigned long start,
 +  unsigned long end, pgd_t *pgd)
 +{
 +  unsigned long addr;
 +  unsigned long next;
 +  pud_t *pud;
 +
 +  pud = pud_offset(pgd, start);
 +  for (addr = start; addr < end; addr = next, pud++) {
 +  pud_populate(&init_mm, pud, kasan_zero_pmd);
 +  next = pud_addr_end(addr, end);
 +  kasan_early_pmd_populate(addr, next, pud);
 +  }
 +}
 +
 +static void __init kasan_map_early_shadow(pgd_t *pgdp)
 +{
 +  int i;
 +  unsigned long start = KASAN_SHADOW_START;
 +  unsigned long end = KASAN_SHADOW_END;
 +  unsigned long addr;
 +  unsigned long next;
 +  pgd_t *pgd;
 +
 +  for (i = 0; i < PTRS_PER_PTE; i++)
 +  set_pte(&kasan_zero_pte[i], pfn_pte(
 +  virt_to_pfn(kasan_zero_page), PAGE_KERNEL));
 +
 +  pgd = pgd_offset_k(start);
 +  for (addr = start; addr < end; addr = next, pgd++) {
 +  pgd_populate(&init_mm, pgd, kasan_zero_pud);
 +  next = pgd_addr_end(addr, end);
 +  kasan_early_pud_populate(addr, next, pgd);
 +  }
>>>
>>> I prefer to use "do ... while" constructs similar to __create_mapping()
>>> (or zero_{pgd,pud,pmd}_populate as you are more familiar with them).
>>>
>>> But what I don't get here is that you repopulate the pud page for every
>>> pgd (and so on for pmd). You don't need this recursive call all the way
>>> to kasan_early_pmd_populate() but just sequential:
>>
>> This repopulation is needed for 3- and 2-level page table configurations.
>>
>> E.g. for 3-level page tables we need to call pud_populate(&init_mm,
>> pud, kasan_zero_pmd) for each pud in [KASAN_SHADOW_START,
>> KASAN_SHADOW_END] range, this causes repopopulation for 4-level page
>> tables, since we need to pud_populate() only [KASAN_SHADOW_START,
>> KASAN_SHADOW_START + PGDIR_SIZE] range.
> 
> I'm referring to writing the same information multiple times over the
> same entry. kasan_map_early_shadow() goes over each pgd entry and writes
> the address of kasan_zero_pud. That's fine so far. However, in the same
> loop you call kasan_early_pud_populate(). The latter retrieves the pud
> page via pud_offset(pgd, start) which would always be kasan_zero_pud

Not always. E.g. if we have 3-level page tables pud = pgd, pgd_populate() is 
nop, and
pud_populate in fact populates pgd.

pud_offset(pgd, start) will return (swapper_pg_dir + pgd_index(start)) and 
pud_populate()
will fill that entry with the address of kasan_zero_pmd. So we need to 
pud_populate() for
each pgd.

> because that's what you wrote via pgd_populate() in each pgd entry. So
> for each pgd entry, you keep populating the same kasan_zero_pud page
> with pointers to kasan_zero_pmd. And so on for the pmd.
> 

Yes, I perfectly understand that. And this was done intentionally, since I
don't see a way to make this work for all possible CONFIG_PGTABLE_LEVELS
without rewrites or without #ifdefs (and you didn't like them in v1).


>>> kasan_early_pte_populate();
>>> kasan_early_pmd_populate(..., pte);
>>> kasan_early_pud_populate(..., pmd);
>>> kasan_early_pgd_populate(..., pud);
>>>
>>> (or in reverse order)
>>
>> Unless I'm missing something, this will work only with 4-level
>> page tables. We could do this without repopulation by using
>> CONFIG_PGTABLE_LEVELS ifdefs.
> 
> Or you could move kasan_early_*_populate outside the loop. You already
> do this for the pte at the beginning of the kasan_map_early_shadow()
> function (and it probably makes more sense to create a separate
> kasan_early_pte_populate).
> 


Ok, let's try to implement that.
And for example, let's consider CONFIG_PGTABLE_LEVELS=3 case:

 * pgd_populate() is nop, so kasan_early_pgd_populate() won't do anything.

 * pud_populate() in kasan_early_pud_populate() actually will setup pgd entries 
in swapper_pg_dir,
   so pud_populate() should be called for the whole shadow range: 
[KASAN_SHADOW_START, KASAN_SHADOW_END]
IOW: kasan_early_pud_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, kasan_zero_pmd);

   We will need to slightly change kasan_early_pud_populate() implementation for that
   (Current implementation implies that [start, end) addresses belong to one pgd)

   void kasan_early_pud_populate(unsigned long start, unsigned long end, pmd_t *pmd)
   {
           unsigned long addr;
           long next;

           for (addr = start; addr < end; addr = next) {
                   pud_t *pud = pud_offset(pgd_offset_k(addr), addr);
                   pud_populate(&init_mm, pud, pmd);
                   next = pud_addr_end(addr, pgd_addr_end(addr, end));
           }
   }

   But, wait! In 4-level page tables case this will be the same repopulation as we had before!



Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-14 Thread Catalin Marinas
On Fri, Jul 10, 2015 at 08:11:03PM +0300, Andrey Ryabinin wrote:
> >> +#if CONFIG_PGTABLE_LEVELS > 3
> >> +pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss;
> >> +#endif
> >> +#if CONFIG_PGTABLE_LEVELS > 2
> >> +pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss;
> >> +#endif
> >> +pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss;
> >> +
> >> +static void __init kasan_early_pmd_populate(unsigned long start,
> >> +  unsigned long end, pud_t *pud)
> >> +{
> >> +  unsigned long addr;
> >> +  unsigned long next;
> >> +  pmd_t *pmd;
> >> +
> >> +  pmd = pmd_offset(pud, start);
> >> +  for (addr = start; addr < end; addr = next, pmd++) {
> >> +  pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
> >> +  next = pmd_addr_end(addr, end);
> >> +  }
> >> +}
> >> +
> >> +static void __init kasan_early_pud_populate(unsigned long start,
> >> +  unsigned long end, pgd_t *pgd)
> >> +{
> >> +  unsigned long addr;
> >> +  unsigned long next;
> >> +  pud_t *pud;
> >> +
> >> +  pud = pud_offset(pgd, start);
> >> +  for (addr = start; addr < end; addr = next, pud++) {
> >> +  pud_populate(&init_mm, pud, kasan_zero_pmd);
> >> +  next = pud_addr_end(addr, end);
> >> +  kasan_early_pmd_populate(addr, next, pud);
> >> +  }
> >> +}
> >> +
> >> +static void __init kasan_map_early_shadow(pgd_t *pgdp)
> >> +{
> >> +  int i;
> >> +  unsigned long start = KASAN_SHADOW_START;
> >> +  unsigned long end = KASAN_SHADOW_END;
> >> +  unsigned long addr;
> >> +  unsigned long next;
> >> +  pgd_t *pgd;
> >> +
> >> +  for (i = 0; i < PTRS_PER_PTE; i++)
> >> +  set_pte(&kasan_zero_pte[i], pfn_pte(
> >> +  virt_to_pfn(kasan_zero_page), PAGE_KERNEL));
> >> +
> >> +  pgd = pgd_offset_k(start);
> >> +  for (addr = start; addr < end; addr = next, pgd++) {
> >> +  pgd_populate(&init_mm, pgd, kasan_zero_pud);
> >> +  next = pgd_addr_end(addr, end);
> >> +  kasan_early_pud_populate(addr, next, pgd);
> >> +  }
> > 
> > I prefer to use "do ... while" constructs similar to __create_mapping()
> > (or zero_{pgd,pud,pmd}_populate as you are more familiar with them).
> > 
> > But what I don't get here is that you repopulate the pud page for every
> > pgd (and so on for pmd). You don't need this recursive call all the way
> > to kasan_early_pmd_populate() but just sequential:
> 
> This repopulation is needed for 3- and 2-level page table configurations.
> 
> E.g. for 3-level page tables we need to call pud_populate(&init_mm,
> pud, kasan_zero_pmd) for each pud in [KASAN_SHADOW_START,
> KASAN_SHADOW_END] range, this causes repopopulation for 4-level page
> tables, since we need to pud_populate() only [KASAN_SHADOW_START,
> KASAN_SHADOW_START + PGDIR_SIZE] range.

I'm referring to writing the same information multiple times over the
same entry. kasan_map_early_shadow() goes over each pgd entry and writes
the address of kasan_zero_pud. That's fine so far. However, in the same
loop you call kasan_early_pud_populate(). The latter retrieves the pud
page via pud_offset(pgd, start) which would always be kasan_zero_pud
because that's what you wrote via pgd_populate() in each pgd entry. So
for each pgd entry, you keep populating the same kasan_zero_pud page
with pointers to kasan_zero_pmd. And so on for the pmd.

> > kasan_early_pte_populate();
> > kasan_early_pmd_populate(..., pte);
> > kasan_early_pud_populate(..., pmd);
> > kasan_early_pgd_populate(..., pud);
> > 
> > (or in reverse order)
> 
> Unless I'm missing something, this will work only with 4-level
> page tables. We could do this without repopulation by using
> CONFIG_PGTABLE_LEVELS ifdefs.

Or you could move kasan_early_*_populate outside the loop. You already
do this for the pte at the beginning of the kasan_map_early_shadow()
function (and it probably makes more sense to create a separate
kasan_early_pte_populate).
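
A sketch of that separate helper, pulled straight from the loop at the top
of kasan_map_early_shadow() above (untested):

static void __init kasan_early_pte_populate(void)
{
	int i;

	for (i = 0; i < PTRS_PER_PTE; i++)
		set_pte(&kasan_zero_pte[i],
			pfn_pte(virt_to_pfn(kasan_zero_page), PAGE_KERNEL));
}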

-- 
Catalin


Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-10 Thread Andrey Ryabinin
>>  select HAVE_ARCH_KGDB
>>  select HAVE_ARCH_SECCOMP_FILTER
>>  select HAVE_ARCH_TRACEHOOK
>> @@ -119,6 +120,12 @@ config GENERIC_CSUM
>>  config GENERIC_CALIBRATE_DELAY
>>  def_bool y
>>  
>> +config KASAN_SHADOW_OFFSET
>> +hex
>> +default 0xdfff200000000000 if ARM64_VA_BITS_48
>> +default 0xdffffc8000000000 if ARM64_VA_BITS_42
>> +default 0xdfffff9000000000 if ARM64_VA_BITS_39
>> +
> 
> How were these numbers generated? I can probably guess but we need a
> comment in this file and a BUILD_BUG elsewhere (kasan_init.c) if we
> change the memory map and they no longer match.
> 

Ok, will do.

Probably the simplest way to get this number is:
KASAN_SHADOW_END - (1ULL << (64 - 3))

64 is the number of bits in a pointer and 3 is KASAN_SHADOW_SCALE_SHIFT,
so [KASAN_SHADOW_OFFSET, KASAN_SHADOW_END] covers the [0, -1ULL] address range.
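
E.g. for ARM64_VA_BITS_39 (my arithmetic, as a sanity check):

	KASAN_SHADOW_START  = 0xffffffffffffffff << 39   = 0xffffff8000000000
	KASAN_SHADOW_END    = START + (1UL << (39 - 3))  = 0xffffff9000000000
	KASAN_SHADOW_OFFSET = END - (1ULL << 61)         = 0xdfffff9000000000

which matches the Kconfig default above.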


>> diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
>> new file mode 100644
>> index 0000000..65ac50d
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/kasan.h
>> @@ -0,0 +1,24 @@
>> +#ifndef __ASM_KASAN_H
>> +#define __ASM_KASAN_H
>> +
>> +#ifndef __ASSEMBLY__
>> +
>> +#ifdef CONFIG_KASAN
>> +
>> +#include <asm/memory.h>
>> +
>> +/*
>> + * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
>> + * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/8 of kernel virtual addresses.
>> + */
>> +#define KASAN_SHADOW_START  (UL(0xffffffffffffffff) << (VA_BITS))
>> +#define KASAN_SHADOW_END    (KASAN_SHADOW_START + (1UL << (VA_BITS - 
>> 3)))
> 
> Can you define a VA_START in asm/memory.h so that we avoid this long
> list of f's here and in pgtable.h?
> 

Sure, will do.
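
(Presumably just

	#define VA_START	(UL(0xffffffffffffffff) << VA_BITS)

in asm/memory.h -- a sketch of the obvious definition, not taken from any
later revision of the patch.)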

> Another BUILD_BUG we need is to ensure that KASAN_SHADOW_START/END
> covers an exact number of pgd entries, otherwise the logic in
> kasan_init.c can go wrong (it seems to be the case in all VA_BITS
> configurations but just in case we forget about this requirement in the
> future).
> 
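Such checks could look something like this (just a sketch, reusing the
macros above):

	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
		     KASAN_SHADOW_END - (1UL << 61));
	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
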
>> diff --git a/arch/arm64/include/asm/pgtable.h 
>> b/arch/arm64/include/asm/pgtable.h
>> index bd5db28..8700f66 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -40,7 +40,14 @@
>>   *  fixed mappings and modules
>>   */
>>  #define VMEMMAP_SIZE	ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * 
>> sizeof(struct page), PUD_SIZE)
>> +
>> +#ifndef CONFIG_KASAN
>>  #define VMALLOC_START   (UL(0xffffffffffffffff) << VA_BITS)
> 
> And here we could just use VA_START.
> 
>> +#else
>> +#include <asm/kasan.h>
>> +#define VMALLOC_START   KASAN_SHADOW_END
>> +#endif
> 
> We could add a SZ_64K guard page here (just in case, the KASan shadow
> probably never reaches KASAN_SHADOW_END).
> 

Ok.
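
(i.e. something like

	#define VMALLOC_START	(KASAN_SHADOW_END + SZ_64K)

-- a sketch of the guard-page variant, not what this patch currently does.)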

>> diff --git a/arch/arm64/include/asm/string.h 
>> b/arch/arm64/include/asm/string.h
>> index 64d2d48..bff522c 100644
>> --- a/arch/arm64/include/asm/string.h
>> +++ b/arch/arm64/include/asm/string.h
>> @@ -36,17 +36,33 @@ extern __kernel_size_t strnlen(const char *, 
>> __kernel_size_t);
>>  
>>  #define __HAVE_ARCH_MEMCPY
>>  extern void *memcpy(void *, const void *, __kernel_size_t);
>> +extern void *__memcpy(void *, const void *, __kernel_size_t);
>>  
>>  #define __HAVE_ARCH_MEMMOVE
>>  extern void *memmove(void *, const void *, __kernel_size_t);
>> +extern void *__memmove(void *, const void *, __kernel_size_t);
>>  
>>  #define __HAVE_ARCH_MEMCHR
>>  extern void *memchr(const void *, int, __kernel_size_t);
>>  
>>  #define __HAVE_ARCH_MEMSET
>>  extern void *memset(void *, int, __kernel_size_t);
>> +extern void *__memset(void *, int, __kernel_size_t);
>>  
>>  #define __HAVE_ARCH_MEMCMP
>>  extern int memcmp(const void *, const void *, size_t);
>>  
>> +
>> +#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
>> +
>> +/*
>> + * For files that not instrumented (e.g. mm/slub.c) we
> 
> Missing an "are".
> 
>> diff --git a/arch/arm64/include/asm/thread_info.h 
>> b/arch/arm64/include/asm/thread_info.h
>> index dcd06d1..cfe5ea5 100644
>> --- a/arch/arm64/include/asm/thread_info.h
>> +++ b/arch/arm64/include/asm/thread_info.h
>> @@ -24,10 +24,18 @@
>>  #include <linux/compiler.h>
>>  
>>  #ifndef CONFIG_ARM64_64K_PAGES
>> +#ifndef CONFIG_KASAN
>>  #define THREAD_SIZE_ORDER   2
>> +#else
>> +#define THREAD_SIZE_ORDER   3
>> +#endif
>>  #endif
>>  
>> +#ifndef CONFIG_KASAN
>>  #define THREAD_SIZE 16384
>> +#else
>> +#define THREAD_SIZE 32768
>> +#endif
>>  #define THREAD_START_SP (THREAD_SIZE - 16)
> 
> Have you actually seen it failing with the 16KB THREAD_SIZE? You may run
> into other problems with 8 4KB pages per stack.
> 

Actually no, so I guess that we could try with 16K.

I've seen it failing on ARM32 with an 8K stack (we use some old version of
kasan for our ARM kernels), but that's a different story.
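
(For reference on the sizes: with 4K pages, 4096 << 2 = 16384 and
4096 << 3 = 32768, so the THREAD_SIZE_ORDER bump and the THREAD_SIZE bump
above are consistent with each other.)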


>>  #ifndef __ASSEMBLY__
>> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
>> index 19f915e..650b1e8 100644
>> --- a/arch/arm64/kernel/head.S
>> +++ b/arch/arm64/kernel/head.S
>> @@ -486,6 +486,9 @@ __mmap_switched:
>>  str_l   x21, __fdt_pointer, x5  // Save FDT pointer
>>  str_l   x24, memstart_addr, x6  // Save PHYS_OFFSET
>>  mov x29, #0
>> +#ifdef CONFIG_KASAN
>> +b   kasan_early_init
>> +#endif


Re: [PATCH v2 5/5] arm64: add KASan support

2015-07-08 Thread Catalin Marinas
On Fri, May 15, 2015 at 04:59:04PM +0300, Andrey Ryabinin wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7796af4..4cc73cc 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -44,6 +44,7 @@ config ARM64
>   select HAVE_ARCH_AUDITSYSCALL
>   select HAVE_ARCH_BITREVERSE
>   select HAVE_ARCH_JUMP_LABEL
> + select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP

Just curious, why the dependency?

>   select HAVE_ARCH_KGDB
>   select HAVE_ARCH_SECCOMP_FILTER
>   select HAVE_ARCH_TRACEHOOK
> @@ -119,6 +120,12 @@ config GENERIC_CSUM
>  config GENERIC_CALIBRATE_DELAY
>   def_bool y
>  
> +config KASAN_SHADOW_OFFSET
> + hex
> + default 0xdfff200000000000 if ARM64_VA_BITS_48
> + default 0xdffffc8000000000 if ARM64_VA_BITS_42
> + default 0xdfffff9000000000 if ARM64_VA_BITS_39
> +

How were these numbers generated? I can probably guess but we need a
comment in this file and a BUILD_BUG elsewhere (kasan_init.c) if we
change the memory map and they no longer match.

> diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
> new file mode 100644
> index 0000000..65ac50d
> --- /dev/null
> +++ b/arch/arm64/include/asm/kasan.h
> @@ -0,0 +1,24 @@
> +#ifndef __ASM_KASAN_H
> +#define __ASM_KASAN_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#ifdef CONFIG_KASAN
> +
> +#include <asm/memory.h>
> +
> +/*
> + * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
> + * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/8 of kernel virtual addresses.
> + */
> +#define KASAN_SHADOW_START  (UL(0xffffffffffffffff) << (VA_BITS))
> +#define KASAN_SHADOW_END    (KASAN_SHADOW_START + (1UL << (VA_BITS - 3)))

Can you define a VA_START in asm/memory.h so that we avoid this long
list of f's here and in pgtable.h?

Another BUILD_BUG we need is to ensure that KASAN_SHADOW_START/END
covers an exact number of pgd entries, otherwise the logic in
kasan_init.c can go wrong (it seems to be the case in all VA_BITS
configurations but just in case we forget about this requirement in the
future).

> diff --git a/arch/arm64/include/asm/pgtable.h 
> b/arch/arm64/include/asm/pgtable.h
> index bd5db28..8700f66 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -40,7 +40,14 @@
>   *   fixed mappings and modules
>   */
>  #define VMEMMAP_SIZE ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * 
> sizeof(struct page), PUD_SIZE)
> +
> +#ifndef CONFIG_KASAN
>  #define VMALLOC_START	(UL(0xffffffffffffffff) << VA_BITS)

And here we could just use VA_START.

> +#else
> +#include <asm/kasan.h>
> +#define VMALLOC_START	KASAN_SHADOW_END
> +#endif

We could add a SZ_64K guard page here (just in case, the KASan shadow
probably never reaches KASAN_SHADOW_END).

> diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
> index 64d2d48..bff522c 100644
> --- a/arch/arm64/include/asm/string.h
> +++ b/arch/arm64/include/asm/string.h
> @@ -36,17 +36,33 @@ extern __kernel_size_t strnlen(const char *, 
> __kernel_size_t);
>  
>  #define __HAVE_ARCH_MEMCPY
>  extern void *memcpy(void *, const void *, __kernel_size_t);
> +extern void *__memcpy(void *, const void *, __kernel_size_t);
>  
>  #define __HAVE_ARCH_MEMMOVE
>  extern void *memmove(void *, const void *, __kernel_size_t);
> +extern void *__memmove(void *, const void *, __kernel_size_t);
>  
>  #define __HAVE_ARCH_MEMCHR
>  extern void *memchr(const void *, int, __kernel_size_t);
>  
>  #define __HAVE_ARCH_MEMSET
>  extern void *memset(void *, int, __kernel_size_t);
> +extern void *__memset(void *, int, __kernel_size_t);
>  
>  #define __HAVE_ARCH_MEMCMP
>  extern int memcmp(const void *, const void *, size_t);
>  
> +
> +#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
> +
> +/*
> + * For files that not instrumented (e.g. mm/slub.c) we

Missing an "are".

> diff --git a/arch/arm64/include/asm/thread_info.h 
> b/arch/arm64/include/asm/thread_info.h
> index dcd06d1..cfe5ea5 100644
> --- a/arch/arm64/include/asm/thread_info.h
> +++ b/arch/arm64/include/asm/thread_info.h
> @@ -24,10 +24,18 @@
>  #include <linux/compiler.h>
>  
>  #ifndef CONFIG_ARM64_64K_PAGES
> +#ifndef CONFIG_KASAN
>  #define THREAD_SIZE_ORDER	2
> +#else
> +#define THREAD_SIZE_ORDER	3
> +#endif
>  #endif
>  
> +#ifndef CONFIG_KASAN
>  #define THREAD_SIZE  16384
> +#else
> +#define THREAD_SIZE  32768
> +#endif
>  #define THREAD_START_SP  (THREAD_SIZE - 16)

Have you actually seen it failing with the 16KB THREAD_SIZE? You may run
into other problems with 8 4KB pages per stack.

>  #ifndef __ASSEMBLY__
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 19f915e..650b1e8 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -486,6 +486,9 @@ __mmap_switched:
>   str_l   x21, __fdt_pointer, x5  // Save FDT pointer
>   str_l   x24, memstart_addr, x6  // Save PHYS_OFFSET
>   mov x29, #0
> +#ifdef CONFIG_KASAN
> +	b	kasan_early_init
> +#endif
> 	b	start_kernel


Re: [PATCH v2 5/5] arm64: add KASan support

2015-06-17 Thread Andrey Ryabinin
2015-06-13 18:25 GMT+03:00 Linus Walleij :
>
> On Fri, Jun 12, 2015 at 8:14 PM, Andrey Ryabinin  
> wrote:
> > 2015-06-11 16:39 GMT+03:00 Linus Walleij :
> >> On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin  
> >> wrote:
> >>
> >>> This patch adds arch specific code for kernel address sanitizer
> >>> (see Documentation/kasan.txt).
> >>
> >> I looked closer at this again ... I am trying to get KASan up for
> >> ARM(32) with some tricks and hacks.
> >>
> >
> > I have some patches for that. They still need some polishing, but they
> > work for me.
> > I could share after I get back to office on Tuesday.
>
> OK! I'd be happy to test!
>

I've pushed it here : git://github.com/aryabinin/linux.git kasan/arm_v0

It's far from ready. So far I've tried it only in qemu, and there it works.
Today I tried to run it on bare metal (exynos5420), but it hangs
somewhere after early_irq_init().
So it's probably not worth trying/testing yet.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 5/5] arm64: add KASan support

2015-06-13 Thread Linus Walleij
On Fri, Jun 12, 2015 at 8:14 PM, Andrey Ryabinin  wrote:
> 2015-06-11 16:39 GMT+03:00 Linus Walleij :
>> On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin  
>> wrote:
>>
>>> This patch adds arch specific code for kernel address sanitizer
>>> (see Documentation/kasan.txt).
>>
>> I looked closer at this again ... I am trying to get KASan up for
>> ARM(32) with some tricks and hacks.
>>
>
> > I have some patches for that. They still need some polishing, but they
> > work for me.
> I could share after I get back to office on Tuesday.

OK! I'd be happy to test!

I have a WIP patch too, but it was trying to do a physical carveout
at early boot because of misunderstandings as to how KASan works...
Yeah I'm a slow learner.

>>> +/*
>>> + * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
>>> + * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/8 of kernel virtual addresses.
>>> + */
>>> +#define KASAN_SHADOW_START  (UL(0x) << (VA_BITS))
>>> +#define KASAN_SHADOW_END(KASAN_SHADOW_START + (1UL << (VA_BITS - 
>>> 3)))
>>
>> Will this not mean that shadow start to end actually covers *all*
>> virtual addresses including userspace and what not? However a
>> large portion of this shadow memory will be unused because the
>> SHADOW_OFFSET only works for code compiled for the kernel
>> anyway.
>>
>
> SHADOW_OFFSET:SHADOW_END - covers *all* 64bits of virtual addresses.
> SHADOW_OFFSET:SHADOW_START - unused shadow.
> SHADOW_START:SHADOW_END - covers only kernel virtual addresses (used shadow).

Aha. I see now...

>> When working on ARM32 I certainly could not map
>> (1UL << (VA_BITS -3)) i.e. for 32 bit (1UL << 29) bytes (512 MB) of
>
> Why not? We can just take it from TASK_SIZE.

Yeah the idea to steal it from userspace occurred to me too...
with ARM32 having a highmem split in the middle of vmalloc I was
quite stressed when trying to chisel it out from the vmalloc area.

I actually managed to remove all static iomaps from my platform
so I could allocate the KASan memory from high to low addresses
at 0xf7000000-0xff000000, but it puts requirements on all
the ARM32 platforms to get rid of their static maps :P

>> Is it correct that when the pte's, pgd's and pud's are populated
>> KASan really doesn't kick in, it's just done to have some scratch
>> memory with whatever contents so as to do dummy updates
>> for the __asan_loadN() and __asan_storeN() calls, and no checks
>> start until the shadow memory is populated in kasan_init()
>> i.e. there are no KASan checks for any code executing up
>> to that point, just random writes and reads?
>
> Yes, kasan_early_init() sets up scratch memory with whatever contents.
> But KASan checks shadow before kasan_init(), that's the reason why we
> need scratch shadow.
>
> So checks are performed, but KASan doesn't print any reports, because
> init_task has a non-zero kasan_depth flag (see include/linux/init_task.h).
> We check that flag in kasan_report() and print a report iff it has zero value.
>
> In kasan_init(), after the shadow is populated, we enable reports by
> setting kasan_depth to zero.

Aha now I understand how this works! Now I understand
this init_task.kasan_depth = 0 too.

>> Are we just lucky that these functions never do any loads
>> and stores?
>>
>
> We rely on the fact that these functions are static inline and do not call
> other functions from other (instrumented) files.

Aha, makes perfect sense.

I think I understand the code a bit now ... it maps all the KASan
shadow memory to the physical memory of the zero page and lets all
updates hit that memory until the memory manager is up and running,
then you allocate physical memory backing the shadow in
kasan_populate_zero_shadow().
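
(Which also explains why the early phase is quiet: the generic check is
basically just a shadow read -- simplified from mm/kasan/kasan.c, so
treat this as a sketch:

	static bool memory_is_poisoned_1(unsigned long addr)
	{
		s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr);

		if (unlikely(shadow_value)) {
			s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
			return unlikely(last_accessible_byte >= shadow_value);
		}
		return false;
	}

A shadow byte of zero means "fully accessible", and anything nonzero that
gets scribbled into the scratch page early on is muted by the kasan_depth
gate anyway.)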

I misunderstood it such that the backing physical shadow memory
had to be available when we do the early call... no wonder I
got lost.

Yours,
Linus Walleij
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 5/5] arm64: add KASan support

2015-06-12 Thread Andrey Ryabinin
2015-06-11 16:39 GMT+03:00 Linus Walleij :
> On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin  
> wrote:
>
>> This patch adds arch specific code for kernel address sanitizer
>> (see Documentation/kasan.txt).
>
> I looked closer at this again ... I am trying to get KASan up for
> ARM(32) with some tricks and hacks.
>

I have some patches for that. They still need some polishing, but they work
for me.
I could share after I get back to office on Tuesday.


>> +config KASAN_SHADOW_OFFSET
>> +   hex
>> +   default 0xdfff200000000000 if ARM64_VA_BITS_48
>> +   default 0xdffffc8000000000 if ARM64_VA_BITS_42
>> +   default 0xdfffff9000000000 if ARM64_VA_BITS_39
>
> So IIUC these offsets are simply chosen to satisfy the equation
>
> SHADOW = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
> + KASAN_SHADOW_OFFSET;
>
> For all memory that needs to be covered, i.e. kernel text+data,
> modules text+data, any other kernel-mode running code+data.
>
> And it needs to be statically assigned like this because the offset
> is used at compile time.
>
> Atleast that is how I understood it... correct me if wrong.
> (Dunno if all is completely obvious to everyone else in the world...)
>

I think you understood this right.
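
(Quick worked example for VA_BITS_39, my own arithmetic rather than
anything from the patch: the first kernel address is 0xffffff8000000000,
and

	(0xffffff8000000000 >> 3) + 0xdfffff9000000000 = 0xffffff8000000000

i.e. the shadow of the very first kernel byte lands exactly on
KASAN_SHADOW_START.)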

>> +/*
>> + * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
>> + * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/8 of kernel virtual addresses.
>> + */
>> +#define KASAN_SHADOW_START  (UL(0xffffffffffffffff) << (VA_BITS))
>> +#define KASAN_SHADOW_END    (KASAN_SHADOW_START + (1UL << (VA_BITS - 
>> 3)))
>
> Will this not mean that shadow start to end actually covers *all*
> virtual addresses including userspace and what not? However a
> large portion of this shadow memory will be unused because the
> SHADOW_OFFSET only works for code compiled for the kernel
> anyway.
>

SHADOW_OFFSET:SHADOW_END - covers *all* 64bits of virtual addresses.
SHADOW_OFFSET:SHADOW_START - unused shadow.
SHADOW_START:SHADOW_END - covers only kernel virtual addresses (used shadow).


> When working on ARM32 I certainly could not map
> (1UL << (VA_BITS -3)) i.e. for 32 bit (1UL << 29) bytes (512 MB) of

Why not? We can just take it from TASK_SIZE.

> virtual memory for shadow, instead I had to restrict it to the size that
> actually maps the memory used by the kernel.
>

That should work too.

> I tried shrinking it down but it crashed on me so tell me how
> wrong I am ... :)
>
> But here comes the real tricks, where I need some help to
> understand the patch set, maybe some comments should be
> inserted here and there to ease understanding:
>

...

>> +
>> +void __init kasan_early_init(void)
>> +{
>> +   kasan_map_early_shadow(swapper_pg_dir);
>> +   start_kernel();
>> +}
>
> So as far as I can see, kasan_early_init() needs to be called before
> we even run start_kernel() because every memory access would
> crash unless the MMU is set up for the shadow memory.
>

 kasan_early_init() should be called before *any* instrumented function.

> Is it correct that when the pte's, pgd's and pud's are populated
> KASan really doesn't kick in, it's just done to have some scratch
> memory with whatever contents so as to do dummy updates
> for the __asan_loadN() and __asan_storeN() calls, and no checks
> start until the shadow memory is populated in kasan_init()
> i.e. there are no KASan checks for any code executing up
> to that point, just random writes and reads?
>

Yes, kasan_early_init() sets up scratch memory with whatever contents.
But KASan checks shadow before kasan_init(), that's the reason why we
need scratch shadow.
So checks are performed, but KASan doesn't print any reports, because
init_task has a non-zero kasan_depth flag (see include/linux/init_task.h).
We check that flag in kasan_report() and print a report iff it has zero value.

In kasan_init(), after the shadow is populated, we enable reports by
setting kasan_depth to zero.
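
The gate itself is tiny -- roughly, simplified from mm/kasan and
include/linux/init_task.h (a sketch, not the exact lines):

	/* init_task starts life with reports disabled: */
	.kasan_depth	= 1,

	/* mm/kasan only reports when the depth is zero: */
	static inline bool kasan_enabled(void)
	{
		return !current->kasan_depth;
	}

and the arch kasan_init() finishes with init_task.kasan_depth = 0;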


> Also, this code under kasan_early_init(), must that not be
> written extremely carefully to avoid any loads and stores?
> I.e. even if this file is obviously compiled with
> KASAN_SANITIZE_kasan_init.o := n so that it is not
> instrumented, I'm thinking about the functions it is calling,
> like set_pte(), pgd_populate(), pmd_offset() etc etc.
>

These functions are not instrumented either, because they are static
and located in headers.
So these functions end up in the kasan_init.o translation unit and are
compiled without -fsanitize=kernel-address.
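
(So the combination that makes this safe is the Makefile line you quoted,

	KASAN_SANITIZE_kasan_init.o := n

plus the fact that set_pte()/pgd_populate()/etc. are static inlines pulled
into that same translation unit; had they been out-of-line functions in
another file, they would still have been instrumented.)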

> Are we just lucky that these functions never do any loads
> and stores?
>

We rely on the fact that these functions are static inline and do not call
other functions from other (instrumented) files.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 5/5] arm64: add KASan support

2015-06-11 Thread Linus Walleij
On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin  wrote:

> This patch adds arch specific code for kernel address sanitizer
> (see Documentation/kasan.txt).

I looked closer at this again ... I am trying to get KASan up for
ARM(32) with some tricks and hacks.

> +config KASAN_SHADOW_OFFSET
> +   hex
> +   default 0xdfff200000000000 if ARM64_VA_BITS_48
> +   default 0xdffffc8000000000 if ARM64_VA_BITS_42
> +   default 0xdfffff9000000000 if ARM64_VA_BITS_39

So IIUC these offsets are simply chosen to satisfy the equation

SHADOW = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+ KASAN_SHADOW_OFFSET;

For all memory that needs to be covered, i.e. kernel text+data,
modules text+data, any other kernel-mode running code+data.

And it needs to be statically assigned like this because the offset
is used at compile time.

At least that is how I understood it... correct me if wrong.
(Dunno if all is completely obvious to everyone else in the world...)
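
(For the compile-time side, scripts/Makefile.kasan hands the offset
straight to gcc, something along the lines of

	CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
			-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) ...)

-- quoting from memory, so check the exact incantation in the tree.)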

> +/*
> + * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
> + * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/8 of kernel virtual addresses.
> + */
> +#define KASAN_SHADOW_START  (UL(0xffffffffffffffff) << (VA_BITS))
> +#define KASAN_SHADOW_END    (KASAN_SHADOW_START + (1UL << (VA_BITS - 3)))

Will this not mean that shadow start to end actually covers *all*
virtual addresses including userspace and what not? However a
large portion of this shadow memory will be unused because the
SHADOW_OFFSET only works for code compiled for the kernel
anyway.

When working on ARM32 I certainly could not map
(1UL << (VA_BITS -3)) i.e. for 32 bit (1UL << 29) bytes (512 MB) of
virtual memory for shadow, instead I had to restrict it to the size that
actually maps the memory used by the kernel.

I tried shrinking it down but it crashed on me so tell me how
wrong I am ... :)

But here comes the real tricks, where I need some help to
understand the patch set, maybe some comments should be
inserted here and there to ease understanding:

> +++ b/arch/arm64/mm/kasan_init.c
> @@ -0,0 +1,143 @@
> +#include <linux/kasan.h>
> +#include <linux/kernel.h>
> +#include <linux/memblock.h>
> +#include <linux/start_kernel.h>
> +
> +#include <asm/page.h>
> +#include <asm/pgalloc.h>
> +#include <asm/pgtable.h>
> +#include <asm/tlbflush.h>
> +
> +unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
> +static pgd_t tmp_page_table[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
> +
> +#if CONFIG_PGTABLE_LEVELS > 3
> +pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss;
> +#endif
> +#if CONFIG_PGTABLE_LEVELS > 2
> +pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss;
> +#endif
> +pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss;
> +
> +static void __init kasan_early_pmd_populate(unsigned long start,
> +   unsigned long end, pud_t *pud)
> +{
> +   unsigned long addr;
> +   unsigned long next;
> +   pmd_t *pmd;
> +
> +   pmd = pmd_offset(pud, start);
> +   for (addr = start; addr < end; addr = next, pmd++) {
> +   pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
> +   next = pmd_addr_end(addr, end);
> +   }
> +}
> +
> +static void __init kasan_early_pud_populate(unsigned long start,
> +   unsigned long end, pgd_t *pgd)
> +{
> +   unsigned long addr;
> +   unsigned long next;
> +   pud_t *pud;
> +
> +   pud = pud_offset(pgd, start);
> +   for (addr = start; addr < end; addr = next, pud++) {
> +   pud_populate(&init_mm, pud, kasan_zero_pmd);
> +   next = pud_addr_end(addr, end);
> +   kasan_early_pmd_populate(addr, next, pud);
> +   }
> +}
> +
> +static void __init kasan_map_early_shadow(pgd_t *pgdp)
> +{
> +   int i;
> +   unsigned long start = KASAN_SHADOW_START;
> +   unsigned long end = KASAN_SHADOW_END;
> +   unsigned long addr;
> +   unsigned long next;
> +   pgd_t *pgd;
> +
> +   for (i = 0; i < PTRS_PER_PTE; i++)
> +   set_pte(&kasan_zero_pte[i], pfn_pte(
> +   virt_to_pfn(kasan_zero_page), PAGE_KERNEL));
> +
> +   pgd = pgd_offset_k(start);
> +   for (addr = start; addr < end; addr = next, pgd++) {
> +   pgd_populate(&init_mm, pgd, kasan_zero_pud);
> +   next = pgd_addr_end(addr, end);
> +   kasan_early_pud_populate(addr, next, pgd);
> +   }
> +}
> +
> +void __init kasan_early_init(void)
> +{
> +   kasan_map_early_shadow(swapper_pg_dir);
> +   start_kernel();
> +}

So as far as I can see, kasan_early_init() needs to be called before
we even run start_kernel() because every memory access would
crash unless the MMU is set up for the shadow memory.

Is it correct that when the pte's, pgd's and pud's are populated
KASan really doesn't kick in, it's just done to have some scratch
memory with whatever contents so as to do dummy updates
for the __asan_loadN() and __asan_storeN() calls, and no checks
start until the shadow memory is populated in kasan_init()
i.e. there are no KASan checks for any code executing up
to that point, just random writes and reads?
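
(For reference, the out-of-line hooks themselves are thin wrappers --
roughly, simplified from mm/kasan/kasan.c:

	void __asan_load8(unsigned long addr)
	{
		check_memory_region(addr, 8, false);
	}

	void __asan_store8(unsigned long addr)
	{
		check_memory_region(addr, 8, true);
	}

and check_memory_region() only reads the shadow, so as long as the scratch
shadow reads back as zero every access looks valid.)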


Re: [PATCH v2 5/5] arm64: add KASan support

2015-05-27 Thread Linus Walleij
On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin  wrote:

> This patch adds arch specific code for kernel address sanitizer
> (see Documentation/kasan.txt).

OK, fixed with a newer GCC (4.9.3, so still just KASAN_OUTLINE); compiled
and booted on the ARM Juno Development System:

kasan test: kmalloc_large_oob_rigth kmalloc large allocation:
out-of-bounds to right
==
BUG: KASan: out of bounds access in kmalloc_large_oob_rigth+0x60/0x78
at addr ffc06516a00a
Write of size 1 by task swapper/0/1
page:ffbdc3945a00 count:1 mapcount:0 mapping:  (null) index:0x0
flags: 0x4000(head)
page dumped because: kasan: bad access detected
CPU: 2 PID: 1 Comm: swapper/0 Tainted: GB   4.1.0-rc4+ #9
Hardware name: ARM Juno development board (r0) (DT)
Call trace:
[] dump_backtrace+0x0/0x15c
[] show_stack+0x10/0x1c
[] dump_stack+0xac/0x104
[] kasan_report_error+0x3e4/0x400
[] kasan_report+0x40/0x4c
[] __asan_store1+0x70/0x78
[] kmalloc_large_oob_rigth+0x5c/0x78
[] kmalloc_tests_init+0x14/0x4c
[] do_one_initcall+0xa0/0x1f4
[] kernel_init_freeable+0x1ec/0x294
[] kernel_init+0xc/0xec
Memory state around the buggy address:
 ffc065169f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffc065169f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffc06516a000: 00 02 fe fe fe fe fe fe fe fe fe fe fe fe fe fe
  ^
 ffc06516a080: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
 ffc06516a100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
==
kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
==
BUG: KASan: out of bounds access in
kmalloc_oob_krealloc_more+0xa0/0xc0 at addr ffc06501cd93
Write of size 1 by task swapper/0/1
=
BUG kmalloc-64 (Tainted: GB  ): kasan: bad access detected
-

INFO: Allocated in kmalloc_oob_krealloc_more+0x48/0xc0 age=4 cpu=2 pid=1
alloc_debug_processing+0x170/0x17c
__slab_alloc.isra.59.constprop.61+0x34c/0x36c
kmem_cache_alloc+0x1a4/0x1e0
kmalloc_oob_krealloc_more+0x44/0xc0
kmalloc_tests_init+0x18/0x4c
do_one_initcall+0xa0/0x1f4
kernel_init_freeable+0x1ec/0x294
kernel_init+0xc/0xec
ret_from_fork+0xc/0x50
INFO: Slab 0xffbdc3940700 objects=21 used=19 fp=0xffc06501d080
flags=0x4080
INFO: Object 0xffc06501cd80 @offset=3456 fp=0xffc06501cf00

Bytes b4 ffc06501cd70: 00 08 00 00 08 08 01 01 00 00 00 00 02 10
00 00  
Object ffc06501cd80: 00 cf 01 65 c0 ff ff ff 01 04 0c 00 01 04 10
c1  ...e
Object ffc06501cd90: 00 82 60 28 58 01 04 43 98 48 48 24 01 81 b4
40  ..`(X..C.HH$...@
Object ffc06501cda0: 00 80 09 0a 69 a1 3d 82 08 01 34 65 21 31 b0
00  i.=...4e!1..
Object ffc06501cdb0: 04 42 4d a7 10 26 18 52 27 23 c2 1e 08 01 40
81  .BM..&.R'#@.
Padding ffc06501cef0: 81 20 00 50 00 08 00 0b 00 0c 50 40 01 48 40
42  . .P..P@.H@B
CPU: 2 PID: 1 Comm: swapper/0 Tainted: GB   4.1.0-rc4+ #9
Hardware name: ARM Juno development board (r0) (DT)
Call trace:
[] dump_backtrace+0x0/0x15c
[] show_stack+0x10/0x1c
[] dump_stack+0xac/0x104
[] print_trailer+0xdc/0x140
[] object_err+0x38/0x4c
[] kasan_report_error+0x1b0/0x400
[] kasan_report+0x40/0x4c
[] __asan_store1+0x70/0x78
[] kmalloc_oob_krealloc_more+0x9c/0xc0
[] kmalloc_tests_init+0x18/0x4c
[] do_one_initcall+0xa0/0x1f4
[] kernel_init_freeable+0x1ec/0x294
[] kernel_init+0xc/0xec
Memory state around the buggy address:
 ffc06501cc80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffc06501cd00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffc06501cd80: 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc fc
 ^
 ffc06501ce00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffc06501ce80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc

(etc)

This is how it should look I guess, so:
Tested-by: Linus Walleij 

Now I have to fix all the naturally occurring KASan OOB bugs
that started to appear in my boot crawl :O

Yours,
Linus Walleij
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 5/5] arm64: add KASan support

2015-05-27 Thread Linus Walleij
On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin a.ryabi...@samsung.com wrote:

 This patch adds arch specific code for kernel address sanitizer
 (see Documentation/kasan.txt).

OK fixed a newer GCC (4.9.3, so still just KASAN_OUTLINE), compiled
and booted on the ARM Juno Development System:

kasan test: kmalloc_large_oob_rigth kmalloc large allocation:
out-of-bounds to right
==
BUG: KASan: out of bounds access in kmalloc_large_oob_rigth+0x60/0x78
at addr ffc06516a00a
Write of size 1 by task swapper/0/1
page:ffbdc3945a00 count:1 mapcount:0 mapping:  (null) index:0x0
flags: 0x4000(head)
page dumped because: kasan: bad access detected
CPU: 2 PID: 1 Comm: swapper/0 Tainted: GB   4.1.0-rc4+ #9
Hardware name: ARM Juno development board (r0) (DT)
Call trace:
[ffc8aea8] dump_backtrace+0x0/0x15c
[ffc8b014] show_stack+0x10/0x1c
[ffc00080997c] dump_stack+0xac/0x104
[ffc0001ea4d8] kasan_report_error+0x3e4/0x400
[ffc0001ea5dc] kasan_report+0x40/0x4c
[ffc0001e9a8c] __asan_store1+0x70/0x78
[ffc000a5ae78] kmalloc_large_oob_rigth+0x5c/0x78
[ffc000a5b6c0] kmalloc_tests_init+0x14/0x4c
[ffc82940] do_one_initcall+0xa0/0x1f4
[ffc000a3bdbc] kernel_init_freeable+0x1ec/0x294
[ffc000804c5c] kernel_init+0xc/0xec
Memory state around the buggy address:
 ffc065169f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffc065169f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffc06516a000: 00 02 fe fe fe fe fe fe fe fe fe fe fe fe fe fe
  ^
 ffc06516a080: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
 ffc06516a100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
==
kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
==
BUG: KASan: out of bounds access in
kmalloc_oob_krealloc_more+0xa0/0xc0 at addr ffc06501cd93
Write of size 1 by task swapper/0/1
=
BUG kmalloc-64 (Tainted: GB  ): kasan: bad access detected
-

INFO: Allocated in kmalloc_oob_krealloc_more+0x48/0xc0 age=4 cpu=2 pid=1
alloc_debug_processing+0x170/0x17c
__slab_alloc.isra.59.constprop.61+0x34c/0x36c
kmem_cache_alloc+0x1a4/0x1e0
kmalloc_oob_krealloc_more+0x44/0xc0
kmalloc_tests_init+0x18/0x4c
do_one_initcall+0xa0/0x1f4
kernel_init_freeable+0x1ec/0x294
kernel_init+0xc/0xec
ret_from_fork+0xc/0x50
INFO: Slab 0xffbdc3940700 objects=21 used=19 fp=0xffc06501d080
flags=0x4080
INFO: Object 0xffc06501cd80 @offset=3456 fp=0xffc06501cf00

Bytes b4 ffc06501cd70: 00 08 00 00 08 08 01 01 00 00 00 00 02 10
00 00  
Object ffc06501cd80: 00 cf 01 65 c0 ff ff ff 01 04 0c 00 01 04 10
c1  ...e
Object ffc06501cd90: 00 82 60 28 58 01 04 43 98 48 48 24 01 81 b4
40  ..`(X..C.HH$...@
Object ffc06501cda0: 00 80 09 0a 69 a1 3d 82 08 01 34 65 21 31 b0
00  i.=...4e!1..
Object ffc06501cdb0: 04 42 4d a7 10 26 18 52 27 23 c2 1e 08 01 40
81  .BM...R'#@.
Padding ffc06501cef0: 81 20 00 50 00 08 00 0b 00 0c 50 40 01 48 40
42  . .P..P@.H@B
CPU: 2 PID: 1 Comm: swapper/0 Tainted: GB   4.1.0-rc4+ #9
Hardware name: ARM Juno development board (r0) (DT)
Call trace:
[ffc8aea8] dump_backtrace+0x0/0x15c
[ffc8b014] show_stack+0x10/0x1c
[ffc00080997c] dump_stack+0xac/0x104
[ffc0001e3940] print_trailer+0xdc/0x140
[ffc0001e8384] object_err+0x38/0x4c
[ffc0001ea2a4] kasan_report_error+0x1b0/0x400
[ffc0001ea5dc] kasan_report+0x40/0x4c
[ffc0001e9a8c] __asan_store1+0x70/0x78
[ffc000a5b3a4] kmalloc_oob_krealloc_more+0x9c/0xc0
[ffc000a5b6c4] kmalloc_tests_init+0x18/0x4c
[ffc82940] do_one_initcall+0xa0/0x1f4
[ffc000a3bdbc] kernel_init_freeable+0x1ec/0x294
[ffc000804c5c] kernel_init+0xc/0xec
Memory state around the buggy address:
 ffc06501cc80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffc06501cd00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffc06501cd80: 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc fc
 ^
 ffc06501ce00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffc06501ce80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc

(etc)

This is how it should look, I guess, so:
Tested-by: Linus Walleij linus.wall...@linaro.org

Now I have to fix all the naturally occurring KASan OOB bugs
that started to appear in my boot crawl :O

Yours,
Linus Walleij
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Re: [PATCH v2 5/5] arm64: add KASan support

2015-05-26 Thread Linus Walleij
On Tue, May 26, 2015 at 4:22 PM, Andrey Ryabinin  wrote:
> On 05/26/2015 05:12 PM, Andrey Ryabinin wrote:
>> On 05/26/2015 04:35 PM, Linus Walleij wrote:
>>> I wonder where the problem lies, any hints where to start looking
>>> to fix this?
>>>
>>
>> I suspect that your compiler lacks -fsanitize=kernel-address support.
>> It seems that GCC 4.9.2 doesn't support -fsanitize=address/kernel-address
>> on aarch64.
>>
>
> In that case you should get something like this, during kernel build:
> scripts/Makefile.kasan:17: Cannot use CONFIG_KASAN: 
> -fsanitize=kernel-address is not supported by compiler

Aha yep that's it when I look closer...

I'm going back and rebuilding my compiler. May as well do a trunk
5.0 build and try to get KASAN_INLINE working while I'm at it.
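
(For context, since __asan_store1+0x70/0x78 frames appear in the dumps above:
with CONFIG_KASAN_OUTLINE the compiler emits a call like __asan_store1()
before every one-byte store, while KASAN_INLINE open-codes the same shadow
check at each access site, trading kernel size for speed. Below is a rough,
self-contained sketch of that check; the tiny shadow array and the
kasan_report() stub stand in for the kernel's real shadow mapping and
reporting code.)

#include <stdio.h>

static signed char shadow[16];               /* stand-in for real shadow memory */
#define SHADOW_OF(addr) shadow[((addr) >> 3) & 15]

static void kasan_report(unsigned long addr) /* stub for the kernel's report */
{
	printf("BUG: KASan: bad 1-byte write at 0x%lx\n", addr);
}

/* Roughly what -fsanitize=kernel-address inserts before "*p = x;" in
 * outline mode. */
static void __asan_store1(unsigned long addr)
{
	signed char s = SHADOW_OF(addr);

	/* 0: whole granule valid; 1..7: only the first s bytes valid;
	 * negative: poisoned (redzone, freed memory, ...). */
	if (s != 0 && (signed char)(addr & 7) >= s)
		kasan_report(addr);
}

int main(void)
{
	SHADOW_OF(8) = 2;  /* granule 8..15: first two bytes valid */
	__asan_store1(9);  /* offset 1 < 2: silent */
	__asan_store1(10); /* offset 2 >= 2: reported */
	return 0;
}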

Yours,
Linus Walleij
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 5/5] arm64: add KASan support

2015-05-26 Thread Andrey Ryabinin
On 05/26/2015 05:12 PM, Andrey Ryabinin wrote:
> On 05/26/2015 04:35 PM, Linus Walleij wrote:
>> I wonder where the problem lies, any hints where to start looking
>> to fix this?
>>
> 
> I suspect that your compiler lacks -fsanitize=kernel-address support.
> It seems that GCC 4.9.2 doesn't support -fsanitize=address/kernel-address on
> aarch64.
> 

In that case you should get something like this, during kernel build:
scripts/Makefile.kasan:17: Cannot use CONFIG_KASAN: 
-fsanitize=kernel-address is not supported by compiler


Also you may check your gcc by compiling a simple program:
$ cat test.c
int main(void) {
	return 0;
}

$ gcc -fsanitize=kernel-address test.c


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 5/5] arm64: add KASan support

2015-05-26 Thread Andrey Ryabinin
On 05/26/2015 04:35 PM, Linus Walleij wrote:
> On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin  
> wrote:
> 
> And then at boot I just get this:
> 
> kasan test: kmalloc_oob_right out-of-bounds to right
> kasan test: kmalloc_oob_left out-of-bounds to left
> kasan test: kmalloc_node_oob_right kmalloc_node(): out-of-bounds to right
> kasan test: kmalloc_large_oob_rigth kmalloc large allocation:
> out-of-bounds to right
> kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
> kasan test: kmalloc_oob_krealloc_less out-of-bounds after krealloc less
> kasan test: kmalloc_oob_16 kmalloc out-of-bounds for 16-bytes access
> kasan test: kmalloc_oob_in_memset out-of-bounds in memset
> kasan test: kmalloc_uaf use-after-free
> kasan test: kmalloc_uaf_memset use-after-free in memset
> kasan test: kmalloc_uaf2 use-after-free after another kmalloc
> kasan test: kmem_cache_oob out-of-bounds in kmem_cache_alloc
> kasan test: kasan_stack_oob out-of-bounds on stack
> kasan test: kasan_global_oob out-of-bounds global variable
> 
> W00t, no nice KASan warnings (and warnings are exactly what I expected to see).
> 
> This is my compiler by the way:
> $ arm-linux-gnueabihf-gcc --version
> arm-linux-gnueabihf-gcc (crosstool-NG linaro-1.13.1-4.9-2014.09 -
> Linaro GCC 4.9-2014.09) 4.9.2 20140904 (prerelease)
> 
> I did the same exercise on the foundation model (FVP), which I guess
> is what you developed the patch set on, because there I got
> nice KASan dumps:
> 

Those aren't KASan dumps. That is SLUB debug output.
KASan warnings start with the
"BUG: KASan: use after free/out of bounds access "
line.

> I wonder where the problem lies, any hints where to start looking
> to fix this?
> 

I suspect that your compiler lacks -fsanitize=kernel-address support.
It seems that GCC 4.9.2 doesn't support -fsanitize=address/kernel-address on
aarch64.

I also tested this patchset on Cavium ThunderX and on FVP, and didn't observe
any problems.

> Yours,
> Linus Walleij
> 

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 5/5] arm64: add KASan support

2015-05-26 Thread Linus Walleij
On Fri, May 15, 2015 at 3:59 PM, Andrey Ryabinin  wrote:

> This patch adds arch specific code for kernel address sanitizer
> (see Documentation/kasan.txt).

I'm trying to test this on the Juno hardware (39 VA bits).

I get this at boot:

Virtual kernel memory layout:
kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)

Nice, kasan is shadowing vmem perfectly. It also appears
to be shadowing itself, but whatever.
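
(The arithmetic behind that layout: with one shadow byte per eight bytes of
memory, the kasan window only needs one eighth of the 2^39-byte kernel VA
space, and the address-to-shadow translation is just a shift plus a constant.
A minimal sketch, with KASAN_SHADOW_OFFSET as a made-up placeholder rather
than the real arm64 constant:)

#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 3                /* 8 bytes per shadow byte */
#define KASAN_SHADOW_OFFSET 0xdffffc0000000000UL  /* placeholder value */

/* kasan_mem_to_shadow(), in spirit: shift right by three and rebase the
 * result into the shadow window. */
static unsigned long mem_to_shadow(unsigned long addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
	unsigned long va_bits = 39;

	printf("shadow size: %lu GB for a 2^%lu-byte kernel VA space\n",
	       (1UL << va_bits) / 8 / (1UL << 30), va_bits);
	printf("shadow of 0x1000: 0x%lx\n", mem_to_shadow(0x1000UL));
	return 0;
}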

I enable CONFIG_KASAN, CONFIG_KASAN_OUTLINE,
CONFIG_STACKTRACE, CONFIG_SLUB_DEBUG_ON, and
CONFIG_TEST_KASAN.

I patch the test like this because I'm not using any loadable
modules:

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 39f24d6721e5..b3353dbe5f58 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -45,7 +45,7 @@ endchoice

 config TEST_KASAN
tristate "Module for testing kasan for bug detection"
-   depends on m && KASAN
+   depends on KASAN
help
  This is a test module doing various nasty things like
  out of bounds accesses, use after free. It is useful for testing
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 098c08eddfab..fb54486eacd6 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -273,5 +273,5 @@ static int __init kmalloc_tests_init(void)
return -EAGAIN;
 }

-module_init(kmalloc_tests_init);
+late_initcall(kmalloc_tests_init);
 MODULE_LICENSE("GPL");

And then at boot I just get this:

kasan test: kmalloc_oob_right out-of-bounds to right
kasan test: kmalloc_oob_left out-of-bounds to left
kasan test: kmalloc_node_oob_right kmalloc_node(): out-of-bounds to right
kasan test: kmalloc_large_oob_rigth kmalloc large allocation:
out-of-bounds to right
kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
kasan test: kmalloc_oob_krealloc_less out-of-bounds after krealloc less
kasan test: kmalloc_oob_16 kmalloc out-of-bounds for 16-bytes access
kasan test: kmalloc_oob_in_memset out-of-bounds in memset
kasan test: kmalloc_uaf use-after-free
kasan test: kmalloc_uaf_memset use-after-free in memset
kasan test: kmalloc_uaf2 use-after-free after another kmalloc
kasan test: kmem_cache_oob out-of-bounds in kmem_cache_alloc
kasan test: kasan_stack_oob out-of-bounds on stack
kasan test: kasan_global_oob out-of-bounds global variable

W00t, no nice KASan warnings (and warnings are exactly what I expected to see).

This is my compiler by the way:
$ arm-linux-gnueabihf-gcc --version
arm-linux-gnueabihf-gcc (crosstool-NG linaro-1.13.1-4.9-2014.09 -
Linaro GCC 4.9-2014.09) 4.9.2 20140904 (prerelease)

I did the same exercise on the foundation model (FVP), which I guess
is what you developed the patch set on, because there I got
nice KASan dumps:

Virtual kernel memory layout:
kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
(...)
kasan test: kmalloc_oob_right out-of-bounds to right
kasan test: kmalloc_oob_left out-of-bounds to left
kasan test: kmalloc_node_oob_right kmalloc_node(): out-of-bounds to right
=
BUG kmalloc-4096 (Tainted: G S ): Redzone overwritten
-

Disabling lock debugging due to kernel taint
INFO: 0xffc0676bc480-0xffc0676bc480. First byte 0x0 instead of 0xcc
INFO: Allocated in kmalloc_node_oob_right+0x40/0x8c age=0 cpu=1 pid=1
alloc_debug_processing+0x170/0x17c
__slab_alloc.isra.59.constprop.61+0x354/0x374
kmem_cache_alloc+0x1a4/0x1e0
kmalloc_node_oob_right+0x3c/0x8c
kmalloc_tests_init+0x10/0x4c
do_one_initcall+0x88/0x1a0
kernel_init_freeable+0x16c/0x210
kernel_init+0xc/0xd8
ret_from_fork+0xc/0x50
INFO: Freed in cleanup_uevent_env+0x10/0x18 age=0 cpu=3 pid=724
free_debug_processing+0x214/0x30c
__slab_free+0x2b0/0x3f8
kfree+0x1a4/0x1dc
cleanup_uevent_env+0xc/0x18
call_usermodehelper_freeinfo+0x18/0x30
umh_complete+0x34/0x40
call_usermodehelper+0x170/0x18c
ret_from_fork+0xc/0x50
INFO: Slab 0xffbdc39dae00 objects=7 used=1 fp=0xffc0676b9180
flags=0x4081
INFO: Object 0xffc0676bb480 @offset=13440 fp=0xffc0676b8000

Bytes b4 ffc0676bb470: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a
5a 5a  
Object ffc0676bb480: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b
6b  
Object ffc0676bb490: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b
6b  
Object ffc0676bb4a0: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b
6b  
Object ffc0676bb4b0: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b
6b  
(...)
kasan test: kmalloc_large_oob_rigth kmalloc large allocation:
out-of-bounds to right
kasan test: kmalloc_oob_krealloc_more out-of-bounds after krealloc more
kasan test: 
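
(To decode the SLUB debug dump above: the byte patterns are the standard
poison values from include/linux/poison.h. 0xcc is the redzone of a live
object, so "First byte 0x0 instead of 0xcc" means the out-of-bounds write
put a zero into the redzone right after the 4096-byte object; 0x6b fills the
bodies of freed objects and 0x5a fills padding. A small compilable reference
stub:)

#include <stdio.h>

/* Poison bytes from include/linux/poison.h. */
#define POISON_INUSE      0x5a  /* padding / scrubbed object space */
#define POISON_FREE       0x6b  /* body of a freed object */
#define POISON_END        0xa5  /* final byte of a freed object */
#define SLUB_RED_INACTIVE 0xbb  /* redzone of a free object */
#define SLUB_RED_ACTIVE   0xcc  /* redzone of an allocated object */

int main(void)
{
	printf("0x%02x padding, 0x%02x freed body, 0x%02x freed end\n",
	       POISON_INUSE, POISON_FREE, POISON_END);
	printf("0x%02x free redzone, 0x%02x active redzone\n",
	       SLUB_RED_INACTIVE, SLUB_RED_ACTIVE);
	return 0;
}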
