On Wed, May 25, 2016 at 6:09 PM, Joonsoo Kim <iamjoonsoo@lge.com> wrote:
> On Tue, May 24, 2016 at 02:15:22PM -0700, Thomas Garnier wrote:
>> This commit reorganizes the previous SLAB freelist randomization to
>> prepare for the SLUB implementation. It moves functions t
On Wed, May 25, 2016 at 3:25 PM, Kees Cook <keesc...@chromium.org> wrote:
> On Tue, May 24, 2016 at 2:15 PM, Thomas Garnier <thgar...@google.com> wrote:
>> Implements Freelist randomization for the SLUB allocator. It was
>> previously implemented for the SLAB a
On Wed, May 25, 2016 at 6:49 PM, Joonsoo Kim <js1...@gmail.com> wrote:
> 2016-05-25 6:15 GMT+09:00 Thomas Garnier <thgar...@google.com>:
>> Implements Freelist randomization for the SLUB allocator. It was
>> previously implemented for the SLAB allocator. Both use the s
chosen
because they provide a bit more entropy early in boot and better
performance when arch-specific instructions are not available.
Signed-off-by: Thomas Garnier <thgar...@google.com>
Reviewed-by: Kees Cook <keesc...@chromium.org>
---
Based on next-20160526
---
include/linux/sla
This is PATCH v1 for the SLUB Freelist randomization. The patch is now based
on the Linux master branch (as the base SLAB patch was merged).
Changes since RFC v2:
- Redone slab_test testing to decide best entropy approach on new page
creation.
- Moved to use get_random_int as the best approach
Context Switches 189140 (2282.15)
Sleeps 99008.6 (768.091)
After:
Average Optimal load -j 12 Run (std deviation):
Elapsed Time 102.47 (0.562732)
User Time 1045.3 (1.34263)
System Time 88.311 (0.342554)
Percent CPU 1105.8 (6.49444)
Context Switches 189081 (2355.78)
Sleeps 99231.5 (800.358)
On Wed, Jun 22, 2016 at 5:47 AM, Jason Cooper wrote:
> Hey Kees,
>
> On Tue, Jun 21, 2016 at 05:46:57PM -0700, Kees Cook wrote:
>> Notable problems that needed solving:
> ...
>> - Reasonable entropy is needed early at boot before get_random_bytes()
>> is available.
>
>
On Fri, Jun 17, 2016 at 2:02 AM, Ingo Molnar <mi...@kernel.org> wrote:
>
> * Kees Cook <keesc...@chromium.org> wrote:
>
>> From: Thomas Garnier <thgar...@google.com>
>>
>> Minor change that allows early boot physical mapping of PUD level virtual
>
On Fri, Jun 17, 2016 at 3:26 AM, Ingo Molnar wrote:
>
> * Kees Cook wrote:
>
>> --- a/arch/x86/Kconfig
>> +++ b/arch/x86/Kconfig
>> @@ -1993,6 +1993,23 @@ config PHYSICAL_ALIGN
>>
>> Don't change this unless you know what you are doing.
>>
>>
Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
SLAB freelist. This security feature reduces the predictability of
the kernel slab allocator against heap overflows.
Randomized lists are pre-computed using a Fisher-Yates shuffle and
re-used on slab creation for performance.
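The precomputed shuffle described above can be sketched in userspace C (freelist_randomize and rnd are illustrative names, not the kernel's API; the kernel seeds from its own entropy sources):

```c
#include <stdlib.h>

/* Illustrative stand-in for the kernel's entropy source. */
static unsigned int rnd(void)
{
	return (unsigned int)rand();
}

/*
 * Precompute a random sequence of slot indices with a Fisher-Yates
 * shuffle. One such list is built per cache and reused for every new
 * slab page, so the shuffle cost is not paid on each page allocation.
 */
static void freelist_randomize(unsigned int *list, unsigned int count)
{
	unsigned int i, pos, tmp;

	for (i = 0; i < count; i++)
		list[i] = i;

	/* Walk from the tail; each slot swaps with a random earlier one. */
	for (i = count - 1; i > 0; i--) {
		pos = rnd() % (i + 1);
		tmp = list[i];
		list[i] = list[pos];
		list[pos] = tmp;
	}
}
```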
Apr 6, 2016 at 12:35 PM, Thomas Garnier <thgar...@google.com> wrote:
> > Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
> > SLAB freelist.
>
> It may be useful to describe _how_ it randomizes it (i.e. a high-level
> description of what needed changin
That's a use-after-free. The randomization of the freelist should not
have much effect on that. I was going to quote this exploit, which is
applicable to SLAB as well:
https://jon.oberheide.org/blog/2010/09/10/linux-kernel-can-slub-overflow
Regards.
Thomas
On Thu, Apr 7, 2016 at 9:17 AM,
Yes, sorry about that. It will be in the next RFC or PATCH.
On Wed, Apr 6, 2016 at 1:54 PM, Greg KH <gre...@linuxfoundation.org> wrote:
> On Wed, Apr 06, 2016 at 12:35:48PM -0700, Thomas Garnier wrote:
>> Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
&
e count:978 tsc_interval:244028721)
Type:kmem bulk_fallback Per elem: 107 cycles(tsc) 30.833 ns (step:250)
- (measurement period time:0.308335100 sec time_interval:308335100)
- (invoke count:1000 tsc_interval:1076566255)
Type:kmem bulk_quick_reuse Per elem: 24 cycles(tsc) 6.947 ns (step:250)
- (
On Mon, Apr 25, 2016 at 2:38 PM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Mon, 25 Apr 2016 14:14:33 -0700 Thomas Garnier <thgar...@google.com> wrote:
>
>> >>> + /* Get best entropy at this stage */
>> >>> + get_random_byte
. If
CONFIG_MEMORY_HOTPLUG is not used, no space is reserved, increasing the
available entropy.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160422
---
arch/x86/Kconfig | 15 +++
arch/x86/mm/kaslr.c | 14 --
2 files changed, 27 insertions(+), 2 deletions(-)
ses. An additional low memory page is used to ensure each CPU can
start with a PGD aligned virtual address (for realmode).
x86/dump_pagetable was updated to correctly display each section.
Updated documentation on x86_64 memory layout accordingly.
Signed-off-by: Thomas Garnier <thgar...@google.c
Move the KASLR entropy functions to x86/library to be used in early
kernel boot for KASLR memory randomization.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160422
---
arch/x86/boot/compressed/kaslr.c | 76 +++---
arch/x86/inclu
This is PATCH v1 for KASLR memory implementation on x86_64. Minor changes
were done based on RFC v1 comments.
***Background:
The current implementation of KASLR randomizes only the base address of
the kernel and its modules. Research was published showing that static
memory can be overwritten to
Minor change that allows early boot physical mapping of PUD level virtual
addresses. This change prepares usage of different virtual addresses for
KASLR memory randomization. It has no impact on default usage.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20
142 cycles
1 times kmalloc(128)/kfree -> 121 cycles
1 times kmalloc(256)/kfree -> 119 cycles
1 times kmalloc(512)/kfree -> 119 cycles
1 times kmalloc(1024)/kfree -> 119 cycles
10000 times kmalloc(2048)/kfree -> 119 cycles
1 times kmalloc(4096)/kfree -> 119 cycle
On Mon, Apr 25, 2016 at 2:13 PM, Thomas Garnier <thgar...@google.com> wrote:
> On Mon, Apr 25, 2016 at 2:10 PM, Andrew Morton
> <a...@linux-foundation.org> wrote:
>> On Mon, 25 Apr 2016 13:39:23 -0700 Thomas Garnier <thgar...@google.com>
>> wrote
On Mon, Apr 25, 2016 at 2:10 PM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Mon, 25 Apr 2016 13:39:23 -0700 Thomas Garnier <thgar...@google.com> wrote:
>
>> Provides an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>> SLAB freelist.
Makes sense, thanks for the details.
On Thu, Apr 21, 2016 at 1:15 PM, H. Peter Anvin <h...@zytor.com> wrote:
> On April 21, 2016 8:52:01 AM PDT, Thomas Garnier <thgar...@google.com> wrote:
>>On Thu, Apr 21, 2016 at 8:46 AM, H. Peter Anvin <h...@zytor.com> wrote:
>&
On Tue, Apr 26, 2016 at 4:17 PM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Tue, 26 Apr 2016 09:21:10 -0700 Thomas Garnier <thgar...@google.com> wrote:
>
>> Provides an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>> SLAB freelist.
24)/kfree -> 119 cycles
10000 times kmalloc(2048)/kfree -> 119 cycles
1 times kmalloc(4096)/kfree -> 119 cycles
1 times kmalloc(8192)/kfree -> 119 cycles
1 times kmalloc(16384)/kfree -> 119 cycles
Signed-off-by: Thomas Garnier <thgar...@google.com>
Acked-by: Chr
On Wed, Apr 27, 2016 at 12:16 PM, Andrew Morton
<a...@linux-foundation.org> wrote:
> On Wed, 27 Apr 2016 10:20:59 -0700 Thomas Garnier <thgar...@google.com> wrote:
>
>> Provides an optional config (CONFIG_SLAB_FREELIST_RANDOM) to randomize
>> the SLAB freelist.
>
Any feedback on this patch proposal?
Thanks,
Thomas
On Mon, Apr 25, 2016 at 9:39 AM, Thomas Garnier <thgar...@google.com> wrote:
> This is PATCH v1 for KASLR memory implementation on x86_64. Minor changes
> were done based on RFC v1 comments.
>
> ***Background:
> The c
ee -> 119 cycles
10000 times kmalloc(2048)/kfree -> 119 cycles
1 times kmalloc(4096)/kfree -> 119 cycles
1 times kmalloc(8192)/kfree -> 119 cycles
1 times kmalloc(16384)/kfree -> 119 cycles
Signed-off-by: Thomas Garnier <thgar...@google.com>
Acked-by: Christoph L
Makes sense. I think it is still valuable to randomize earlier pages. I
will adapt the code, test it, and send patch v4.
Thanks for the quick feedback,
Thomas
On Mon, Apr 25, 2016 at 5:40 PM, Joonsoo Kim <iamjoonsoo@lge.com> wrote:
> On Mon, Apr 25, 2016 at 01:39:23PM -0700, Thomas Garn
On Thu, Apr 21, 2016 at 8:46 AM, H. Peter Anvin <h...@zytor.com> wrote:
> On April 21, 2016 6:30:24 AM PDT, Boris Ostrovsky
> <boris.ostrov...@oracle.com> wrote:
>>
>>
>>On 04/15/2016 06:03 PM, Thomas Garnier wrote:
>>> +void __init kernel_ra
but increase performance for machines without arch-specific
randomization instructions.
Thanks,
Thomas
On Wed, May 18, 2016 at 7:07 PM, Joonsoo Kim <iamjoonsoo@lge.com> wrote:
> On Wed, May 18, 2016 at 12:12:13PM -0700, Thomas Garnier wrote:
>> I thought the mix of slab_test & k
I thought the mix of slab_test & kernbench would show a diverse
picture of the perf data. Is there another test that you think would be
useful?
Thanks,
Thomas
On Wed, May 18, 2016 at 12:02 PM, Christoph Lameter <c...@linux.com> wrote:
> On Wed, 18 May 2016, Thomas Garnier wrote:
>
Yes, I agree that it is not related to the changes.
On Wed, May 18, 2016 at 11:24 AM, Christoph Lameter <c...@linux.com> wrote:
> On Wed, 18 May 2016, Thomas Garnier wrote:
>
>> slab_test, before:
>> 1 times kmalloc(8) -> 67 cycles kfree -> 101 cycles
>>
On Thu, May 19, 2016 at 7:15 PM, Joonsoo Kim <js1...@gmail.com> wrote:
> 2016-05-20 5:20 GMT+09:00 Thomas Garnier <thgar...@google.com>:
>> I ran the test given by Joonsoo and it gave me these minimum cycles
>> per size across 20 runs:
>
> I can't understand w
On Tue, May 10, 2016 at 11:24 AM, Kees Cook <keesc...@chromium.org> wrote:
> On Tue, May 3, 2016 at 12:31 PM, Thomas Garnier <thgar...@google.com> wrote:
>> Add a new option (CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING) to define
>> the padding used for the physical memory
On Tue, May 10, 2016 at 12:05 PM, Kees Cook <keesc...@chromium.org> wrote:
> On Tue, May 3, 2016 at 12:31 PM, Thomas Garnier <thgar...@google.com> wrote:
>> Move the KASLR entropy functions to x86/library to be used in early
>> kernel boot for KASLR memory randomization.
On Tue, May 10, 2016 at 11:53 AM, Kees Cook <keesc...@chromium.org> wrote:
> On Tue, May 3, 2016 at 12:31 PM, Thomas Garnier <thgar...@google.com> wrote:
>> Randomizes the virtual address space of kernel memory sections (physical
>> memory mapping, vmalloc & vme
fter
1,0.076,0.069
2,0.072,0.069
3,0.066,0.066
4,0.066,0.068
5,0.066,0.067
6,0.066,0.069
7,0.067,0.066
8,0.063,0.067
9,0.067,0.065
10,0.068,0.071
average,0.0677,0.0677
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160511
---
Documentation/x86/x86_64/mm.txt |
This is PATCH v4 for KASLR memory implementation for x86_64.
Recent changes:
Add performance information on commit.
Add details on PUD alignment.
Add information on testing against the KASLR bypass exploit.
Rebase on next-20160511 and merge recent KASLR changes.
Integrate
.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160511
---
arch/x86/mm/init_64.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index bce2e5d..f205f39 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/
]
> [if your patch is applied to the wrong git tree, please drop us a note to
> help improving the system]
>
> url:
> https://github.com/0day-ci/linux/commits/Thomas-Garnier/x86-boot-KASLR-memory-randomization/20160513-001319
> config: i386-tinyconfig (attached as .config)
>
Move the KASLR entropy functions to x86/library to be used in early
kernel boot for KASLR memory randomization.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160511
---
arch/x86/boot/compressed/kaslr.c | 77 +++---
arch/x86/include/asm/k
. If
CONFIG_MEMORY_HOTPLUG is not used, no space is reserved, increasing the
available entropy.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160511
---
arch/x86/Kconfig | 15 +++
arch/x86/mm/kaslr.c | 7 ++-
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git
This is PATCH v5 for KASLR memory implementation for x86_64.
Recent changes:
Add performance information on commit.
Add details on PUD alignment.
Add information on testing against the KASLR bypass exploit.
Rebase on next-20160511 and merge recent KASLR changes.
Integrate
Any feedback on the patch? Ingo? Kees?
Kees mentioned he will take care of the build warning on the KASLR
refactor (the function is not used right now).
Thanks,
Thomas
On Thu, May 12, 2016 at 12:28 PM, Thomas Garnier <thgar...@google.com> wrote:
> This is PATCH v5 for KAS
This is RFC v1 for the SLUB Freelist randomization.
***Background:
This proposal follows the previous SLAB Freelist patch submitted to next.
It reuses parts of the previous implementation and keeps a similar approach.
The kernel heap allocators use a sequential freelist, making their
allocation
-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160517
---
include/linux/slab_def.h | 11 +++-
mm/slab.c | 66 +---
mm/slab.h | 16
mm/slab_common.c
ycles
Kernbench, before:
Average Optimal load -j 12 Run (std deviation):
Elapsed Time 101.873 (1.16069)
User Time 1045.22 (1.60447)
System Time 88.969 (0.559195)
Percent CPU 1112.9 (13.8279)
Context Switches 189140 (2282.15)
Sleeps 99008.6 (768.091)
After:
Average Optimal load -j 12 Run (std deviation
, May 2, 2016 at 3:00 PM, Dave Hansen <dave.han...@linux.intel.com> wrote:
> On 05/02/2016 02:41 PM, Thomas Garnier wrote:
>> -#define __PAGE_OFFSET _AC(0x8800, UL)
>> +#define __PAGE_OFFSET_BASE _AC(0x8800, UL)
>> +#ifdef CONFIG_R
. If
CONFIG_MEMORY_HOTPLUG is not used, no space is reserved, increasing the
available entropy.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160502
---
arch/x86/Kconfig | 15 +++
arch/x86/mm/kaslr.c | 14 --
2 files changed, 27 insertions(+), 2 deletions(-)
Move the KASLR entropy functions to x86/library to be used in early
kernel boot for KASLR memory randomization.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160502
---
arch/x86/boot/compressed/kaslr.c | 76 +++---
arch/x86/inclu
s 97681.6 (1031.11)
Hackbench shows 0% difference on average (hackbench 90
repeated 10 times):
attemp,before,after
1,0.076,0.069
2,0.072,0.069
3,0.066,0.066
4,0.066,0.068
5,0.066,0.067
6,0.066,0.069
7,0.067,0.066
8,0.063,0.067
9,0.067,0.065
10,0.068,0.071
average,0.0677,0.0677
Signed-off-by: Tho
This is PATCH v3 for KASLR memory implementation for x86_64.
Recent changes:
Add performance information on commit.
Add details on PUD alignment.
Add information on testing against the KASLR bypass exploit.
Rebase on next-20160502.
***Background:
The current implementation of
.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160502
---
arch/x86/mm/init_64.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 89d9747..6adfbce 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/
On Mon, May 2, 2016 at 2:58 PM, Dave Hansen <dave.han...@linux.intel.com> wrote:
> On 05/02/2016 02:41 PM, Thomas Garnier wrote:
>> Minor change that allows early boot physical mapping of PUD level virtual
>> addresses. This change prepares usage of different virtual addresse
Move the KASLR entropy functions to x86/library to be used in early
kernel boot for KASLR memory randomization.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160502
---
arch/x86/boot/compressed/kaslr.c | 76 +++---
arch/x86/inclu
Minor change that allows early boot physical mapping of PUD level virtual
addresses. This change prepares usage of different virtual addresses for
KASLR memory randomization. It has no impact on default usage.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20
This is PATCH v2 for KASLR memory implementation for x86_64. Edit commit
based on recents testing against the KASLR bypass exploits & rebase on
next-20160502.
***Background:
The current implementation of KASLR randomizes only the base address of
the kernel and its modules. Research was published
splay each section.
Updated documentation on x86_64 memory layout accordingly.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160502
---
Documentation/x86/x86_64/mm.txt | 4 +
arch/x86/Kconfig | 15
arch/x86/include/asm/kaslr.h
. If
CONFIG_MEMORY_HOTPLUG is not used, no space is reserved, increasing the
available entropy.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160502
---
arch/x86/Kconfig | 15 +++
arch/x86/mm/kaslr.c | 14 --
2 files changed, 27 insertions(+), 2 deletions(-)
5 cycles
1 times kmalloc(32)/kfree -> 115 cycles
1 times kmalloc(64)/kfree -> 120 cycles
1 times kmalloc(128)/kfree -> 127 cycles
1 times kmalloc(256)/kfree -> 119 cycles
1 times kmalloc(512)/kfree -> 112 cycles
1 times kmalloc(1024)/kfree -> 112 cycles
100
10:25:59 -0700 Thomas Garnier <thgar...@google.com>
>> wrote:
>> > Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>> > SLAB freelist. The list is randomized during initialization of a new set
>> > of pages. The order on different freel
This is RFC v1 for KASLR memory implementation on x86_64. It was reviewed
early by Kees Cook.
***Background:
The current implementation of KASLR randomizes only the base address of
the kernel and its modules. Research was published showing that static
memory can be overwritten to elevate
to ensure each CPU can
start with a PGD aligned virtual address (for realmode).
x86/dump_pagetable was updated to correctly display each section.
Updated documentation on x86_64 memory layout accordingly.
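As a rough illustration of the PGD alignment mentioned above (a userspace sketch using the 4-level x86_64 constants; one PGD entry covers 2^39 bytes, i.e. 512 GB, and pgd_align_down is an illustrative name):

```c
#define PGDIR_SHIFT	39
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE - 1))

/* Round a virtual address down to the PGD entry covering it, as a
 * trampoline page table aligned at the PGD level would need. */
static unsigned long pgd_align_down(unsigned long addr)
{
	return addr & PGDIR_MASK;
}
```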
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160413
---
. If
CONFIG_MEMORY_HOTPLUG is not used, no space is reserved, increasing the
available entropy.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160413
---
arch/x86/Kconfig | 15 +++
arch/x86/mm/kaslr.c | 14 --
2 files changed, 27 insertions(+), 2 deletions(-)
Minor change that allows early boot physical mapping of PUD level virtual
addresses. This change prepares usage of different virtual addresses for
KASLR memory randomization. It has no impact on default usage.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20
Move the KASLR entropy functions to x86/library to be used in early
kernel boot for KASLR memory randomization.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160413
---
arch/x86/boot/compressed/aslr.c | 76 +++
arch/x86/inclu
mes kmalloc(128)/kfree -> 127 cycles
1 times kmalloc(256)/kfree -> 119 cycles
1 times kmalloc(512)/kfree -> 112 cycles
1 times kmalloc(1024)/kfree -> 112 cycles
10000 times kmalloc(2048)/kfree -> 112 cycles
1 times kmalloc(4096)/kfree -> 112 cycles
1 times
I will send the next version today. Note that get_random_bytes_arch
is used because at that stage we have 0 bits of entropy. It seemed
like a better idea to use the arch version, which will fall back on the
get_random_bytes sub-API in the worst case.
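A hedged userspace sketch of that fallback order (arch_get_random simulates an arch instruction such as RDRAND that may be absent; the names are illustrative, not the kernel's exact API):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Simulated arch randomness instruction; returns false when the CPU
 * does not provide one (forced here for illustration). */
static bool arch_get_random(unsigned long *v)
{
	(void)v;
	return false;
}

/* Software fallback, standing in for the get_random_bytes() pool. */
static unsigned long soft_random(void)
{
	return (unsigned long)rand();
}

/* Prefer the arch instruction; fall back to the software source. */
static unsigned long get_boot_random(void)
{
	unsigned long v;

	if (arch_get_random(&v))
		return v;
	return soft_random();
}
```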
On Fri, Apr 15, 2016 at 3:47 PM, Thomas Garnier
mes kmalloc(128)/kfree -> 127 cycles
1 times kmalloc(256)/kfree -> 119 cycles
1 times kmalloc(512)/kfree -> 112 cycles
1 times kmalloc(1024)/kfree -> 112 cycles
10000 times kmalloc(2048)/kfree -> 112 cycles
1 times kmalloc(4096)/kfree -> 112 cycles
1 times
On Tue, Apr 19, 2016 at 12:15 AM, Joonsoo Kim <iamjoonsoo@lge.com> wrote:
> On Mon, Apr 18, 2016 at 10:14:39AM -0700, Thomas Garnier wrote:
>> Provides an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>> SLAB freelist. The list is randomized during initia
On Tue, Apr 19, 2016 at 7:27 AM, Joerg Roedel <jroe...@suse.de> wrote:
> Hi Thomas,
>
> On Fri, Apr 15, 2016 at 03:03:12PM -0700, Thomas Garnier wrote:
>> +/*
>> + * Create PGD aligned trampoline table to allow real mode initialization
>> + * of additional CPUs. Co
On Wed, Apr 20, 2016 at 1:08 AM, Joonsoo Kim <iamjoonsoo@lge.com> wrote:
> On Tue, Apr 19, 2016 at 09:44:54AM -0700, Thomas Garnier wrote:
>> On Tue, Apr 19, 2016 at 12:15 AM, Joonsoo Kim <iamjoonsoo@lge.com> wrote:
>> > On Mon, Apr 18, 2016 at 10:14:39
On Thu, Apr 21, 2016 at 6:30 AM, Boris Ostrovsky
<boris.ostrov...@oracle.com> wrote:
>
>
> On 04/15/2016 06:03 PM, Thomas Garnier wrote:
>>
>> +void __init kernel_randomize_memory(void)
>> +{
>> + size_t i;
>> + unsigned long addr = memory
08:59 AM, Thomas Garnier wrote:
>>
>> I will send the next version today. Note that I get_random_bytes_arch
>> is used because at that stage we have 0 bits of entropy. It seemed
>> like a better idea to use the arch version that will fallback on
>> get
Yes, it is. It certainly happened while editing patches (sorry about
that); it will be fixed in the next iteration once I get a bit more feedback.
On Mon, Apr 18, 2016 at 7:46 AM, Joerg Roedel <jroe...@suse.de> wrote:
> On Fri, Apr 15, 2016 at 03:03:12PM -0700, Thomas Garnier wrote:
>&g
Time 102.47 (0.562732)
User Time 1045.3 (1.34263)
System Time 88.311 (0.342554)
Percent CPU 1105.8 (6.49444)
Context Switches 189081 (2355.78)
Sleeps 99231.5 (800.358)
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on 0e01df100b6bf22a1de61b66657502a6454153c5
---
include/linu
functions are changed to align with the SLUB
implementation, now using get_random_* functions.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on 0e01df100b6bf22a1de61b66657502a6454153c5
---
include/linux/slab_def.h | 11 +++-
mm/slab.c
This is RFC v2 for the SLUB Freelist randomization. The patch is now based
on the Linux master branch (as the base SLAB patch was merged).
Changes since RFC v1:
- Redone slab_test testing to decide best entropy approach on new page
creation.
- Moved to use get_random_int as the best approach to
I am sorry, there has been parallel work between KASLR memory
randomization and hibernation support. That's why hibernation was not
tested; it was not supported when the feature was created.
Communication will be better next time.
I will work on identifying the problem and pushing a fix.
Thanks
Add vmemmap in the list of randomized memory regions.
The vmemmap region holds a representation of the physical memory (through
a struct page array). An attacker could use this region to disclose the
kernel memory layout (walking the page linked list).
Signed-off-by: Thomas Garnier <th
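The region layout these patches build can be modeled as walking an ordered list of regions and giving each base a random offset drawn from the remaining slack (an illustrative userspace model with made-up names, not the kernel's kaslr.c code):

```c
#include <stdlib.h>

struct region {
	unsigned long base;	/* randomized base, filled in below */
	unsigned long size;	/* size the region must cover */
};

/*
 * Lay out regions in order inside [start, start + budget), giving each
 * one a random offset from the remaining slack. Later regions see less
 * slack, mirroring how available entropy shrinks down the list.
 */
static void randomize_regions(struct region *r, int n,
			      unsigned long start, unsigned long budget)
{
	unsigned long used = 0;
	int i;

	for (i = 0; i < n; i++)
		used += r[i].size;	/* space the regions themselves need */

	for (i = 0; i < n; i++) {
		unsigned long slack = budget - used;
		unsigned long off = slack ? (unsigned long)rand() % slack : 0;

		r[i].base = start + off;
		start += off + r[i].size;
		budget -= off + r[i].size;
		used -= r[i].size;
	}
}
```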
***Background:
KASLR memory randomization for x86_64 was added when KASLR did not support
hibernation. Now that it does, some changes are needed.
***Problems that needed solving:
Hibernation was failing on reboot with a GP fault when CONFIG_RANDOMIZE_MEMORY
was enabled. Two issues were
When KASLR memory randomization is used, __PAGE_OFFSET is a global
variable changed during boot. The assembly code was using the variable
as an immediate value to calculate the cr3 physical address. The
physical address was incorrect, resulting in a GP fault.
Signed-off-by: Thomas Garnier <th
Correctly set up the temporary mapping for hibernation. The previous
implementation assumed the address was aligned on the PGD level. With
KASLR memory randomization enabled, the address is randomized at the PUD
level. This change supports unaligned addresses up to the PMD level.
Signed-off-by: Thomas Garnier
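The PUD-level detail can be illustrated with the index math (a userspace sketch of the x86_64 constants; with KASLR the restore address is only PUD-aligned, so the temporary mapping must compute the real PUD index instead of assuming zero):

```c
#define PUD_SHIFT	30
#define PTRS_PER_PUD	512UL

/* Index of the PUD entry covering addr within its PGD entry. */
static unsigned long pud_index(unsigned long addr)
{
	return (addr >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
}
```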
On Wed, Jul 27, 2016 at 8:59 AM, Thomas Garnier <thgar...@google.com> wrote:
> Add vmemmap in the list of randomized memory regions.
>
> The vmemmap region holds a representation of the physical memory (through
> a struct page array). An attacker could use this region to dis
On Tue, Aug 2, 2016 at 1:14 AM, Ingo Molnar <mi...@kernel.org> wrote:
>
> * Thomas Garnier <thgar...@google.com> wrote:
>
>> On Wed, Jul 27, 2016 at 8:59 AM, Thomas Garnier <thgar...@google.com> wrote:
>> > Add vmemmap in the list of randomized memory r
On Mon, Aug 1, 2016 at 5:38 PM, Rafael J. Wysocki <r...@rjwysocki.net> wrote:
> On Monday, August 01, 2016 10:08:00 AM Thomas Garnier wrote:
>> When KASLR memory randomization is used, __PAGE_OFFSET is a global
>> variable changed during boot. The assembly code w
On Wed, Aug 10, 2016 at 6:35 PM, Rafael J. Wysocki <raf...@kernel.org> wrote:
> On Thu, Aug 11, 2016 at 3:17 AM, Thomas Garnier <thgar...@google.com> wrote:
>> On Wed, Aug 10, 2016 at 5:35 PM, Rafael J. Wysocki <raf...@kernel.org> wrote:
>>> On Wed, Aug 1
On Wed, Aug 10, 2016 at 5:35 PM, Rafael J. Wysocki wrote:
> On Wed, Aug 10, 2016 at 11:59 PM, Jiri Kosina wrote:
>> On Wed, 10 Aug 2016, Rafael J. Wysocki wrote:
>>
>>> So I used your .config to generate one for my test machine and with
>>> that I can
On Thu, Aug 11, 2016 at 2:33 PM, Rafael J. Wysocki <r...@rjwysocki.net> wrote:
> On Thursday, August 11, 2016 11:47:27 AM Thomas Garnier wrote:
>> On Wed, Aug 10, 2016 at 6:35 PM, Rafael J. Wysocki <raf...@kernel.org> wrote:
>> > On Thu, Aug 11, 2016 at 3:17 AM, Thoma
the tracing & the exception handler functions tried to
use a per-cpu variable.
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160808
Thanks to Rafael, Jiri & Borislav in tracking down this bug.
---
kernel/power/hibernate.c | 4 ++--
1 file changed, 2 in
On Tue, Aug 9, 2016 at 9:54 AM, Borislav Petkov <b...@suse.de> wrote:
> On Tue, Aug 09, 2016 at 09:35:54AM -0700, Thomas Garnier wrote:
>> Default implementation expects 6 pages maximum are needed for low page
>> allocations. If KASLR memory randomization is enabled, the
while doing extensive testing of KASLR memory
randomization on different types of hardware.
Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-by: Thomas Garnier <thgar...@google.com>
---
Based on next-20160805
---
arch/x86/mm/init.c | 14 --
Initialize KASLR memory randomization after max_pfn is initialized. Also
ensure the size is rounded up. This could have created problems on machines
with more than 1TB of memory at certain random addresses.
Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-