On Mon, Apr 25, 2016 at 2:38 PM, Andrew Morton
wrote:
> On Mon, 25 Apr 2016 14:14:33 -0700 Thomas Garnier wrote:
>
>> >>> + /* Get best entropy at this stage */
>> >>> + get_random_bytes_arch(&seed, sizeof(seed));
>> >>
Makes sense. I think it is still valuable to randomize earlier pages. I
will adapt the code, test it, and send patch v4.
Thanks for the quick feedback,
Thomas
On Mon, Apr 25, 2016 at 5:40 PM, Joonsoo Kim wrote:
> On Mon, Apr 25, 2016 at 01:39:23PM -0700, Thomas Garnier wrote:
>> Provides an
ee -> 119 cycles
10000 times kmalloc(2048)/kfree -> 119 cycles
1 times kmalloc(4096)/kfree -> 119 cycles
1 times kmalloc(8192)/kfree -> 119 cycles
1 times kmalloc(16384)/kfree -> 119 cycles
Signed-off-by: Thomas Garnier
Acked-by: Christoph Lameter
---
Based on next-20160
On Tue, Apr 26, 2016 at 4:17 PM, Andrew Morton
wrote:
> On Tue, 26 Apr 2016 09:21:10 -0700 Thomas Garnier wrote:
>
>> Provides an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>> SLAB freelist. The list is randomized during initialization of a new set
>
Move the KASLR entropy functions into the x86 library so they can be used
early in kernel boot for KASLR memory randomization.
Signed-off-by: Thomas Garnier
---
Based on next-20160502
---
arch/x86/boot/compressed/kaslr.c | 76 +++---
arch/x86/include/asm/kaslr.h | 6
Minor change that allows early boot physical mapping of PUD-level virtual
addresses. This change prepares for the use of different virtual addresses by
KASLR memory randomization. It has no impact on default usage.
Signed-off-by: Thomas Garnier
---
Based on next-20160502
---
arch/x86/mm/init_64.c
This is PATCH v2 for the KASLR memory implementation for x86_64. Edited
commits based on recent testing against the KASLR bypass exploits and
rebased on next-20160502.
***Background:
The current implementation of KASLR randomizes only the base address of
the kernel and its modules. Research was published
splay each section.
Updated documentation on x86_64 memory layout accordingly.
Signed-off-by: Thomas Garnier
---
Based on next-20160502
---
Documentation/x86/x86_64/mm.txt | 4 +
arch/x86/Kconfig| 15
arch/x86/include/asm/kaslr.h| 12 +++
ar
. If
CONFIG_MEMORY_HOTPLUG is not used, no space is reserved, increasing the
entropy available.
Signed-off-by: Thomas Garnier
---
Based on next-20160502
---
arch/x86/Kconfig| 15 +++
arch/x86/mm/kaslr.c | 14 --
2 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig
On Mon, May 2, 2016 at 2:58 PM, Dave Hansen wrote:
> On 05/02/2016 02:41 PM, Thomas Garnier wrote:
>> Minor change that allows early boot physical mapping of PUD-level virtual
>> addresses. This change prepares for the use of different virtual addresses by
>> KASLR memory
, May 2, 2016 at 3:00 PM, Dave Hansen wrote:
> On 05/02/2016 02:41 PM, Thomas Garnier wrote:
>> -#define __PAGE_OFFSET _AC(0x8800, UL)
>> +#define __PAGE_OFFSET_BASE _AC(0x8800, UL)
>> +#ifdef CONFIG_RANDOMIZE_MEMORY
>
s 97681.6 (1031.11)
Hackbench shows 0% difference on average (hackbench 90
repeated 10 times):
attempt,before,after
1,0.076,0.069
2,0.072,0.069
3,0.066,0.066
4,0.066,0.068
5,0.066,0.067
6,0.066,0.069
7,0.067,0.066
8,0.063,0.067
9,0.067,0.065
10,0.068,0.071
average,0.0677,0.0677
Signed-off-by: Thomas Garnier
This is PATCH v3 for KASLR memory implementation for x86_64.
Recent changes:
Add performance information on commit.
Add details on PUD alignment.
Add information on testing against the KASLR bypass exploit.
Rebase on next-20160502.
***Background:
The current implementation of
.
Signed-off-by: Thomas Garnier
---
Based on next-20160502
---
arch/x86/mm/init_64.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 89d9747..6adfbce 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -526,10
. If
CONFIG_MEMORY_HOTPLUG is not used, no space is reserved, increasing the
entropy available.
Signed-off-by: Thomas Garnier
---
Based on next-20160502
---
arch/x86/Kconfig| 15 +++
arch/x86/mm/kaslr.c | 14 --
2 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig
Move the KASLR entropy functions into the x86 library so they can be used
early in kernel boot for KASLR memory randomization.
Signed-off-by: Thomas Garnier
---
Based on next-20160502
---
arch/x86/boot/compressed/kaslr.c | 76 +++---
arch/x86/include/asm/kaslr.h | 6
Any feedback on the patch? Ingo? Kees?
Kees mentioned he will take care of the build warning on the KASLR
refactor (the function is not used right now).
Thanks,
Thomas
On Thu, May 12, 2016 at 12:28 PM, Thomas Garnier wrote:
> This is PATCH v5 for KASLR memory implementation for x86
This is RFC v1 for the SLUB Freelist randomization.
***Background:
This proposal follows the previous SLAB Freelist patch submitted to next.
It reuses parts of the previous implementation and keeps a similar approach.
The kernel heap allocators use a sequential freelist, making their
allocation
-by: Thomas Garnier
---
Based on next-20160517
---
include/linux/slab_def.h | 11 +++-
mm/slab.c| 66 +---
mm/slab.h| 16
mm/slab_common.c | 50
4 files changed
ycles
Kernbench, before:
Average Optimal load -j 12 Run (std deviation):
Elapsed Time 101.873 (1.16069)
User Time 1045.22 (1.60447)
System Time 88.969 (0.559195)
Percent CPU 1112.9 (13.8279)
Context Switches 189140 (2282.15)
Sleeps 99008.6 (768.091)
After:
Average Optimal load -j 12 Run (std de
Yes, I agree that it is not related to the changes.
On Wed, May 18, 2016 at 11:24 AM, Christoph Lameter wrote:
> On Wed, 18 May 2016, Thomas Garnier wrote:
>
>> slab_test, before:
>> 1 times kmalloc(8) -> 67 cycles kfree -> 101 cycles
>> 1 times kmalloc
I thought the mix of slab_test & kernbench would give a diverse
picture of the perf data. Is there another test that you think would be
useful?
Thanks,
Thomas
On Wed, May 18, 2016 at 12:02 PM, Christoph Lameter wrote:
> On Wed, 18 May 2016, Thomas Garnier wrote:
>
>&
e count:978 tsc_interval:244028721)
Type:kmem bulk_fallback Per elem: 107 cycles(tsc) 30.833 ns (step:250)
- (measurement period time:0.308335100 sec time_interval:308335100)
- (invoke count:1000 tsc_interval:1076566255)
Type:kmem bulk_quick_reuse Per elem: 24 cycles(tsc) 6.947 ns (step:250)
- (meas
5 cycles
1 times kmalloc(32)/kfree -> 115 cycles
1 times kmalloc(64)/kfree -> 120 cycles
1 times kmalloc(128)/kfree -> 127 cycles
1 times kmalloc(256)/kfree -> 119 cycles
1 times kmalloc(512)/kfree -> 112 cycles
1 times kmalloc(1024)/kfree -> 112 cycles
100
This is RFC v1 for KASLR memory implementation on x86_64. It was reviewed
early by Kees Cook.
***Background:
The current implementation of KASLR randomizes only the base address of
the kernel and its modules. Research was published showing that static
memory can be overwritten to elevate
to ensure each CPU can
start with a PGD aligned virtual address (for realmode).
x86/dump_pagetable was updated to correctly display each section.
Updated documentation on x86_64 memory layout accordingly.
Signed-off-by: Thomas Garnier
---
Based on next-20160413
---
Documentation/x86/x86_64/mm
. If
CONFIG_MEMORY_HOTPLUG is not used, no space is reserved, increasing the
entropy available.
Signed-off-by: Thomas Garnier
---
Based on next-20160413
---
arch/x86/Kconfig| 15 +++
arch/x86/mm/kaslr.c | 14 --
2 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig
Minor change that allows early boot physical mapping of PUD-level virtual
addresses. This change prepares for the use of different virtual addresses by
KASLR memory randomization. It has no impact on default usage.
Signed-off-by: Thomas Garnier
---
Based on next-20160413
---
arch/x86/mm/init_64.c
Move the KASLR entropy functions into the x86 library so they can be used
early in kernel boot for KASLR memory randomization.
Signed-off-by: Thomas Garnier
---
Based on next-20160413
---
arch/x86/boot/compressed/aslr.c | 76 +++
arch/x86/include/asm/kaslr.h| 6
Thanks for the comments. I will address them in a v2 early next week.
If anyone has other comments, please let me know.
Thomas
On Fri, Apr 15, 2016 at 3:26 PM, Joe Perches wrote:
> On Fri, 2016-04-15 at 15:00 -0700, Andrew Morton wrote:
>> On Fri, 15 Apr 2016 10:25:59 -0700 Thoma
Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
SLAB freelist. This security feature reduces the predictability of
the kernel slab allocator against heap overflows.
Randomized lists are pre-computed using a Fisher-Yates shuffle and
re-used on slab creation for performance.
Yes, sorry about that. It will be in the next RFC or PATCH.
On Wed, Apr 6, 2016 at 1:54 PM, Greg KH wrote:
> On Wed, Apr 06, 2016 at 12:35:48PM -0700, Thomas Garnier wrote:
>> Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>> SLAB freelist. This security
Thanks for the feedback Kees. I am preparing another RFC version.
For the config, I plan on creating an equivalent option for SLUB. Both
can benefit from randomizing their freelist order.
Thomas
On Wed, Apr 6, 2016 at 2:45 PM Kees Cook wrote:
>
> On Wed, Apr 6, 2016 at 12:35 PM, Thomas G
That's a use after free. The randomization of the freelist should not
have much effect on that. I was going to quote this exploit that is
applicable to SLAB as well:
https://jon.oberheide.org/blog/2010/09/10/linux-kernel-can-slub-overflow
Regards.
Thomas
On Thu, Apr 7, 2016 at 9:17 AM,
Add vmemmap in the list of randomized memory regions.
The vmemmap region holds a representation of the physical memory (through
a struct page array). An attacker could use this region to disclose the
kernel memory layout (walking the page linked list).
Signed-off-by: Thomas Garnier
Signed-off
***Background:
KASLR memory randomization for x86_64 was added when KASLR did not support
hibernation. Now that it does, some changes are needed.
***Problems that needed solving:
Hibernation was failing on reboot with a GP fault when CONFIG_RANDOMIZE_MEMORY
was enabled. Two issues were
When KASLR memory randomization is used, __PAGE_OFFSET is a global
variable changed during boot. The assembly code was using the variable
as an immediate value to calculate the cr3 physical address. The
physical address was incorrect, resulting in a GP fault.
Signed-off-by: Thomas Garnier
Correctly set up the temporary mapping for hibernation. The previous
implementation assumed the address was aligned on the PGD level. With
KASLR memory randomization enabled, the address is randomized at the PUD
level. This change supports unaligned addresses up to the PMD level.
Signed-off-by: Thomas Garnier
On Wed, Jul 27, 2016 at 8:59 AM, Thomas Garnier wrote:
> Add vmemmap in the list of randomized memory regions.
>
> The vmemmap region holds a representation of the physical memory (through
> a struct page array). An attacker could use this region to disclose the
> kernel memory
I am sorry, there has been parallel work between KASLR memory
randomization and hibernation support. That's why hibernation was not
tested; it was not supported when the feature was created.
Communication will be better next time.
I will work on identifying the problem and pushing a fix.
Thanks
f that variable is ready to be
>> > written into CR3. Then, the assembly code doesn't have to worry
>> > about converting that value into a physical address and things work
>> > regardless of whether or not CONFIG_RANDOMIZE_MEMORY is set.
>> >
>> >
t; result (leading to a kernel panic most of the time).
>>>
>>> To fix this problem, rework kernel_ident_mapping_init() to support
>>> unaligned offsets between KVA and PA up to the PMD level and make
>>> set_up_temporary_mappings() use it as appropriate.
Initialize KASLR memory randomization after max_pfn is initialized. Also
ensure the size is rounded up. It could have created problems on machines
with more than 1 TB of memory at certain random addresses.
Signed-off-by: Thomas Garnier
---
Based on next-20160805
---
arch/x86/kernel/setup.c | 4
while doing extensive testing of KASLR memory
randomization on different types of hardware.
Signed-off-by: Thomas Garnier
---
Based on next-20160805
---
arch/x86/mm/init.c | 8
1 file changed, 8 insertions(+)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 6209289..3a27e6a 100644
On Tue, Aug 2, 2016 at 1:14 AM, Ingo Molnar wrote:
>
> * Thomas Garnier wrote:
>
>> On Wed, Jul 27, 2016 at 8:59 AM, Thomas Garnier wrote:
>> > Add vmemmap in the list of randomized memory regions.
>> >
>> > The vmemmap region holds a representation of
On Mon, Aug 1, 2016 at 5:38 PM, Rafael J. Wysocki wrote:
> On Monday, August 01, 2016 10:08:00 AM Thomas Garnier wrote:
>> When KASLR memory randomization is used, __PAGE_OFFSET is a global
>> variable changed during boot. The assembly code was using the variable
>>
On Tue, Aug 2, 2016 at 10:36 AM, Yinghai Lu wrote:
> On Mon, Aug 1, 2016 at 10:07 AM, Thomas Garnier wrote:
>> Correctly set up the temporary mapping for hibernation. The previous
>> implementation assumed the address was aligned on the PGD level. With
>> KASLR memory
On Tue, Aug 2, 2016 at 1:47 PM, Rafael J. Wysocki wrote:
> On Tue, Aug 2, 2016 at 4:34 PM, Thomas Garnier wrote:
>> On Mon, Aug 1, 2016 at 5:38 PM, Rafael J. Wysocki wrote:
>>> On Monday, August 01, 2016 10:08:00 AM Thomas Garnier wrote:
>>>> When KASL
On Tue, Aug 2, 2016 at 12:55 PM, Yinghai Lu wrote:
> On Tue, Aug 2, 2016 at 10:48 AM, Thomas Garnier wrote:
>> On Tue, Aug 2, 2016 at 10:36 AM, Yinghai Lu wrote:
>>>
>>> Looks like we need to change the loop from phys address to virtual
>>> addres
On Mon, Aug 8, 2016 at 10:16 PM, Mika Penttilä
wrote:
> On 08/08/2016 09:40 PM, Thomas Garnier wrote:
>> Default implementation expects 6 pages maximum are needed for low page
>> allocations. If KASLR memory randomization is enabled, the worst case
>> of e820 layout w
On Mon, Aug 8, 2016 at 11:40 AM, Thomas Garnier wrote:
> Initialize KASLR memory randomization after max_pfn is initialized. Also
> ensure the size is rounded up. It could have created problems on machines
> with more than 1 TB of memory at certain random addresses.
>
> Signed-off-by:
On Tue, Aug 9, 2016 at 9:18 AM, Rafael J. Wysocki wrote:
> On Tue, Aug 9, 2016 at 5:05 PM, Jiri Kosina wrote:
>> On Tue, 9 Aug 2016, Thomas Garnier wrote:
>>
>>> >> Okay, I did one-by-one reverts, and the one above, i.e.
>>> >>
>>> &g
On Tue, Aug 9, 2016 at 9:03 AM, Joerg Roedel wrote:
> On Tue, Aug 09, 2016 at 09:00:04AM -0700, Thomas Garnier wrote:
>> On Mon, Aug 8, 2016 at 11:40 AM, Thomas Garnier wrote:
>> > Initialize KASLR memory randomization after max_pfn is initialized. Also
>> > ensure th
Initialize KASLR memory randomization after max_pfn is initialized. Also
ensure the size is rounded up. It could have created problems on machines
with more than 1 TB of memory at certain random addresses.
Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-
while doing extensive testing of KASLR memory
randomization on different types of hardware.
Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-by: Thomas Garnier
---
Based on next-20160805
---
arch/x86/mm/init.c | 14 --
1 file changed, 12 insert
while doing extensive testing of KASLR memory
randomization on different types of hardware.
Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-by: Thomas Garnier
---
Based on next-20160805
---
arch/x86/mm/init.c | 14 --
1 file changed, 12 insert
Initialize KASLR memory randomization after max_pfn is initialized. Also
ensure the size is rounded up. It could have created problems on machines
with more than 1 TB of memory at certain random addresses.
Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-
On Tue, Aug 9, 2016 at 9:54 AM, Borislav Petkov wrote:
> On Tue, Aug 09, 2016 at 09:35:54AM -0700, Thomas Garnier wrote:
>> Default implementation expects 6 pages maximum are needed for low page
>> allocations. If KASLR memory randomization is enabled, the worst case
>>
while doing extensive testing of KASLR memory
randomization on different types of hardware.
Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Signed-off-by: Thomas Garnier
---
Based on next-20160805
---
arch/x86/mm/init.c | 14 --
1 file changed, 12 insert
Signed-off-by: Thomas Garnier
---
Based on next-20160805
---
arch/x86/kernel/setup.c | 8 ++--
arch/x86/mm/kaslr.c | 2 +-
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index bcabb88..dc50644 100644
--- a/arch/x86/kernel/setup.c
+++
On Wed, Aug 10, 2016 at 9:35 AM, Borislav Petkov wrote:
> On Wed, Aug 10, 2016 at 04:59:40PM +0200, Jiri Kosina wrote:
>> Mine is Lenovo thinkpad x200s; I think Boris has been testing it on x230s,
>
> It says "X230" here under the screen.
>
>> but not sure whether any of the latest patches didn't
On Wed, Aug 10, 2016 at 6:18 AM, Jiri Kosina wrote:
> On Wed, 10 Aug 2016, Rafael J. Wysocki wrote:
>
>> The last patch I sent had a problem, because if restore_jump_address really
>> overlapped with the identity mapping of the restore kernel, it might share
>> PGD or PUD entries with that
On Wed, Aug 10, 2016 at 5:35 PM, Rafael J. Wysocki wrote:
> On Wed, Aug 10, 2016 at 11:59 PM, Jiri Kosina wrote:
>> On Wed, 10 Aug 2016, Rafael J. Wysocki wrote:
>>
>>> So I used your .config to generate one for my test machine and with
>>> that I can reproduce.
>>
>> Was that the config I've
On Wed, Aug 10, 2016 at 6:35 PM, Rafael J. Wysocki wrote:
> On Thu, Aug 11, 2016 at 3:17 AM, Thomas Garnier wrote:
>> On Wed, Aug 10, 2016 at 5:35 PM, Rafael J. Wysocki wrote:
>>> On Wed, Aug 10, 2016 at 11:59 PM, Jiri Kosina wrote:
>>>> On Wed, 10 Aug
On Thu, Aug 11, 2016 at 2:33 PM, Rafael J. Wysocki wrote:
> On Thursday, August 11, 2016 11:47:27 AM Thomas Garnier wrote:
>> On Wed, Aug 10, 2016 at 6:35 PM, Rafael J. Wysocki wrote:
>> > On Thu, Aug 11, 2016 at 3:17 AM, Thomas Garnier
>> > wrote:
>> >>
the tracing & the exception handler functions tried to
use a per-cpu variable.
Signed-off-by: Thomas Garnier
---
Based on next-20160808
Thanks to Rafael, Jiri & Borislav for tracking down this bug.
---
kernel/power/hibernate.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
di
On Fri, Aug 12, 2016 at 2:23 AM, Jiri Kosina wrote:
> On Fri, 12 Aug 2016, Jiri Kosina wrote:
>
> That's pretty nasty, as turning LOCKDEP on has side effects on the code
> that'd normally not be expected to be run at all (tracepoint off).
>
> Oh well.
Thanks for the analysis, I didn't get that far
On Fri, Aug 12, 2016 at 4:14 AM, Rafael J. Wysocki wrote:
> On Fri, Aug 12, 2016 at 7:49 AM, Borislav Petkov wrote:
>> On Thu, Aug 11, 2016 at 02:49:29PM -0700, Thomas Garnier wrote:
>>> Restore the processor state before calling any other function to ensure
>>> pe
the tracing & the exception handler functions tried to
use a per-cpu variable.
Reported-by: Jiri Kosina
Tested-by: Jiri Kosina
Acked-by: Pavel Machek
Reported-and-tested-by: Borislav Petkov
Signed-off-by: Thomas Garnier
---
Based on next-20160808
Thanks to Rafael, Jiri & Borislav for
On Wed, Jun 22, 2016 at 5:47 AM, Jason Cooper wrote:
> Hey Kees,
>
> On Tue, Jun 21, 2016 at 05:46:57PM -0700, Kees Cook wrote:
>> Notable problems that needed solving:
> ...
>> - Reasonable entropy is needed early at boot before get_random_bytes()
>>is available.
>
> This series is
but increase performance for machines without arch-specific
randomization instructions.
Thanks,
Thomas
On Wed, May 18, 2016 at 7:07 PM, Joonsoo Kim wrote:
> On Wed, May 18, 2016 at 12:12:13PM -0700, Thomas Garnier wrote:
>> I thought the mix of slab_test & kernbench would show a diver
On Thu, May 19, 2016 at 7:15 PM, Joonsoo Kim wrote:
> 2016-05-20 5:20 GMT+09:00 Thomas Garnier :
>> I ran the test given by Joonsoo and it gave me these minimum cycles
>> per size across 20 usage:
>
> I can't understand what you did here. Maybe, it's due to my poor English.
On Fri, Jun 17, 2016 at 2:02 AM, Ingo Molnar wrote:
>
> * Kees Cook wrote:
>
>> From: Thomas Garnier
>>
>> Minor change that allows early boot physical mapping of PUD level virtual
>> addresses. The current implementation expects the virtual address to be
On Fri, Jun 17, 2016 at 3:26 AM, Ingo Molnar wrote:
>
> * Kees Cook wrote:
>
>> --- a/arch/x86/Kconfig
>> +++ b/arch/x86/Kconfig
>> @@ -1993,6 +1993,23 @@ config PHYSICAL_ALIGN
>>
>> Don't change this unless you know what you are doing.
>>
>> +config RANDOMIZE_MEMORY
>> + bool
This is RFC v2 for the SLUB Freelist randomization. The patch is now based
on the Linux master branch (as the base SLAB patch was merged).
Changes since RFC v1:
- Redone slab_test testing to decide best entropy approach on new page
creation.
- Moved to use get_random_int as best approach to
functions are changed to align with the SLUB
implementation, now using get_random_* functions.
Signed-off-by: Thomas Garnier
---
Based on 0e01df100b6bf22a1de61b66657502a6454153c5
---
include/linux/slab_def.h | 11 +++-
mm/slab.c| 68
Time 102.47 (0.562732)
User Time 1045.3 (1.34263)
System Time 88.311 (0.342554)
Percent CPU 1105.8 (6.49444)
Context Switches 189081 (2355.78)
Sleeps 99231.5 (800.358)
Signed-off-by: Thomas Garnier
---
Based on 0e01df100b6bf22a1de61b66657502a6454153c5
---
include/linux/slub_def.h | 8 ++
On Wed, May 25, 2016 at 3:25 PM, Kees Cook wrote:
> On Tue, May 24, 2016 at 2:15 PM, Thomas Garnier wrote:
>> Implements Freelist randomization for the SLUB allocator. It was
>> previously implemented for the SLAB allocator. Both use the same
>> configuration option (CONFIG
On Wed, May 25, 2016 at 6:49 PM, Joonsoo Kim wrote:
> 2016-05-25 6:15 GMT+09:00 Thomas Garnier :
>> Implements Freelist randomization for the SLUB allocator. It was
>> previously implemented for the SLAB allocator. Both use the same
>> configuration option (CONFIG
On Wed, May 25, 2016 at 6:09 PM, Joonsoo Kim wrote:
> On Tue, May 24, 2016 at 02:15:22PM -0700, Thomas Garnier wrote:
>> This commit reorganizes the previous SLAB freelist randomization to
>> prepare for the SLUB implementation. It moves functions that will be
>> shared to
This is PATCH v1 for the SLUB Freelist randomization. The patch is now based
on the Linux master branch (as the base SLAB patch was merged).
Changes since RFC v2:
- Redone slab_test testing to decide best entropy approach on new page
creation.
- Moved to use get_random_int as best approach
Context Switches 189140 (2282.15)
Sleeps 99008.6 (768.091)
After:
Average Optimal load -j 12 Run (std deviation):
Elapsed Time 102.47 (0.562732)
User Time 1045.3 (1.34263)
System Time 88.311 (0.342554)
Percent CPU 1105.8 (6.49444)
Context Switches 189081 (2355.78)
Sleeps 99231.5 (800.358)
Signed-
chosen
because they provide a bit more entropy early in boot and better
performance when specific arch instructions are not available.
Signed-off-by: Thomas Garnier
Reviewed-by: Kees Cook
---
Based on next-20160526
---
include/linux/slab_def.h | 2 +-
mm/slab.c| 80
On Thu, Apr 21, 2016 at 6:30 AM, Boris Ostrovsky
wrote:
>
>
> On 04/15/2016 06:03 PM, Thomas Garnier wrote:
>>
>> +void __init kernel_randomize_memory(void)
>> +{
>> + size_t i;
>> + unsigned long addr = memory_rand_start;
>>
On Thu, Apr 21, 2016 at 8:46 AM, H. Peter Anvin wrote:
> On April 21, 2016 6:30:24 AM PDT, Boris Ostrovsky
> wrote:
>>
>>
>>On 04/15/2016 06:03 PM, Thomas Garnier wrote:
>>> +void __init kernel_randomize_memory(void)
>>> +{
>>> +size_t
Makes sense, thanks for the details.
On Thu, Apr 21, 2016 at 1:15 PM, H. Peter Anvin wrote:
> On April 21, 2016 8:52:01 AM PDT, Thomas Garnier wrote:
>>On Thu, Apr 21, 2016 at 8:46 AM, H. Peter Anvin wrote:
>>> On April 21, 2016 6:30:24 AM PDT, Boris Ostrovsky
>> wrote
Commit-ID: c7d2361f7524f365c1ae42f47880e3fa9efb2c2a
Gitweb: http://git.kernel.org/tip/c7d2361f7524f365c1ae42f47880e3fa9efb2c2a
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 9 Aug 2016 10:11:04 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Wed, 10
Commit-ID: 25dfe4785332723f09311dcb7fd91015a60c022f
Gitweb: http://git.kernel.org/tip/25dfe4785332723f09311dcb7fd91015a60c022f
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Wed, 27 Jul 2016 08:59:56 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Wed,
Commit-ID: fb754f958f8e46202c1efd7f66d5b3db1208117d
Gitweb: http://git.kernel.org/tip/fb754f958f8e46202c1efd7f66d5b3db1208117d
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 9 Aug 2016 10:11:05 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Wed, 10
Commit-ID: a95ae27c2ee1cba5f4f6b9dea43ffe88252e79b1
Gitweb: http://git.kernel.org/tip/a95ae27c2ee1cba5f4f6b9dea43ffe88252e79b1
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 21 Jun 2016 17:47:04 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 8
Commit-ID: 0483e1fa6e09d4948272680f691dccb1edb9677f
Gitweb: http://git.kernel.org/tip/0483e1fa6e09d4948272680f691dccb1edb9677f
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 21 Jun 2016 17:47:02 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 8
Commit-ID: faa379332f3cb3375db1849e27386f8bc9b97da4
Gitweb: http://git.kernel.org/tip/faa379332f3cb3375db1849e27386f8bc9b97da4
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 21 Jun 2016 17:47:00 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 8
Commit-ID: 021182e52fe01c1f7b126f97fd6ba048dc4234fd
Gitweb: http://git.kernel.org/tip/021182e52fe01c1f7b126f97fd6ba048dc4234fd
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 21 Jun 2016 17:47:03 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 8
Commit-ID: 59b3d0206d74a700069e49160e8194b2ca93b703
Gitweb: http://git.kernel.org/tip/59b3d0206d74a700069e49160e8194b2ca93b703
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 21 Jun 2016 17:46:59 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 8
Commit-ID: d899a7d146a2ed8a7e6c2f61bcd232908bcbaabc
Gitweb: http://git.kernel.org/tip/d899a7d146a2ed8a7e6c2f61bcd232908bcbaabc
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 21 Jun 2016 17:46:58 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 8
Commit-ID: b234e8a09003af108d3573f0369e25c080676b14
Gitweb: http://git.kernel.org/tip/b234e8a09003af108d3573f0369e25c080676b14
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 21 Jun 2016 17:47:01 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 8
Commit-ID: 90397a41779645d3abba5599f6bb538fdcab9339
Gitweb: http://git.kernel.org/tip/90397a41779645d3abba5599f6bb538fdcab9339
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 21 Jun 2016 17:47:06 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri, 8
Commit-ID: 4ff5308744f5858e4e49e56a0445e2f8b73e47e0
Gitweb: http://git.kernel.org/tip/4ff5308744f5858e4e49e56a0445e2f8b73e47e0
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Wed, 15 Jun 2016 12:05:45 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Sun,
Commit-ID: ef37bc361442545a5be3c56c49a08c3153032127
Gitweb: http://git.kernel.org/tip/ef37bc361442545a5be3c56c49a08c3153032127
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Tue, 21 Mar 2017 08:17:25 +0100
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Tue,
Commit-ID: f991376e444aee8f5643a45703c1433bf7948940
Gitweb: http://git.kernel.org/tip/f991376e444aee8f5643a45703c1433bf7948940
Author: Thomas Garnier <thgar...@google.com>
AuthorDate: Fri, 17 Mar 2017 10:50:34 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Sat,