Re: [PATCH] mm: SLAB freelist randomization

2016-04-25 Thread Andrew Morton
On Mon, 18 Apr 2016 12:52:30 -0700 Thomas Garnier  wrote:

> I agree, if we had a generic way to pass entropy across boots on all
> architectures, that would be amazing. I will let the SLAB maintainers
> decide on requiring CONFIG_ARCH_RANDOM or documenting it.

In our world, requiring that sort of attention from maintainers
requires a pretty active level of pinging, poking and harassing ;)

I do think that if you stick with get_random_bytes_arch() then it needs
a comment explaining why.

And I (still) don't think that get_random_bytes_arch() actually does
what you want - if CONFIG_ARCH_RANDOM isn't implemented then
get_random_bytes_arch() just fails.  IOW, your statement "the arch
version that will fall back on the get_random_bytes sub-API in the
worst case" is a misconception?  There is no fallback.



Re: [PATCH] mm: SLAB freelist randomization

2016-04-18 Thread Thomas Garnier
I agree, if we had a generic way to pass entropy across boots on all
architectures, that would be amazing. I will let the SLAB maintainers
decide on requiring CONFIG_ARCH_RANDOM or documenting it.

On Mon, Apr 18, 2016 at 12:36 PM, Laura Abbott  wrote:
> On 04/18/2016 08:59 AM, Thomas Garnier wrote:
>>
>> I will send the next version today. Note that get_random_bytes_arch
>> is used because at that stage we have 0 bits of entropy. It seemed
>> like a better idea to use the arch version that will fall back on
>> the get_random_bytes sub-API in the worst case.
>>
>
> This is unfortunate for ARM/ARM64. Those platforms don't have a standard
> method for getting random numbers, so until additional entropy is added,
> get_random_bytes will always return the same seed, and indeed I always
> see the same shuffle in a quick test on arm64. For KASLR, the workaround
> was to require the bootloader to pass in entropy. It might be good to
> either document this or require that this only be used with
> CONFIG_ARCH_RANDOM.
>
>
>
>> On Fri, Apr 15, 2016 at 3:47 PM, Thomas Garnier 
>> wrote:
>>>
>>> Thanks for the comments. I will address them in a v2 early next week.
>>>
>>> If anyone has other comments, please let me know.
>>>
>>> Thomas
>>>
>>> On Fri, Apr 15, 2016 at 3:26 PM, Joe Perches  wrote:

 On Fri, 2016-04-15 at 15:00 -0700, Andrew Morton wrote:
>
> On Fri, 15 Apr 2016 10:25:59 -0700 Thomas Garnier 
> wrote:
>>
>> Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>> SLAB freelist. The list is randomized during initialization of a new
>> set of pages. The order on different freelist sizes is pre-computed
>> at boot for performance. This security feature reduces the
>> predictability of the kernel SLAB allocator against heap overflows,
>> rendering attacks much less stable.


 trivia:

>> @@ -1229,6 +1229,61 @@ static void __init set_up_node(struct
>> kmem_cache *cachep, int index)

 []
>>
>> + */
>> +static freelist_idx_t master_list_2[2];
>> +static freelist_idx_t master_list_4[4];
>> +static freelist_idx_t master_list_8[8];
>> +static freelist_idx_t master_list_16[16];
>> +static freelist_idx_t master_list_32[32];
>> +static freelist_idx_t master_list_64[64];
>> +static freelist_idx_t master_list_128[128];
>> +static freelist_idx_t master_list_256[256];
>> +static struct m_list {
>> +   size_t count;
>> +   freelist_idx_t *list;
>> +} master_lists[] = {
>> +   { ARRAY_SIZE(master_list_2), master_list_2 },
>> +   { ARRAY_SIZE(master_list_4), master_list_4 },
>> +   { ARRAY_SIZE(master_list_8), master_list_8 },
>> +   { ARRAY_SIZE(master_list_16), master_list_16 },
>> +   { ARRAY_SIZE(master_list_32), master_list_32 },
>> +   { ARRAY_SIZE(master_list_64), master_list_64 },
>> +   { ARRAY_SIZE(master_list_128), master_list_128 },
>> +   { ARRAY_SIZE(master_list_256), master_list_256 },
>> +};


 static const struct m_list?

>


Re: [PATCH] mm: SLAB freelist randomization

2016-04-18 Thread Laura Abbott

On 04/18/2016 08:59 AM, Thomas Garnier wrote:

I will send the next version today. Note that get_random_bytes_arch
is used because at that stage we have 0 bits of entropy. It seemed
like a better idea to use the arch version that will fall back on
the get_random_bytes sub-API in the worst case.



This is unfortunate for ARM/ARM64. Those platforms don't have a standard
method for getting random numbers, so until additional entropy is added,
get_random_bytes will always return the same seed, and indeed I always
see the same shuffle in a quick test on arm64. For KASLR, the workaround
was to require the bootloader to pass in entropy. It might be good to
either document this or require that this only be used with
CONFIG_ARCH_RANDOM.
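
A sketch of the Kconfig shape of that suggestion (hypothetical - the
option name comes from the patch, but the ARCH_RANDOM dependency is only
the suggestion being made here, not merged code):

	config FREELIST_RANDOM
		bool "SLAB freelist randomization"
		depends on SLAB && ARCH_RANDOM
		help
		  Randomize the freelist order used on new pages. This
		  reduces the predictability of the kernel slab allocator
		  against heap overflows.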



On Fri, Apr 15, 2016 at 3:47 PM, Thomas Garnier  wrote:

Thanks for the comments. I will address them in a v2 early next week.

If anyone has other comments, please let me know.

Thomas

On Fri, Apr 15, 2016 at 3:26 PM, Joe Perches  wrote:

On Fri, 2016-04-15 at 15:00 -0700, Andrew Morton wrote:

On Fri, 15 Apr 2016 10:25:59 -0700 Thomas Garnier  wrote:

Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
SLAB freelist. The list is randomized during initialization of a new set
of pages. The order on different freelist sizes is pre-computed at boot
for performance. This security feature reduces the predictability of the
kernel SLAB allocator against heap overflows, rendering attacks much less
stable.


trivia:


@@ -1229,6 +1229,61 @@ static void __init set_up_node(struct kmem_cache 
*cachep, int index)

[]

+ */
+static freelist_idx_t master_list_2[2];
+static freelist_idx_t master_list_4[4];
+static freelist_idx_t master_list_8[8];
+static freelist_idx_t master_list_16[16];
+static freelist_idx_t master_list_32[32];
+static freelist_idx_t master_list_64[64];
+static freelist_idx_t master_list_128[128];
+static freelist_idx_t master_list_256[256];
+static struct m_list {
+   size_t count;
+   freelist_idx_t *list;
+} master_lists[] = {
+   { ARRAY_SIZE(master_list_2), master_list_2 },
+   { ARRAY_SIZE(master_list_4), master_list_4 },
+   { ARRAY_SIZE(master_list_8), master_list_8 },
+   { ARRAY_SIZE(master_list_16), master_list_16 },
+   { ARRAY_SIZE(master_list_32), master_list_32 },
+   { ARRAY_SIZE(master_list_64), master_list_64 },
+   { ARRAY_SIZE(master_list_128), master_list_128 },
+   { ARRAY_SIZE(master_list_256), master_list_256 },
+};


static const struct m_list?





Re: [PATCH] mm: SLAB freelist randomization

2016-04-18 Thread Thomas Garnier
I will send the next version today. Note that get_random_bytes_arch
is used because at that stage we have 0 bits of entropy. It seemed
like a better idea to use the arch version that will fall back on
the get_random_bytes sub-API in the worst case.
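
(For context: the randomization this seed feeds is a plain Fisher-Yates
shuffle of each pre-computed master list, as quoted in Andrew's review
downthread. A self-contained user-space sketch of the same idea -
illustrative only, the kernel code uses prandom_u32_state() rather than
rand():)

	#include <stdio.h>
	#include <stdlib.h>

	static void shuffle(unsigned short *list, size_t count, unsigned int seed)
	{
		size_t i, j;
		unsigned short tmp;

		srand(seed);
		for (i = 0; i < count; i++)		/* start from identity order */
			list[i] = i;
		for (i = count - 1; i > 0; i--) {	/* Fisher-Yates */
			j = rand() % (i + 1);
			tmp = list[i];
			list[i] = list[j];
			list[j] = tmp;
		}
	}

	int main(void)
	{
		unsigned short list[16];
		size_t i;

		shuffle(list, 16, 42);
		for (i = 0; i < 16; i++)
			printf("%u ", list[i]);
		printf("\n");
		return 0;
	}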

On Fri, Apr 15, 2016 at 3:47 PM, Thomas Garnier  wrote:
> Thanks for the comments. I will address them in a v2 early next week.
>
> If anyone has other comments, please let me know.
>
> Thomas
>
> On Fri, Apr 15, 2016 at 3:26 PM, Joe Perches  wrote:
>> On Fri, 2016-04-15 at 15:00 -0700, Andrew Morton wrote:
>>> On Fri, 15 Apr 2016 10:25:59 -0700 Thomas Garnier  
>>> wrote:
>>> > Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>>> > SLAB freelist. The list is randomized during initialization of a new set
>>> > of pages. The order on different freelist sizes is pre-computed at boot
>>> > for performance. This security feature reduces the predictability of the
>>> > kernel SLAB allocator against heap overflows, rendering attacks much less
>>> > stable.
>>
>> trivia:
>>
>>> > @@ -1229,6 +1229,61 @@ static void __init set_up_node(struct kmem_cache 
>>> > *cachep, int index)
>> []
>>> > + */
>>> > +static freelist_idx_t master_list_2[2];
>>> > +static freelist_idx_t master_list_4[4];
>>> > +static freelist_idx_t master_list_8[8];
>>> > +static freelist_idx_t master_list_16[16];
>>> > +static freelist_idx_t master_list_32[32];
>>> > +static freelist_idx_t master_list_64[64];
>>> > +static freelist_idx_t master_list_128[128];
>>> > +static freelist_idx_t master_list_256[256];
>>> > +static struct m_list {
>>> > +   size_t count;
>>> > +   freelist_idx_t *list;
>>> > +} master_lists[] = {
>>> > +   { ARRAY_SIZE(master_list_2), master_list_2 },
>>> > +   { ARRAY_SIZE(master_list_4), master_list_4 },
>>> > +   { ARRAY_SIZE(master_list_8), master_list_8 },
>>> > +   { ARRAY_SIZE(master_list_16), master_list_16 },
>>> > +   { ARRAY_SIZE(master_list_32), master_list_32 },
>>> > +   { ARRAY_SIZE(master_list_64), master_list_64 },
>>> > +   { ARRAY_SIZE(master_list_128), master_list_128 },
>>> > +   { ARRAY_SIZE(master_list_256), master_list_256 },
>>> > +};
>>
>> static const struct m_list?
>>


Re: [PATCH] mm: SLAB freelist randomization

2016-04-15 Thread Thomas Garnier
Thanks for the comments. I will address them in a v2 early next week.

If anyone has other comments, please let me know.

Thomas

On Fri, Apr 15, 2016 at 3:26 PM, Joe Perches  wrote:
> On Fri, 2016-04-15 at 15:00 -0700, Andrew Morton wrote:
>> On Fri, 15 Apr 2016 10:25:59 -0700 Thomas Garnier  
>> wrote:
>> > Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>> > SLAB freelist. The list is randomized during initialization of a new set
>> > of pages. The order on different freelist sizes is pre-computed at boot
>> > for performance. This security feature reduces the predictability of the
>> > kernel SLAB allocator against heap overflows, rendering attacks much less
>> > stable.
>
> trivia:
>
>> > @@ -1229,6 +1229,61 @@ static void __init set_up_node(struct kmem_cache 
>> > *cachep, int index)
> []
>> > + */
>> > +static freelist_idx_t master_list_2[2];
>> > +static freelist_idx_t master_list_4[4];
>> > +static freelist_idx_t master_list_8[8];
>> > +static freelist_idx_t master_list_16[16];
>> > +static freelist_idx_t master_list_32[32];
>> > +static freelist_idx_t master_list_64[64];
>> > +static freelist_idx_t master_list_128[128];
>> > +static freelist_idx_t master_list_256[256];
>> > +static struct m_list {
>> > +   size_t count;
>> > +   freelist_idx_t *list;
>> > +} master_lists[] = {
>> > +   { ARRAY_SIZE(master_list_2), master_list_2 },
>> > +   { ARRAY_SIZE(master_list_4), master_list_4 },
>> > +   { ARRAY_SIZE(master_list_8), master_list_8 },
>> > +   { ARRAY_SIZE(master_list_16), master_list_16 },
>> > +   { ARRAY_SIZE(master_list_32), master_list_32 },
>> > +   { ARRAY_SIZE(master_list_64), master_list_64 },
>> > +   { ARRAY_SIZE(master_list_128), master_list_128 },
>> > +   { ARRAY_SIZE(master_list_256), master_list_256 },
>> > +};
>
> static const struct m_list?
>


Re: [PATCH] mm: SLAB freelist randomization

2016-04-15 Thread Joe Perches
On Fri, 2016-04-15 at 15:00 -0700, Andrew Morton wrote:
> On Fri, 15 Apr 2016 10:25:59 -0700 Thomas Garnier  wrote:
> > Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
> > SLAB freelist. The list is randomized during initialization of a new set
> > of pages. The order on different freelist sizes is pre-computed at boot
> > for performance. This security feature reduces the predictability of the
> > kernel SLAB allocator against heap overflows, rendering attacks much less
> > stable.

trivia:

> > @@ -1229,6 +1229,61 @@ static void __init set_up_node(struct kmem_cache 
> > *cachep, int index)
[]
> > + */
> > +static freelist_idx_t master_list_2[2];
> > +static freelist_idx_t master_list_4[4];
> > +static freelist_idx_t master_list_8[8];
> > +static freelist_idx_t master_list_16[16];
> > +static freelist_idx_t master_list_32[32];
> > +static freelist_idx_t master_list_64[64];
> > +static freelist_idx_t master_list_128[128];
> > +static freelist_idx_t master_list_256[256];
> > +static struct m_list {
> > +   size_t count;
> > +   freelist_idx_t *list;
> > +} master_lists[] = {
> > +   { ARRAY_SIZE(master_list_2), master_list_2 },
> > +   { ARRAY_SIZE(master_list_4), master_list_4 },
> > +   { ARRAY_SIZE(master_list_8), master_list_8 },
> > +   { ARRAY_SIZE(master_list_16), master_list_16 },
> > +   { ARRAY_SIZE(master_list_32), master_list_32 },
> > +   { ARRAY_SIZE(master_list_64), master_list_64 },
> > +   { ARRAY_SIZE(master_list_128), master_list_128 },
> > +   { ARRAY_SIZE(master_list_256), master_list_256 },
> > +};

static const struct m_list?
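
I.e., something like this (sketch of the suggested tweak, untested):

	static const struct m_list {
		size_t count;
		freelist_idx_t *list;
	} master_lists[] = {
		{ ARRAY_SIZE(master_list_2), master_list_2 },
		{ ARRAY_SIZE(master_list_4), master_list_4 },
		{ ARRAY_SIZE(master_list_8), master_list_8 },
		{ ARRAY_SIZE(master_list_16), master_list_16 },
		{ ARRAY_SIZE(master_list_32), master_list_32 },
		{ ARRAY_SIZE(master_list_64), master_list_64 },
		{ ARRAY_SIZE(master_list_128), master_list_128 },
		{ ARRAY_SIZE(master_list_256), master_list_256 },
	};

The descriptor table itself is only ever read - the freelist_idx_t
arrays it points to are what get shuffled - so it can live in const data.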



Re: [PATCH] mm: SLAB freelist randomization

2016-04-15 Thread Andrew Morton
On Fri, 15 Apr 2016 10:25:59 -0700 Thomas Garnier  wrote:

> Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
> SLAB freelist. The list is randomized during initialization of a new set
> of pages. The order on different freelist sizes is pre-computed at boot
> for performance. This security feature reduces the predictability of the
> kernel SLAB allocator against heap overflows, rendering attacks much less
> stable.
> 
> For example this attack against SLUB (also applicable against SLAB)
> would be affected:
> https://jon.oberheide.org/blog/2010/09/10/linux-kernel-can-slub-overflow/
> 
> Also, since v4.6 the freelist has been moved to the end of the SLAB. This
> means a controllable heap is exposed to new attacks not yet publicly
> discussed. A kernel heap overflow can be transformed into multiple
> use-after-frees. This feature makes this type of attack harder too.
> 
> The config option name is not specific to the SLAB as this approach will
> be extended to other allocators like SLUB.
> 
> Performance results highlighted no major changes:
>
> ...
>
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1229,6 +1229,61 @@ static void __init set_up_node(struct kmem_cache 
> *cachep, int index)
>   }
>  }
>  
> +#ifdef CONFIG_FREELIST_RANDOM
> +/*
> + * Master lists are pre-computed random lists
> + * Lists of different sizes are used to optimize performance on different
> + * SLAB object sizes per pages.

"object sizes per pages" doesn't make sense.  "object-per-page counts"?
"object sizes"?

> + */
> +static freelist_idx_t master_list_2[2];
> +static freelist_idx_t master_list_4[4];
> +static freelist_idx_t master_list_8[8];
> +static freelist_idx_t master_list_16[16];
> +static freelist_idx_t master_list_32[32];
> +static freelist_idx_t master_list_64[64];
> +static freelist_idx_t master_list_128[128];
> +static freelist_idx_t master_list_256[256];
> +static struct m_list {
> + size_t count;
> + freelist_idx_t *list;
> +} master_lists[] = {
> + { ARRAY_SIZE(master_list_2), master_list_2 },
> + { ARRAY_SIZE(master_list_4), master_list_4 },
> + { ARRAY_SIZE(master_list_8), master_list_8 },
> + { ARRAY_SIZE(master_list_16), master_list_16 },
> + { ARRAY_SIZE(master_list_32), master_list_32 },
> + { ARRAY_SIZE(master_list_64), master_list_64 },
> + { ARRAY_SIZE(master_list_128), master_list_128 },
> + { ARRAY_SIZE(master_list_256), master_list_256 },
> +};
> +
> +static void __init freelist_random_init(void)
> +{
> + unsigned int seed;
> + size_t z, i, rand;
> + struct rnd_state slab_rand;
> +
> + get_random_bytes_arch(&seed, sizeof(seed));

Using get_random_bytes_arch() seems a rather poor decision.  There are
the caveats described at the get_random_bytes_arch() definition site,
and the minor issue that get_random_bytes_arch() only actually works on
x86 and powerpc!

This is run-once __init code, so rather than adding the kernel's very
first get_random_bytes_arch() call site(!), why not stick with good old
get_random_bytes()?

If there's something I'm missing, please at least place a very good
comment here explaining the reasoning.
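
Concretely, something like this (hypothetical rewrite of the seeding,
not the patch as posted):

	static void __init freelist_random_init(void)
	{
		unsigned int seed;
		struct rnd_state slab_rand;

		/* get_random_bytes() exists on every architecture */
		get_random_bytes(&seed, sizeof(seed));
		prandom_seed_state(&slab_rand, seed);
		/* ... then the Fisher-Yates shuffle below, unchanged ... */
	}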

> + prandom_seed_state(&slab_rand, seed);
> +
> + for (z = 0; z < ARRAY_SIZE(master_lists); z++) {
> + for (i = 0; i < master_lists[z].count; i++)
> + master_lists[z].list[i] = i;
> +
> + /* Fisher-Yates shuffle */
> + for (i = master_lists[z].count - 1; i > 0; i--) {
> + rand = prandom_u32_state(&slab_rand);
> + rand %= (i + 1);
> + swap(master_lists[z].list[i],
> + master_lists[z].list[rand]);
> + }
> + }
> +}
> +#else
> +static inline void __init freelist_random_init(void) { }
> +#endif /* CONFIG_FREELIST_RANDOM */
> +
> +
>  /*
>   * Initialisation.  Called after the page allocator have been initialised and
>   * before smp_init().
>
> ...
>
> @@ -2442,6 +2499,101 @@ static void cache_init_objs_debug(struct kmem_cache 
> *cachep, struct page *page)
>  #endif
>  }
>  
> +#ifdef CONFIG_FREELIST_RANDOM
> +enum master_type {
> + match,
> + less,
> + more
> +};
> +
> +struct random_mng {

I can't work out what "mng" means in this code.

> + unsigned int padding;
> + unsigned int pos;
> + unsigned int count;
> + struct m_list master_list;
> + unsigned int master_count;
> + enum master_type type;
> +};

It would be useful to document the above struct.  Skilfully documenting
the data structures is key to making the code understandable.
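
For example, something along these lines (the field descriptions are
guesses from context, shown only to illustrate the kind of kernel-doc
being asked for):

	/**
	 * struct random_mng - state for applying a master list to one slab
	 * @padding:      (guess) entries to skip when the master list is
	 *                bigger than the slab's object count
	 * @pos:          (guess) current position in the master list
	 * @count:        (guess) number of objects in the slab being set up
	 * @master_list:  the pre-computed random list chosen for this size
	 * @master_count: (guess) number of entries in the master list
	 * @type:         (guess) whether @count matches, is less than, or is
	 *                more than the master list size
	 */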

> +static void random_mng_initialize(struct random_mng *mng, unsigned int count)
> +{
> + unsigned int idx;
> + const unsigned int last_idx = ARRAY_SIZE(master_lists) - 1;
> +
> + memset(mng, 0, sizeof(*mng));
> + mng->count = count;
> + mng->pos = 0;
> + /* count is >= 2 */
> + idx = ilog2(count) - 1;

slab.c should now include log2.h.
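
That is, add near the top of mm/slab.c:

	#include <linux/log2.h>	/* for ilog2() */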

> + if (idx >= last_idx)
> +