On Tue, 15 May 2018 13:06:26 +1000
"Tobin C. Harding" <[email protected]> wrote:

> Currently we must wait for enough entropy to become available before
> hashed pointers can be printed.  We can remove this wait by using the
> hw RNG if available.
> 
> Use hw RNG to get keying material.
> 
> Suggested-by: Kees Cook <[email protected]>
> Signed-off-by: Tobin C. Harding <[email protected]>
> ---
>  lib/vsprintf.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/vsprintf.c b/lib/vsprintf.c
> index b82f0c6c2aec..3697a19c2b25 100644
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -1657,9 +1657,8 @@ char *device_node_string(char *buf, char *end, struct device_node *dn,
>  static bool have_filled_random_ptr_key __read_mostly;
>  static siphash_key_t ptr_key __read_mostly;
>  
> -static void fill_random_ptr_key(struct random_ready_callback *unused)
> +static void ptr_key_ready(void)
>  {
> -     get_random_bytes(&ptr_key, sizeof(ptr_key));
>       /*
>        * have_filled_random_ptr_key==true is dependent on get_random_bytes().
>        * ptr_to_id() needs to see have_filled_random_ptr_key==true

Nothing to do with this patch, but I believe there's a missing memory
barrier in the code.

Right after this we have:

        smp_mb();
        WRITE_ONCE(have_filled_random_ptr_key, true);

The comment says that have_filled_random_ptr_key must be set after
ptr_key has been updated, but there's no memory barrier on the read
side. In fact, I think the smp_mb() above could be an smp_wmb(). The
read side has:

        if (unlikely(!have_filled_random_ptr_key))
                return string(buf, end, "(ptrval)", spec);

/* Missing memory barrier smp_rmb() here. */

        hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);

Thus we can have something like:

        CPU0                            CPU1
        ----                            ----
                                        load ptr_key = 0
        store ptr_key = random
        smp_mb()
        store have_filled_random_ptr_key

                                        load have_filled_random_ptr_key = true

                                        BAD BAD BAD!
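
Something like the following pairing is what I have in mind (untested
sketch; the surrounding lines are paraphrased from lib/vsprintf.c, the
only point is where the smp_wmb()/smp_rmb() pair goes):

        /* Write side, at the end of ptr_key_ready(), once ptr_key has
         * been filled in:
         */
        smp_wmb();      /* pairs with smp_rmb() in ptr_to_id() */
        WRITE_ONCE(have_filled_random_ptr_key, true);

        /* Read side, in ptr_to_id(): */
        if (unlikely(!have_filled_random_ptr_key))
                return string(buf, end, "(ptrval)", spec);

        smp_rmb();      /* pairs with smp_wmb() in ptr_key_ready() */
        hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);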

I'll send a patch.

> @@ -1669,14 +1668,28 @@ static void fill_random_ptr_key(struct random_ready_callback *unused)
>       WRITE_ONCE(have_filled_random_ptr_key, true);
>  }
>  
> +static void fill_random_ptr_key(struct random_ready_callback *unused)
> +{
> +     get_random_bytes(&ptr_key, sizeof(ptr_key));
> +     ptr_key_ready();
> +}
> +
>  static struct random_ready_callback random_ready = {
>       .func = fill_random_ptr_key
>  };
>  
>  static int __init initialize_ptr_random(void)
>  {
> -     int ret = add_random_ready_callback(&random_ready);
> +     int ret;
> +     int key_size = sizeof(ptr_key);
> +
> +     /* Use hw RNG if available */
> +     if (get_random_bytes_arch(&ptr_key, key_size) == key_size) {
> +             ptr_key_ready();
> +             return 0;
> +     }
>  
> +     ret = add_random_ready_callback(&random_ready);
>       if (!ret) {
>               return 0;
>       } else if (ret == -EALREADY) {

But this patch looks good.

Reviewed-by: Steven Rostedt (VMware) <[email protected]>

-- Steve
