On Sat, Jan 21, 2017 at 01:16:56AM +0100, Jason A. Donenfeld wrote:
> On Sat, Jan 21, 2017 at 1:15 AM, Theodore Ts'o <ty...@mit.edu> wrote:
> > But there is a shared pointer, which is used both for the dedicated
> > u32 array and the dedicated u64 array.  So when you increment the
> > pointer for the get_random_u32, the corresponding entry in the u64
> > array is wasted, no?
> 
> No, it is not a shared pointer. It is a different pointer with a
> different batch. The idea is that each function gets its own batch.
> That way there's always perfect alignment. This is why I'm suggesting
> that my approach is faster.

Oh, I see.  What was confusing me was that you used the same data
structure definition for both, but different instances of that
structure for get_random_u32 and get_random_u64.  I thought you were
using the same batched_entropy instance for both.  My bad.

I probably would have used different structure definitions for both,
but that's probably because I really am not fond of unions at all if
they can be avoided.  I thought you were using a union because you
were deliberately trying to use one instance of the structure as a
per-cpu variable for both u32 and u64.
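
For anyone following along, here's a rough userspace sketch of the
layout as I now understand it (not the actual patch; the names, the
_sketch suffixes, and the dummy refill are mine, and BATCH_BYTES just
stands in for the real ChaCha20 block size): one struct definition
whose buffer is a union, but two entirely separate instances, each
with its own position counter, so drawing a u32 never throws away the
other half of a u64 slot.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BATCH_BYTES 64   /* stand-in for the ChaCha20 block size */
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

struct batched_entropy {
	union {
		uint64_t entropy_u64[BATCH_BYTES / sizeof(uint64_t)];
		uint32_t entropy_u32[BATCH_BYTES / sizeof(uint32_t)];
	};
	unsigned int position;	/* next unread slot in this batch */
};

/* Two separate instances; in the kernel these would be per-cpu. */
static struct batched_entropy batch_u64;
static struct batched_entropy batch_u32;

/* Placeholder for the real CSPRNG refill; illustration only. */
static void refill_batch(void *buf, size_t len)
{
	memset(buf, 0xAA, len);	/* dummy bytes, not real entropy */
}

static uint64_t get_random_u64_sketch(void)
{
	struct batched_entropy *b = &batch_u64;

	/* Refill on first use and whenever this batch is exhausted. */
	if (b->position % ARRAY_SIZE(b->entropy_u64) == 0) {
		refill_batch(b->entropy_u64, sizeof(b->entropy_u64));
		b->position = 0;
	}
	return b->entropy_u64[b->position++];
}

static uint32_t get_random_u32_sketch(void)
{
	struct batched_entropy *b = &batch_u32;

	/*
	 * Because this batch is independent of the u64 one, consuming
	 * a u32 here never skips over half of a u64 slot.
	 */
	if (b->position % ARRAY_SIZE(b->entropy_u32) == 0) {
		refill_batch(b->entropy_u32, sizeof(b->entropy_u32));
		b->position = 0;
	}
	return b->entropy_u32[b->position++];
}

int main(void)
{
	printf("%llu %u\n",
	       (unsigned long long)get_random_u64_sketch(),
	       get_random_u32_sketch());
	return 0;
}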

So that's not how I would do things, but it's fine.

                                                - Ted
                                        
