t...@gmplib.org (Torbjörn Granlund) writes:
> For invert_limb, we should write some leak-free C code for generating a
> suitable table, I suppose.
You mean leak-free code replacing the table lookup?
For 64 bits, the table used in x86_64/invert_limb is floor(0x7fd00 / x)
for 0x100 <= x < 0x200, 8
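As a minimal sketch (not code from the thread), an entry for a given
9-bit index could be computed branch-free instead of loaded from memory,
e.g. with a fixed-iteration restoring division. The name and shape below
are made up, and it assumes shifts, subtractions and masking run in
constant time on the target:

  #include <stdint.h>

  /* Hypothetical replacement for the 512-entry lookup: compute
     floor(0x7fd00 / d9) for 0x100 <= d9 < 0x200 with 11 branch-free
     restoring-division steps (the quotient fits in 11 bits).  */
  static inline uint64_t
  invert_limb_table_entry (uint64_t d9)
  {
    uint64_t n = 0x7fd00, q = 0;
    int i;
    for (i = 10; i >= 0; i--)
      {
        uint64_t s = d9 << i;
        uint64_t t = n - s;               /* wraps iff n < s */
        uint64_t lt = - (t >> 63);        /* all ones iff n < s */
        n = (n & lt) | (t & ~lt);         /* keep n, or subtract s */
        q |= ~lt & ((uint64_t) 1 << i);   /* set quotient bit if n >= s */
      }
    return q;
  }

Whether that is competitive with the current lookup is a separate
question; the point is only that no secret-dependent load or branch
remains.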
ni...@lysator.liu.se (Niels Möller) writes:
And there's also a similar table lookup in binvert_limb, used by
mpn_sec_powm.
I am surprised that I implemented it like that. This needs fixing.
For binvert_limb, doing some logic should get us 4 bits, then just one
more iteration. We could as
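For reference, a sketch (my wording, not from the thread) of the usual
table-free start for the 2-adic inverse: for odd x, the branch-free
expression (3*x) ^ 2 is already correct to 5 low bits, and each Newton
step y <- y*(2 - x*y) doubles the number of correct bits, so a 64-bit
limb needs four more steps:

  #include <stdint.h>

  /* Sketch of a table-free binvert for a 64-bit odd limb x.
     Invariant: after each step, x * y == 1 (mod 2^k) for the stated k.  */
  static inline uint64_t
  binvert_u64 (uint64_t x)
  {
    uint64_t y = (3 * x) ^ 2;   /* k = 5  */
    y *= 2 - x * y;             /* k = 10 */
    y *= 2 - x * y;             /* k = 20 */
    y *= 2 - x * y;             /* k = 40 */
    y *= 2 - x * y;             /* k = 80 >= 64 */
    return y;
  }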
t...@gmplib.org (Torbjörn Granlund) writes:
> Which is fine in itself. We do NOT try to hide the number of bits in
> operands.
We certainly don't hide passed-in limb counts. But most functions don't
leak any bits of the top limb. Maybe it's reasonable that division
functions (which are also speci
ni...@lysator.liu.se (Niels Möller) writes:
==28982== Conditional jump or move depends on uninitialised value(s)
==28982==    at 0x493A982: __gmpn_sec_div_r (in /usr/lib/x86_64-linux-gnu/libgmp.so.10.3.2)
==28982== Use of uninitialised value of size 8
==28982==    at 0x493C07E: __gmpn_in
Hi,
I'm trying to use mpn_sec_div_r. To verify that the code is indeed
side-channel silent, I have tests wrapping calls with
#define MARK_MPZ_LIMBS_UNDEFINED(parm) \
  VALGRIND_MAKE_MEM_UNDEFINED (mpz_limbs_read (parm), \
                               mpz_size (parm) * sizeof (mp_limb_t))
o
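For context, a self-contained sketch of how such a check might wrap a
single mpn_sec_div_r call directly on limb arrays. The wrapper and
buffer names are mine, and the operation semantics follow the GMP
manual (the remainder is left in the low dn limbs of np); only the limb
values are treated as secret, the counts nn and dn stay public:

  #include <stdlib.h>
  #include <gmp.h>
  #include <valgrind/memcheck.h>

  /* Mark the operand limbs undefined so memcheck reports any branch or
     address computation that depends on their values, then reduce
     N = {np,nn} modulo D = {dp,dn}.  */
  static void
  check_sec_div_r (mp_limb_t *np, mp_size_t nn, mp_limb_t *dp, mp_size_t dn)
  {
    mp_limb_t *tp = malloc (mpn_sec_div_r_itch (nn, dn) * sizeof (mp_limb_t));

    VALGRIND_MAKE_MEM_UNDEFINED (np, nn * sizeof (mp_limb_t));
    VALGRIND_MAKE_MEM_UNDEFINED (dp, dn * sizeof (mp_limb_t));

    mpn_sec_div_r (np, nn, dp, dn, tp);

    /* Make the remainder usable again before any checking of it.  */
    VALGRIND_MAKE_MEM_DEFINED (np, dn * sizeof (mp_limb_t));
    free (tp);
  }

Running a test like this under valgrind is what produces reports like
the ones quoted above whenever a branch or load depends on the marked
limbs.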