[EMAIL PROTECTED] writes:

>> * ComparisonToft05Mixin which gives a result in GF256. It is
>> described in your progress report:
>>
>>     http://www.daimi.au.dk/~tomas/publications/progress.pdf
>>
>> * ComparisonToft07Mixin where the result is a Zp element. As far as
>> I know it is not described in any published material.
>
> True. And on a sidenote: it bugs me that the two comparisons return
> results of different types, and therefore cannot be interchanged. It
> shouldn't be too costly to convert the GF(256) result back to Zp,
> though.

Yeah, it is definitely annoying that they don't agree on the return
type.

> Alternatively, the '05 could be written entirely in Zp avoiding
> GF(256) altogether. The XORs in the diamond operator can still be
> local by replacing them with addition and subtraction.
>
>   bot = top_b * (bot_a - bot_b) + bot_b
>
> rather than
>
>   bot = top_b * (bot_a ^ bot_b) ^ bot_b

Hehe, that is cool! And since (^) == (+) == (-) for GF256, we can
easily switch to always computing the diamond operator like this.
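
By the way, a quick sanity check in plain Python (just treating the
shared bits as ordinary 0/1 integers, so this is only a sketch of the
idea, not VIFF code) confirms that the two forms agree:

  # Check that the add/sub form of the diamond operator matches the
  # XOR form for all combinations of bit inputs.
  for top_b in (0, 1):
      for bot_a in (0, 1):
          for bot_b in (0, 1):
              xor_form = top_b * (bot_a ^ bot_b) ^ bot_b
              addsub_form = top_b * (bot_a - bot_b) + bot_b
              assert xor_form == addsub_form, (top_b, bot_a, bot_b)
  print("Both forms agree on all bit inputs.")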

> This does the same and avoids the conversion to GF(256), but may be
> more expensive online (IIRC GF(256) computation is /really/ fast).

Well, that is easy to check. The timeit module says:

  % python -m timeit -s 'from viff.field import GF' \
                     -s 'F = GF(256)' -s 'x = F(17)' -s 'y = F(152)' \
                     'x + y'
  100000 loops, best of 3: 4.91 usec per loop

  % python -m timeit -s 'from viff.field import GF' \
                     -s 'F = GF(256)' -s 'x = F(17)' -s 'y = F(152)' \
                     'x * y'
  100000 loops, best of 3: 6.05 usec per loop

  % python -m timeit -s 'from viff.field import GF' \
                     -s 'from viff.util import find_prime' \
                     -s 'F = GF(find_prime(2**32))' \
                     -s 'x = F(17)' -s 'y = F(152)' \
                     'x + y'
  100000 loops, best of 3: 5.4 usec per loop

  % python -m timeit -s 'from viff.field import GF' \
                     -s 'from viff.util import find_prime' \
                     -s 'F = GF(find_prime(2**32))' \
                     -s 'x = F(17)' -s 'y = F(152)' \
                     'x * y'
  100000 loops, best of 3: 5.44 usec per loop

So for small field elements, both fields are about equally fast.
Testing with larger elements gives about the same results:

  % python -m timeit [...] -s 'F = GF(find_prime(2**32))' \
                           -s 'x = F(-17)' -s 'y = F(-152)' 'x + y'
  100000 loops, best of 3: 5.95 usec per loop

  % python -m timeit [...] -s 'F = GF(find_prime(2**32))' \
                           -s 'x = F(-17)' -s 'y = F(-152)' 'x * y'
  100000 loops, best of 3: 6.59 usec per loop

Like you, I had expected GF256 to be significantly faster.

>> I talked with Rune Thorbek yesterday, and he asked why we have not
>> implemented your comparison protocol from your PhD thesis:
>>
>>   http://www.daimi.au.dk/~tomas/publications/dissertation.pdf
>>
>> Is there any good reason why we haven't done that? Are the constants
>> much worse than the ones implemented?
>
> Short answer: Yes.
>
> Longer Answer: You need to do lots of tricks to go to
> constant-rounds. The logarithmic solutions do not give (much) worse
> round complexity, but require notably fewer multiplications.

Okay.

> The best one published that I know of is Tord's and mine from
> ICITS07. This is reasonable to implement (complexity-wise), but
> trust me, you don't want to :-)

Hmm... Rune says that I should consider this a challenge... :-)

I won't really have time before I return from Switzerland in September
(I leave in a week), but can I find the article online? I found the
conference webpage, but it does not link to your article, and neither
does your own publication list.

-- 
Martin Geisler