Parallel code will require a major rewrite of MPIR. So far we've
concentrated on build support for Sun, Apple and MSVC, and on improving
single-core assembly code for the major CPUs. We still have some work to
do to achieve our goals there (for example, we don't yet support 32-bit
OS X on x86, and we have heaps of assembly code improvements to come
over the next few releases).

If you are interested in contributing to parallel code in MPIR, please
let us know.

I have written some parallel code for *very* large integer
multiplication, but it is nowhere near ready to be merged into MPIR
yet. Someone could certainly volunteer to work on parallel algorithms
for smaller integer multiplications (say, from 100 to 1000000 limbs).
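To make the parallelism opportunity concrete: in Karatsuba-style
multiplication the three half-size sub-products are mutually
independent, so they can be handed to separate workers. The sketch
below is illustrative Python only, not MPIR code (MPIR's kernels are C
and assembly), and the function names and cutoff are my own
assumptions; in CPython the GIL limits thread speedup, but the task
structure is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def karatsuba(x, y, cutoff=1 << 64):
    # Below the cutoff, fall back to the builtin multiply.
    if x < cutoff or y < cutoff:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl)
    # x*y = a*2^(2n) + (xh*yl + xl*yh)*2^n + b, with the middle
    # term recovered as c - a - b (Karatsuba's trick).
    return (a << 2 * n) + ((c - a - b) << n) + b

def parallel_karatsuba(x, y):
    # The three top-level sub-products are independent, so they can
    # run concurrently (real parallelism would use processes or,
    # in C, native threads).
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)
    yh, yl = y >> n, y & ((1 << n) - 1)
    with ThreadPoolExecutor(max_workers=3) as ex:
        fa = ex.submit(karatsuba, xh, yh)
        fb = ex.submit(karatsuba, xl, yl)
        fc = ex.submit(karatsuba, xh + xl, yh + yl)
        a, b, c = fa.result(), fb.result(), fc.result()
    return (a << 2 * n) + ((c - a - b) << n) + b
```

Only the top recursion level is parallelised here; a production scheme
would also balance work across the recursive calls.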

We are about to release MPIR 1.1, which will improve our benchmark
results, so a comparison now would be premature. You can check the
MPIR 1.1 code out from svn if you want to time it; just run 'make
bench' in the bench directory to run the benchmark utility.

The command to check out svn is:

svn co http://modular.math.jmu.edu/svn/mpir/mpir/mpir-1.1 mpir-1.1

You'll find the comparative results depend on which architecture you
are benchmarking and which of the three benchmarks you are looking at
(rsa, multiplication or division).

One thing we need is asymptotically fast division code. That is one
area where GMP is well ahead. If you have ideas for algorithms or wish
to contribute, please let us know.
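For context on what "asymptotically fast division" means here: the
standard approach computes a fixed-point reciprocal by Newton
iteration, doubling the working precision at each level, so a division
costs only a constant number of multiplications at the top size. The
sketch below is my own illustrative Python, not MPIR or GMP code; the
helper names, the base-case cutoff and the exact scaling are
assumptions, and the final fix-up loops hedge the few ulps of error
the truncated iteration can leave.

```python
def recip(b, k):
    """Approximate floor(2**(2*k) / b) for a k-bit b by Newton
    iteration, doubling the precision at each recursion level."""
    if k <= 32:
        return (1 << (2 * k)) // b          # exact small base case
    h = k // 2 + 1                          # half precision + guard bit
    r = recip(b >> (k - h), h) << (k - h)   # seed from the top h bits
    # One Newton step at full precision: r <- r*(2 - b*r/2**(2*k)),
    # which roughly doubles the number of correct bits.
    return (r * ((1 << (2 * k + 1)) - r * b)) >> (2 * k)

def divide(a, b):
    """Quotient and remainder of a by b via the reciprocal."""
    k = b.bit_length()
    q = (a * recip(b, k)) >> (2 * k)
    # The reciprocal may be off by a few ulps; fix up the estimate.
    while (q + 1) * b <= a:
        q += 1
    while q * b > a:
        q -= 1
    return q, a - q * b
```

The fix-up is fast when a < b**2; a real limb-based implementation
would divide larger numerators in chunks of that size.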

Bill.

On 14 Apr, 17:01, jmakov <jernej.makov...@gmail.com> wrote:
> Hi.
>
> Can someone tell us the status of parallel algorithms in MPIR and give
> some benchmarks in respect with GMP?
>
> Tnx,
> jernej
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"mpir-devel" group.
To post to this group, send email to mpir-devel@googlegroups.com
To unsubscribe from this group, send email to 
mpir-devel+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/mpir-devel?hl=en
-~----------~----~----~----~------~----~------~--~---