On Fri, 14 Jan 2022 09:15:37 GMT, kabutz <d...@openjdk.java.net> wrote:

>>> > embarrassingly parallelizable
>>> 
>>> Having looked at [embarrassingly 
>>> parallel](https://en.wikipedia.org/wiki/Embarrassingly_parallel), I'm not 
>>> certain that this particular problem would qualify. The algorithm is easy 
>>> to parallelize, but in the end we still have some rather large numbers, so 
>>> memory will be our primary bottleneck. I'd expect to see a linear speedup 
>>> if it were "perfectly parallel", but this does not come close to that.
>> 
>> OK, fair point; to avoid possible confusion I have removed "embarrassingly". 
>> I don't think we need to refer to other algorithms.
>
> Hi @PaulSandoz, is there anything else that we need to do? Or is this in the 
> hopper for Java 19 already?

> @kabutz please see comments from Joe on the 
> [CSR](https://bugs.openjdk.java.net/browse/JDK-8278886), which should be easy 
> to address (I can update the CSR after you make changes).

I'm working on some results for Joe's question about the latency vs CPU usage 
of the parallelMultiply() and multiply() methods. This wasn't straightforward, 
because measuring a single thread is much easier than measuring all of the 
ForkJoinPool (FJP) threads, but I have a nice benchmark that I'm running now. I 
had to write my own harness rather than use JMH, because I don't think that JMH 
can measure at that level. I'm also measuring object allocation.
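
For the curious, the measurement approach is roughly the following sketch. This 
is not the actual harness: the class name, the operand size, and the use of 
com.sun.management.ThreadMXBean are illustrative, and it needs a build that 
already contains parallelMultiply().

```java
import java.lang.management.ManagementFactory;
import java.math.BigInteger;
import java.util.function.BinaryOperator;

// Rough sketch only: sums CPU time and allocation over all live threads
// before and after each run, so the common ForkJoinPool workers that
// parallelMultiply() fans out to are included in the totals.
public class MultiplyHarness {

    // com.sun.management.ThreadMXBean adds getThreadAllocatedBytes();
    // the cast is safe on HotSpot.
    private static final com.sun.management.ThreadMXBean MX =
            (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();

    public static void main(String... args) {
        // Operand size is illustrative, not the size used for the real runs
        BigInteger n = BigInteger.ONE.shiftLeft(100_000_000)
                                     .subtract(BigInteger.ONE);
        measure("BigInteger.multiply()", n, BigInteger::multiply);
        measure("BigInteger.parallelMultiply()", n, BigInteger::parallelMultiply);
    }

    private static void measure(String label, BigInteger n,
                                BinaryOperator<BigInteger> op) {
        long user0 = totalUserTimeNanos(), alloc0 = totalAllocatedBytes();
        long start = System.nanoTime();
        BigInteger result = op.apply(n, n);
        long real = System.nanoTime() - start;
        long user = totalUserTimeNanos() - user0;
        long alloc = totalAllocatedBytes() - alloc0;
        System.out.printf("%s%nreal  %.3fs%nuser  %.3fs%nmem   %.1fGB (result bits: %d)%n",
                label, real / 1e9, user / 1e9, alloc / 1e9, result.bitLength());
    }

    // Only threads alive at the time of the call are counted; workers that
    // have already exited drop their counters, so treat totals as approximate.
    private static long totalUserTimeNanos() {
        long sum = 0;
        for (long id : MX.getAllThreadIds()) {
            long t = MX.getThreadUserTime(id);   // -1 if dead or unsupported
            if (t > 0) sum += t;
        }
        return sum;
    }

    private static long totalAllocatedBytes() {
        long sum = 0;
        for (long id : MX.getAllThreadIds()) {
            long b = MX.getThreadAllocatedBytes(id);
            if (b > 0) sum += b;
        }
        return sum;
    }
}
```

Summing the per-thread counters before and after a run is only approximate, 
since workers that exit in between take their counters with them, but it is 
good enough to compare total CPU burn against wall-clock latency.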

Furthermore, I'm testing against all Java versions going back to Java 8, to 
make sure that we don't get any surprises. Here is a run on my current build:


```
BigInteger.multiply()
real  0m22.616s
user  0m22.470s
sys   0m0.008s
mem   84.0GB

BigInteger.parallelMultiply()
real  0m6.283s
user  1m3.200s
sys   0m0.004s
mem   84.0GB
```
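
To summarize this run: parallelMultiply() finished about 3.6x sooner in 
wall-clock (real) time, at the cost of about 2.8x more total CPU (user) time, 
with identical memory allocation.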


I will upload the results for all the Java versions later, and will also submit 
the benchmark.

-------------

PR: https://git.openjdk.java.net/jdk/pull/6409
