Hi Jeff, Konstantinos,
I think you might want MPI.C_DOUBLE_COMPLEX for your datatype, since
np.complex128 is double precision. But I think it's either ignoring this and
using the datatype of the object you're sending, or MPI4py is handling the
conversion in the backend somewhere. You could act
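For what it's worth, here is a minimal sketch (untested, assuming two ranks
launched with mpirun) of both approaches: passing MPI.C_DOUBLE_COMPLEX
explicitly, and letting MPI4py derive the datatype from the NumPy dtype.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 4

if rank == 0:
    a = np.arange(n, dtype=np.complex128) * (1 + 2j)
    # Explicit datatype: np.complex128 corresponds to MPI.C_DOUBLE_COMPLEX
    comm.Send([a, MPI.C_DOUBLE_COMPLEX], dest=1, tag=0)
    # Automatic: MPI4py derives the MPI datatype from a.dtype
    comm.Send(a, dest=1, tag=1)
elif rank == 1:
    b = np.empty(n, dtype=np.complex128)
    c = np.empty(n, dtype=np.complex128)
    comm.Recv([b, MPI.C_DOUBLE_COMPLEX], source=0, tag=0)
    comm.Recv(c, source=0, tag=1)
    print(np.array_equal(b, c))  # expected: True

Either way the same bytes should arrive on the receiving side; the explicit
datatype mainly guards against a mismatch between what you declare and what
the array actually holds.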
Thanks Jeff.
I ran your code and saw your point. Based on that, it seems that my
comparison by just printing the values was misleading.
I have two questions for you:
1. Can you please describe your setup, i.e. Python version, NumPy version,
MPI4py version, and Open MPI version? I'm asking since I
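As an aside, here is a small illustrative sketch (made-up arrays, not my
actual data) of why printing can hide differences and how a direct comparison
avoids that:

import numpy as np

a = np.array([1.0 + 0j])
b = a + 1e-10  # tiny difference, invisible at the default print precision

print(a, b)                   # both display as [1.+0.j]
print(np.array_equal(a, b))   # False: exact elementwise comparison
print(np.max(np.abs(a - b)))  # shows the actual size of the difference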
There are two issues:
1. You should be using MPI.C_COMPLEX, not MPI.COMPLEX. MPI.COMPLEX is a
Fortran datatype; MPI.C_COMPLEX is the C datatype (which is what NumPy is
using behind the scenes). A quick size check is sketched at the end of this
message.
2. Somehow the received B values are different between the two.
I derived this program from your tw
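For completeness, one quick (untested) way to see which predefined MPI
datatype lines up with a given NumPy complex dtype is to compare their sizes:

from mpi4py import MPI
import numpy as np

print("np.complex64 itemsize: ", np.dtype(np.complex64).itemsize)    # 8 bytes
print("np.complex128 itemsize:", np.dtype(np.complex128).itemsize)   # 16 bytes

print("MPI.C_COMPLEX:        ", MPI.C_COMPLEX.Get_size())         # typically 8
print("MPI.C_DOUBLE_COMPLEX: ", MPI.C_DOUBLE_COMPLEX.Get_size())  # typically 16
# Fortran COMPLEX; what this reports can depend on how Open MPI was built
print("MPI.COMPLEX (Fortran):", MPI.COMPLEX.Get_size())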
Assume a Python MPI program where a master node sends a pair of complex
matrices to each worker node and the worker node is supposed to compute
their product (conventional matrix product). The input matrices are
constructed at the master node according to some algorithm which there is
no need to e
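In skeleton form, a minimal sketch of such a setup (not the actual program;
the matrix construction is replaced here by random data, and the dimension n
is a placeholder) could look like this:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
n = 8  # placeholder matrix dimension

if rank == 0:
    rng = np.random.default_rng(0)
    for worker in range(1, size):
        A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        comm.Send([A, MPI.C_DOUBLE_COMPLEX], dest=worker, tag=0)
        comm.Send([B, MPI.C_DOUBLE_COMPLEX], dest=worker, tag=1)
else:
    A = np.empty((n, n), dtype=np.complex128)
    B = np.empty((n, n), dtype=np.complex128)
    comm.Recv([A, MPI.C_DOUBLE_COMPLEX], source=0, tag=0)
    comm.Recv([B, MPI.C_DOUBLE_COMPLEX], source=0, tag=1)
    C = A @ B  # conventional matrix product
    print("rank", rank, "norm of product:", np.linalg.norm(C))

Run with, e.g., mpirun -np 3 python script.py (one master plus two workers).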
Hi, all
I have gone through some sweat to compile OpenMPI against a custom
(non-native) glibc. The reason I need this is that GCC can use the
vectorized libm, which came in glibc 2.22. And of course no HPC OS ships
with v2.22 - they are all a few years behind!
While using a custom glibc for