Hi, everyone! I am trying to install LAMMPS linked against Open MPI 1.3.3
on a Linux server.
I can run simulations in serial successfully, but when I submit
jobs to run in parallel
after setting the environment in .bash_profile, the jobs exit soon after with an error.
The error info follows.
Can you send all the info listed at http://www.open-mpi.org/community/help/ ?
On Oct 9, 2012, at 5:25 PM, Thomas Evangelidis wrote:
> Greetings,
>
> I am trying to compile openmpi 1.6.2 on Fedora 17 64-bit using the intel
> compilers (icc and ifort version 13.0.0) but I am getting an error which
Greetings,
I am trying to compile openmpi 1.6.2 on Fedora 17 64-bit using the intel
compilers (icc and ifort version 13.0.0) but I am getting an error which I
cannot trace back. These are the steps I followed:
./configure CC=icc F77=ifort
make
util.o: In function `guess_strlen':
On Oct 9, 2012, at 11:34 AM, Tohiko Looka wrote:
> Mmm... The problem is I already have applications/nodes that use 1.5.4
> and upgrading might be difficult.
FWIW, Open MPI 1.5.4 is binary compatible with Open MPI 1.6.2.
> It is strange because it works on other nodes.
Perhaps you have differen
Mmm... The problem is I already have applications/nodes that use 1.5.4
and upgrading might be difficult.
It is strange because it works on other nodes.
I will try to check if 1.6.2 compiles anyways
Thanks for your reply,
On Tue, Oct 9, 2012 at 5:11 PM, Jeff Squyres wrote:
> Please try upgrading
I'll agree with Jeff that what you propose sounds right for avg. round-trip
time.
Just thought I'd mention that when people talk about the ping-pong latency or
MPI latency benchmarks, they are usually referring to 1/2 the round-trip time.
So you compute everything the same as you did, and then
Hi,
I have built openmpi-1.9a1r27380 with Java support and am trying some small
programs. When I try to scatter the columns of a matrix, I don't get
the expected results.
tyr java 106 mpijavac ColumnMain.java
tyr java 107 mpiexec -np 6 java ColumnMain
matrix:
1.00 2.00 3.00 4.00
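For reference, the usual way to hand one column to each rank in C is a strided derived datatype whose extent is resized to a single element, so that consecutive columns start one element apart. A minimal sketch, assuming a row-major ROWS x nprocs matrix with one column per process (names and sizes are illustrative, not taken from the Java program above):

```c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define ROWS 4

int main(int argc, char **argv) {
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int cols = nprocs;                 /* one column per process */
    double *matrix = NULL;
    if (rank == 0) {                   /* only the root holds the full matrix */
        matrix = malloc(ROWS * cols * sizeof(double));
        for (int i = 0; i < ROWS * cols; i++)
            matrix[i] = i + 1.0;
    }

    /* A column is ROWS elements with stride `cols` in row-major storage.
     * Resizing the extent to sizeof(double) lets MPI_Scatter step one
     * element forward per rank instead of one full vector. */
    MPI_Datatype colvec, coltype;
    MPI_Type_vector(ROWS, 1, cols, MPI_DOUBLE, &colvec);
    MPI_Type_create_resized(colvec, 0, sizeof(double), &coltype);
    MPI_Type_commit(&coltype);

    double column[ROWS];
    MPI_Scatter(matrix, 1, coltype, column, ROWS, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    printf("rank %d got a column starting with %.2f\n", rank, column[0]);

    MPI_Type_free(&coltype);
    MPI_Type_free(&colvec);
    if (rank == 0) free(matrix);
    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and run with `mpiexec -np <cols>`; rank r should receive column r.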
Hi,
I used the following options for "configure" in openmpi-1.9a1r27380
and I get "MPI_THREAD_MULTIPLE"
--enable-cxx-exceptions \
--enable-mpi-java \
--enable-heterogeneous \
--enable-opal-multi-threads \
--enable-mpi-thread-multiple \
--with-threads=posix \
--with-hwloc=internal \
When I said that I quickly found out that my installation does not have
MPI_THREAD_MULTIPLE support, it was because I was getting SIGSEGV in MPI calls
made from 2 threads at once. I later found that
MPI_Init_thread was saying that my provided support was MPI_THREAD_SINGLE (0).
If you ask for thread multiple, I believe we return thread funneled or thread
serialized. You can check, though - I might be remembering wrong, but I'm pretty
sure that's true.
Sent from my iPad
On Oct 9, 2012, at 7:09 AM, Brian Budge wrote:
> Hi Ralph -
>
> Is this really true? I've been using
Please try upgrading to Open MPI 1.6.2.
On Oct 8, 2012, at 6:34 PM, Tohiko Looka wrote:
> Greetings,
>
> I am trying to compile openmpi-1.5.4, while it usually works out fine
> it is failing on a specific node.
> The error is
> vt_metric_papi.c:262: error: too many arguments to function ‘PAPI_pe
Hi Ralph -
Is this really true? I've been using thread_multiple in my openmpi
programs for quite some time... There may be known cases where it
will not work, but for vanilla MPI use, it seems good to go. That's
not to say that you can't create your own deadlock if you're not
careful, but they
We don't support thread_multiple, I'm afraid. Only thread_funneled, so
you'll have to architect things so that each process can perform all its
MPI actions inside of a single thread.
On Tue, Oct 9, 2012 at 6:10 AM, Hodge, Gary C wrote:
> FYI, I implemented the harvesting thread but found out q
FYI, I implemented the harvesting thread but found out quickly that my
installation of Open MPI does not have MPI_THREAD_MULTIPLE support.
My worker thread still does MPI_Send calls to move the data to the next process.
So I am going to download 1.6.2 today, configure it with
--enable-thread-multip
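After rebuilding, the level the library actually granted can be checked at runtime: MPI_Init_thread returns it through its last argument, and the four level constants are defined in increasing order, so the test is a plain comparison. A minimal sketch:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* provided is one of MPI_THREAD_SINGLE < MPI_THREAD_FUNNELED <
     * MPI_THREAD_SERIALIZED < MPI_THREAD_MULTIPLE, so a simple
     * comparison tells you whether full multi-threading was granted. */
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "requested MPI_THREAD_MULTIPLE, got level %d\n",
                provided);

    MPI_Finalize();
    return 0;
}
```

If the build lacks --enable-mpi-thread-multiple, the message above is printed instead of silently crashing later in a two-threaded MPI call.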
We have a general policy on this list of not doing homework problems for
people. Your question sounds very much like a homework problem. :-)
I suggest you try googling around.
On Oct 9, 2012, at 11:57 AM, kalmun wrote:
> Hi,
>
> I am searching for an MPI code to solve an eigenvalue problem. (
Hi,
I am searching for an MPI code to solve an eigenvalue problem (using any
method, such as bisection). If you have one, or if you know a place to
download one, please let me know.
Thanks.
Kalmun
In general, what you said is "right"... for some definition of "right". :-)
Usually, benchmarking programs start a timer, do the round trip sends N times,
stop the timer, and then divide the total time by N (to get a smoother
definition of "average").
But keep in mind that there are many, many