Nodes=1:ppn=2
which means 4 threads. Then we omit -np and -hostfile in the mpirun command.
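A minimal sketch of that setup, assuming a PBS-style scheduler (the node counts, working directory, and binary name here are placeholders, not from the thread):

```shell
#!/bin/sh
#PBS -l nodes=2:ppn=2
# Inside the job, mpirun reads the PBS allocation directly,
# so no -np or -hostfile is needed: 2 nodes x 2 ppn = 4 ranks.
cd "$PBS_O_WORKDIR"
mpirun ./a.out
```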
On 31 Jul 2017 20:24, "Elken, Tom"
<tom.el...@intel.com<mailto:tom.el...@intel.com>> wrote:
Hi Mahmood,
With the -hostfile case, Open MPI is trying to helpfully run things faster by
keeping both processes on one host. Ways to avoid this…
On the mpirun command line add:
-pernode (runs 1 process per node), or
-npernode 1 , but these two have been deprecated in favor of the wonderful
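For reference, the deprecated spellings and the mapping option that replaced them look like this (the binary name is a placeholder):

```shell
# Deprecated spellings:
mpirun -pernode ./a.out        # one process per node
mpirun -npernode 1 ./a.out     # same effect, generalizes to N per node
# Current Open MPI mapping syntax ("processes per resource"):
mpirun --map-by ppr:1:node ./a.out
```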
" i do not think btl/openib can be used with QLogic cards
(please someone correct me if i am wrong)"
You are wrong :) . The openib BTL is the best one to use for interoperability
between QLogic and Mellanox IB cards.
The Intel True Scale (the continuation of the QLogic IB product line) Host SW
cluster. For some codes we see
> > noticeable differences using fillup vs round robin, though not 4x. Fillup
> > uses more shared memory while round robin uses more InfiniBand.
> >
> > Doug
> >
> > On Feb 1, 2017, at 3:25 PM, Andy Witzig <cap1...@icloud.com>
For this case: " a cluster system with 2.6GHz Intel Haswell with 20 cores /
node and 128GB RAM/node. "
are you running 5 ranks per node on 4 nodes?
What interconnect are you using for the cluster?
-Tom
> -----Original Message-----
> From: users [mailto:users-boun...@lists.open-mpi.org] On
Hi Mike,
In this file,
$ cat /etc/security/limits.conf
...
< do you see at the end ... >
* hard memlock unlimited
* soft memlock unlimited
# -- All InfiniBand Settings End here --
?
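Besides reading limits.conf, it is worth checking the limit a shell actually inherited; a small sketch:

```shell
# Show the effective locked-memory limit for the current shell.
# InfiniBand memory registration wants this to be "unlimited";
# a small value is a classic cause of memory-registration errors.
memlock_limit=$(ulimit -l)
echo "memlock: $memlock_limit"
```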
-Tom
> -----Original Message-----
> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Di
>
When you change major RHEL release # and OFED version #, it's a pretty safe bet
that you would need to rebuild Open MPI.
-Tom
P.S. There was no Open Fabrics OFED 2.4 (release #s jumped from 1.5 to 3.5),
so I guess this is a Mellanox OFED 2.4.
From: users [mailto:users-boun...@open-mpi.org]
Hi Na Zhang,
It seems likely that on your Open MPI 1.8.1 run you have the 2 ranks
running on one host, whereas in the 1.6.5 results they are running on 2 hosts.
You should be able to verify that by running top on one of the nodes during the
1.8.1 runs and see if you have 2 or 0
Martin Siegert wrote:
> Just set LDFLAGS='-Wl,-rpath,/usr/local/xyz/lib64' with autotools.
> With cmake? Really complicated.
John Cary wrote:
> For cmake,
>
> -DCMAKE_SHARED_LINKER_FLAGS:STRING=-Wl,-rpath,'$HDF5_SERSH_DIR/lib'
> or
>
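The two approaches side by side, as a sketch (the paths and the HDF5_SERSH_DIR variable are placeholders):

```shell
# autotools: pass the rpath through LDFLAGS at configure time
LDFLAGS='-Wl,-rpath,/usr/local/xyz/lib64' ./configure
# cmake: the equivalent linker flag on the command line
cmake -DCMAKE_SHARED_LINKER_FLAGS:STRING="-Wl,-rpath,${HDF5_SERSH_DIR}/lib" .
```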
That’s OK. Many of us make that mistake, though often as a typo.
One thing that helps is that the correct spelling of Open MPI has a space in
it, but OpenMP does not.
If you are not aware of what OpenMP is, here is a link: http://openmp.org/wp/
What makes it more confusing is that more and more apps.
Just to give a quick pointer... RHEL 6.4 is pretty new, and OFED 1.5.3.2 is
pretty old, so that is likely the root of your issue.
I believe the first OFED that supported RHEL 6.4 (which is roughly equivalent
to CentOS 6.4) is OFED 3.5-1:
http://www.openfabrics.org/downloads/OFED/ofed-3.5-1/
What also
> The trouble is when I try to add some "--mca" parameters to force it to
> use TCP/Ethernet, the program seems to hang. I get the headers of the
> "osu_bw" output, but no results, even on the first case (1 byte payload
> per packet). This is occurring on both the IB-enabled nodes, and on the
>
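A typical way to force the TCP path looks like this (the interface name and benchmark binary are assumptions, not from the post):

```shell
# Restrict Open MPI to the TCP and self BTLs and pin the interface,
# so traffic cannot silently fall back to an IB path.
mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -np 2 ./osu_bw
```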
On 8/3/13 7:09 PM, RoboBeans wrote:
On first 7 nodes:
[mpidemo@SERVER-3 ~]$ ofed_info | head -n 1
OFED-1.5.3.2:
On last 4 nodes:
[mpidemo@sv-2 ~]$ ofed_info | head -n 1
-bash: ofed_info: command not found
[Tom]
This is a pretty good clue that OFED is not installed on the last 4 nodes. You
> "Elken, Tom" <tom.el...@intel.com> writes:
> > there is a kcopy module
> > that assists shared memory MPI bandwidth in a way similar to knem.
>
> Somewhat OT, but is it documented? I went looking some time ago, and
> couldn't find anything more
> I was hoping that someone might have some examples of real application
> behaviour rather than micro benchmarks. It can be crazy hard to get that
> information from users.
[Tom]
I don't have direct performance information on knem, but with Intel's (formerly
QLogic's) PSM layer as delivered in
t they did not configure for mpi-f77 & mpif90, but perhaps this is
still helpful, if the AR and RANLIB flags are important.
-Tom
regards
Michael
On Mon, Jul 8, 2013 at 4:30 PM, Tim Carlson
<tim.carl...@pnl.gov<mailto:tim.carl...@pnl.gov>> wrote:
On Mon, 8 Jul 2013, Elken, Tom
On Mon, Jul 8, 2013 at 12:10 PM, Elken, Tom
<tom.el...@intel.com<mailto:tom.el...@intel.com>> wrote:
Do you guys have any plan to support Intel Phi in the future? That is, running
MPI code on the Phi cards or across the multicore and Phi, as Intel MPI does?
[Tom]
Hi Michael,
Because a Xeon Phi card acts a lot like a Linux host with an x86 architecture,
you can build your own Open MPI libraries
> As a guess: I suggest looking for a package named openmpi-devel, or something
> like that.
[Tom]
Yes, you want "-devel" in addition to the RPM you listed. Going to the URL
below, I see listed:
openmpi-1.5.4-1.el6.x86_64.rpm - Open Message Passing Interface
> It looks like your PATH is pointing to the Intel MPI mpirun,
> not to the Open MPI mpirun/mpiexec.
[Tom]
Just to expand a little on this, it looks like your path is pointing to the
Intel MPI run-time version (mpirt) that is included with the Intel Compiler and
its PATH/LD_LIBRARY_PATH is set
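A quick way to check which launcher the shell resolves first; a sketch:

```shell
# An Intel-compiler environment often prepends its mpirt/bin
# directory to PATH, shadowing the Open MPI mpirun.
resolved=$(command -v mpirun || echo "mpirun not on PATH")
echo "$resolved"
# The version banner distinguishes Open MPI from Intel MPI:
mpirun --version 2>/dev/null | head -n 1
```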
> > Intel acquired the InfiniBand assets of QLogic
> > about a year ago. These SDR HCAs are no longer supported, but should
> > still work.
[Tom]
I guess the more important part of what I wrote is that " These SDR HCAs are no
longer supported" :)
>
> Do you mean they should work with the
> I have seen it recommended to use psm instead of openib for QLogic cards.
[Tom]
Yes. PSM will perform better and be more stable when running Open MPI than
using verbs. Intel acquired the InfiniBand assets of QLogic about a year
ago. These SDR HCAs are no longer supported, but should
> The Intel Fortran 2013 compiler comes with support for Intel's MPI runtime and
> you are getting that instead of Open MPI. You need to fix your path for all
> the
> shells you use.
[Tom]
Agree with Michael, but thought I would note something additional.
If you are using OFED's mpi-selector to
Now I would like to test it with a simple hello project. Ralph Castain
suggested the following web site:
https://wiki.mst.edu/nic/examples/openmpi-intel-fortran90-example
This is the results of my simulation:
Hello World! I am0 of1
However, I have a quad core
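A single "I am 0 of 1" line means only one rank was launched. On a quad-core machine the run would normally be (the binary name is assumed):

```shell
mpirun -np 4 ./hello
# expect four lines: "Hello World! I am K of 4" for K = 0..3
```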
I'll agree with Jeff that what you propose sounds right for avg. round-trip
time.
Just thought I'd mention that when people talk about the ping-pong latency or
MPI latency benchmarks, they are usually referring to 1/2 the round-trip time.
So you compute everything the same as you did, and
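The arithmetic, with made-up numbers for illustration:

```shell
# Toy example: 10000 ping-pong iterations took 0.0320 s in total.
# Each iteration is one full round trip, so the quoted one-way
# "latency" is half the per-iteration time.
iters=10000
total_s=0.0320
awk -v n="$iters" -v t="$total_s" \
    'BEGIN { printf "one-way latency: %.2f us\n", t / n / 2 * 1e6 }'
```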
Hi Seshendra,
If you have enabled hyperthreading in the BIOS, then when you look at the
output of cat /proc/cpuinfo, you will see 2x the number of CPUs as the number
of cores on your system.
If that number of CPUs showing on a node = H, and the number of nodes in your
cluster with
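The logical-CPU count can be read straight from /proc/cpuinfo; here is a sketch run against a canned sample (a 2-core node with hyperthreading) so it is self-contained:

```shell
# On a live node you would use: grep -c '^processor' /proc/cpuinfo
sample='processor : 0
processor : 1
processor : 2
processor : 3'
ncpus=$(printf '%s\n' "$sample" | grep -c '^processor')
echo "logical CPUs: $ncpus"   # 2 cores x 2 hyperthreads
```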
bogomips : 4623.18
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
On Jul 16, 2012, at 4:09 PM, Elken, Tom wrote:
Anne,
output from "cat /proc/cpuinfo" on your node "hostname" may help those trying
to answer.
-Tom
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Ralph Castain
> Sent: Monday, July 16, 2012 2:47 PM
> To: Open MPI Users
>
Hi Sebastien,
The Infinipath / PSM software that was developed by PathScale/QLogic is now
part of Intel.
I'll advise you off-list about how to contact our customer support so we can
gather information about your software installation and work to resolve your
issue.
The 20 microseconds