Any chance this is all due to an OS X security setting? Apple has
been putting locked doors on many, many things lately.
On Thu, May 5, 2022 at 8:57 AM Jeff Squyres (jsquyres) via users
wrote:
>
> Scott --
>
> Sorry; something I should have clarified in my original email: I meant you to
> run
If you are running this on a cluster or other professionally supported
machine, your system administrator may be able to help.
You should also check whether you should be running LS-DYNA
directly. I believe you should be running mpirun or mpiexec
followed by the name of the LS-DYNA
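For example, something roughly like this, where the executable name and
input file are only placeholders, not the real LS-DYNA invocation:
$ mpirun -np 8 mpp_lsdyna i=input.k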
Luis,
Can you install OpenMPI into your home directory (or other shared
filesystem) and use that? You may also want to contact your cluster
admins to see if they can help do that or offer another solution.
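A minimal sketch of a user-level install, with the version and prefix
only as examples, might be:
$ tar xzf openmpi-4.1.1.tar.gz && cd openmpi-4.1.1
$ ./configure --prefix=$HOME/opt/openmpi-4.1.1
$ make -j4 all
$ make install
$ export PATH=$HOME/opt/openmpi-4.1.1/bin:$PATH
$ export LD_LIBRARY_PATH=$HOME/opt/openmpi-4.1.1/lib:$LD_LIBRARY_PATH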
On Wed, Jan 26, 2022 at 3:21 PM Luis Alfredo Pires Barbosa via users
wrote:
>
> Hi Ralph,
>
definition of `ompi_op_avx_3buff_functions_avx2'
>
> ./.libs/liblocal_ops_avx2.a(liblocal_ops_avx2_la-op_avx_functions.o):/project/muno/OpenMPI/BUILD/SRC/openmpi-4.1.1/ompi/mca/op/avx/op_avx_functions.c:651:
> first defined here
> make[2]: *** [mca_op_avx.la] Error 2
> make[2]: Le
Ray,
If all the errors about not being compiled with -fPIC are still appearing,
there may be a bug that is preventing the option from getting through to
the compiler(s). It might be worth looking through the logs at the
full compile command for one or more of them to see whether that is the case.
We are getting this message when OpenMPI starts up.
--
WARNING: There was an error initializing an OpenFabrics device.
Local host: gls801
Local device: mlx5_0
Thomas,
I think Open MPI is installed correctly. This
$ mpiexec -mca btl ^openib -N 5 gcc --version
asks OpenMPI to run `gcc --version` once for each processor assigned to the
job, so if you did NOT get 5 sets of output, it would be incorrect.
From your error message, it looks to me as th
It covers a good deal more than MPI, but there is at least one full
chapter on MPI in
Scientific Programming and Computer Architecture, Divakar
Viswanath (MIT Press, 2017)
also available online at
https://divakarvi.github.io/bk-spca/spca.html
We are getting errors on our system that indicate that we should
export OMPI_MCA_btl_vader_single_copy_mechanism=none
Our user originally reported
> This occurs for both GCC and PGI. The errors we get if we do not set this
> indicate something is going wrong in our communication which uses
` has better debugging
capabilities?
Thanks,-- bennet
On Mon, Feb 3, 2020 at 12:02 PM Jeff Squyres (jsquyres)
wrote:
>
> On Feb 3, 2020, at 10:03 AM, Bennet Fauber wrote:
> >
> > Ah, ha!
> >
> > Yes, that seems to be it. Thanks.
>
> Ok, good. I u
duler (Slurm),
PMIx, and OpenMPI, so I am a bit muddled about how all the moving
pieces work yet.
On Sun, Feb 2, 2020 at 4:16 PM Jeff Squyres (jsquyres)
wrote:
>
> Bennet --
>
> Just curious: is there a reason you're not using UCX?
>
>
> > On Feb 2, 2020, a
We get these warnings/error from OpenMPI, version 3.1.4 and 4.0.2
--
WARNING: No preset parameters were found for the device that Open MPI
detected:
Local host: gl3080
Device name: mlx5_0
Device ven
Setting UCX_LOG_LEVEL=error suppresses the messages.
There may be release eager messages.
If anyone is interested, this is the GitHub Issue:
https://github.com/openucx/ucx/issues/4175
On Sun, Sep 8, 2019 at 11:37 AM Bennet Fauber wrote:
>
> I am posting this here, first, as I think
I am posting this here, first, as I think these questions are probably
OpenMPI related and not related specifically to parallel HDF5.
I am trying to get parallel HDF5 installed, but in the `make check`, I
am getting many, many warnings of the form
-
mpool.c:38 UCX WARN object 0x2afbefc67f
Hi, Open MPI developers,
Is this something you might be interested in?
-- Forwarded message -
From: Sarah Maddox
Date: Mon, Mar 11, 2019 at 2:56 PM
Subject: Announcement and thanks to Season of Docs survey respondents:
Season of Docs has launched
To:
We’re delighted to announce
From the web page at
https://www.open-mpi.org/nightly/
Before deciding which series to download, be sure to read Open
MPI's philosophy on
version numbers. The short version is that odd numbered release
series are "feature"
series that eventually morph into even numbered "super stable
Oh, you would get overwhelmed, almost certainly.
On Wed, Mar 6, 2019 at 10:47 AM Ralph H Castain wrote:
>
> We currently reserve the Slack channel for developers. We might be willing to
> open a channel for users, but we’d have to discuss it - there is a concern
> that we not get overwhelmed :-
Dani,
We have had to specify the path to the external PMIx explicitly when
compiling both Slurm and OpenMPI; e.g.,
--with-pmix=/opt/pmix/3.1.2
That ensures that both are referring to the same version.
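That is, passing the same path to both configures (the path is only an
example):
$ ./configure --with-pmix=/opt/pmix/3.1.2    # same flag for both the Slurm and the OpenMPI builds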
-- bennet
On Sun, Mar 3, 2019 at 8:56 AM Daniel Letai wrote:
>
> Hello,
>
>
> I have bui
Jeff Squyres (jsquyres) via users
wrote:
>
> On Feb 28, 2019, at 11:27 AM, Bennet Fauber wrote:
> >
> > 13bb410b52becbfa140f5791bd50d580 /sw/src/arcts/ompi/openmpi-1.10.7.tar.gz
> > bcea63d634d05c0f5a821ce75a1eb2b2 openmpi-v1.10-201705170239-5e373bf.tar.gz
>
> Bennet --
is
sometimes necessary.
If 1.10.7 is too old to debug, I understand.
On Thu, Feb 28, 2019 at 12:06 PM Jeff Squyres (jsquyres) via users <
users@lists.open-mpi.org> wrote:
> On Feb 28, 2019, at 11:27 AM, Bennet Fauber wrote:
> >
> > 13bb410b52becbfa140f5791bd50d580
> /sw/s
too much more output to include here.
Seems to be the same situation with the last available nightly build, as well.
bcea63d634d05c0f5a821ce75a1eb2b2 openmpi-v1.10-201705170239-5e373bf.tar.gz
On Sun, Feb 24, 2019 at 8:11 AM Bennet Fauber wrote:
>
> HI, Gilles,
>
> With respect t
HI, Gilles,
With respect to your comment about not using --FOO=/usr: it is bad
practice, sure, and it should be unnecessary, but we have had at least
one instance where it was also necessary for the requested feature to
actually work. The case I am thinking of was, in particular, OpenMPI
1.10.
Used to be that you could put default MCA settings in
OMPI_ROOT/etc/openmpi-mca-params.conf.
btl_openib_allow_ib=1
You could try that.
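For example, appending the setting to the site-wide file, where
OMPI_ROOT stands in for your Open MPI install prefix:
$ echo 'btl_openib_allow_ib = 1' >> $OMPI_ROOT/etc/openmpi-mca-params.conf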
-- bennet
On Mon, Jan 7, 2019 at 8:16 AM Udayanga Wickramasinghe wrote:
>
> Hi Salim,
> Thank you. Yeah, I noticed warnings would vanish by turning on
>
Maybe the distribution tarball at
https://download.open-mpi.org/release/open-mpi/v3.1/openmpi-3.1.3.tar.gz
did not get refreshed after the fix in
https://github.com/bosilca/ompi/commit/b902cd5eb765ada57f06c75048509d0716953549
was implemented? I downloaded the tarball from open-mpi.org today, 2
:
> https://pmix.org/support/faq/how-does-pmix-work-with-containers/
>
> Your options would be to build OMPI against the same PMIx 2.0.2 you used
> for Slurm, or update the PMIx version you used for Slurm to something that
> can support cross-version operations.
>
> Ralph
>
>
I have been having some difficulties getting the right combination of
SLURM, PMIx, and OMPI 3.1.x (specifically 3.1.2) to compile in such a way
that both the srun method of starting jobs and mpirun/mpiexec will work.
If someone has a slurm 18.08 or newer, PMIx, and OMPI 3.x that works with
bot
There is a Linux utility program `locate` that may be installed on
your system. You could try
$ locate ibv_devinfo
For example, mine returns
$ locate ibv_devinfo
/usr/bin/ibv_devinfo
/usr/share/man/man1/ibv_devinfo.1.gz
That should find it if it is on local disk and not in a network
filesystem, and i
point it to the slurm pmi directories.
>
>
> --
> *From:* users on behalf of Bennet
> Fauber
> *Sent:* Wednesday, October 10, 2018 1:14 PM
> *To:* users@lists.open-mpi.org
> *Subject:* Re: [OMPI users] issue compiling openmpi 3.2.1 with pmi and
I thought the --with-pmi=/the/dir was meant to point to the top of a
traditional FHS installation of PMI; e.g., /opt/pmi with
subdirectories for bin, lib, include, man, etc. It looks like this is
pointing only to the header file, based on the name.
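In other words, I would have expected something like this, where
/opt/pmi is a stand-in for a prefix that contains include/ and lib/
for the Slurm PMI libraries:
$ ./configure --with-pmi=/opt/pmi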
On Wed, Oct 10, 2018 at 11:27 AM Ralph H Castai
fails to run the binary with mpirun.
It is late, and I am baffled.
On Mon, Jun 18, 2018 at 9:02 PM Bennet Fauber wrote:
>
> Ryan,
>
> With srun it's fine. Only with mpirun is there a problem, and that is
> both on a single node and on multiple nodes. SLURM was built against
> Hello world from processor slepner032.amarel.rutgers.edu, rank 8 out of 16
> processors
> Hello world from processor slepner032.amarel.rutgers.edu, rank 9 out of 16
> processors
> Hello world from processor slepner032.amarel.rutgers.edu, rank 10 out of 16
> processors
> Hello wo
ring directory `/tmp/build/openmpi-3.0.0/test/class'
make[4]: Entering directory `/tmp/build/openmpi-3.0.0/test/class'
I have to interrupt it; it has been running for many minutes, and
these tests do not usually behave this way.
-- bennet
On Mon, Jun 18, 2018 at 4:21 PM Bennet Fauber wrot
3.0.0,
> > and we'll try downgrading SLURM to a prior version.
> >
> > -- bennet
> >
> >
> > -- bennet
> > On Mon, Jun 18, 2018 at 10:56 AM r...@open-mpi.org
> > wrote:
> >>
> >> Hmmm...well, the error has changed from your initial report. Tur
E NODES
> > NODELIST(REASON)
> > 158 standard bash bennet R 14:30 1 cav01
> > [bennet@cavium-hpc ~]$ srun hostname
> > cav01.arc-ts.umich.edu
> > [ repeated 23 more times ]
> >
> > As always, your help is much appreciated,
>
able-debug to your OMPI configure cmd line, and then add --mca
> plm_base_verbose 10 to your mpirun cmd line. For some reason, the remote
> daemon isn’t starting - this will give you some info as to why.
>
>
> > On Jun 17, 2018, at 9:07 AM, Bennet Fauber wrote:
> >
>
I have a compiled binary that will run with srun but not with mpirun.
The attempts to run with mpirun all result in failures to initialize.
I have tried this on one node, and on two nodes, with firewall turned
on and with it off.
Am I missing some command line option for mpirun?
OMPI built from t
component was selected as the default
> configure: error: Cannot continue
> $
> ---
>
> Are you seeing something different?
>
>
>
> > On Jun 8, 2018, at 11:16 AM, r...@open-mpi.org wrote:
> >
> >
> >
> >> On Jun 8, 2018, at 8:10 AM, Bennet Fauber
7, 2018, at 7:41 AM, Bennet Fauber wrote:
> >
> > Thanks, Ralph,
> >
> > I just tried it with
> >
> >srun --mpi=pmix_v2 ./test_mpi
> >
> > and got these messages
> >
> >
> > srun: Step created for job 89
> > [cav02.arc
Artem,
Please find attached the gzipped slurmd.log with the entries from the
failed job's run.
-- bennet
On Fri, Jun 8, 2018 at 7:53 AM Bennet Fauber wrote:
> Hi, Artem,
>
> Thanks for the reply. I'll answer a couple of questions inline below.
>
> One odd thin
opics:
>
>1. Re: Fwd: OpenMPI 3.1.0 on aarch64 (r...@open-mpi.org)
>
> ------
>
> Message: 1
> Date: Thu, 7 Jun 2018 08:05:30 -0700
> From: "r...@open-mpi.org"
> To: Open MPI Users
> Subject: Re: [
I rebuilt and examined the logs more closely. There was a warning
about a failure with the external hwloc, and that led to finding that
the CentOS hwloc-devel package was not installed.
I also added the options that we have been using for a while,
--disable-dlopen and --enable-shared, to the conf
k you need to set your MPIDefault to pmix_v2 since you are using a PMIx
> v2 library
>
>
>> On Jun 7, 2018, at 6:25 AM, Bennet Fauber wrote:
>>
>> Hi, Ralph,
>>
>> Thanks for the reply, and sorry for the missing information. I hope
>> this fill
your intent was to use Slurm’s PMI-1 or PMI-2, then you need to configure
> OMPI --with-pmi=
>
> Ralph
>
>
>> On Jun 7, 2018, at 5:21 AM, Bennet Fauber wrote:
>>
>> We are trying out MPI on an aarch64 cluster.
>>
>> Our system administrators installed S
We are trying out MPI on an aarch64 cluster.
Our system administrators installed SLURM and PMIx 2.0.2 from .rpm.
I compiled OpenMPI using the ARM distributed gcc/7.1.0 using the
configure flags shown in this snippet from the top of config.log
It was created by Open MPI configure 3.1.0, which was
hangs as well - no change.
>
> Marcin
>
>
>
> On 06/04/2018 05:27 PM, r...@open-mpi.org wrote:
>
> It might call disconnect more than once if it creates multiple
> communicators. Here’s another test case for that behavior:
>
>
>
>
>
> On Jun 4, 2018, at 7:
Just out of curiosity, but would using Rmpi and/or doMPI help in any way?
-- bennet
On Mon, Jun 4, 2018 at 10:00 AM, marcin.krotkiewski
wrote:
> Thanks, Ralph!
>
> Your code finishes normally, I guess then the reason might be lying in R.
> Running the R code with -mca pmix_base_verbose 1 i see
> You can test this out by adding --mpi=pmi2 to your srun cmd line and see if
> that solves the problem (you may also need to add OMPI_MCA_pmix=s2 to your
> environment as slurm has a tendency to publish envars even when they aren’t
> being used).
>
>
>
>> On Nov 29, 2017, at
.@open-mpi.org :
>
>> What Charles said was true but not quite complete. We still support the
>> older PMI libraries but you likely have to point us to wherever slurm put
>> them.
>>
>> However,we definitely recommend using PMIx as you will get a faster launch
>>
>
ving a bum steer.
>
> Hope this helps,
>
> Charlie Taylor
> University of Florida
>
> On Nov 16, 2017, at 10:34 AM, Bennet Fauber wrote:
>
> I think that OpenMPI is supposed to support SLURM integration such that
>
>srun ./hello-mpi
>
> should work? I built OMPI
I think that OpenMPI is supposed to support SLURM integration such that
srun ./hello-mpi
should work? I built OMPI 2.1.2 with
export CONFIGURE_FLAGS='--disable-dlopen --enable-shared'
export COMPILERS='CC=gcc CXX=g++ FC=gfortran F77=gfortran'
CMD="./configure \
--prefix=${PREFIX} \
' and '--with-pmi=...'
>
> D
>
> On 11/14/2017 10:01 AM, Bennet Fauber wrote:
> > We are trying SLURM for the first time, and prior to this I've always
> > built OMPI with Torque support. I was hoping that someone with more
> > experience than I wit
We are trying SLURM for the first time, and prior to this I've always built
OMPI with Torque support. I was hoping that someone with more experience
than I with both OMPI and SLURM might provide a bit of up-front advice.
My situation is that we are running CentOS 7.3 (soon to be 7.4), we use
Mell
Would
$ mpirun -x LD_LIBRARY_PATH ...
work here? I think, from the man page for mpirun, that should request
that it export the currently set value of LD_LIBRARY_PATH
to the remote nodes prior to executing the command there.
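Roughly like this, with the executable name as a placeholder:
$ mpirun -x LD_LIBRARY_PATH -np 4 ./my_app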
-- bennet
On Tue, Aug 22, 2017 at 11:55 AM, Jackson, G
with
gcc 4.8.5 as shipped.
-- bennet
On Mon, Feb 20, 2017 at 1:10 PM, r...@open-mpi.org wrote:
> If you can send us some more info on how it breaks, that would be helpful.
> I’ll file it as an issue so we can track things
>
> Thanks
> Ralph
>
>
>> On Feb 20, 201
on’t remember if the configury checks for functions in the library
> or not. If so, then you’ll need that wherever you build OMPI, but everything
> else is accurate
>
> Good luck - and let us know how it goes!
> Ralph
>
>> On Feb 17, 2017, at 4:34 PM, Bennet Fauber wrot
et around the problem). Thus, it isn’t hard to
> avoid this portability problem - you just need to think ahead a bit.
>
> HTH
> Ralph
>
>> On Feb 17, 2017, at 3:49 PM, Bennet Fauber wrote:
>>
>> I am wishing to follow the instructions on the Singularity web site
I am wishing to follow the instructions on the Singularity web site,
http://singularity.lbl.gov/docs-hpc
to test Singularity and OMPI on our cluster. My previously normal
configure for the 1.x series looked like this.
./configure --prefix=/usr/local \
--mandir=${PREFIX}/share/man \
-
How do they compare if you run a much smaller number of ranks, say -np 2 or 4?
Is the workstation shared and doing any other work?
You could insert some diagnostics into your script, for example,
uptime and free, both before and after running your MPI program and
compare.
You could also run top
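A rough sketch of the script, with the application name as a placeholder:
uptime; free -m
mpirun ./my_app
uptime; free -m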
You may want to run this by Penguin support, too.
I believe that Penguin on Demand uses Torque, in which case the
nodes=1:ppn=20
is requesting 20 cores on a single node.
If this is Torque, then you should get a host list with counts by inserting
uniq -c $PBS_NODEFILE
after the last #P
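A rough sketch of the job script, with the resource request only as an
example:
#PBS -l nodes=1:ppn=20
uniq -c $PBS_NODEFILE
mpirun ./my_app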
PPFLAGS=-I/Users/mathomp4/src/MPI/openmpi-
>> 2.0.1 -I/Users/mathomp4/src/MPI/openmpi-2.0.1
>> -I/Users/mathomp4/src/MPI/openmpi-2.0.1/opal/include
>> -I/Users/mathomp4/src/MPI/o
>> penmpi-2.0.1/opal/mca/hwloc/hwloc1112/hwloc/include -Drandom=opal_random'
>> --cache-file=/d
atever magic found stdint.h for the startup isn't passed
> down to libevent when it builds? As I scan the configure output, PMIx sees
> stdint.h in its section and ROMIO sees it as well, but not libevent2022. The
> Makefiles inside of libevent2022 do have 'oldincludedir = /usr/inc
I think PGI uses installed GCC components for some parts of standard C
(at least for some things on Linux, it does; and I imagine it is
similar for Mac). If you look at the post at
http://www.pgroup.com/userforum/viewtopic.php?t=5147&sid=17f3afa2cd0eec05b0f4e54a60f50479
The problem seems to have
gt; Department of Mechanical Engineering
> Imperial College London
> Exhibition Road
> London SW7 2AZ
> Tel:+44 (0)20 7594 7037/7033
> Mobile +44 (0)776 495 9702
> Fax:+44 (0)20 7594 5702
> E-mail: w.jo...@imperial.ac.uk
> web site: http://www.imperial.ac.uk/me
>
Can you include your entire ./configure line? Also, it would perhaps
be useful to look at the output of
$ printenv | grep LIBRARY
to make sure that all Intel library paths made it into the appropriate
variables.
When I built 1.10.2, I had these:
LIBRARY_PATH=/sw/arcts/centos7/intel/2013.1.046
Mahesh,
Depending what you are trying to accomplish, might using the mpirun option
-pernode (or --pernode)
work for you? That requests that only one process be spawned per
available node.
We generally use this for hybrid codes, where the single process will
spawn threads to the remaining proc
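For a hybrid job that might look something like this, with the thread
count and binary only as placeholders:
$ export OMP_NUM_THREADS=8
$ mpirun --pernode ./hybrid_app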
le or as environment variables, as described in the MCA
> section below. Some
>examples include:
>
>    mpirun option      MCA parameter key            value
>
>    --map-by core      rmaps_base_mapping_policy    core
>    --map-by socket
Ralph,
Alas, I will not be at SC16. I would like to hear and/or see what you
present, so if it gets made available in an alternate format, I'd
appreciate knowing where and how to get it.
I am more and more coming to think that our cluster configuration is
essentially designed to frustrate MPI develo
Matlab may have its own MPI installed. It definitely does if you have
the parallel computing toolbox. If you have that, it could be causing
problems. If you can, you might consider compiling your Matlab
application into a standalone executable, then call that from your own
program. That bypasse
Pardon my naivete, but why is bind-to-none not the default, and if the
user wants to specify something, they can then get into trouble
knowingly? We have had all manner of problems with binding when using
cpusets/cgroups.
-- bennet
On Thu, Sep 29, 2016 at 9:52 PM, Gilles Gouaillardet wrote:
>
siesta/openmpi-1.8.8
> --enable-mpirun-prefix-by-default --enable-static --disable-dl-dlopen
>
>
>
>
> Regards,
> Mahmood
>
>
>
> On Wed, Sep 14, 2016 at 5:07 PM, Bennet Fauber wrote:
>>
>> Mahmood,
>>
>> It looks like it is dlopen that is comp
Mahmood,
It looks like it is dlopen that is complaining. What happens if you
configure with --disable-dlopen?
On Wed, Sep 14, 2016 at 8:34 AM, Mahmood Naderan wrote:
> Well I want to omit LD_LIBRARY_PATH. For that reason I am building the
> binary statically.
>
>> note this is not required when Open MPI is confi
Oswin,
Does the torque library show up if you run
$ ldd mpirun
That would indicate that Torque support is compiled in.
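Something along these lines, where the library path and version shown
are only illustrative:
$ ldd $(which mpirun) | grep -i torque
        libtorque.so.2 => /usr/lib64/libtorque.so.2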
Also, what happens if you use the same hostfile, or some hostfile as
an explicit argument when you run mpirun from within the torque job?
-- bennet
On Wed, Sep 7, 2016 at
>>
>> > Meh. That's a good point. We might have to pony up the cost for
>> > the certificates, then. :-(
>> > (Indiana University provided all this stuff to us for free; now that
>> > the community has to pay for our own hosting, the fund
elated to do with our real jobs (i.e., software development of Open MPI);
> we're doing all this migration work on nights, weekends, and sometimes while
> waiting for lengthy compiles. We didn't think of the
> Google-will-have-https-links issue. :-\
>
>
>
>> On J
/www.open-mpi.org/community/lists/devel/2016/06/19139.php).
>
> I thought I had disabled https for the web site last night when I did the
> move -- I'll have to check into this.
>
> For the meantime, please just use http://www.open-mpi.org/.
>
>
>
>> On Jul
I am getting a certificate error from https://www.open-mpi.org/
The owner of www.open-mpi.org has configured their website improperly.
To protect your information from being stolen, Firefox has not
connected to this website.
and if I go to advanced and ask about the certificate, it says
The cert
We have found that virtually all Rmpi jobs need to be started with
$ mpirun -np 1 R CMD BATCH
This is, as I understand it, because the first R will initialize the
MPI environment and then when you create the cluster, it wants to be
able to start the rest of the processes. When you initialize
could
> verify that if you are using something that old.
>
>
>
>> On Jan 11, 2016, at 5:32 AM, Bennet Fauber wrote:
>>
>> We have an issue with binding to cores with some applications and the
>> default causes issues. We would, therefore, like to set the
>>
We have an issue with binding to cores with some applications and the
default causes issues. We would, therefore, like to set the
equivalent of
mpirun --bind-to none
globally. I tried searching for combinations of 'openmpi global
settings', 'site settings', and the like on the web and ended up
sev
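One way we might do it, assuming I have the parameter name right, is to
set the binding policy in the site-wide parameter file under the install
prefix (OMPI_ROOT here stands for that prefix):
hwloc_base_binding_policy = none
in OMPI_ROOT/etc/openmpi-mca-params.conf, or equivalently in the environment:
$ export OMPI_MCA_hwloc_base_binding_policy=none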
There is also the package Lmod, which provides similar functionality
to environment modules. It is maintained by TACC.
https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
but I think the current source code is at
https://github.com/TACC/Lmod
-- bennet
On Thu, Sep 3, 2015 at
un it from login terminal, it says:
> Warning: Permanently added 'cx1055,10.1.5.35' (RSA) to the list of known
> hosts.
> Warning: Permanently added 'cx1071,10.1.5.51' (RSA) to the list of known
> hosts.
>
>
> Is it ok to conclude about both node usage ?
On Sun, Aug 2, 2015 at 10:47 AM, abhisek Mondal wrote:
Try
$ mpirun --hostfile myhostfile -np 32 hostname
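where myhostfile lists the nodes, for example (hostnames and slot
counts are placeholders):
node01 slots=16
node02 slots=16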
There is also the Lmod project, based at TACC, and run by Robert McLay.
https://www.tacc.utexas.edu/tacc-projects/lmod
That's under current, active development, and if you'd be creating a
brand new modules infrastructure, bears a close look.
-- bennet
On Tue, Aug 5, 2014 at 2:39 PM, Fabricio C
On Thu, Jun 12, 2014 at 10:56 AM, Ralph Castain wrote:
> I've poked and prodded, and the 1.8.2 tarball seems to be handling this
> situation
Ralph,
That's still the development tarball, right? 1.8.2 remains unreleased?
Is the ETA for 1.8.2 the end of this month?
Thanks, -- bennet
that it will change quite a bit
>> over time, and it'll take us a bit of time to design and implement it.
>
> A menu-like system is not going to be very useful at least for us, since we
> script all of our installations. Scripting a menu is not very handy.
>
> Maxime
I think Maxime's suggestion is sane and reasonable. Just in case
you're taking ha'penny's worth from the groundlings. I think I would
prefer not to have capability included that we won't use.
-- bennet
On Wed, May 14, 2014 at 7:43 PM, Maxime Boissonneault
wrote:
> For the scheduler issue, I
Is there an ETA for 1.8.2 general release instead of snapshot?
Thanks, -- bennet
On Wed, May 14, 2014 at 10:17 AM, Ralph Castain wrote:
> You might give it a try with 1.8.1 or the nightly snapshot from 1.8.2 - we
> updated ROMIO since the 1.6 series, and whatever fix is required may be in
> t
The permission denied error looks like it is being issued against
'/bin/.'
What do you get if you grep your own username from /etc/passwd? That is,
% grep Edwin /etc/passwd
If your shell is listed as /bin/csh, then you need to use csh's
syntax, which would be
% source hello
(which will also work f
Hi, Ross,
Just out of curiosity, is Rmpi required for some package that you're
using? I only ask because, if you're mostly writing your own MPI
calls, you might want to look at pbdR/pbdMPI, if you haven't already.
They also have pbdPROF for profiling, which should be able to do
some profilin
My experience with Rmpi and OpenMPI is that it doesn't seem to do well
with dlopen or dynamic loading. I recently installed R 3.0.3, and
Rmpi, which failed when built against our standard OpenMPI but
succeeded using the following 'secret recipe'. Perhaps there is
something here that will be h
In case it is helpful to those who may not have the Intel compilers, these
are the libraries against which the two executables of Lisandro's
allgather.c get linked:
with Intel compilers:
=
$ ldd a.out
linux-vds
On Wed, 23 May 2012, Lisandro Dalcin wrote:
On 23 May 2012 19:04, Jeff Squyres wrote:
Thanks for all the info!
But still, can we get a copy of the test in C? That would make it
significantly easier for us to tell if there is a problem with Open MPI --
mainly because we don't know anything
, 2012, at 4:52 PM, Bennet Fauber wrote:
I've installed the latest mpi4py-1.3 on several systems, and there is a
repeated bug when running
$ mpirun -np 5 python test/runtests.py
where it throws an error on mpigather with openmpi-1.4.4 and hangs with
openmpi-1.3.
It runs to compl
h -np 5, and it always runs with all other numbers of processors
I've tested.
-- bennet
On May 23, 2012, at 2:52 PM, Bennet Fauber wrote:
I've installed the latest mpi4py-1.3 on several systems, and there is a
repeated bug when running
$ mpirun -np
I've installed the latest mpi4py-1.3 on several systems, and there is a
repeated bug when running
$ mpirun -np 5 python test/runtests.py
where it throws an error on mpigather with openmpi-1.4.4 and hangs with
openmpi-1.3.
It runs to completion and passes all tests when run with -np o