Hi Syed Ahsan Ali
On 02/27/2015 12:46 PM, Syed Ahsan Ali wrote:
Oh sorry. That is related to the application. I need to recompile
the application too, I guess.
You surely do.
Also, make sure the environment, in particular PATH and LD_LIBRARY_PATH,
is propagated to the compute nodes.
Not doing that is a common cause of errors like this.
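For concreteness, here is a sketch of two common ways to propagate the environment with Open MPI's mpirun. The install prefix is the one discussed in this thread, and compute-0-0 is the node name from the original report; adjust both to your setup:

```shell
# Make the Open MPI install visible in the local environment first
# (prefix taken from this thread; adjust to your install).
export PATH=/share/apps/openmpi-1.8.4_gcc-4.9.2/bin:$PATH
export LD_LIBRARY_PATH=/share/apps/openmpi-1.8.4_gcc-4.9.2/lib:$LD_LIBRARY_PATH

# Option 1: explicitly export a variable to the remote ranks with -x
mpirun -x LD_LIBRARY_PATH --host compute-0-0 hostname

# Option 2: let mpirun derive remote PATH/LD_LIBRARY_PATH from its install prefix
mpirun --prefix /share/apps/openmpi-1.8.4_gcc-4.9.2 --host compute-0-0 hostname
```

Option 2 is often the least fragile on clusters where shell startup files differ between the frontend and the compute nodes.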
Oh sorry. That is related to the application. I need to recompile
the application too, I guess.
On Fri, Feb 27, 2015 at 10:44 PM, Syed Ahsan Ali wrote:
> Dear Gus
>
> Thanks once again for the suggestion. Yes, I did that before installing
> to the new path. I am getting an error now about some library:
> tstint2lm: error while loading shared libraries:
> libmpi_usempif08.so.0: cannot open shared object file: No such file or
> directory
Dear Gus
Thanks once again for the suggestion. Yes, I did that before installing
to the new path. I am getting an error now about some library:
tstint2lm: error while loading shared libraries:
libmpi_usempif08.so.0: cannot open shared object file: No such file or
directory
The library is present, though:
[pmdtest@
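When the library exists on the frontend but the loader still complains, the usual cause is that the runtime linker on the compute node cannot see it. A hedged diagnostic sketch (the binary's path is a placeholder, and compute-0-0 is the node from this thread):

```shell
# Does the dynamic linker on the compute node resolve every dependency?
ssh compute-0-0 'ldd /path/to/tstint2lm | grep "not found"'

# Is LD_LIBRARY_PATH actually set in a non-interactive remote shell?
ssh compute-0-0 'echo LD_LIBRARY_PATH=$LD_LIBRARY_PATH'

# Is the library present at the same path on the compute node?
ssh compute-0-0 'ls -l /share/apps/openmpi-1.8.4_gcc-4.9.2/lib/libmpi_usempif08.so.0'
```

Note that ssh without a login shell skips some startup files, so a variable that looks fine in an interactive session may be empty here.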
Hi Syed Ahsan Ali
To avoid any leftovers and further confusion,
I suggest that you delete the old installation directory completely.
Then start fresh from the configure step, with the prefix pointing to
--prefix=/share/apps/openmpi-1.8.4_gcc-4.9.2
I hope this helps,
Gus Correa
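A sketch of that clean-rebuild sequence, assuming the Open MPI 1.8.4 source tree is unpacked in the current directory and GCC 4.9.2 is the active compiler (the configure flags beyond --prefix are illustrative):

```shell
# Remove the old install completely so no stale libraries linger
rm -rf /share/apps/openmpi-1.8.4_gcc-4.9.2

# Start fresh from configure; distclean clears any earlier build state
cd openmpi-1.8.4
make distclean || true
./configure --prefix=/share/apps/openmpi-1.8.4_gcc-4.9.2 \
            CC=gcc CXX=g++ FC=gfortran
make -j4 && make install
```

Any application linked against the old install should then be recompiled against the new prefix.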
Hi Gus
Thanks for the prompt response. Well judged: I compiled with the /export/apps
prefix, so that is most probably the reason. I'll check and update you.
Best wishes
Ahsan
On Fri, Feb 27, 2015 at 10:07 PM, Gus Correa wrote:
> Hi Syed
>
> This really sounds as a problem specific to Rocks Clusters,
> not an issue with Open MPI.
Hi Syed
This really sounds like a problem specific to Rocks Clusters,
not an issue with Open MPI.
It looks like a confusion related to mount points and the soft links used by Rocks.
I haven't used Rocks Clusters in a while,
and I don't remember the details anymore, so please take my
suggestions with a grain of salt.
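The usual Rocks gotcha, hinted at here, is that the frontend exports /export/apps and the compute nodes see the same tree as /share/apps, so a prefix baked in as /export/apps does not exist on the nodes. A quick way to check (compute-0-0 is the node name from this thread):

```shell
# On the frontend: see how the two trees relate (often a mount or symlink)
ls -ld /export/apps /share/apps

# On a compute node: normally only /share/apps is visible, so the second
# path is expected to fail there
ssh compute-0-0 'ls -ld /share/apps /export/apps'
```

This is why configuring with --prefix=/share/apps/... (the path valid on every node) fixes the problem.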
I am trying to run an Open MPI application on my cluster, but mpirun
fails; even a simple hostname command gives this error:
[pmdtest@hpc bin]$ mpirun --host compute-0-0 hostname
--
Sorry! You were supposed to get help about:
op
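When mpirun itself prints a broken help message like this, a few first checks help establish which installation is actually being picked up, locally and on the node (a sketch; compute-0-0 is the node from the report above):

```shell
# Which mpirun is on PATH, and what prefix was it built with?
which mpirun
ompi_info | grep -E 'Open MPI:|Prefix'

# Does a non-interactive shell on the compute node find the same install?
ssh compute-0-0 'which mpirun; echo $LD_LIBRARY_PATH'
```

A help text that cannot be found usually means the prefix compiled into Open MPI does not exist on the machine running the command, which points back at the install-path mixup discussed above.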