Great, it is finally working!
nvml and opencl are only used by hwloc, and I do not think Open MPI is
using these features,
so I suggest you go ahead, reconfigure and rebuild Open MPI, and see how
things go.
Cheers,
Gilles
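For anyone landing on this thread later, the full reconfigure-and-rebuild sequence discussed above looks roughly like the following sketch (the prefix is illustrative, not taken from a confirmed install location; run from the top of the Open MPI source tree):

```shell
# Disable hwloc's OpenCL and NVML detection via environment variables,
# then reconfigure and rebuild, as suggested in this thread.
export enable_opencl=no
export enable_nvml=no
./configure --prefix=/opt/openmpi/openmpi-3.0.0
make all install
```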
On 9/22/2017 4:59 PM, Tim Jim wrote:
Hi Gilles,
Yes, you're right. I wanted to double check the compile but didn't notice I
was pointing to the exec I compiled from a previous Make.
mpicc now seems to work, running mpirun hello_c gives:
Hello, world, I am 0 of 4, (Open MPI v3.0.0, package: Open MPI
tjim@DESKTOP-TA3P0PS Distribution
Was there an error in the copy/paste?
The mpicc command should be
mpicc /opt/openmpi/openmpi-3.0.0_src/examples/hello_c.c
Cheers,
Gilles
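As a minimal end-to-end check of the installation (the examples path is the one quoted above; the `-np 4` matches the "0 of 4" output reported earlier):

```shell
# Compile the bundled example with the mpicc wrapper, then run it on 4 ranks.
mpicc /opt/openmpi/openmpi-3.0.0_src/examples/hello_c.c -o hello_c
mpirun -np 4 ./hello_c
```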
On Fri, Sep 22, 2017 at 3:33 PM, Tim Jim wrote:
> Thanks for the thoughts and comments. Here is the setup information:
> OpenMPI Ver. 3.0.0. Please see at
The issue is related to OpenCL, not NVML.
So the correct export would be "export enable_opencl=no" (you may want
to "export enable_nvml=no" as well).
On 09/21/2017 12:32 AM, Tim Jim wrote:
A few things:
0. Rather than go a few more rounds of "how was Open MPI configured", can you
send all the information listed here:
https://www.open-mpi.org/community/help/
That will tell us a lot about exactly how your Open MPI was configured,
installed, etc.
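As a sketch, the build details that the help page asks for can typically be gathered like this (the help page itself is authoritative; run from the top of the Open MPI build tree):

```shell
# Collect configure/build details for a help request.
ompi_info --all > ompi_info.txt     # full details of the installed build
gzip -c config.log > config.log.gz  # configure's log from the build tree
```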
1. Your mpirun error is becaus
Hi,
I tried as you suggested: export nvml_enable=no, then reconfigured and ran
make all install again, but mpicc is still producing the same error. What
should I try next?
Many thanks,
Tim
On 21 September 2017 at 16:12, Gilles Gouaillardet
wrote:
Tim,
do that in your shell, right before invoking configure.
export nvml_enable=no
./configure ...
make && make install
you can keep the --without-cuda flag (I think this is unrelated though)
Cheers,
Gilles
On 9/21/2017 3:54 PM, Tim Jim wrote:
Dear Gilles,
Thanks for the mail - where should I set export nvml_enable=no? Should I
reconfigure with default cuda support or keep the --without-cuda flag?
Kind regards,
Tim
On 21 September 2017 at 15:22, Gilles Gouaillardet
wrote:
Tim,
I am not familiar with CUDA, but that might help.
Can you please
export nvml_enable=no
and then re-configure and rebuild Open MPI?
I hope this will help you.
Cheers,
Gilles
On 9/21/2017 3:04 PM, Tim Jim wrote:
Hello,
Apologies for bringing up this old thread - I finally had a chance to try again
with openmpi, but I am still having trouble getting it to run. I downloaded
version 3.0.0 hoping it would solve some of the problems, but on running
mpicc for the previous test case I am still getting an undefined reference
Tim,
On 5/18/2017 2:44 PM, Tim Jim wrote:
In summary, I have attempted to install OpenMPI on Ubuntu 16.04 to the
following prefix: /opt/openmpi-openmpi-2.1.0. I have also manually
added the following to my .bashrc:
export PATH="/opt/openmpi/openmpi-2.1.0/bin:$PATH"
MPI_DIR=/opt/openmpi/open
Thanks for the thoughts, I'll give it a go. For reference, I have installed
it in the opt directory, as that is where I have kept my installs
currently. Will this be a problem when calling mpi from other packages?
Thanks,
Tim
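For reference, an install under /opt is fine as long as every shell (and any package that links against MPI) can find both the binaries and the shared libraries. A minimal sketch, using the prefix mentioned earlier in the thread:

```shell
# Make an Open MPI installed under a non-standard prefix visible to the shell.
PREFIX=/opt/openmpi/openmpi-2.1.0
export PATH="$PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

Putting these lines in a shell startup file keeps other packages that call mpicc or mpirun working without per-package configuration.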
On 24 May 2017 06:30, "Reuti" wrote:
Hi,
On 23.05.2017 at 05:03, Tim Jim wrote:
> Dear Reuti,
>
> Thanks for the reply. What options do I have to test whether it has
> successfully built?
Like before: can you compile and run mpihello.c this time – all as ordinary
user in case you installed the Open MPI into something like
$HOM
Dear Reuti,
Thanks for the reply. What options do I have to test whether it has
successfully built?
Thanks and kind regards.
Tim
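One common smoke test, sketched below, is to compile and run a trivial MPI program as an ordinary user (mpihello.c here is a hypothetical file matching the name Reuti mentions, not a file from the thread):

```shell
# Write, compile, and run a minimal MPI program to verify the build.
cat > mpihello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
mpicc mpihello.c -o mpihello
mpirun -np 2 ./mpihello
```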
On 22 May 2017 at 19:39, Reuti wrote:
Hi,
> On 22.05.2017 at 07:22, Tim Jim wrote:
>
> Hello,
>
> Thanks for your message. I'm trying to get this to work on a single
> machine.
Ok.
> How might you suggest getting OpenMPI working without python and
> CUDA?
It looks like it's detected automatically. It should be possible to disab
Hello,
Thanks for your message. I'm trying to get this to work on a single
machine. How might you suggest getting OpenMPI working without python and
CUDA? I don't recall setting anything for either, as the only command I had
run was "./configure --prefix=/opt/openmpi/openmpi-2.1.0" - did it possibl
Hi,
On 18.05.2017 at 07:44, Tim Jim wrote:
> Hello,
>
> I have been having some issues with trying to get OpenMPI working with
> mpi4py. I've tried to break down my troubleshooting into a few chunks below,
> and I believe that there are a few, di
Thanks Jeff and Renato for your help.
Renato,
You are right. Perl alias was pointing to somewhere else. I was able to
install openmpi using previous released version of it.
Thanks.
Vinay K. Mittal
On Thu, Mar 23, 2017 at 2:04 PM, Renato Golin
wrote:
On 23 March 2017 at 17:39, Vinay Mittal wrote:
> I need mpirun to run a genome assembler.
>
> Linux installation of openmpi-2.1.0 stops during make all saying:
>
> "Perl 5.006 required--this is only version 5.00503, stopped at
> /usr/share/perl5/vars.pm line 3."
This looks like Perl's own verific
That's a pretty weird error. We don't require any specific version of perl
that I'm aware of. Are you sure that it's Open MPI's installer that is kicking
out the error?
Can you send all the information listed here:
https://www.open-mpi.org/community/help/
And please do us a favor, Talia, and don't cross post the same problem to
multiple mailing lists. I've been answering this on the devel posting, and all
we are doing is duplicating our answers (and wasting people's time).
Thanks
Ralph
On Feb 7, 2014, at 9:55 AM, Özgür Pekçağlıyan
wrote:
Hello,
Looks like your problem is related to environment parameters. .bashrc is
only for non-login shells, as Reuti mentioned before. You should look for a
file named .bash_profile, .profile or .bash_login.
You may put your export lines in one of these files.
Please check this link out;
http://a
On 07.02.2014 at 18:10, Talla wrote:
> Yes, I have access to the command when I source it by hand. Do you have any
> ready (example) .bashrc file? I installed openmpi in my home directory (not
> root), if that helps?
You can either source ~/.bashrc in any of your profiles for interactive logins
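A minimal sketch of that first option (sourcing ~/.bashrc from a login-shell profile), assuming bash:

```shell
# ~/.bash_profile -- make login shells also pick up the exports from ~/.bashrc
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```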
Yes, I have access to the command when I source it by hand. Do you have any
ready (example) .bashrc file? I installed openmpi in my home directory
(not root), if that helps?
On Fri, Feb 7, 2014 at 8:05 PM, Reuti wrote:
Hi,
On 07.02.2014 at 17:42, Talla wrote:
> I downloaded openmpi 1.7 and followed the installation instructions:
> cd openmpi
> ./configure --prefix="/home/$USER/.openmpi"
>
> make
> make install
> export PATH="$PATH:/home/$USER/.openmpi/bin"
> export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USE
> As a guess: I suggest looking for a package named openmpi-devel, or something
> like that.
[Tom]
Yes, you want "-devel" in addition to the RPM you listed. Going to the URL
below, I see listed:
openmpi-1.5.4-1.el6.x86_64.rpm - Open Message Passing Interface
openmpi-devel-1.5.4-1.el6.x86_64.rp
We don't really know how downstream packagers package Open MPI; you'll need to
contact the Centos Open MPI packager for specific help.
As a guess: I suggest looking for a package named openmpi-devel, or something
like that.
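On CentOS/RHEL-style systems the usual pattern is sketched below; the package and module names are guesses based on the listing in this thread, and your repositories may differ:

```shell
# Install both the runtime and the development package (mpicc, headers, libs).
sudo yum install openmpi openmpi-devel
# On many EL systems the MPI wrappers are then activated via environment modules:
module load mpi/openmpi-x86_64
```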
On May 29, 2013, at 7:41 AM, alankrutha reddy
wrote:
Apologies for the delay, but I missed your post.
> Hi,
>
> On 22.02.2012 at 14:21, Salvatore Podda wrote:
>
>> we have a computational infrastructure composed of front-end and
>> worker nodes
>> which slightly differ in the architectures on board (I mean same
>> processor differ
Pasha, thanks for your comment.
I add my comments inline.
> Salvatore,
>
> Please see my comment inline.
>
>>
>> More generally, in the case of front-end nodes with processors
>> definitely different from the
>> worker nodes (same firm, i.e. Intel), can openMPI applications compiled on one
>>
Salvatore,
Please see my comment inline.
>
> More generally, in the case of front-end nodes with processors
> definitely different from the
> worker nodes (same firm, i.e. Intel), can openMPI applications compiled on one
> run correctly
> on the others?
It is possible to come up with a set of
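A common approach to this question is to compile on the front end for a CPU baseline no newer than the oldest node, so the same binary runs everywhere. A hedged sketch, assuming gcc-based wrappers (the flag values and file names are illustrative only):

```shell
# Compile for a conservative CPU baseline so front-end-built binaries
# also run on the (slightly different) worker nodes.
mpicc -O2 -march=x86-64 -mtune=generic -o app app.c
```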
Hi,
On 22.02.2012 at 14:21, Salvatore Podda wrote:
> we have a computational infrastructure composed of front-end and
> worker nodes
> which slightly differ in the architectures on board (I mean same
> processor, different socket).
can you clarify this - something like an AM3 CP
Thank you very much for your response Gus, unfortunately I was on a trip for
some time, that's why I didn't reply immediately. I will try your suggestion.
2009/5/26 Gus Correa
Hi Fivoskouk
I don't use Ubuntu.
However, we install OpenMPI from source with no problem on CentOS,
Fedora, etc. All configuration options are available this way.
Works with gnu (gcc,g++,gfortran), Intel, PGI, etc.
We've been using 1.3.2.
Suggestion:
1) Install the Ubuntu gfortran, gcc, g++ p
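Gus's suggestion, sketched for Ubuntu (the package names are standard Ubuntu ones; the Open MPI version and prefix are illustrative, matching the 1.3.2 release he mentions):

```shell
# 1) Install the GNU toolchain from the Ubuntu repositories.
sudo apt-get install gcc g++ gfortran make
# 2) Build Open MPI from source, with all configure options available.
tar xf openmpi-1.3.2.tar.bz2
cd openmpi-1.3.2
./configure --prefix=/opt/openmpi/openmpi-1.3.2
make all install
```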