Hi,
Any help on how to debug this further? What options can I provide at
the command prompt?
--pallab
> Hello Pallab,
>
> Is there a chance it's something simple like having the Mac's
> firewall turned on? On my 10.4 system this is in System Preferences >
> Sharing, and then the Firewall
Hello Joshua,
My firewall seems to be turned ON and it currently accepts all incoming
connections.
Also I can ping or ssh into the other end from both the Mac and the Linux
box.
Please let me know if I need to provide any specific command-line options.
regards, pallab
> Hello Pallab,
>
> I
Hello Pallab,
Is there a chance it's something simple like having the Mac's
firewall turned on? On my 10.4 system this is in System Preferences >
Sharing, and then the Firewall tab.
-Joshua Bernstein
Senior Software Engineer
Penguin Computing
On Sep 18, 2009, at 3:56 PM, Pallab Datta wrote:
Hello,
I am running Open MPI between a Mac OS X (v10.5) machine and an Ubuntu Server
9.04 (Linux) box. I have configured OMPI v1.3.3 on both of them with the
--enable-heterogeneous --disable-shared --enable-static options. The Linux
box is connected via a wireless USB Adapter to the same sub-network in
which
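For reference, a build along the lines described above might be configured like this on each machine (the prefix path is illustrative, not taken from the thread):

```shell
# Illustrative configure line for a static, heterogeneous-capable
# Open MPI 1.3.3 build; run identically on both machines.
./configure --prefix=/usr/local/openmpi-1.3.3 \
            --enable-heterogeneous --disable-shared --enable-static
make all
sudo make install
```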
There was no hosts file there originally.
I put in:
cat hosts
127.0.0.1 localhost
but I still get the same thing.
thanks,
Erin
erin@erin-laptop:~$
Erin M. Hodgess, PhD
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: hodge...@uhd.e
Hi Souvik
Also worth checking:
1) If you can ssh passwordless from ict1 to ict2 *and* vice versa.
2) If your /etc/hosts file on *both* machines lists ict1 and ict2
and their IP addresses.
3) In case you have a /home directory on each machine (i.e. /home is
not NFS mounted) if your .bashrc files o
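A quick way to run checks 1) and 2) from the list above (hostnames ict1 and ict2 as used in this thread):

```shell
# From ict1: should print "ict2" without ever asking for a password.
ssh ict2 hostname
# From ict2, the reverse direction:
ssh ict1 hostname
# Both names should appear with their IP addresses on both machines:
grep -E 'ict1|ict2' /etc/hosts
# If ssh still prompts for a password, set up a key pair:
ssh-keygen -t rsa      # accept the defaults, empty passphrase
ssh-copy-id ict2
```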
Hi Souvik
I would guess you installed Open MPI only on ict1, not on ict2.
If that is the case you won't have the required Open MPI libraries
in ict2:/usr/local, and the job won't run on ict2.
I am guessing this, because you used a prefix under /usr/local,
which tends to be a "per machine" dir
Dear all,
I am quite new to Open MPI. Recently I installed openmpi-1.3.3
separately on two of my machines, ict1 and ict2. These machines are
dual-socket quad-core (Intel Xeon E5410), i.e. each having 8 processors, and
they are connected by a Gigabit Ethernet switch. As a prerequisite, I can ssh
betwe
Dear all,
I have installed blcr 0.8.2 and Open MPI (r21973) on my NFS account. By
default,
it seems that checkpoints are saved in $HOME. However, I would prefer them
to be saved on a local disk (e.g.: /tmp).
Does anyone know how I can change the location where Open MPI saves
checkpoints?
B
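One way to steer the checkpoint location is through MCA parameters. The parameter name below is my recollection of the 1.3-era checkpoint/restart work, so verify it against your build with `ompi_info` first:

```shell
# List the checkpoint-related parameters your build actually supports:
ompi_info --param crs all
ompi_info --param snapc all
# Then, assuming snapc_base_global_snapshot_dir exists in your build,
# point the snapshot directory at local disk:
mpirun -am ft-enable-cr \
       -mca snapc_base_global_snapshot_dir /tmp \
       -np 4 ./my_app
# Or set it once in ~/.openmpi/mca-params.conf:
#   snapc_base_global_snapshot_dir = /tmp
```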
Yes, I had this issue before (we are on 9.04 as well).
It has to do with the hosts file.
Erin, can you send your hosts file?
I think you want to make this the first line of your hosts file:
127.0.0.1 localhost
which Ubuntu, if memory serves, defaults to the name of the machine instead
of loc
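For illustration, the shape of /etc/hosts that usually avoids this problem (the laptop name is taken from the signature earlier in the thread; adjust for your machine):

```shell
# Show the current file; on stock Ubuntu the first entry often names the
# machine rather than localhost, which confuses Open MPI's locality check.
cat /etc/hosts
# A layout that generally works:
#   127.0.0.1   localhost
#   127.0.1.1   erin-laptop
```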
It doesn't matter - 1.3 isn't going to launch another daemon on the
local node.
The problem here is that OMPI isn't recognizing your local host as
being "local" - i.e., it thinks that the host mpirun is executing on
is somehow not the local host. This has come up before with Ubuntu
-
can you "ssh localhost" without a password?
-Whit
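The quick checks behind the question above:

```shell
# Should complete without a password prompt:
ssh localhost true && echo "passwordless ssh to localhost OK"
# Compare this name against what /etc/hosts maps to 127.0.0.1:
hostname
```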
On Thu, Sep 17, 2009 at 11:50 PM, Hodgess, Erin wrote:
> It's 1.3, please.
>
> Thanks,
>
> Erin
>
>
> Erin M. Hodgess, PhD
> Associate Professor
> Department of Computer and Mathematical Sciences
> University of Houston - Downtown
> mailto: hod
It is dangerous to hold a local lock (like a mutex) across a blocking MPI
call unless you can be 100% sure everything that must happen remotely will
be completely independent of what is done with local locks and communication
dependencies on other tasks.
It is likely that an MPI_Comm_spawn call in w
Hi Josh,
It is good to hear from you that work is in progress towards resiliency in
Open MPI. I was, and still am, waiting for this capability in Open MPI. I have
almost finished my development work and am waiting for this to happen so that I
can test my programs. It would be good if you could tell how long i
Hi Ashika,
Yes, you can serialize the calls using a pthread mutex if you have created
the threads using pthreads. Basically, whatever thread library you are using
for thread creation provides synchronization APIs, which you have to use here.
The OPAL_THREAD_LOCK and UNLOCK code is also implemented using s