line from triops' rules, restarted iptables and now
communication works in all directions!
Thank You
Jody
On Tue, May 3, 2016 at 7:00 PM, Jeff Squyres (jsquyres)
wrote:
> Have you disabled firewalls between these machines?
>
> > On May 3, 2016, at 11:26 AM, jody wrote:
ly is caused by:
...
--
-
Again, i can call mpirun on triops from kraken and all squid_XX without a
problem...
What could cause this problem?
Thank You
Jody
On Tue, May 3, 2016 at 2:54 PM, Jeff Squyres (jsquyres)
wrote:
&
that the output does
indeed say 1.10.2)
Password-less ssh is enabled on both machines in both directions.
When i start mpirun from one machine (kraken) with a hostfile specifying
the other machine ("triops slots=8 max-slots=8"),
it works:
-
jody@kraken ~ $ mpirun -np 3 --hostfile triopshosts u
=running
Jody
On Wed, Feb 26, 2014 at 10:38 AM, raha khalili wrote:
> Dear Jody
>
> Thank you for your reply. Based on hostfile examples you show me, I
> understand 'slots' is number of cpus of each node I mentioned in the file,
> am I true?
>
> Wishes
>
>
> On W
Hi
I think you should use the "--host" or "--hostfile" options:
http://www.open-mpi.org/faq/?category=running#simple-spmd-run
http://www.open-mpi.org/faq/?category=running#mpirun-host
Hope this helps
Jody
On Wed, Feb 26, 2014 at 8:31 AM, raha khalili wrote:
> De
It is better if you accept messages from all senders (MPI_ANY_SOURCE)
instead of particular ranks and then check where the
message came from by examining the status fields
(http://www.mpi-forum.org/docs/mpi22-report/node47.htm)
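A small (untested) sketch of what i mean - all ranks except 0 send one int
to the master, and the master finds out who sent what from the status:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, data, i;
    MPI_Status st;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank > 0) {
        data = rank * 100;
        MPI_Send(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        for (i = 1; i < size; i++) {
            /* accept from any sender, then inspect the status fields */
            MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            printf("got %d from rank %d (tag %d)\n",
                   data, st.MPI_SOURCE, st.MPI_TAG);
        }
    }
    MPI_Finalize();
    return 0;
}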
Hope this helps
Jody
On Mon, Feb 18, 2013 at 5:06 PM, Pradeep Jha
processes made
it to this point and which ones did not.
Hope this helps a bit
Jody
On Tue, Sep 25, 2012 at 8:20 AM, Richard wrote:
> I have 3 computers with the same Linux system. I have setup the mpi cluster
> based on ssh connection.
> I have tested a very simple mpi program, it works on th
Thanks Ralph
I renamed the parameter in my script,
and now there are no more ugly messages :)
Jody
On Tue, Aug 28, 2012 at 3:17 PM, Ralph Castain wrote:
> Ah, I see - yeah, the parameter technically is being renamed to
> "orte_rsh_agent" to avoid having users need to k
.
Deprecated parameter: plm_rsh_agent
--
for every process that starts...
My openmpi version is 1.6 (gentoo package sys-cluster/openmpi-1.6-r1)
jody
On Tue, Aug 28, 2012 at 2:38 PM, Ralph Castain wrote:
> Guess I'm confuse
plm_rsh_agent "ssh -Y"' i can't open windows
from the remote:
jody@boss /mnt/data1/neander $ mpirun -np 5 -hostfile allhosts
-mca plm_base_verbose 1 --leave-session-attached xterm -hold -e
./MPIStruct
xterm: Xt error: Can't open display:
xterm: DISPLAY is not set
xter
ot;count".
If you expect data of 160 bytes you have to allocate a buffer
with a size greater than or equal to 160, and you have to set your
"count" parameter to the size you allocated.
If you want to receive data in chunks, you have to send it in chunks.
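For example (untested sketch; rank 0 sends 160 bytes in one piece, rank 1
receives them into a buffer that is at least that large):

#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    char buf[160];
    int rank;
    MPI_Status st;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        memset(buf, 'x', sizeof(buf));
        MPI_Send(buf, 160, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* the count (160) must be >= the size of the incoming message,
           otherwise you get MPI_ERR_TRUNCATE */
        MPI_Recv(buf, 160, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
    }
    MPI_Finalize();
    return 0;
}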
I hope this helps
Jody
O
ce for the creation of the large data block),
but unfortunately my main application is not well suited for OpenMP
parallelization..
I guess i'll have to take a more detailed look at my problem to see if i
can restructure it in a good way...
Thank You
Jody
On Mon, Apr 16, 2012 at 11:16 PM, Bria
for reading by the processes?
Thank You
Jody
Hi
Did you run your program with mpirun?
For example:
mpirun -np 4 ./a.out
jody
On Fri, Mar 16, 2012 at 7:24 AM, harini.s .. wrote:
> Hi ,
>
> I am very new to openMPI and I just installed openMPI 4.1.5 on Linux
> platform. Now am trying to run the examples in the folder got
Hi
I've got a really strange problem:
I've got an application which creates intercommunicators between a
master and some workers.
When i run it on our cluster with 11 processes it works,
when i run it with 12 processes it hangs inside MPI_Intercomm_create().
This is the hostfile:
squid_0.uzh.
Hi
You also must make sure that all slaves can
connect via ssh to each other and to the master
node without a password.
Jody
On Wed, Dec 21, 2011 at 3:57 AM, Shaandar Nyamtulga wrote:
> Can you clarify your answer please.
> I have one master node and other slave nodes. I created rsa key on my
sessions on your nodes,
you can execute
mpirun --hostfile hostfile -np 4 printenv
and scan the output for PATH and LD_LIBRARY_PATH.
Hope this helps
Jody
On Sat, Jul 9, 2011 at 12:25 AM, Mohan, Ashwin wrote:
> Thanks Ralph.
>
>
>
> I have emailed the network admin on the
http://www.open-mpi.org/faq/?category=running#run-prereqs)
Hope this helps
Jody
On Thu, Jul 7, 2011 at 8:44 AM, zhuangchao wrote:
> hello all :
>
> I installed the openmpi-1.4.3 on redhat as the following step :
>
> 1. ./configure --prefix=/data1/cluster/openmpi
>
arned if i had read that chapter more carefully...
Fortunately, i don't have to send around a lot of these structs,
so i will do the padding (using the offsetof macro Dave recommended).
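Roughly like this (untested sketch - apart from iSpeciesID the field names
are made up; offsetof gives the displacements, and MPI_Type_create_resized
stretches the extent to sizeof(Entry) so the trailing padding is covered):

#include <stddef.h>
#include <mpi.h>

typedef struct {
    short  iSpeciesID;
    char   cType;
    double dWeight;
} Entry;

int main(int argc, char *argv[]) {
    MPI_Datatype tmp_type, entry_type;
    int          blocklens[3] = { 1, 1, 1 };
    MPI_Aint     displs[3]    = { offsetof(Entry, iSpeciesID),
                                  offsetof(Entry, cType),
                                  offsetof(Entry, dWeight) };
    MPI_Datatype types[3]     = { MPI_SHORT, MPI_CHAR, MPI_DOUBLE };

    MPI_Init(&argc, &argv);
    MPI_Type_create_struct(3, blocklens, displs, types, &tmp_type);
    /* make the extent equal to sizeof(Entry) so arrays of Entry line up */
    MPI_Type_create_resized(tmp_type, 0, sizeof(Entry), &entry_type);
    MPI_Type_commit(&entry_type);

    /* ... use entry_type in MPI_Send / MPI_Recv with arrays of Entry ... */

    MPI_Type_free(&entry_type);
    MPI_Type_free(&tmp_type);
    MPI_Finalize();
    return 0;
}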
Thanks again
Jody
On Wed, Jun 29, 2011 at 9:52 PM, Gus Correa wrote:
> Hi Jody
>
> jody wrote:
/deserialize
after receiving it.
Jody
On Wed, Jun 29, 2011 at 6:18 PM, Gus Correa wrote:
> jody wrote:
>>
>> Hi
>>
>> I have noticed on my machine that a struct which i have defined as
>>
>> typedef struct {
>> short iSpeciesID;
>> char
PI Datatype in order
to fill it up to the next multiple of 8 i could work around this problem.
(not very nice, and very probably not portable)
My question: is there a way to tell MPI to automatically use the
required padding?
Thank You
Jody
30 PM, Ralph Castain wrote:
>
> On May 2, 2011, at 8:21 AM, jody wrote:
>
>> Hi
>> Well, the difference is that one time i call the application
>> 'HelloMPI' with the '--xterm' option,
>> whereas in my previous mail i am calling the application 'x
Hi
Well, the difference is that one time i call the application
'HelloMPI' with the '--xterm' option,
whereas in my previous mail i am calling the application 'xterm'
(without the '--xterm' option)
Jody
On Mon, May 2, 2011 at 4:08 PM, Ralph Castain wro
l > 0 will open xterms, but with ' -mca
plm_base_verbose 0' there are again no xterms.
Thank You
Jody
On Mon, May 2, 2011 at 2:29 PM, Ralph Castain wrote:
>
> On May 2, 2011, at 2:34 AM, jody wrote:
>
>> Hi Ralph
>>
>> I rebuilt open MPI 1.4.2 with the de
Hi Ralph
I rebuilt open MPI 1.4.2 with the debug option on both chefli and squid_0.
The results are interesting!
I wrote a small HelloMPI app which basically calls usleep for a pause
of 5 seconds.
Now calling it as i did before, no MPI errors appear anymore, only the
display problems:
jody
Hi Ralph
Thank you for your suggestions.
I'll be happy to help you.
I'm not sure if i'll get around to this tomorrow,
but i certainly will do so on Monday.
Thanks
Jody
On Thu, Apr 28, 2011 at 11:53 PM, Ralph Castain wrote:
> Hi Jody
>
> I'm not sure when I
Hi
Unfortunately this does not solve my problem.
While i can do
ssh -Y squid_0 xterm
and this will open an xterm on my machine (chefli),
i run into problems with the -xterm option of openmpi:
jody@chefli ~/share/neander $ mpirun -np 4 -mca plm_rsh_agent "ssh
-Y" -host squid_0
Hi Ralph
Is there an easy way i could modify the OpenMPI code so that it would use
the -Y option for ssh when connecting to remote machines?
Thank You
Jody
On Thu, Apr 7, 2011 at 4:01 PM, jody wrote:
> Hi Ralph
> thank you for your suggestions. After some fiddling, i found that af
ut with '-X' i still get those xauth warnings)
But the xterm option still doesn't work:
jody@chefli ~/share/neander $ mpirun -np 4 -host squid_0 -xterm 1,2
printenv | grep WORLD_RANK
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Warning: No xauth
Hi Ralph
No, after the above error message mpirun has exited.
But i also noticed that it is not possible to ssh into squid_0 and open an xterm there:
jody@chefli ~/share/neander $ ssh -Y squid_0
Last login: Wed Apr 6 17:14:02 CEST 2011 from chefli.uzh.ch on pts/0
jody@squid_0 ~ $ xterm
xterm Xt error
work
But i do have xauth data (as far as i know):
On the remote (squid_0):
jody@squid_0 ~ $ xauth list
chefli/unix:10 MIT-MAGIC-COOKIE-1 5293e179bc7b2036d87cbcdf14891d0c
chefli/unix:0 MIT-MAGIC-COOKIE-1 146c7f438fab79deb8a8a7df242b6f4b
chefli.uzh.ch:0 MIT-MAGIC-COOKIE-1 146c7f438
Hi
At first glance i would say this is not an OpenMPI problem,
but a wrf problem (though i must admit i have no knowledge whatsoever of wrf)
Have you tried running a single instance of wrf.exe?
Have you tried to run a simple application (like a "hello world") on your nodes?
Jody
O
Hi Massimo
Just to make sure: usually the MPI_ERR_TRUNCATE error is caused by
buffer sizes that are too small.
Can you verify that the buffers you are using are large enough to
hold the data they should receive?
Jody
On Sat, Feb 5, 2011 at 6:37 PM, Massimo Cafaro
wrote:
> Dear all,
>
>
Thanks all
I simply copied the 32Bit applications over, and now it works.
Thanks
Jody
On Wed, Feb 2, 2011 at 5:47 PM, David Mathog wrote:
> jody wrote:
>
>> How can i force OpenMPI to be built as a 32Bit application on a 64Bit
> machine?
>
> The easiest way is not
successfully; not
able to guarantee that all other processes were killed!
I think this is caused by the fact that on the 64Bit machine Open MPI
is also built as a 64 bit application.
How can i force OpenMPI to be built as a 32Bit application on a 64Bit machine?
Thank You
Jody
On Tue, Feb 1, 2011 at
- is there a way to find this out?
Thank You
Jody
Hi
if i remember correctly, "omp.h" is a header file for OpenMP, which is
not the same as Open MPI.
So it looks like you have to install OpenMP.
Then you can compile it with the compiler option -fopenmp (in gcc)
Jody
On Thu, Dec 16, 2010 at 11:56 AM, Bernard Secher - SFME/LGLS
wrot
irun -np 5 --rankfile `rankcreate.sh 5` myApplication
May be this is of use for you
jody
On Fri, Dec 10, 2010 at 11:50 PM, Eugene Loh wrote:
> David Mathog wrote:
>
>> Also, in my limited testing --host and -hostfile seem to be mutually
>> exclusive.
>>
> No. You can u
an correctly start up
totalview) concerns
the hostfile and rankfile parameters of mpirun: how can i start an
open mpi application with
totalview so that my application starts the processes on the correct
processors as
defined in hostfile and rankfile?
Thank You
Jody
ise diagnosis.
jody
On Mon, Nov 1, 2010 at 6:41 PM, Jack Bryan wrote:
> thanks
> I use
> double* recvArray = new double[buffersize];
> The receive buffer size
> MPI::COMM_WORLD.Recv(&(recvDataArray[0]), xVSize, MPI_DOUBLE, 0, mytaskTag);
> delete [] recvArray ;
>
to compile:
jody@aim-squid_0 ~/progs $ mpiCC -g -o HelloMPI HelloMPI.cpp
Cannot open configuration file
/opt/openmpi-1.4.2-64/share/openmpi/mpiCC-wrapper-data.txt
Error parsing data file mpiCC: Not found
So again, it looked into the original installation directory of the
64-bit installation for some
Hi
On a newly installed 64bit linux (2.6.32-gentoo-r7) with gcc version 4.4.4
i can't compile even simple Open-MPI applications (OpenMPI 1.4.2).
The message is:
jody@aim-squid_0 ~/progs $ mpiCC -g -o HelloMPI HelloMPI.cpp
/usr/lib/gcc/x86_64-pc-linux-gnu/4.4.4/../../../../x86_64-pc-linux-gn
Hi Jack
Usually MPI_ERR_TRUNCATE means that the buffer you use in MPI_Recv
(or MPI::COMM_WORLD.Recv) is too small to hold the message coming in.
Check your code to make sure you assign enough memory to your buffers.
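If you don't know the incoming message size in advance, you can ask MPI
before allocating - a sketch of the receiver side (untested):

#include <stdlib.h>
#include <mpi.h>

/* receive a message of unknown length from 'source' with tag 'tag' */
void recv_any_size(int source, int tag, MPI_Comm comm) {
    MPI_Status st;
    int count;
    double *buf;

    MPI_Probe(source, tag, comm, &st);        /* wait for a pending message */
    MPI_Get_count(&st, MPI_DOUBLE, &count);   /* how many doubles it holds  */
    buf = (double *)malloc(count * sizeof(double));
    MPI_Recv(buf, count, MPI_DOUBLE, source, tag, comm, &st);
    /* ... use buf ... */
    free(buf);
}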
regards
Jody
On Mon, Nov 1, 2010 at 7:26 AM, Jack Bryan wrote:
> HI,
>
gcc.i386 zlib.i386
(gdb)
I am using OpenMPI 1.4.2
Has anybody got an idea how i could find the problem?
Thank You
Jody
Where is the option 'default-hostfile' described?
It does not appear in mpirun's man page (for v. 1.4.2)
and i couldn't find anything like that with googling.
Jody
On Wed, Oct 27, 2010 at 4:02 PM, Ralph Castain wrote:
> Specify your hostfile as the default one:
>
&
Hi Brandon
Does it work if you try this:
mpirun -np 2 --hostfile hosts.txt ilk
(see http://www.open-mpi.org/faq/?category=running#simple-spmd-run)
jody
On Sat, Oct 23, 2010 at 4:07 PM, Brandon Fulcher wrote:
> Thank you for the response!
>
> The code runs on my own machine as we
Hi
I don't know the reason for the strange behaviour, but anyway,
to measure time in an MPI application you should use MPI_Wtime(), not clock()
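Something like this (minimal sketch):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    double t0, t1;
    MPI_Init(&argc, &argv);
    t0 = MPI_Wtime();
    /* ... the work you want to time ... */
    t1 = MPI_Wtime();
    printf("elapsed: %f seconds\n", t1 - t0);
    MPI_Finalize();
    return 0;
}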
regards
jody
On Wed, Oct 20, 2010 at 11:51 PM, Storm Zhang wrote:
> Dear all,
>
> I got confused with my recent C++ MPI program'
But shouldn't something like this show up in the other processes as well?
I only see that in the master process, but the slave processes also
send data to each other and to the master.
On Mon, Oct 18, 2010 at 2:48 PM, Ralph Castain wrote:
>
> On Oct 18, 2010, at 1:41 AM, jody wrote:
I had this leak with OpenMPI 1.4.2
But in my case, there is no accumulation - when i repeat the same call,
no additional leak is reported for the second call
Jody
On Mon, Oct 18, 2010 at 1:57 AM, Ralph Castain wrote:
> There is no OMPI 2.5 - do you mean 1.5?
>
> On Oct 17, 2010, a
this server i could then send commands which changed the state of the
master.
Jody
On Tue, Oct 12, 2010 at 6:14 AM, Mahesh Salunkhe
wrote:
>
> Hello,
> Could you pl tell me how to connect a client(not in any mpi group ) to a
> process in a mpi group.
> (i.e. just like
t looks like an OpenMPI-internal leak,
because it happens iinside PMPI_Send,
but then i am using the function ConnectorBase::send()
several times from other callers than TileConnector,
but these don't show up in valgrind's output.
Does anybody have an idea what is happening here?
Thank You
jody
answered by trying, because it
depends strongly
on the volume of your messages and the quality of your hardware
(network and disk speed)
Jody
Hi
I don't know if i correctly understand what you need, but have you
already tried MPI_Comm_spawn?
Jody
On Mon, Sep 20, 2010 at 11:24 PM, Mikael Lavoie wrote:
> Hi,
>
> I wanna know if it exist a implementation that permit to run a single host
> process on the master of the c
Hi
@Ashley:
What is the exact semantics of an asynchronous barrier,
and is it part of the MPI specs?
Thanks
Jody
On Thu, Sep 9, 2010 at 9:34 PM, Ashley Pittman wrote:
>
> On 9 Sep 2010, at 17:00, Gus Correa wrote:
>
>> Hello All
>>
>> Gabrielle's que
ct which ranks should open an xterm.
(Again check the man pages of mpirun)
Jody
On Mon, Jul 26, 2010 at 8:55 AM, Jack Bryan wrote:
> Thanks
> It can be installed on linux and work with gcc ?
> If I have many processes, such as 30, I have to open 30 terminal windows ?
> thanks
> Jack
t for each process separately.
Jody
On Mon, Jul 26, 2010 at 4:08 AM, Jack Bryan wrote:
> Dear All,
> I run a 6 parallel processes on OpenMPI.
> When the run-time of the program is short, it works well.
> But, if the run-time is long, I got errors:
> [n124:45521] *** Process receive
Thanks for the patch - it works fine!
Jody
On Mon, Jul 12, 2010 at 11:38 PM, Ralph Castain wrote:
> Just so you don't have to wait for 1.4.3 to be released, here is the patch.
> Ralph
>
>
>
>
> On Jul 12, 2010, at 2:44 AM, jody wrote:
>
>> yes, i'm using
will call mpirun or mpiexec. But somewhere you have to tell OpenMPI
what to run on how many processors etc.
I'd suggest you take a look at the "MPI-The Complete Reference" Vol I and II
Jody
On Mon, Jul 12, 2010 at 5:07 PM, Brian Budge wrote:
> Hi Jody -
>
> Thanks for the reply.
ugh...
Perhaps there is a boost forum you can check out if the problem persists
Jody
On Sun, Jul 11, 2010 at 10:13 AM, Jack Bryan wrote:
> thanks for your reply.
> The message size is 72 bytes.
> The master sends out the message package to each 51 nodes.
> Then, after doing their local w
yes, i'm using 1.4.2
Thanks
Jody
On Mon, Jul 12, 2010 at 10:38 AM, Ralph Castain wrote:
>
> On Jul 12, 2010, at 2:17 AM, jody wrote:
>
>> Hi
>>
>> I have a master process which spawns a number of workers of which i'd
>> like to save the output
e to extend the -output-filename option in
such a way that it
would also combine job-id and rank with the output file:
work_out.1.0
for the master's output, and
work_out.2.0
work_out.2.1
work_out.2.2
...
for the worker's output?
Thank You
Jody
Hi Brian
When you spawn processes with MPI_Comm_spawn(), one of the arguments
will be set to an intercommunicator of the spawner and the spawnees.
You can use this intercommunicator as the communicator argument
in the MPI functions.
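A minimal sketch (untested; "./worker" is just a placeholder for your
worker executable):

#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Comm children;   /* intercommunicator to the spawned workers */
    int data = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);

    /* the intercommunicator is used like any other communicator,
       e.g. send an int to worker rank 0 */
    MPI_Send(&data, 1, MPI_INT, 0, 0, children);

    MPI_Finalize();
    return 0;
}

On the worker side, MPI_Comm_get_parent() returns the matching
intercommunicator.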
Jody
On Fri, Jul 9, 2010 at 5:56 PM, Brian Budge wrote:
>
_buf, send_message_size, MPI_INT, RECEIVER,
TAG_DATA, MPI_COMM_WORLD);
/* clean up */
free(send_buf);
}
MPI_Finalize();
}
I hope this helps
Jody
On Sat, Jul 10, 2010 at 7:12 AM, Jack Bryan wrote:
> Dear All:
> How to find the buffer size of OpenMPI ?
> I need to t
buffer you passed to MPI_Recv.
As Zhang suggested: try to reduce your code to isolate the offending code.
Can you create a simple application with two processes exchanging data which has
the MPI_ERR_TRUNCATE problem?
Jody
On Thu, Jul 8, 2010 at 5:39 AM, Jack Bryan wrote:
> thanks
>
)
and react accordingly
Jody
On Tue, Jul 6, 2010 at 7:41 AM, David Zhang wrote:
> if the master receives multiple results from the same worker, how does the
> master know which result (and the associated tag) arrive first? what MPI
> commands are you using exactly?
>
> On Mon, Jul
ef, TaskType, 1, idMaster, MPI_ANY_TAG, &st);
if (st.MPI_TAG == TAG_STOP) {
    go_on = false;
} else {
    result = workOnTask(TaskDef, TaskLen);
    /* MPI_Send arguments: buffer, count, datatype, destination, tag, communicator */
    MPI_Send(a, 1, MPI_INT, idMaster, TAG_RESULT, MPI_COMM_WORLD);
    MPI_Send(result, 1, resultType, idMaster, TAG_RESULT_CONTENT, MPI_COMM_WORLD);
}
}
I hope t
Hi
I am really no python expert, but it looks to me as if you were
gathering arrays filled with zeroes:
a = array('i', [0]) * n
Shouldn't this line be
a = array('i', [r])*n
where r is the rank of the process?
Jody
On Thu, May 20, 2010 at 12:00 AM, Battalgazi YILDI
minal window for the process you are
interested in.
Jody
On Thu, May 20, 2010 at 1:28 AM, Sang Chul Choi wrote:
> Hi,
>
> I am wondering if there is a way to run a particular process among multiple
> processes on the console of a linux cluster.
>
> I want to see the screen ou
Just to be sure:
Is there a copy of the shared library on the other host (hpcnode1) ?
jody
On Mon, May 10, 2010 at 5:20 PM, Prentice Bisbal wrote:
> Are you running these jobs through a queuing system like PBS, Torque, or SGE?
>
> Prentice
>
> Miguel Ángel Vázquez wrote:
>
on
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/fpu_wp.pdf
but i think AMD Opteron does not.
But i am no expert in this area - i only found out about this when i
mentioned to someone
the differences in the results obtained from a 32Bit platform and a
64bit platform. Sorry.
Jo
I once got different results when running on a 64-Bit platform instead of
a 32 bit platform - if i remember correctly, the reason was that on the
32-bit platform 80bit extended precision floats were used but on the 64bit
platform only 64bit floats.
On Sun, Apr 25, 2010 at 3:39 AM, Fabian Hänsel
inter to the start of the array
(however, i can't exactly explain
why it worked with the hard-coded string))
Jody
On Mon, Apr 19, 2010 at 6:31 PM, Andrew Wiles wrote:
> Hi all Open MPI users,
>
> I write a simple MPI program to send a text message to another process. The
> code
@Trent
> the 1024 RSA has already been cracked.
Yeah but unless you've got 3 guys spending 100 hours varying the
voltage of your processors
it is still safe... :)
On Tue, Apr 6, 2010 at 11:35 AM, Reuti wrote:
> Hi,
>
> Am 06.04.2010 um 09:48 schrieb Terry Frankcombe:
>
>>> 1. Run the following
uling layer on top of pbs.
However, there are folks here who would know far more than I do about
these sorts of things.
Cheers, Jody
--
Jody Klymak
http://web.uvic.ca/~jklymak/
I have an environment a few trusted users could use to test. However,
I have neither the expertise nor the time to do the debugging myself.
Cheers, Jody
On 2010-03-29, at 1:27 PM, Jeff Squyres wrote:
On Mar 29, 2010, at 4:11 PM, Cristobal Navarro wrote:
i realized that xcode dev tools
id/openmpi/etc/openmpi-mca-params.conf to make sure
that the right ports are used:
# set ports so that they are more valid than the default ones (see
email from Ralph Castain)
btl_tcp_port_min_v4 = 36900
btl_tcp_port_range = 32
Cheers, Jody
--
Jody Klymak
http://web.uvic.ca/~jklymak/
I'm not sure if this is the cause of your problems:
You define the constant BUFFER_SIZE, but in the code you use a constant
called BUFSIZ...
Jody
On Fri, Mar 26, 2010 at 10:29 PM, Jean Potsam wrote:
> Dear All,
> I am having a problem with openmpi . I have installed op
Required statement
stop
end program test_MPI_write_adv2
===
Regards
jody
On Thu, Feb 25, 2010 at 2:47 AM, Terry Frankcombe wrote:
> On Wed, 2010-02-24 at 13:40 -0500, w k wrote:
>> H
Hi
I can't answer your question about the array q offhand,
but i will try to translate your program to C and see if
it fails the same way.
Jody
On Wed, Feb 24, 2010 at 7:40 PM, w k wrote:
> Hi Jordy,
>
> I don't think this part caused the problem. For fortran, it does
Hi Gabriele
you could always pipe your output through grep
my_app | grep "MPI_ABORT was invoked"
jody
On Wed, Feb 24, 2010 at 11:28 AM, Gabriele Fatigati
wrote:
> Hi Nadia,
>
> thanks for quick reply.
>
> But i suppose that parameter is 0 by default. Suppose
; count = 0
> end if
>
> if (count .gt. 0) then
> allocate(temp(count))
> temp(1) = 2122010.0d0
> end if
In C/C++ something like this would almost certainly lead to a crash,
but i don't know if this would be the case in Fortran...
jody
On Wed, Feb 24, 2010 at
e PS3 and a is your PS3 host,
and app_dell is your application compiled on the dell, and b is your dell host.
Check the MPI FAQs
http://www.open-mpi.org/faq/?category=running#mpmd-run
http://www.open-mpi.org/faq/?category=running#mpirun-host
Hope this helps
Jody
On Thu, Jan 28, 2010 at 3:
Thanks, that did it!
BTW, in the man page for mpirun you should perhaps mention the "!"
option in xterm - the one that keeps the xterms open after the
application exits.
Thanks
Jody
On Mon, Dec 21, 2009 at 3:25 PM, Ralph Castain wrote:
> Is your MANPATH set to point to /op
-f77
--disable-mpi-f90 --with-threads
and afterwards made a soft link
ln -s /opt/openmpi-1.4 /opt/openmpi
This is on fedora fc8, but i have the same problem on my gentoo
machines (2.6.29-gentoo-r5)
Does anybody know how to replace the old man files with the new ones?
Thank You
Jody
Hi Ralph
I finally got around to installing version 1.4.
The xterm works fine.
And in order to get gdb going on the spawned processes, i need to add
an argument "--args"
in the argument list of the spawner so that the parameters of the
spawned processes get passed through to gdb.
Thanks ag
s the -xterm option, then that option gets
applied to the dynamically spawned procs too"
Does this passing on also apply to the -x options?
Thanks
Jody
On Wed, Dec 16, 2009 at 3:42 PM, Ralph Castain wrote:
> It is in a later version - pretty sure it made 1.3.3. IIRC, I added it at
&
Thanks for your reply
That sounds good. I have Open-MPI version 1.3.2, and mpirun seems not
to recognize the --xterm option.
[jody@plankton tileopt]$ mpirun --xterm -np 1 ./boss 9 sample.tlf
--
mpirun was unable to launch the
environment variable in order to
display their xterms with gdb on my workstation.
Another negative point would be the need to change the argv parameters
every time one switches between debugging and normal running.
Has anybody got some hints on how to debug spawned processes?
Thank You
Jody
int you to an MPI primer or tute.
>
Have a look at the Open MPI FAQ:
http://www.open-mpi.org/faq/?category=running
It shows you how to run an Open-MPI program on single or multiple machines
Jody
Sorry, i can't help you here.
I have no experience with either intel compilers or IB
Jody
On Wed, Oct 21, 2009 at 4:14 AM, Sangamesh B wrote:
>
>
> On Tue, Oct 20, 2009 at 6:48 PM, jody wrote:
>>
>> Hi
>> Just curious:
>> Is there a particular reason
Hi
Just curious:
Is there a particular reason why you want version 1.2?
The current version is 1.3.3!
Jody
On Tue, Oct 20, 2009 at 2:48 PM, Sangamesh B wrote:
> Hi,
>
> Its required here to install Open MPI 1.2 on a HPC cluster with - Cent
> OS 5.2 Linux, Mellanox IB card, swi
that is where i put the application.
To start your application, follow the instructions in the FAQ:
http://www.open-mpi.org/faq/?category=running
If you want to use host files, read about how to use them in the FAQ:
http://www.open-mpi.org/faq/?category=running#mpirun-host
Hope that helps
Jody
Hi
Have a look at the Open MPI FAQ:
http://www.open-mpi.org/faq/
It gives you all the information you need to start working with your cluster.
Jody
On Wed, Sep 30, 2009 at 8:25 AM, ankur pachauri wrote:
> dear all,
>
> i am new to openmpi, all that i need is to set up the cluster of
Did you also change the "&buffer" to buffer in your MPI_Send call?
Jody
On Tue, Sep 22, 2009 at 1:38 PM, Everette Clemmer wrote:
> Hmm, tried changing MPI_Irecv( &buffer) to MPI_Irecv( buffer...)
> and still no luck. Stack trace follows if that's help
Hi
I'm not sure if i completely understand your requirements,
but have you tried MPI_Wtime?
Jody
On Fri, Sep 11, 2009 at 7:54 AM, amjad ali wrote:
> Hi all,
> I want to get the elapsed time from start to end of my parallel program
> (OPENMPI based). It should give same time for th
mpicc
and friends...
Cheers, Jody
On Aug 19, 2009, at 15:57 PM, tomek wrote:
OK - I have fixed it by including -L/opt/openmpi/lib at the very
beginning of mpicc ... -L/opt/openmpi/lib -o app.exe the rest ...
But something is wrong with dyld anyhow.
On 19 Aug 2009, at 21:04, Jody Klymak
ets linked with /usr/lib/libmpi...
Note that the /opt/openmpi/bin path is properly set and ompi_info
outputs the right info.
You do not need to set DYLD_LIBRARY_PATH. I don't have it set and my
mpi applications run fine.
Did 4 work?
Cheers, Jody
--
Jody Klymak
http://web.uvic.ca/~jklymak/
Hi
I had a similar problem.
Following a suggestion from Lenny,
i removed the "max-slots" entries from
my hostsfile and it worked.
It seems that there still are some minor bugs in the rankfile mechanism.
See the post
http://www.open-mpi.org/community/lists/users/2009/08/10384.php
Jod
-mpi.org/faq/?category=running#mpirun-scheduling
but i couldn't find any explanation. (furthermore, in the FAQ it says
"max-slots"
in one place, but "max_slots" in the other one)
Thank You
Jody
On Mon, Aug 17, 2009 at 3:29 PM, Lenny
Verkhovsky wrote:
> can you try n
osts (i.e. plankton instead of plankton.uzh.ch) in
the host file...
However, I encountered a new problem:
if the rankfile lists all the entries which occur in the host file
there is an error message.
In the following example, the hostfile is
[jody@plankton neander]$ cat th_02
nano_00.uzh.ch slots=2 ma