that line from triops' rules, restarted iptables and now
communication works in all directions!
Thank You
Jody
On Tue, May 3, 2016 at 7:00 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
wrote:
> Have you disabled firewalls between these machines?
>
> > On May 3, 2016, at 11
is caused by:
...
--
-
Again, i can call mpirun on triops from kraken and all squid_XX without a
problem...
What could cause this problem?
Thank You
Jody
On Tue, May 3, 2016 at 2:54 PM, Jeff Squyres (jsquyres) <js
=running
Jody
On Wed, Feb 26, 2014 at 10:38 AM, raha khalili <khadije.khal...@gmail.com> wrote:
> Dear Jody
>
> Thank you for your reply. Based on the hostfile examples you showed me, I
> understand 'slots' is the number of cpus of each node mentioned in the file,
> am I right?
>
>
Hi
I think you should use the "--host" or "--hostfile" options:
http://www.open-mpi.org/faq/?category=running#simple-spmd-run
http://www.open-mpi.org/faq/?category=running#mpirun-host
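For example (the hostnames and the hostfile name are placeholders):
mpirun -np 4 --host node0,node1 ./a.out
mpirun -np 4 --hostfile myhosts ./a.out
where myhosts might contain lines like "node0 slots=2".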
Hope this helps
Jody
On Wed, Feb 26, 2014 at 8:31 AM, raha khalili <khadije.
It is better to accept messages from all senders (MPI_ANY_SOURCE)
instead of from particular ranks and then check where the
message came from by examining the status fields
(http://www.mpi-forum.org/docs/mpi22-report/node47.htm)
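A minimal sketch of that pattern (BUF_SIZE is a placeholder):
MPI_Status st;
int buf[BUF_SIZE];
/* accept a message from any sender, with any tag */
MPI_Recv(buf, BUF_SIZE, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &st);
/* st.MPI_SOURCE is the rank of the sender, st.MPI_TAG its tag */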
Hope this helps
Jody
On Mon, Feb 18, 2013 at 5:06 PM, Pradeep Jha
processes made
it to this point and which ones did not.
Hope this helps a bit
Jody
On Tue, Sep 25, 2012 at 8:20 AM, Richard <codemon...@163.com> wrote:
> I have 3 computers with the same Linux system. I have setup the mpi cluster
> based on ssh connection.
> I have tested a very sim
Thanks Ralph
I renamed the parameter in my script,
and now there are no more ugly messages :)
Jody
On Tue, Aug 28, 2012 at 3:17 PM, Ralph Castain <r...@open-mpi.org> wrote:
> Ah, I see - yeah, the parameter technically is being renamed to
> "orte_rsh_agent" to avoid hav
"ssh -Y"' i can't open windows
from the remote:
jody@boss /mnt/data1/neander $ mpirun -np 5 -hostfile allhosts
-mca plm_base_verbose 1 --leave-session-attached xterm -hold -e
./MPIStruct
xterm: Xt error: Can't open display:
xterm: DISPLAY is not set
xterm: Xt error: Can't open dis
unt".
If you expect data of 160 bytes you have to allocate a buffer
with a size greater than or equal to 160 and you have to set your
"count" parameter to the size you allocated.
If you want to receive data in chunks, you have to send it in chunks.
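For the 160-byte case, a sketch (src and tag are placeholders):
char *buf = (char*)malloc(160);   /* at least as large as the incoming message */
/* the count matches the allocation */
MPI_Recv(buf, 160, MPI_BYTE, src, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);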
I hope this helps
Jody
On Tue
for the creation of the large data block),
but unfortunately my main application is not well suited for OpenMP
parallelization...
I guess i'll have to take a more detailed look at my problem to see if i
can restructure it in a good way...
Thank You
Jody
On Mon, Apr 16, 2012 at 11:16 PM, Brian Austin
for reading by the processes?
Thank You
Jody
Hi
Did you run your program with mpirun?
For example:
mpirun -np 4 ./a.out
jody
On Fri, Mar 16, 2012 at 7:24 AM, harini.s .. <hharin...@gmail.com> wrote:
> Hi ,
>
> I am very new to openMPI and I just installed openMPI 1.4.5 on Linux
> platform. Now am trying t
Hi
I've got a really strange problem:
I've got an application which creates intercommunicators between a
master and some workers.
When i run it on our cluster with 11 processes it works,
when i run it with 12 processes it hangs inside MPI_Intercomm_create().
This is the hostfile:
Hi
You must also make sure that all slaves can
connect via ssh to each other and to the master
node without a password.
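If that is not set up yet, something like this usually does it (a sketch,
repeated on every node; the user and host names are placeholders):
ssh-keygen -t rsa            # generate a key pair, empty passphrase
ssh-copy-id user@slave_01    # install the public key on each other node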
Jody
On Wed, Dec 21, 2011 at 3:57 AM, Shaandar Nyamtulga <nyam...@hotmail.com> wrote:
> Can you clarify your answer please.
> I have one master node and other slave node
sessions on your nodes,
you can execute
mpirun --hostfile hostfile -np 4 printenv
and scan the output for PATH and LD_LIBRARY_PATH.
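For example ("grep PATH" also matches LD_LIBRARY_PATH):
mpirun --hostfile hostfile -np 4 printenv | grep PATH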
Hope this helps
Jody
On Sat, Jul 9, 2011 at 12:25 AM, Mohan, Ashwin <ashmo...@seas.wustl.edu> wrote:
> Thanks Ralph.
>
>
>
> I have emai
//www.open-mpi.org/faq/?category=running#run-prereqs)
Hope this helps
Jody
On Thu, Jul 7, 2011 at 8:44 AM, zhuangchao <freeo...@163.com> wrote:
> hello all :
>
> I installed the openmpi-1.4.3 on redhat as the following step :
>
> 1. ./configure --prefix=/dat
if i had read that chapter more carefully...
Fortunately, i don't have to send around a lot of these structs,
so i will do the padding (using the offsetof macro Dave recommended).
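For reference, a sketch of that construction for a hypothetical struct with
an int and a double (offsetof lives in <stddef.h>):
typedef struct { int id; double val; } Item;
int          lens[2]  = { 1, 1 };
MPI_Aint     disps[2] = { offsetof(Item, id), offsetof(Item, val) };
MPI_Datatype types[2] = { MPI_INT, MPI_DOUBLE };
MPI_Datatype tmp, itemType;
MPI_Type_create_struct(2, lens, disps, types, &tmp);
/* resize the extent to sizeof(Item) so trailing padding is accounted for */
MPI_Type_create_resized(tmp, 0, sizeof(Item), &itemType);
MPI_Type_commit(&itemType);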
Thanks again
Jody
On Wed, Jun 29, 2011 at 9:52 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
> Hi Jody
it up to the next multiple of 8 i could work around this problem.
(not very nice, and very probably not portable)
My question: is there a way to tell MPI to automatically use the
required padding?
Thank You
Jody
, Ralph Castain <r...@open-mpi.org> wrote:
>
> On May 2, 2011, at 8:21 AM, jody wrote:
>
>> Hi
>> Well, the difference is that one time i call the application
>> 'HelloMPI' with the '--xterm' option,
>> whereas in my previous mail i am calling the application '
Hi
Well, the difference is that one time i call the application
'HelloMPI' with the '--xterm' option,
whereas in my previous mail i am calling the application 'xterm'
(without the '--xterm' option)
Jody
On Mon, May 2, 2011 at 4:08 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
> On
will open xterms, but with ' -mca
plm_base_verbose 0' there are again no xterms.
Thank You
Jody
On Mon, May 2, 2011 at 2:29 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
> On May 2, 2011, at 2:34 AM, jody wrote:
>
>> Hi Ralph
>>
>> I rebuilt open MPI 1.4.2 with the
Hi Ralph
I rebuilt open MPI 1.4.2 with the debug option on both chefli and squid_0.
The results are interesting!
I wrote a small HelloMPI app which basically calls usleep for a pause
of 5 seconds.
Now calling it as i did before, no MPI errors appear anymore, only the
display problems:
jody
Hi Ralph
Thank you for your suggestions.
I'll be happy to help you.
I'm not sure if i'll get around to this tomorrow,
but i certainly will do so on Monday.
Thanks
Jody
On Thu, Apr 28, 2011 at 11:53 PM, Ralph Castain <r...@open-mpi.org> wrote:
> Hi Jody
>
> I'm not sure when I'
Hi
Unfortunately this does not solve my problem.
While i can do
ssh -Y squid_0 xterm
and this will open an xterm on my machine (chefli),
i run into problems with the -xterm option of openmpi:
jody@chefli ~/share/neander $ mpirun -np 4 -mca plm_rsh_agent "ssh
-Y" -host squid_0
Hi Ralph
Is there an easy way i could modify the OpenMPI code so that it would use
the -Y option for ssh when connecting to remote machines?
Thank You
Jody
On Thu, Apr 7, 2011 at 4:01 PM, jody <jody@gmail.com> wrote:
> Hi Ralph
> thank you for your suggestions. After some
xauth warnings)
But the xterm option still doesn't work:
jody@chefli ~/share/neander $ mpirun -np 4 -host squid_0 -xterm 1,2
printenv | grep WORLD_RANK
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Warning: No xauth data; using fake authentication data for X11
Hi Ralph
No, after the above error message mpirun has exited.
But i also noticed that it is possible to ssh into squid_0 and open an xterm there:
jody@chefli ~/share/neander $ ssh -Y squid_0
Last login: Wed Apr 6 17:14:02 CEST 2011 from chefli.uzh.ch on pts/0
jody@squid_0 ~ $ xterm
xterm Xt error
xauth data (as far as i know):
On the remote (squid_0):
jody@squid_0 ~ $ xauth list
chefli/unix:10 MIT-MAGIC-COOKIE-1 5293e179bc7b2036d87cbcdf14891d0c
chefli/unix:0 MIT-MAGIC-COOKIE-1 146c7f438fab79deb8a8a7df242b6f4b
chefli.uzh.ch:0 MIT-MAGIC-COOKIE-1 146c7f438fab79deb8a8a7df242b6
Hi
At first glance i would say this is not an OpenMPI problem,
but a wrf problem (though i must admit i have no knowledge whatsoever with wrf)
Have you tried running a single instance of wrf.exe?
Have you tried to run a simple application (like a "hello world") on your nodes?
Jody
O
Hi Massimo
Just to make sure: usually the MPI_ERR_TRUNCATE error is caused by
buffer sizes that are too small.
Can you verify that the buffers you are using are large enough to
hold the data they should receive?
Jody
On Sat, Feb 5, 2011 at 6:37 PM, Massimo Cafaro
<massimo.caf...@unisalento
Thanks all
I did the simple copying of the 32Bit applications and now it works.
Thanks
Jody
On Wed, Feb 2, 2011 at 5:47 PM, David Mathog <mat...@caltech.edu> wrote:
> jody <jody@gmail.com> wrote:
>
>> How can i force OpenMPI to be built as a 32Bit applicat
successfully; not
able to guarantee that all other processes were killed!
I think this is caused by the fact that on the 64Bit machine Open MPI
is also built as a 64 bit application.
How can i force OpenMPI to be built as a 32Bit application on a 64Bit machine?
Thank You
Jody
On Tue, Feb 1, 2011
- is there a way to find this out?
Thank You
Jody
Hi
if i remember correctly, "omp.h" is a header file for OpenMP which is
not the same as Open MPI.
So it looks like you have to install OpenMP.
Then you can compile it with the compiler option -fopenmp (in gcc)
Jody
On Thu, Dec 16, 2010 at 11:56 AM, Bernard Secher - SFME/LGLS
&l
irun -np 5 --rankfile `rankcreate.sh 5` myApplication
May be this is of use for you
jody
On Fri, Dec 10, 2010 at 11:50 PM, Eugene Loh <eugene@oracle.com> wrote:
> David Mathog wrote:
>
>> Also, in my limited testing --host and -hostfile seem to be mutually
>> exc
sis.
jody
On Mon, Nov 1, 2010 at 6:41 PM, Jack Bryan <dtustud...@hotmail.com> wrote:
> thanks
> I use
> double* recvArray = new double[buffersize];
> The receive buffer size
> MPI::COMM_WORLD.Recv(&(recvDataArray[0]), xVSize, MPI_DOUBLE, 0, mytaskTag);
> delete [] re
:
jody@aim-squid_0 ~/progs $ mpiCC -g -o HelloMPI HelloMPI.cpp
Cannot open configuration file
/opt/openmpi-1.4.2-64/share/openmpi/mpiCC-wrapper-data.txt
Error parsing data file mpiCC: Not found
So again, it looked into the original installation directory of the
64-bit installation for some files
So
Hi
On a newly installed 64bit linux (2.6.32-gentoo-r7) with gcc version 4.4.4
i can't compile even simple Open-MPI applications (OpenMPI 1.4.2).
The message is:
jody@aim-squid_0 ~/progs $ mpiCC -g -o HelloMPI HelloMPI.cpp
/usr/lib/gcc/x86_64-pc-linux-gnu/4.4.4/../../../../x86_64-pc-linux-gnu/bin
Hi Jack
Usually MPI_ERR_TRUNCATE means that the buffer you use in MPI_Recv
(or MPI::COMM_WORLD.Recv) is too small to hold the message coming in.
Check your code to make sure you assign enough memory to your buffers.
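The typical mismatch, as a sketch (the counts are placeholders):
/* sender ships 100 doubles... */
MPI_Send(data, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
/* ...but the receiver only allows for 50: MPI_ERR_TRUNCATE */
MPI_Recv(buf, 50, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);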
regards
Jody
On Mon, Nov 1, 2010 at 7:26 AM, Jack Bryan <dtus
gcc.i386 zlib.i386
(gdb)
I am using OpenMPI 1.4.2
Has anybody got an idea how i could find the problem?
Thank You
Jody
Where is the option 'default-hostfile' described?
It does not appear in mpirun's man page (for v. 1.4.2)
and i couldn't find anything like that with googling.
Jody
On Wed, Oct 27, 2010 at 4:02 PM, Ralph Castain <r...@open-mpi.org> wrote:
> Specify your hostfile as the default one:
&
Hi Brandon
Does it work if you try this:
mpirun -np 2 --hostfile hosts.txt ilk
(see http://www.open-mpi.org/faq/?category=running#simple-spmd-run)
jody
On Sat, Oct 23, 2010 at 4:07 PM, Brandon Fulcher <min...@gmail.com> wrote:
> Thank you for the response!
>
> The code runs on
Hi
I don't know the reason for the strange behaviour, but anyway,
to measure time in an MPI application you should use MPI_Wtime(), not clock()
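For example (a sketch; do_work() stands for the code being timed):
double t0 = MPI_Wtime();          /* wall-clock seconds */
do_work();
double t1 = MPI_Wtime();
printf("elapsed: %f s\n", t1 - t0);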
regards
jody
On Wed, Oct 20, 2010 at 11:51 PM, Storm Zhang <storm...@gmail.com> wrote:
> Dear all,
>
> I got confused with my
But shouldn't something like this show up in the other processes as well?
I only see that in the master process, but the slave processes also
send data to each other and to the master.
On Mon, Oct 18, 2010 at 2:48 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
> On Oct 18, 2010, at 1
I had this leak with OpenMPI 1.4.2.
But in my case, there is no accumulation - when i repeat the same call,
no additional leak is reported for the second call
Jody
On Mon, Oct 18, 2010 at 1:57 AM, Ralph Castain <r...@open-mpi.org> wrote:
> There is no OMPI 2.5 - do you mean 1.5?
>
to
this server i could then send commands which changed the state of the
master.
Jody
On Tue, Oct 12, 2010 at 6:14 AM, Mahesh Salunkhe
<mahesh.salun...@gmail.com> wrote:
>
> Hello,
> Could you pl tell me how to connect a client(not in any mpi group ) to a
> process in a mp
st glimpse it looks like an OpenMPI-internal leak,
because it happens inside PMPI_Send,
but then i am using the function ConnectorBase::send()
several times from other callers than TileConnector,
but these don't show up in valgrind's output.
Does anybody have an idea what is happening here?
Thank You
jody
answered by trying, because it
depends strongly
on the volume of your messages and the quality of your hardware
(network and disk speed)
Jody
Hi
I don't know if i correctly understand what you need, but have you
already tried MPI_Comm_spawn?
Jody
On Mon, Sep 20, 2010 at 11:24 PM, Mikael Lavoie <mikael.lav...@gmail.com> wrote:
> Hi,
>
> I wanna know if it exist a implementation that permit to run a single host
> pro
Hi
@Ashley:
What are the exact semantics of an asynchronous barrier,
and is it part of the MPI specs?
Thanks
Jody
On Thu, Sep 9, 2010 at 9:34 PM, Ashley Pittman <ash...@pittman.co.uk> wrote:
>
> On 9 Sep 2010, at 17:00, Gus Correa wrote:
>
>> Hello All
>>
>&g
.
(Again check the man pages of mpirun)
Jody
On Mon, Jul 26, 2010 at 8:55 AM, Jack Bryan <dtustud...@hotmail.com> wrote:
> Thanks
> It can be installed on linux and work with gcc ?
> If I have many processes, such as 30, I have to open 30 terminal windows ?
> thanks
> Jack
>
&
for each process separately.
Jody
On Mon, Jul 26, 2010 at 4:08 AM, Jack Bryan <dtustud...@hotmail.com> wrote:
> Dear All,
> I run a 6 parallel processes on OpenMPI.
> When the run-time of the program is short, it works well.
> But, if the run-time is long, I got errors:
> [n1
Thanks for the patch - it works fine!
Jody
On Mon, Jul 12, 2010 at 11:38 PM, Ralph Castain <r...@open-mpi.org> wrote:
> Just so you don't have to wait for 1.4.3 to be released, here is the patch.
> Ralph
>
>
>
>
> On Jul 12, 2010, at 2:44 AM, jody wrote:
>
>&g
n or mpiexec. But somewhere you have to tell OpenMPI
what to run on how many processors etc.
I'd suggest you take a look at the "MPI-The Complete Reference" Vol I and II
Jody
On Mon, Jul 12, 2010 at 5:07 PM, Brian Budge <brian.bu...@gmail.com> wrote:
> Hi Jody -
>
> Thanks for
ugh...
Perhaps there is a boost forum you can check out if the problem persists
Jody
On Sun, Jul 11, 2010 at 10:13 AM, Jack Bryan <dtustud...@hotmail.com> wrote:
> thanks for your reply.
> The message size is 72 bytes.
> The master sends out the message package to each 51 nodes.
&
yes, i'm using 1.4.2
Thanks
Jody
On Mon, Jul 12, 2010 at 10:38 AM, Ralph Castain <r...@open-mpi.org> wrote:
>
> On Jul 12, 2010, at 2:17 AM, jody wrote:
>
>> Hi
>>
>> I have a master process which spawns a number of workers of which i'd
>> lik
at it
would also combine job-id and rank with the output file:
work_out.1.0
for the master's output, and
work_out.2.0
work_out.2.1
work_out.2.2
...
for the workers' output?
Thank You
Jody
Hi Brian
When you spawn processes with MPI_Comm_spawn(), one of the arguments
will be set to an intercommunicator of the spawner and the spawnees.
You can use this intercommunicator as the communicator argument
in the MPI functions.
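A sketch (the worker binary and the process count are placeholders):
MPI_Comm intercomm;
MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
               0, MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);
/* intercomm now connects the spawner with the 4 spawned workers;
   use it wherever an MPI function takes a communicator */
MPI_Send(data, len, MPI_DOUBLE, 0, 0, intercomm);  /* to worker rank 0 */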
Jody
On Fri, Jul 9, 2010 at 5:56 PM, Brian Budge <brian
/* clean up */
free(send_buf);
}
MPI_Finalize();
}
I hope this helps
Jody
On Sat, Jul 10, 2010 at 7:12 AM, Jack Bryan <dtustud...@hotmail.com> wrote:
> Dear All:
> How to find the buffer size of OpenMPI ?
> I need to transfer large data between nodes on
of the buffer you passed to MPI_Recv.
As Zhang suggested: try to reduce your code to isolate the offending code.
Can you create a simple application with two processes exchanging data which has
the MPI_ERR_TRUNCATE problem?
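A minimal reproducer might look like this (a sketch; run with -np 2, the
counts are arbitrary):
#include <mpi.h>
int main(int argc, char **argv) {
    int rank;
    double buf[100];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(buf, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);  /* sends 100 */
    } else {
        /* receive buffer only admits 50 -> MPI_ERR_TRUNCATE */
        MPI_Recv(buf, 50, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}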
Jody
On Thu, Jul 8, 2010 at 5:39 AM, Jack Bryan <dtustud...@hotmail.
)
and react accordingly
Jody
On Tue, Jul 6, 2010 at 7:41 AM, David Zhang <solarbik...@gmail.com> wrote:
> if the master receives multiple results from the same worker, how does the
> master know which result (and the associated tag) arrive first? what MPI
> commands are you using exac
, MPI_ANY_TAG, );
if (st.MPI_TAG == TAG_STOP) {
go_on=false;
} else {
result=workOnTask(TaskDef, TaskLen);
MPI_Send(a, 1, MPI_INT, idMaster, TAG_RESULT, MPI_COMM_WORLD);
MPI_Send(result, 1, resultType, idMaster, TAG_RESULT_CONTENT, MPI_COMM_WORLD);
}
}
I hope this helps
Jody
On Mon, Jun 21
Hi
I am really no python expert, but it looks to me as if you were
gathering arrays filled with zeroes:
a = array('i', [0]) * n
Shouldn't this line be
a = array('i', [r])*n
where r is the rank of the process?
Jody
On Thu, May 20, 2010 at 12:00 AM, Battalgazi YILDIRIM
<yildiri...@gmail.
ngle terminal window for the process you are
interested in.
Jody
On Thu, May 20, 2010 at 1:28 AM, Sang Chul Choi <gos...@gmail.com> wrote:
> Hi,
>
> I am wondering if there is a way to run a particular process among multiple
> processes on the console of a linux cluster.
>
&g
Just to be sure:
Is there a copy of the shared library on the other host (hpcnode1)?
jody
On Mon, May 10, 2010 at 5:20 PM, Prentice Bisbal <prent...@ias.edu> wrote:
> Are you running these jobs through a queuing system like PBS, Torque, or SGE?
>
> Prentice
>
> Migue
://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/fpu_wp.pdf
but i think AMD Opteron does not.
But i am no expert in this area - i only found out about this when i
mentioned to someone
the differences in the results obtained from a 32Bit platform and a
64bit platform. Sorry.
Jody
On Mon
I once got different results when running on a 64-Bit platform instead of
a 32 bit platform - if i remember correctly, the reason was that on the
32-bit platform 80bit extended precision floats were used but on the 64bit
platform only 64bit floats.
On Sun, Apr 25, 2010 at 3:39 AM, Fabian Hänsel
@Trent
> the 1024 RSA has already been cracked.
Yeah but unless you've got 3 guys spending 100 hours varying the
voltage of your processors
it is still safe... :)
On Tue, Apr 6, 2010 at 11:35 AM, Reuti wrote:
> Hi,
>
> Am 06.04.2010 um 09:48 schrieb Terry Frankcombe:
layer on top of pbs.
However, there are folks here who would know far more than I do about
these sorts of things.
Cheers, Jody
--
Jody Klymak
http://web.uvic.ca/~jklymak/
I have an environment a few trusted users could use to test. However,
I have neither the expertise nor the time to do the debugging myself.
Cheers, Jody
On 2010-03-29, at 1:27 PM, Jeff Squyres wrote:
On Mar 29, 2010, at 4:11 PM, Cristobal Navarro wrote:
i realized that xcode dev tools
/etc/openmpi-mca-params.conf to make sure
that the right ports are used:
# set ports so that they are more valid than the default ones (see
email from Ralph Castain)
btl_tcp_port_min_v4 = 36900
btl_tcp_port_range = 32
Cheers, Jody
--
Jody Klymak
http://web.uvic.ca/~jklymak/
I_write_adv2
===
Regards
jody
On Thu, Feb 25, 2010 at 2:47 AM, Terry Frankcombe <te...@chem.gu.se> wrote:
> On Wed, 2010-02-24 at 13:40 -0500, w k wrote:
>> Hi Jordy,
>>
>> I d
Hi
I can't answer your question about the array q offhand,
but i will try to translate your program to C and see if
it fails the same way.
Jody
On Wed, Feb 24, 2010 at 7:40 PM, w k <thuw...@gmail.com> wrote:
> Hi Jordy,
>
> I don't think this part caused the problem. For fort
Hi Gabriele
you could always pipe your output through grep
my_app | grep "MPI_ABORT was invoked"
jody
On Wed, Feb 24, 2010 at 11:28 AM, Gabriele Fatigati
<g.fatig...@cineca.it> wrote:
> Hi Nadia,
>
> thanks for quick reply.
>
> But i suppose that parameter
ount = 0
> end if
>
> if (count .gt. 0) then
> allocate(temp(count))
> temp(1) = 2122010.0d0
> end if
In C/C++ something like this would almost certainly lead to a crash,
but i don't know if this would be the case in Fortran...
jody
On Wed, Feb 24, 2010 at 4:38 AM, w k &
and a is your PS3 host,
and app_dell is your application compiled on the dell, and b is your dell host.
Check the MPI FAQs
http://www.open-mpi.org/faq/?category=running#mpmd-run
http://www.open-mpi.org/faq/?category=running#mpirun-host
Hope this helps
Jody
On Thu, Jan 28, 2010 at 3:08 AM
Thanks, that did it!
BTW, in the man page for mpirun you should perhaps mention the "!"
option in xterm - the one that keeps the xterms open after the
application exits.
Thanks
Jody
On Mon, Dec 21, 2009 at 3:25 PM, Ralph Castain <r...@open-mpi.org> wrote:
> Is your M
-f77
--disable-mpi-f90 --with-threads
and afterwards made a soft link
ln -s /opt/openmpi-1.4 /opt/openmpi
This is on fedora fc8, but i have the same problem on my gentoo
machines (2.6.29-gentoo-r5)
Does anybody know how to replace the old man files with the new ones?
Thank You
Jody
Hi Ralph
I finally got around to install version 1.4.
The xterm works fine.
And in order to get gdb going on the spawned processes, i need to add
an argument "--args"
in the argument list of the spawner so that the parameters of the
spawned processes get passed through to gdb.
Thanks ag
Thanks for your reply
That sounds good. I have Open-MPI version 1.3.2, and mpirun seems not
to recognize the --xterm option.
[jody@plankton tileopt]$ mpirun --xterm -np 1 ./boss 9 sample.tlf
--
mpirun was unable to launch
environment variable in order to
display their xterms with gdb on my workstation.
Another negative point would be the need to change the argv parameters
every time one switches between debugging and normal running.
Has anybody got some hints on how to debug spawned processes?
Thank You
Jody
point you to an MPI primer or tute.
>
Have a look at the Open MPI FAQ:
http://www.open-mpi.org/faq/?category=running
It shows you how to run an Open-MPI program on single or multiple machines
Jody
Hi
Just curious:
Is there a particular reason why you want version 1.2?
The current version is 1.3.3!
Jody
On Tue, Oct 20, 2009 at 2:48 PM, Sangamesh B <forum@gmail.com> wrote:
> Hi,
>
> Its required here to install Open MPI 1.2 on a HPC cluster with - Cent
> OS 5.2
Hi
Have look at the Open MPI FAQ:
http://www.open-mpi.org/faq/
It gives you all the information you need to start working with your cluster.
Jody
On Wed, Sep 30, 2009 at 8:25 AM, ankur pachauri <ankurpacha...@gmail.com> wrote:
> dear all,
>
> i am new to openmpi, all that i
Did you also change the "" to buffer in your MPI_Send call?
Jody
On Tue, Sep 22, 2009 at 1:38 PM, Everette Clemmer <clemm...@gmail.com> wrote:
> Hmm, tried changing MPI_Irecv( ) to MPI_Irecv( buffer...)
> and still no luck. Stack trace follows if that's helpful:
&g
Hi
I'm not sure if i completely understand your requirements,
but have you tried MPI_WTime?
Jody
On Fri, Sep 11, 2009 at 7:54 AM, amjad ali <amja...@gmail.com> wrote:
> Hi all,
> I want to get the elapsed time from start to end of my parallel program
> (OPENMPI based). It should
and friends...
Cheers, Jody
On Aug 19, 2009, at 15:57 PM, tomek wrote:
OK - I have fixed it by including -L/opt/openmpi/lib at the very
beginning of mpicc ... -L/opt/openmpi/lib -o app.exe the rest ...
But something is wrong with dyld anyhow.
On 19 Aug 2009, at 21:04, Jody Klymak wrote
libmpi...
Note that the /opt/openmpi/bin path is properly set and ompi_info
outputs the right info.
You do not need to set DYLD_LIBRARY_PATH. I don't have it set and my
mpi applications run fine.
Did 4 work?
Cheers, Jody
--
Jody Klymak
http://web.uvic.ca/~jklymak/
Hi
I had a similar problem.
Following a suggestion from Lenny,
i removed the "max-slots" entries from
my hostsfile and it worked.
It seems that there still are some minor bugs in the rankfile mechanism.
See the post
http://www.open-mpi.org/community/lists/users/2009/08/10384.php
Jod
-mpi.org/faq/?category=running#mpirun-scheduling
but i couldn't find any explanation. (furthermore, in the FAQ it says
"max-slots"
in one place, but "max_slots" in the other one)
Thank You
Jody
On Mon, Aug 17, 2009 at 3:29 PM, Lenny
Verkhovsky<lenny.verkhov...@gmail.co
osts (i.e. plankton instead of plankton.uzh.ch) in
the host file...
However, I encountered a new problem:
if the rankfile lists all the entries which occur in the host file
there is an error message.
In the following example, the hostfile is
[jody@plankton neander]$ cat th_02
nano_00.uzh.ch slots=2 ma
Hi Lenny
Thanks - using the full names makes it work!
Is there a reason why the rankfile option treats
host names differently than the hostfile option?
Thanks
Jody
On Mon, Aug 17, 2009 at 11:20 AM, Lenny
Verkhovsky<lenny.verkhov...@gmail.com> wrote:
> Hi
> This message mean
Hi
When i use a rankfile, i get an error message which i don't understand:
[jody@plankton tests]$ mpirun -np 3 -rf rankfile -hostfile testhosts ./HelloMPI
--
Rankfile claimed host plankton that was not allocated
, but otherwise all
you are missing is the cute tachometer display.
Cheers, Jody
Cheers,
Alan
On Fri, Aug 14, 2009 at 17:20, Warner Yuen <wy...@apple.com> wrote:
Hi Alan,
Xgrid support for Open MPI is currently broken in the latest version
of Open MPI. See the ticket below. Howe
to an sshd deciding it was a security breach and killing all the
processes).
Anyways, all seems to be working so far. Sorry that my poor choice in
user management caused so many mysteries. Thanks for everyone's help.
Cheers, Jody
--
Jody Klymak
http://web.uvic.ca/~jklymak/
http://www.open-mpi.org/faq/?category=tcp
for more information). The program example/connectivity_c.c is also
a useful minimal program for testing communication on the cluster.
Thanks again for everyone's help, particularly Ralph, Jeff and Gus.
Cheers, Jody
On Aug 12, 2009, at 12:46 PM, Jody Klymak wrote:
So I think ranks 0 and 2 are on xserve02 and rank 1 is on xserve01,
Should read xserve03,
--
Jody Klymak
http://web.uvic.ca/~jklymak/
ignal 0).
--
mpirun: clean termination accomplished
Thanks, Jody
The port numbers are fine and can be different or the same - it is
totally random. The procs exchange their respective port info during
wireup.
On Wed, Aug 12, 2009 at 1
Hi Ralph,
That gives me something more to work with...
On Aug 12, 2009, at 9:44 AM, Ralph Castain wrote:
I believe TCP works fine, Jody, as it is used on Macs fairly widely.
I suspect this is something funny about your installation.
One thing I have found is that you can get this error
X users are using non-tcpip communication, and
the tcp stuff just doesn't work in 1.3.3.
Thanks, Jody
--
Jody Klymak
http://web.uvic.ca/~jklymak/