Tim wrote:
Thanks Eugene!
My case, after simplification, is to speed up the time-consuming computation in the
loop below by assigning iterations to several nodes in a cluster via MPI. Each
iteration of the loop computes one element of an array. The computation of
each element is independent of others in the array.
Hi Tim
Your OpenMP layout suggests that there are no data dependencies
in your "complicated_computation()" and the operations therein
are local.
I will assume this is true in what I suggest.
In MPI you could use MPI_Scatter to distribute the (initial)
array values before the computational loop,
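A minimal sketch of that suggestion, assuming (as Eugene does) that the per-element work is independent and local. The array size, the stand-in `complicated_computation()` body, and the divisibility of `size` by the process count are all illustrative assumptions, not details from the thread:

```c
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical stand-in for the thread's complicated_computation(). */
static double complicated_computation(double x) { return x * x; }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int size = 1000000;              /* assumed divisible by nprocs */
    int chunk = size / nprocs;
    double *full = NULL;
    double *part = malloc(chunk * sizeof(double));

    if (rank == 0) {                       /* root owns the whole array */
        full = malloc(size * sizeof(double));
        for (int i = 0; i < size; i++) full[i] = (double)i;
    }

    /* Distribute equal chunks of the initial values to every rank. */
    MPI_Scatter(full, chunk, MPI_DOUBLE, part, chunk, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    /* Each rank runs its share of the original loop. */
    for (int i = 0; i < chunk; i++)
        part[i] = complicated_computation(part[i]);

    /* Collect the computed chunks back into the full array on root. */
    MPI_Gather(part, chunk, MPI_DOUBLE, full, chunk, MPI_DOUBLE,
               0, MPI_COMM_WORLD);

    free(part);
    free(full);
    MPI_Finalize();
    return 0;
}
```

Build with `mpicc` and launch with `mpirun -np N ./a.out`; the corresponding MPI_Gather after the loop collects the results, which the quoted message was presumably about to mention.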
On Thu, 2010-01-28 at 17:05 -0800, Tim wrote:
> Also I only need the loop that computes every element of the array to
> be parallelized. Someone said that the parallel part begins with
> MPI_Init and ends with MPI_Finalize, and one can do any serial
> computations before and/or after these calls.
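One point worth clarifying about the claim quoted above: code before MPI_Init and after MPI_Finalize is not run once serially; every process launched by mpirun executes all of main(). A minimal skeleton (the printf is illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Code here is replicated on every process, not run once serially. */
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("hello from rank %d\n", rank);   /* the parallelizable part */

    MPI_Finalize();
    /* Code here also runs on every process. */
    return 0;
}
```

So "serial before/after" really means "work you are willing to have duplicated on each process", or work guarded by `if (rank == 0)`.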
Hi Tim,
Sorry to add something in the same vein as Eugene's reply. I think
this is an excellent resource:
http://ci-tutor.ncsa.illinois.edu/login.php. It's a great, detailed online
course. Before I took proper classes, it helped me a lot!
On Thu, Jan 28, 2010 at 7:05 PM, Tim wrote:
When attempting to launch an application on both local and remote
Windows 7 hosts, mpirun either hangs indefinitely or abends.
The application executes correctly on both machines when launched
on only a single host.
I believe mpirun is using WMI; README.WINDOWS indicates that this is the
case if
Thanks, Eugene.
I admit I do not yet understand MPI well, but I did read
some basic materials about it and understand how some simple problems are
solved with MPI.
But when dealing with an array, as in my case, I am not certain how to apply
MPI to it. Are you saying to use sen
Take a look at some introductory MPI materials to learn how to use MPI
and what it is about. There should be resources online; take a look
around.
The main idea is that you would have many processes, and each process would
have part of the array. Thereafter, if a process needs data or results
Hi,
(1) I am wondering how I can speed up the time-consuming computation in the
loop of my code below using MPI?

int main(int argc, char **argv)
{
    // some operations
    f(size);
    // some operations
    return 0;
}

void f(int size)
Hi,
It looks like there is an issue with TotalView and
Open MPI.
The message queue is just empty, and the output shows:
WARNING: Field mtc_ndims_or_nnodes of type mca_topo_base_comm_1_0_0_t not
found!
WARNING: Field mtc_dims_or_index of type mca_topo_base_comm_1_0_0_t not
found!
WARNING: Field mtc_periods_or_ed
See, it was a simple thing. Thank you for the information. I am trying it
now. I have to recompile and re-install Open MPI for a heterogeneous network.
Now, knowing what to search for, I found that I can set the configuration of
the cluster in a file that mpirun and mpiexec can read.
mpirun --app
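The file referred to here is what Open MPI calls an application context, passed with `--app`. A small sketch; the hostnames and program name are placeholders:

```shell
shell$ cat appfile
# One line per program instance: process count, host, executable.
-np 2 --host nodeA ./my_app
-np 2 --host nodeB ./my_app
shell$ mpirun --app appfile
```

Each line of the appfile takes the same options as a segment of an ordinary mpirun command line, which is convenient when different hosts need different executables.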
On 28-Jan-10, at 12:35, Jeff Squyres (jsquyres) wrote:
> What was BLACS compiled against, LAM or OMPI?
> What is your LD_LIBRARY_PATH set to?
> Are you ensuring to use OMPI's mpirun (vs., for example, LAM's mpirun)?
Yes, everything was OK; I had tried everything I could think of: rpath,
--prefix, ...
Hi Justin,
Unfortunately, for Open MPI on Windows, not all the Fortran compilers
are supported, and the f90 bindings haven't been implemented. But the
f77 bindings are available, and the Windows version of GNU Fortran
compilers should work in that case, e.g. g77, g95.
Regards,
Shiqing
Jus
Hello all,
I am trying to build a 32-bit version of Open MPI on Windows XP 64
using Visual Studio 6 and Compaq Visual Fortran 6.6b. I am using CMake to
configure the build. I specify Visual Studio 6 as my generator for this project,
and I specify where my C (cl.exe) and Fortran (f90.
I am trying to find out if there is any way to create an error-handler
or something else that will trap an error exit from the run-time
library due to a fortran I/O error, or possibly some openmpi calls or
options that will do the same thing.
Let me expand a little. I am working with a very large
Sorry guys -- this one slipped off the radar. You're right that it didn't make
it into v1.4.1.
Short version
-------------
I looked into this yesterday and chatted with some other OMPI developers about
it. We agree; we can move the user options up in the command line creation.
I'll file a t
Also, did you remember to configure with --enable-heterogeneous?
On Jan 28, 2010, at 12:43 AM, jody wrote:
> Hi
> I'm not sure I completely understood.
> Is it the case that an application compiled on the Dell will not work
> on the PS3 and vice versa?
>
> If this is the case, you could try this
What was BLACS compiled against, LAM or OMPI?
What is your LD_LIBRARY_PATH set to?
Are you ensuring to use OMPI's mpirun (vs., for example, LAM's mpirun)?
-jms
Sent from my PDA. No type good.
- Original Message -
From: users-boun...@open-mpi.org
To: us...@open-mpi.org
Sent: Wed Jan 27
Hi
I'm not sure I completely understood.
Is it the case that an application compiled on the Dell will not work
on the PS3 and vice versa?
If this is the case, you could try this:
shell$ mpirun -np 1 --host a app_ps3 : -np 1 --host b app_dell
where app_ps3 is your application compiled on the PS3