Maybe you meant to search for OpenMP instead of Open-MPI.
You can achieve something close to what you want by using OpenMP for
on-node parallelism and MPI for inter-node communication.
-Brian
On Mon, Apr 16, 2012 at 11:02 AM, George Bosilca wrote:
> No, currently there is no way in MPI (and subsequently in Open MPI) to achieve this. [...]
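The hybrid approach Brian suggests can be sketched roughly as below. This is an illustrative fragment, not code from this thread; it assumes an MPI library providing at least MPI_THREAD_FUNNELED support:

```c
/* Hybrid sketch: OpenMP threads within a node, MPI between nodes.
 * Compile with something like: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local_sum = 0.0;

    /* On-node parallelism via OpenMP. */
    #pragma omp parallel reduction(+:local_sum)
    {
        local_sum += omp_get_thread_num() + 1;
    }

    /* Inter-node communication via MPI. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %g\n", global_sum);

    MPI_Finalize();
    return 0;
}
```

Run with, e.g., `mpirun -n 2 ./hybrid`, setting OMP_NUM_THREADS to control the per-rank thread count.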
Hi Toufik,
That might explain something. Open MPI detects that you have CCP
installed on your system, but that installation doesn't actually work.
Could you please check whether CCP has been removed completely? Run the
"set" command to make sure there are no CCP_* environment variables left.
That should solve the problem.
Hi Jeff,
They are definitions for enabling dllexport/import declarations on Windows, and
they have existed since the initial version for Cygwin. Normally these definitions
are hidden by the mpicc wrapper compiler, but on Windows, when a user tries to
compile a project in Visual Studio, they have to be ad
No, currently there is no way in MPI (and subsequently in Open MPI) to achieve
this. However, in the next version of the MPI standard there will be a function
allowing processes to share a memory segment
(https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/284).
If you like living on the bleedi
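For reference, the proposal in ticket 284 later landed in MPI-3 as MPI_Win_allocate_shared. A minimal sketch of how it could serve this use-case (one read-only data block mapped by all ranks on a node) might look like the following; error handling is omitted and the size is illustrative:

```c
/* Sketch: one copy of a data block shared by all ranks on a node
 * via MPI-3 shared-memory windows (MPI_Win_allocate_shared). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Split COMM_WORLD into per-node (shared-memory) communicators. */
    MPI_Comm node;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);

    int nrank;
    MPI_Comm_rank(node, &nrank);

    const MPI_Aint n = 1024;   /* illustrative element count */
    MPI_Aint bytes = (nrank == 0) ? n * sizeof(double) : 0;

    double *base;
    MPI_Win win;
    /* Rank 0 on each node allocates the block; others allocate 0 bytes. */
    MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                            node, &base, &win);

    /* Every rank obtains a pointer to rank 0's segment. */
    MPI_Aint qsize; int qdisp;
    MPI_Win_shared_query(win, 0, &qsize, &qdisp, &base);

    if (nrank == 0)            /* generate the data exactly once */
        for (MPI_Aint i = 0; i < n; i++)
            base[i] = (double)i;

    MPI_Barrier(node);         /* make the data visible to all ranks */
    printf("rank %d sees base[10] = %g\n", nrank, base[10]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```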
On Apr 16, 2012, at 1:54 PM, Shiqing Fan wrote:
> They are definitions for enabling dllexport/import declarations on Windows, and
> they have existed since the initial version for Cygwin. Normally these definitions
> are hidden by the mpicc wrapper compiler, but on Windows, when a user tries to
> compile a project in Visual Studio [...]
Hi,
Sorry about the lag. I'll take a closer look at this ASAP.
Appreciate your patience,
Sam
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of
Ralph Castain [r...@open-mpi.org]
Sent: Monday, April 16, 2012 8:52 AM
To: Seyyed Mohtadin Ha
Hi Jody
I don't believe we have exposed our shared memory system for general use - it's
pretty deeply buried in the messaging system. We do have a branch where some of
us are playing with an ORTE-level shared memory system for precisely this kind
of use-case, but it isn't ready yet.
On Apr 16
No earthly idea. As I said, I'm afraid Sam is pretty much unavailable for the
next two weeks, so we probably don't have much hope of fixing it.
I see in your original note that you tried the 1.5.5 beta rc and got the same
results, so I assume this must be something in your system config that is
Hi
In my application I have to generate a large block of data (several
gigs) which subsequently has to be accessed by all processes (read
only).
Because of its size, it would take quite some time to serialize and
send the data to the different processes. Furthermore, I risk
running out of memory i
Shiqing --
What are these defines? Shouldn't they be invisible when compiling MPI
applications?
On Apr 9, 2012, at 4:13 PM, Shiqing Fan wrote:
> Hi Greg,
>
> Glad to hear that it works for you.
>
> And yes, these definitions are necessary for compiling any MPI application on
> Windows.
>
Hi Jayesh,
I am working on 32-bit Windows 7, and yes, I had the HPC Pack installed but I
removed it before installing Open MPI.
Best regards,
Toufik
List-Post: users@lists.open-mpi.org
Date: Mon, 16 Apr 2012 10:47:43 +0200
From: f...@hlrs.de
To: us...@open-mpi.org
CC: h_touf...@hotmail.fr
Subject:
I recompiled everything from scratch with GCC 4.4.5 and 4.7 using OMPI
1.4.5 tarball.
I did some tests and it does not seem that I can make it work; I tried
these:
btl_sm_num_fifos 4
btl_sm_free_list_num 1000
btl_sm_free_list_max 100
mpool_sm_min_size 15
mpool_sm_max_size 75
Hi Toufik,
Do you have the HPC Pack or CCP installed on your Windows 7 machine? It seems
that Open MPI is trying to use CCP to allocate resources. Is your Windows 7
64-bit or 32-bit?
Regards,
Shiqing
On 2012-04-10 7:43 PM, toufik hadjazi wrote:
Hi,
even when I try to run ompi_clean.exe or any ot
Hi Bradi,
Yes, as you are on an XP machine, I/O forwarding is not working for
Open MPI, so you won't see the remote output in the local command
window. The only way is to redirect the output into a file, for example:
mpirun -n 2 app.exe > output.txt . This will generate the output file on
I did try with both MaxSessions and MaxStartups set to 200; unfortunately,
it did not help - I still got the same errors as before.
> Date: Sat, 14 Apr 2012 12:58:49 -0400
> From: Tim Miller
> Subject: Re: [OMPI users] OpenMPI fails to run with -np larger than 10
> To: Open MPI Users