In my programming environment, FORTRAN, it is possible to do parallel reads
(using the native read function instead of MPI's parallel read functions),
although you will run into problems if you try to parallel-write to the same
file.
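As a concrete illustration of that first approach (a sketch, not code from the
original post, and shown in C rather than Fortran for brevity): every rank
simply opens the same file read-only with ordinary stdio and reads from it
independently. The file path below is made up.

/* Sketch: every rank opens the same file read-only with plain stdio and
 * reads it independently.  Concurrent reads like this are safe; concurrent
 * writes to one file are not, unless you coordinate them (or use MPI-IO).
 * The file name is hypothetical. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    char line[256];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    FILE *fp = fopen("/shared/input.dat", "r");   /* hypothetical path */
    if (fp != NULL) {
        if (fgets(line, sizeof(line), fp) != NULL)
            printf("rank %d read: %s", rank, line);
        fclose(fp);
    }

    MPI_Finalize();
    return 0;
}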
On Wed, Mar 9, 2011 at 8:45 PM, Jack Bryan wrote:
> Hi,
>
> I have a
Jeff,
Thanks for the reply and you are correct about the error. Here is a
summary of what happened, with an additional question at the end.
I originally installed lam-mpi to run FDS, as suggested in the FDS
manual. Everything works smoothly with lam-mpi, but on the lam-mpi
website it suggests tr
Hi,
I have a file, located in a system folder, that can be accessed by all
parallel processes.
Does Open MPI allow multiple processes to access the same file at the same time?
For example, all processes open the file and load data from it at the same
time.
Any help is really appreciated.
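As a hedged illustration of what "all processes open the file and load data
from it at the same time" could look like with MPI-IO (the file name and chunk
size are assumptions, not from the original question): each rank opens the same
file read-only and reads its own slice with a collective read.

/* Sketch: all ranks open the same file with MPI-IO and each reads its own
 * chunk collectively.  File name and chunk size are assumptions. */
#include <mpi.h>
#include <stdio.h>

#define CHUNK 1024   /* bytes per rank, arbitrary for the example */

int main(int argc, char **argv)
{
    int rank;
    char buf[CHUNK];
    MPI_File fh;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File_open(MPI_COMM_WORLD, "/shared/input.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    /* Every process reads a disjoint CHUNK-byte slice at the same time. */
    MPI_File_read_at_all(fh, (MPI_Offset)rank * CHUNK, buf, CHUNK,
                         MPI_BYTE, &status);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}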
Jeff,
It's funny, because I do not see my problem with C (when using
long long), but only with Fortran and INTEGER8.
I have rewritten the testcase so that it uses MPI_REDUCE_LOCAL,
which unfortunately does not link with openmpi-1.4.3. Apparently
this is a new feature of openmpi-1.5.
Here's the mo
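For reference, a minimal C sketch of what such a test might look like (this is
not the poster's actual testcase; the values are arbitrary, and a Fortran
version would use MPI_INTEGER8 instead of MPI_LONG_LONG_INT). MPI_Reduce_local
is a new MPI-2.2 routine, which is consistent with it linking against Open MPI
1.5 but not 1.4.3.

/* Sketch: MPI_Reduce_local with 64-bit integers in C. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    long long in[4]  = {1, 2, 3, 4};
    long long out[4] = {10, 20, 30, 40};

    MPI_Init(&argc, &argv);

    /* out[i] = in[i] + out[i], computed locally with the MPI_SUM operation. */
    MPI_Reduce_local(in, out, 4, MPI_LONG_LONG_INT, MPI_SUM);

    printf("%lld %lld %lld %lld\n", out[0], out[1], out[2], out[3]);

    MPI_Finalize();
    return 0;
}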
I am experiencing an error with an application called MPIBLAST. I am
trying to understand more about what this error represents in terms of
this application:
mpiblast_writer.cpp Streamliner::CalculateMessageSizes - miscalculate
message sizes
Is this a problem with Open MPI, InfiniBand, or something
FYI, we finally managed to get GPUDirect to work. We didn't have the
GPUDirect patches in our OFED kernel modules (they are only available for
RHEL 5.4 and 5.5), so we had to rebuild them for SLES11. Thanks a lot for
your help.
Now it works... but it seems to hang when the shared buffer size exceeds
1MB. I don't
The MPI_Comm_connect and MPI_Comm_accept calls are collective over their entire
communicators.
So if you pass MPI_COMM_WORLD into MPI_Comm_connect/accept, then *all*
processes in those respective MPI_COMM_WORLD's need to call
MPI_Comm_connect/accept.
For your 2nd question, when you get this to
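To make the collective requirement concrete, here is a hedged C sketch (not
code from this thread) in which every rank of MPI_COMM_WORLD participates in
the accept or connect. How the port name travels from server to client is
assumed to happen out of band (here, simply via argv on the client side).

/* Sketch of the collective accept/connect pattern described above. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, is_server;
    char port[MPI_MAX_PORT_NAME] = "";
    MPI_Comm inter;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    is_server = (argc < 2);          /* assumption: the client passes a port name */

    if (is_server) {
        if (rank == 0) {
            MPI_Open_port(MPI_INFO_NULL, port);
            printf("server port: %s\n", port);
        }
        /* Collective: EVERY rank in MPI_COMM_WORLD must call this. */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
    } else {
        strncpy(port, argv[1], MPI_MAX_PORT_NAME - 1);
        /* Collective: EVERY rank in MPI_COMM_WORLD must call this. */
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
    }

    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}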