Christophe Petit wrote:
I'm studying the parallelized version of a 2D heat equation solver
in order to understand Cartesian topology and the famous
"MPI_CART_SHIFT".
Here's my problem with this part of the code:
---
Hello, I'm using OpenMPI with VTK (Visualization Toolkit) now on Windows Vista,
and here are some problems that occurred during installation.
OpenMPI 1.5: error during CMake, no matter whether MinGW32 or VS2005 is used as the compiler.
OpenMPI 1.4.3:
1. Building with VS2005 is OK, but when I used MinGW v3.81 (I h
Hello. Maybe this question has already been answered, but I can't see it in the list
archive.
I'm running about 60 Xen nodes with about 7-20 virtual machines on
each. I want to gather disk, CPU, memory, and network utilisation from the virtual
machines and get it into a database for later processing.
As I see it, my archit
Just to let you know -- our main Windows developer is out for a little bit.
He'll reply when he returns, but until he does, there's really no one else who
can answer your question. Sorry! :-\
On Oct 22, 2010, at 4:01 AM, 邵思睿 wrote:
> Hello, I'm using OpenMPI with VTK (Visualization Toolkit
Hi,
On 22.10.2010, at 10:58, Vasiliy G Tolstov wrote:
> Hello. Maybe this question has already been answered, but I can't see it in the list
> archive.
>
> I'm running about 60 Xen nodes with about 7-20 virtual machines on
> each. I want to gather disk, CPU, memory, and network utilisation from the virtual
> machines
On Fri, 2010-10-22 at 14:07 +0200, Reuti wrote:
> Hi,
>
> On 22.10.2010, at 10:58, Vasiliy G Tolstov wrote:
>
> > Hello. Maybe this question has already been answered, but I can't see it in the list
> > archive.
> >
> > I'm running about 60 Xen nodes with about 7-20 virtual machines on
> > each. I want to
On Oct 20, 2010, at 9:43 PM, Raymond Muno wrote:
> On 10/20/2010 8:30 PM, Scott Atchley wrote:
>> Are you building OMPI with support for both MX and IB? If not and you only
>> want MX support, try configuring OMPI using --disable-memory-manager (check
>> configure for the exact option).
>>
>> We
Ray,
Looking back at your original message, you say that it works if you use the
Myricom-supplied mpirun from the Myrinet roll. I wonder if this is a mismatch
between libraries on the compute nodes.
What do you get if you use your OMPI's mpirun with:
$ mpirun -n 1 -H ldd $PWD/
I am wondering
Hello,
There was a bug in the use of hostfiles when a username is supplied which
has been fixed in OpenMPI v1.4.2.
I have just installed v1.5 and the bug seems to have come back: only
the first username provided in the machinefile is taken into account.
See mails below for the history.
My c
MPI won't do this - if a node dies, the entire MPI job is terminated.
Take a look at OpenRCM, a subproject of Open MPI:
http://www.open-mpi.org/projects/orcm/
This is designed to do what you describe as we have a similar (open source)
project underway at Cisco. If I were writing your system, I
Well that stinks. I'll take care of it - sorry about that. Guess a patch didn't
come across at some point.
On Oct 22, 2010, at 6:55 AM, Olivier Riff wrote:
> Hello,
>
> There was a bug in the use of hostfiles when a username is supplied which has
> been fixed in OpenMPI v1.4.2.
> I have just
On Fri, 2010-10-22 at 07:36 -0600, Ralph Castain wrote:
> MPI won't do this - if a node dies, the entire MPI job is terminated.
>
>
> Take a look at OpenRCM, a subproject of Open MPI:
>
>
> http://www.open-mpi.org/projects/orcm/
>
>
> This is designed to do what you describe as we have a simi
On 22.10.2010, at 14:09, Vasiliy G Tolstov wrote:
> On Fri, 2010-10-22 at 14:07 +0200, Reuti wrote:
>> Hi,
>>
>> On 22.10.2010, at 10:58, Vasiliy G Tolstov wrote:
>>
>>> Hello. Maybe this question has already been answered, but I can't see it in the list
>>> archive.
>>>
>>> I'm running about 60 Xen nodes
On Fri, 2010-10-22 at 16:04 +0200, Reuti wrote:
> On 22.10.2010, at 14:09, Vasiliy G Tolstov wrote:
>
> > On Fri, 2010-10-22 at 14:07 +0200, Reuti wrote:
> >> Hi,
> >>
> >> On 22.10.2010, at 10:58, Vasiliy G Tolstov wrote:
> >>
> >>> Hello. Maybe this question has already been answered, but I can't see
Hi,
I tried to build Open MPI 1.5 on SunOS x86_64 with the Oracle/Sun
Studio C compiler and gcc-4.2.0 in 32- and 64-bit mode. I couldn't
build the package with Oracle/Sun C 5.9 in 32-bit mode with thread
support.
sunpc4 openmpi-1.5-SunOS.x86_64.32_cc 110 tail -15 log.make.SunOS.x86_64.32_cc
ma
Hi,
I am using Open MPI to transfer data between nodes,
but the received data is not what the sender sends out.
I have tried the C and C++ bindings.
Data sender:
double* sendArray = new double[sendResultVec.size()];
for (int ii = 0; ii < sendResultVec.size(); ii++) {
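For illustration, a hedged sketch of how such a sender could look in full; here
sendResultVec is assumed to be a std::vector<double>, and destRank and tag are
placeholders rather than values from the original code:

#include <mpi.h>
#include <vector>

void sendResults(const std::vector<double> &sendResultVec, int destRank, int tag)
{
    double *sendArray = new double[sendResultVec.size()];
    for (int ii = 0; ii < (int)sendResultVec.size(); ii++) {
        sendArray[ii] = sendResultVec[ii];   // copy into a contiguous buffer
    }
    // Blocking send of the whole buffer; the receiver must post a matching
    // receive with the same count, datatype, and tag.
    MPI_Send(sendArray, (int)sendResultVec.size(), MPI_DOUBLE,
             destRank, tag, MPI_COMM_WORLD);
    delete[] sendArray;
}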
It doesn't look like you have completed the request that came back from Irecv.
You need to TEST or WAIT on requests before they are actually completed (e.g.,
in the case of a receive, the data won't be guaranteed to be in the target
buffer until TEST/WAIT indicates that the request has complete
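A minimal sketch of that pattern on the receiving side, assuming a single receive
of recvCount doubles (recvCount, srcRank, and tag are placeholders):

#include <mpi.h>

void receiveResults(double *recvArray, int recvCount, int srcRank, int tag)
{
    MPI_Request req;
    MPI_Status  status;

    MPI_Irecv(recvArray, recvCount, MPI_DOUBLE, srcRank, tag,
              MPI_COMM_WORLD, &req);

    /* ... other work can overlap with the transfer here ... */

    /* The contents of recvArray are only guaranteed to be valid once the
       request has completed, i.e. after MPI_Wait (or a successful MPI_Test). */
    MPI_Wait(&req, &status);
}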
I am out of the office until 11/01/2010.
I will be out of the office on vacation the last week of Oct. Back Nov 1.
I will not see any email.
Note: This is an automated response to your message "[OMPI users] OPEN MPI
data transfer error" sent on 10/22/10 15:19:05.
This is the only notification
Hi,
I have used mpi_waitall() to do it.
The data can be received but the contents are wrong.
Any help is appreciated.
thanks
> From: jsquy...@cisco.com
> Date: Fri, 22 Oct 2010 15:35:11 -0400
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] OPEN MPI data transfer error
>
> It doesn't look
On Oct 22, 2010, at 5:36 PM, Jack Bryan wrote:
> I have used mpi_waitall() to do it.
>
> The data can be received but the contents are wrong.
Can you send a more accurate code snippet, and/or the code that you are using to
check whether the data is right/wrong?
I ask because I'm a little sus
Did you use the waitall on the sender or the receiver side? I noticed you
didn't have the request variable at the receiver side that is needed in the
waitall.
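For illustration, the kind of request array the receiver needs before it can call
MPI_Waitall (the number of senders, buffer layout, and tag here are assumptions):

#include <mpi.h>
#include <vector>

void receiveFromAll(std::vector<double*> &buffers, int count, int nSenders, int tag)
{
    std::vector<MPI_Request> reqs(nSenders);
    std::vector<MPI_Status>  stats(nSenders);

    for (int i = 0; i < nSenders; i++) {
        // Each MPI_Irecv fills in one request handle; rank i is assumed to be a sender.
        MPI_Irecv(buffers[i], count, MPI_DOUBLE, i, tag, MPI_COMM_WORLD, &reqs[i]);
    }

    // MPI_Waitall needs those same handles; without the request variables there
    // is nothing to wait on, and the data may still be in flight.
    MPI_Waitall(nSenders, reqs.data(), stats.data());
}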
On Fri, Oct 22, 2010 at 2:48 PM, Jeff Squyres wrote:
> On Oct 22, 2010, at 5:36 PM, Jack Bryan wrote:
>
> > I have used mpi_waitall() to
Hi, I am completely new to MPI and am having trouble running a job between
two CPUs.
The same thing happens no matter what MPI job I try to run, but here is a
simple 'hello world' style program I am trying to run.
#include <stdio.h>
#include <mpi.h>
int main(int argc, char **argv)
{
int *buf, i, rank, nints,
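For reference, a complete minimal program along the same lines (a sketch, not
necessarily the poster's exact code), compiled with mpicc and launched with,
e.g., mpirun -np 2 ./hello:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello world from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}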