Naturally, a forgotten attachment.
And to add to that: it was compiled to be used with OpenMPI 1.4.1, but as I
understand it, 1.4.2 is just a bug-fix release of 1.4.1.
--- On Tue, 7/13/10, Robert Walters wrote:
From: Robert Walters
Subject: Re: [OMPI users] OpenMPI Hangs, No Error
To: "Open MPI Users"
List,
I think I forgot to mention earlier that the application I am using is
pre-compiled. It is a finite element software package called LS-DYNA. It is not
open source, and I likely cannot obtain the code it uses for MPP. The version I
am using was specifically compiled by the parent company for OpenMPI 1.4.1.
As Ralph says, the slowdown may not be coming from the kernel, but rather from
waiting for messages. What MPI send/recv calls are you using?
On Tue, Jul 13, 2010 at 11:53 AM, Ralph Castain wrote:
> I'm afraid that having 2 cores on a single machine will always outperform
> having 1 core on each machine if any communication is involved.
As far as I can tell, the problem appears to be somewhere in our communicator
setup. The people knowledgeable in that area are going to look into it later
this week.
I'm creating a ticket to track the problem and will copy you on it.
On Jul 13, 2010, at 6:57 AM, Ralph Castain wrote:
I'm afraid that having 2 cores on a single machine will always outperform
having 1 core on each machine if any communication is involved.
The most likely thing that is happening is that OMPI is polling while waiting
for messages to arrive. You might look closer at your code to try and optimize
it better.
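For code the user does control (LS-DYNA itself is closed source, so this is only a generic sketch with made-up buffer names, not anything from the poster's application), one common way to reduce time spent blocked in a receive is to post it nonblocking and overlap local work before waiting:

```c
/* Generic sketch: overlap computation with communication using a
 * nonblocking receive. Buffer names and sizes are illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[64];
    MPI_Request req;

    if (rank == 1) {
        /* Post the receive early... */
        MPI_Irecv(buf, 64, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
        /* ...do local work here that does not touch buf... */
        MPI_Wait(&req, MPI_STATUS_IGNORE); /* block only when buf is needed */
        printf("rank 1 received, buf[0] = %g\n", buf[0]);
    } else if (rank == 0) {
        for (int i = 0; i < 64; i++) buf[i] = (double)i;
        MPI_Send(buf, 64, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Run with at least two processes, e.g. `mpirun -np 2 ./a.out`. Note that inside MPI_Wait the library may still spin-poll; the win comes from the useful work done between the Irecv and the Wait.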
Hi Eloi:
To select the different bcast algorithms, you need to add an extra mca
parameter that tells the library to use dynamic selection.
--mca coll_tuned_use_dynamic_rules 1
One way to make sure you are typing this in correctly is to use it with
ompi_info. Do the following:
ompi_info -mca
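For reference, the two parameters are typically combined on the command line like this (a sketch assuming an OpenMPI 1.4.x install; `./my_app` is a placeholder executable name):

```shell
# Check that Open MPI accepts the parameter by listing the coll tuned params:
ompi_info -mca coll_tuned_use_dynamic_rules 1 -param coll tuned

# Run with dynamic selection enabled and a specific bcast algorithm:
mpirun -np 4 \
    -mca coll_tuned_use_dynamic_rules 1 \
    -mca coll_tuned_bcast_algorithm 1 \
    ./my_app
```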
OpenMPI,
Following up. The sysadmin opened ports for machine to machine communication
and OpenMPI is running successfully with no errors in connectivity_c, hello_c,
or ring_c. Since then, I have started to implement the MPP finite element
analysis software that we have, and upon running a simple,
Hi,
I've found that "--mca coll_tuned_bcast_algorithm 1" allowed me to switch to
the basic linear algorithm.
Anyway, whatever algorithm is used, the segmentation fault remains.
Could anyone give some advice on ways to diagnose the issue I'm facing?
Regards,
Eloi
On Monday 12 July 2010 10:
It works perfectly! Thanks a lot, guys. You've been really helpful, especially
you, Damien Hocking and Shiqing Fan. This whole complicated process makes me
wonder how complicated the code behind OpenMPI, and MPI in general, is. In
some cases, mailing lists really are a lot more useful than online forums.
On Jul 13, 2010, at 3:36 AM, Grzegorz Maj wrote:
> Bad news..
> I've tried the latest patch with and without the prior one, but it
> hasn't changed anything. I've also tried using the old code but with
> the OMPI_DPM_BASE_MAXJOBIDS constant changed to 80, but it also didn't
> help.
> While looking through the sources of openmpi-1.4.2 I couldn't find any
> call
On 13.07.2010 at 04:50, Bowen Zhou wrote:
> Since each node has its own memory in a distributed memory system,
> there is no such thing as a "global variable" that can be accessed by all
> processes. So you need to use MPI to scatter the input from the rank 0
> process to all the other processes explicitly.
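The advice above can be sketched in plain MPI C (a minimal illustration with a made-up input array and sizes; nothing here is from the poster's actual code):

```c
/* Sketch: rank 0 owns the full input and distributes an equal chunk
 * to every process with MPI_Scatter. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int per_rank = 4;          /* elements each process receives */
    double *sendbuf = NULL;
    if (rank == 0) {                 /* only rank 0 holds the full input */
        sendbuf = malloc((size_t)size * per_rank * sizeof(double));
        for (int i = 0; i < size * per_rank; i++)
            sendbuf[i] = (double)i;
    }

    double chunk[4];                 /* local portion on every rank */
    MPI_Scatter(sendbuf, per_rank, MPI_DOUBLE,
                chunk,   per_rank, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    printf("rank %d got first element %g\n", rank, chunk[0]);

    if (rank == 0) free(sendbuf);
    MPI_Finalize();
    return 0;
}
```

The send buffer argument is only significant at the root; the other ranks may pass NULL, as here.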
Bad news..
I've tried the latest patch with and without the prior one, but it
hasn't changed anything. I've also tried using the old code but with
the OMPI_DPM_BASE_MAXJOBIDS constant changed to 80, but it also didn't
help.
While looking through the sources of openmpi-1.4.2 I couldn't find any
call
Thanks for the patch - it works fine!
Jody
On Mon, Jul 12, 2010 at 11:38 PM, Ralph Castain wrote:
> Just so you don't have to wait for 1.4.3 to be released, here is the patch.
> Ralph
>
> On Jul 12, 2010, at 2:44 AM, jody wrote:
>
>> yes, i'm using 1.4.2
>>
>> Thanks
>> Jody
>>
>> On Mon,