Thanks!
On Mar 22, 2012, at 6:12 PM, Jeffrey Squyres wrote:
>> From the context of the code, I'm assuming it's supposed to be MPI_SOURCE.
>> I'll commit shortly.
>
>
> On Mar 22, 2012, at 7:54 PM, Ralph Castain wrote:
>
>> Yo Brian
>>
>> I believe you have an error in this commit:
>>
>> pml_ob1_iprobe.c:113: error: 'ompi_status_public_t' has no member named
>> 'MPI_STATUS'
I was reading the FAQs for the ClamAV anti-virus program (included on
Mac OS X) at http://www.clamav.net/lang/en/faq/faq-upgrade/. At the
end is a note that caught my eye about problem compilers.
ClamAV supports a wide variety of compilers, hardware and operating
systems. Our core compiler
From the context of the code, I'm assuming it's supposed to be MPI_SOURCE.
I'll commit shortly.
On Mar 22, 2012, at 7:54 PM, Ralph Castain wrote:
> Yo Brian
>
> I believe you have an error in this commit:
>
> pml_ob1_iprobe.c:113: error: 'ompi_status_public_t' has no member named
> 'MPI_STATUS'
Yo Brian
I believe you have an error in this commit:
pml_ob1_iprobe.c:113: error: 'ompi_status_public_t' has no member named
'MPI_STATUS'
I checked the definition of that struct, and the error is correct - there is no
such member. What should it be?
Ralph
On Mar 22, 2012, at 4:55 PM, brbar.
On Thu, 22 Mar 2012, Shamis, Pavel wrote:
What: Change coll tuned default to pairwise exchange
Why: The linear algorithm does not scale to any reasonable number of PEs
When: Timeout in 2 days (Fri)
Is there any reason the default should not be changed?
Nathan,
I can see why people think the linear algorithm
>
>> What: Change coll tuned default to pairwise exchange
>>
>> Why: The linear algorithm does not scale to any reasonable number of PEs
>>
>> When: Timeout in 2 days (Fri)
>>
>> Is there any reason the default should not be changed?
>
> Nathan,
>
> I can see why people think the linear algorithm
On Mar 21, 2012, at 12:14 , Nathan Hjelm wrote:
> What: Change coll tuned default to pairwise exchange
>
> Why: The linear algorithm does not scale to any reasonable number of PEs
>
> When: Timeout in 2 days (Fri)
>
> Is there any reason the default should not be changed?
Nathan,
I can see why people think the linear algorithm
We did not support ARM until Open MPI 1.5.x.
On Mar 21, 2012, at 7:07 AM, Juan Solano wrote:
>
> Hello,
>
> I have a problem using Open MPI on my linux system (pandaboard running
> Ubuntu precise). A call to MPI_Init_thread with the following parameters
> hangs:
>
> MPI_Init_thread(0, 0, MPI_THREAD_MULTIPLE,
What OMPI version are you using?
On Mar 21, 2012, at 5:07 AM, Juan Solano wrote:
>
> Hello,
>
> I have a problem using Open MPI on my linux system (pandaboard running
> Ubuntu precise). A call to MPI_Init_thread with the following parameters
> hangs:
>
> MPI_Init_thread(0, 0, MPI_THREAD_MULTIPLE,
Thanks Josh.
george.
On Mar 22, 2012, at 10:09 , Josh Hursey wrote:
> Should be fixed in r26177.
>
> On Thu, Mar 22, 2012 at 7:51 AM, Josh Hursey wrote:
>> Fair enough. Upon further inspection of the request_invoke() handler,
>> you are correct that it is not required here if we do not modify the
>> default value for req_status.MPI_ERROR.
Should be fixed in r26177.
On Thu, Mar 22, 2012 at 7:51 AM, Josh Hursey wrote:
> Fair enough. Upon further inspection of the request_invoke() handler,
> you are correct that it is not required here if we do not modify the
> default value for req_status.MPI_ERROR.
>
> I'll work on a revised patch
Fair enough. Upon further inspection of the request_invoke() handler,
you are correct that it is not required here if we do not modify the
default value for req_status.MPI_ERROR.
I'll work on a revised patch this morning and commit. One that does
not use this field.
Per your comment from your fir
On Mar 21, 2012, at 15:13 , Josh Hursey wrote:
> I see your point about setting MPI_ERR_PENDING on the internal status
> versus the status returned by MPI_Waitall. As I mentioned, the reason
> I choose to do that is to support the ompi_errhandler_request_invoke()
> function. I could not think of