Fix the double-checked locking[1] by declaring the cls_initialized member
volatile.
Greetings
Bert Wesarg
[1]: http://en.wikipedia.org/wiki/Double-checked_locking
---
opal/class/opal_object.h |2 +-
1 files changed, 1 insertion(+), 1 deletion(-)
diff --quilt old/opal/class/opal_object.h
On Tue, Mar 06, 2007 at 10:10:44AM +0100, Bert Wesarg wrote:
> Fix the double-checked locking[1] by declaring the cls_initialized member
> volatile.
>
> Greetings
>
> Bert Wesarg
>
> [1]: http://en.wikipedia.org/wiki/Double-checked_locking
Can you explain how the Java example from this page applies?
Gleb Natapov wrote:
> On Tue, Mar 06, 2007 at 10:10:44AM +0100, Bert Wesarg wrote:
>> Fix the double-checked locking[1] by declaring the cls_initialized member
>> volatile.
>>
>> Greetings
>>
>> Bert Wesarg
>>
>> [1]: http://en.wikipedia.org/wiki/Double-checked_locking
> Can you explain how the Java example from this page applies?
On Tue, Mar 06, 2007 at 10:44:53AM +0100, Bert Wesarg wrote:
>
>
> Gleb Natapov wrote:
> > On Tue, Mar 06, 2007 at 10:10:44AM +0100, Bert Wesarg wrote:
> >> Fix the double-checked locking[1] by declaring the cls_initialized member
> >> volatile.
> >>
> >> Greetings
> >>
> >> Bert Wesarg
> >>
> >>
Hello,
Gleb Natapov wrote:
> If it does this after opal_atomic_lock() (which is an explicit memory
> barrier), then it is broken.
Then gcc 4.1.1 on the amd64 architecture is broken:
The test cases were compiled in the test/asm directory with -O3.
Bert
#define OMPI_BUILDING 0
#include "ompi_config
On Tue, Mar 06, 2007 at 11:24:06AM +0100, Bert Wesarg wrote:
> Hello,
>
> Gleb Natapov wrote:
> > If it does this after opal_atomic_lock() (which is an explicit memory
> > barrier), then it is broken.
> Then gcc 4.1.1 on the amd64 architecture is broken:
And can you repeat the test please, but make "
Gleb Natapov wrote:
> On Tue, Mar 06, 2007 at 11:24:06AM +0100, Bert Wesarg wrote:
>> Hello,
>>
>> Gleb Natapov wrote:
>>> If it does this after opal_atomic_lock() (which is an explicit memory
>>> barrier), then it is broken.
>> Then gcc 4.1.1 on the amd64 architecture is broken:
> And can you repeat
On Tue, Mar 06, 2007 at 12:13:16PM +0100, Bert Wesarg wrote:
> Gleb Natapov wrote:
> > On Tue, Mar 06, 2007 at 11:24:06AM +0100, Bert Wesarg wrote:
> >> Hello,
> >>
> >> Gleb Natapov wrote:
> >>> If it does this after opal_atomic_lock() (which is an explicit memory
> >>> barrier), then it is broken.
>
Hello,
I followed the call to test the rc1, but a simple test program hangs,
non-deterministically. All but one orted have quit, but there is no CPU
usage from orted or mpirun.
The test system is a Xeon cluster with a Myrinet interconnect.
The outputs are split into two mails (100 KB limit).
Bert Wesarg
Part 2 of the output tar
ompi-out2.tar.gz
Description: GNU Zip compressed data
Hi Bert Wesarg,
Thank you for your quick testing of 1.2rc1. 1.2 is expected to fail when
using MPI_THREAD_MULTIPLE. I suspect that a working and tested
MPI_THREAD_MULTIPLE will be one of our goals for 1.3.
On 3/6/07, Bert Wesarg wrote:
Hello,
I followed the call to test the rc1, but a simple
Hi,
this is really sad; version 1.1.2 works quite well with threads (multiple
threads which start MPI requests), only 1 in 10 runs (or even fewer) dies
with a SIGSEGV. And this simple test program works even longer.
Bert
Tim Mattox wrote:
> Hi Bert Wesarg,
> Thank you for your quick testing of 1
Unfortunately, MPI_THREAD_MULTIPLE has never received a lot of
testing in any version of OMPI (including v1.1). Various members
tested the bozo cases (e.g., ensure we don't double lock, etc.), and
periodically tested/debugged simple multi-threaded apps, but not much
more than that.
As su
Bert,
Thanks for this patch. I applied it to the trunk as revision r13939.
Thanks again.
george.
On Mar 5, 2007, at 12:10 PM, Bert Wesarg wrote:
This saves some memory for the constructor and destructor arrays of a
class by counting the constructors and destructors while we are
counti
Bert,
Your previous patch saves some memory while the current one uses some
more. I prefer to keep the array: it's not only outside of any critical
path, it's also not performance related. An array can do the job without
any problems and uses less memory than the linked list.
Thanks,
geo
Hello,
thanks, but I was preparing to submit a superseding patch, which is
more intrusive to the class object system.
Bert
George Bosilca wrote:
> Bert,
>
> Thanks for this patch. I applied it to the trunk as revision r13939.
> Thanks again.
>
>george.
>
> On Mar 5, 2007, at 12:10 PM,
Hello,
this gives the option to use the umem cache feature from libumem[1]
for the opal object system.
It is fully backward compatible with the old system.
The patch consists of several changes:
(1) reorder opal_class_t, in the hope that the vital members fit in the
first cache line
(2) a per c
---
opal/class/opal_free_list.c | 24 ++--
1 files changed, 18 insertions(+), 6 deletions(-)
diff --quilt old/opal/class/opal_free_list.c new/opal/class/opal_free_list.c
--- old/opal/class/opal_free_list.c
+++ new/opal/class/opal_free_list.c
@@ -22,23 +22,25 @@
#include "
This is a self-fix reply
>
>
> ---
>
> opal/class/opal_object.c | 210
> +++
> opal/class/opal_object.h | 201
> 2 files changed,
Bert,
Reordering the members in the opal_class structure can be discussed.
In general, we don't focus on improving anything that's outside the
critical path (such as the class sub-system). Even if all Open MPI
objects are derived from this class, it is definitely not
performance critical,