I think you can check this by looking in config.h.

Reentrancy just means it uses a thread-safe memory manager, i.e. there is
no single global data structure keeping track of memory allocations; each
thread keeps a thread-local one instead.
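
If MPIR follows GMP's convention here (I believe it does, but it is worth
double-checking against your build tree), the temporary allocation method
chosen by configure shows up in config.h as one of the WANT_TMP_* macros:
WANT_TMP_ALLOCA and WANT_TMP_REENTRANT are thread-safe, while
WANT_TMP_NOTREENTRANT is not. A tiny check along these lines, compiled
with -I pointing at the directory containing the generated config.h:

    /* reentrant_check.c -- assumes the WANT_TMP_* macros from the
       generated config.h; adjust if your build names them differently */
    #include <stdio.h>
    #include "config.h"

    int main(void)
    {
    #if defined(WANT_TMP_REENTRANT) || defined(WANT_TMP_ALLOCA)
        puts("temporary memory allocation is thread-safe");
    #else
        puts("temporary memory allocation is NOT thread-safe");
    #endif
        return 0;
    }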

Bill.

On 14 November 2015 at 10:56, <highcalcula...@gmail.com> wrote:

> Ok, yes, I had understood that the private clause produces local copies
> of the subsequent variables for each thread, but indeed I do not need
> that here.
>
> It seems to work now... thanks!
>
> What does re-entrancy correspond to in the case of MPIR? I understand
> the term usually means that a program can be interrupted and resumed
> after another program has been executed - for example, if not all
> variables are cleared and the C program is stopped and started up again?
>
> I did not use MSVC for the build. From the documentation it seems that I
> have built in re-entrant mode - how could I check this?
>
> Thanks!
>
>
> On Thursday, November 12, 2015 at 12:06:20 PM UTC+1, Bill Hart wrote:
>
>> Ah, you are using Windows. I don't really know how to build in reentrant
>> mode using MSVC. Hopefully Brian can help you with that. It might be the
>> default, I don't know.
>>
>>
>> On 12 November 2015 at 11:53, <highcal...@gmail.com> wrote:
>>
>>> Ok; how can I build it in re-entrant mode and check whether I might have
>>> done so already (since the successful build took a long time...)?
>>> Regarding the separation of memory regions: What is wrong with the
>>> following code:
>>>
>>>     int i, N = 10;
>>>     mpf_t a[N], b;
>>>     for (i = 0; i < N; i++) { mpf_init(a[i]); }
>>>     mpf_init(b);
>>>
>>>     #pragma omp parallel private(a, b)
>>>     {
>>>         for (i = 0; i < N; i++) { mpf_set_d(a[i], (double) i); }
>>>         mpf_set_d(b, 1.0);
>>>
>>>         #pragma omp for
>>>         for (i = 0; i < N; i++) {
>>>             mpf_add(a[i], a[i], b);
>>>         }
>>>     }
>>>
>>>
>> I'm not sure what private(a, b) does. Does it make a copy of a and b
>> per thread? I'm not sure that is what you want to do, and I'm not
>> certain it would make any difference anyway, unless it copies the whole
>> of the array a, which it might.
>>
>> The program looks ok to me otherwise.
>>
>> Probably the issue is with not building in reentrant mode. The memory
>> manager in MPIR needs to be reentrant.
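>>
>> If you want per-thread MPIR variables with OpenMP, a more reliable
>> pattern than private() is to declare and initialise them inside the
>> parallel region, so each thread sets up its own. Here is a minimal,
>> untested sketch of what I mean (it assumes the mpir.h header; adjust
>> to your setup). Each iteration of the work-shared loop writes to a
>> distinct a[i], so no two threads ever touch the same variable:
>>
>>     #include <stdio.h>
>>     #include <omp.h>
>>     #include <mpir.h>
>>
>>     int main(void)
>>     {
>>         enum { N = 10 };
>>         int i;
>>         mpf_t a[N];
>>
>>         for (i = 0; i < N; i++)
>>             mpf_init_set_d(a[i], (double) i);
>>
>>         #pragma omp parallel
>>         {
>>             mpf_t b;                  /* one b per thread ...       */
>>             mpf_init_set_d(b, 1.0);   /* ... set up by that thread  */
>>
>>             #pragma omp for
>>             for (i = 0; i < N; i++)
>>                 mpf_add(a[i], a[i], b);   /* distinct a[i] each time */
>>
>>             mpf_clear(b);
>>         }
>>
>>         for (i = 0; i < N; i++) {
>>             gmp_printf("a[%d] = %.10Ff\n", i, a[i]);
>>             mpf_clear(a[i]);
>>         }
>>         return 0;
>>     }
>>
>> Compile with -fopenmp as before and link against MPIR as usual.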
>>
>> Bill.
>>
>>> which again makes the program crash, but only at runtime.
>>> And where is a good introduction to using pthreads?
>>> Thanks a lot.
>>>
>>>
>>> On Thursday, November 12, 2015 at 11:22:46 AM UTC+1, Bill Hart wrote:
>>>>
>>>> Hi,
>>>>
>>>> You need to build MPIR in reentrant mode.
>>>>
>>>> Also, you need to ensure that no two threads can write to the same MPIR
>>>> variable at the same time. It's just like writing any other parallel
>>>> program which uses data structures. There must be a separation of memory
>>>> regions used by the different threads. This is harder to do with OpenMP
>>>> than with pthreads directly.
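>>>>
>>>> With pthreads, for example, each worker can simply own its own MPIR
>>>> variables, so the memory regions the threads use never overlap. A
>>>> rough, untested sketch (assuming a reentrant build and the mpir.h
>>>> header):
>>>>
>>>>     #include <pthread.h>
>>>>     #include <stdio.h>
>>>>     #include <mpir.h>
>>>>
>>>>     #define NTHREADS 2
>>>>
>>>>     /* each worker owns its own mpf_t variables, so the memory
>>>>        regions touched by the threads never overlap */
>>>>     static void *worker(void *arg)
>>>>     {
>>>>         long id = (long) arg;
>>>>         int i;
>>>>         mpf_t sum, one;
>>>>         mpf_init_set_d(sum, 0.0);
>>>>         mpf_init_set_d(one, 1.0);
>>>>         for (i = 0; i < 500000; i++)
>>>>             mpf_add(sum, sum, one);
>>>>         gmp_printf("Thread %ld performed %.1Ff iterations.\n", id, sum);
>>>>         mpf_clear(sum);
>>>>         mpf_clear(one);
>>>>         return NULL;
>>>>     }
>>>>
>>>>     int main(void)
>>>>     {
>>>>         long i;
>>>>         pthread_t t[NTHREADS];
>>>>         for (i = 0; i < NTHREADS; i++)
>>>>             pthread_create(&t[i], NULL, worker, (void *) i);
>>>>         for (i = 0; i < NTHREADS; i++)
>>>>             pthread_join(t[i], NULL);
>>>>         return 0;
>>>>     }
>>>>
>>>> Compile with -pthread and link against MPIR as usual.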
>>>>
>>>> Bill.
>>>>
>>>> On 12 November 2015 at 11:07, <highcal...@gmail.com> wrote:
>>>>
>>>>> Dear Bill, Dear All,
>>>>>
>>>>> after using MPIR successfully for some time, I am now looking for a
>>>>> faster execution of my code and thought of a parallel version. Is
>>>>> there a way to do this?
>>>>>
>>>>> I tried OpenMP by doing the following: in a toy example, I included
>>>>> <omp.h>, compiled with gcc and "-fopenmp", and ran the following
>>>>> non-MPIR code:
>>>>>
>>>>>
>>>>>     int i, nloops, thread_id = 0, N = 1000000;
>>>>>
>>>>>     #pragma omp parallel private(thread_id, nloops)
>>>>>     {
>>>>>         nloops = 0;
>>>>>
>>>>>         #pragma omp for
>>>>>         for (i = 0; i < N; i++) {
>>>>>             nloops++;
>>>>>         }
>>>>>
>>>>>         thread_id = omp_get_thread_num();
>>>>>         printf("Thread %d performed %d iterations of the loop.\n",
>>>>>                thread_id, nloops);
>>>>>     }
>>>>>
>>>>> which resulted in the output:
>>>>>
>>>>>     Thread 3 performed 250000 iterations of the loop.
>>>>>     Thread 0 performed 250000 iterations of the loop.
>>>>>     Thread 2 performed 250000 iterations of the loop.
>>>>>     Thread 1 performed 250000 iterations of the loop.
>>>>>
>>>>> I then tried an MPIR version:
>>>>>
>>>>>     int i, nloops, thread_id = 0, N = 1000000;
>>>>>     mpf_t a, b;
>>>>>     mpf_inits(a, b, 0);
>>>>>
>>>>>     #pragma omp parallel private(thread_id, nloops, a, b)
>>>>>     {
>>>>>         mpf_set_d(a, 0.0);
>>>>>         mpf_set_d(b, 1.0);
>>>>>
>>>>>         #pragma omp for
>>>>>         for (i = 0; i < N; i++) {
>>>>>             mpf_add(a, a, b);
>>>>>         }
>>>>>
>>>>>         thread_id = omp_get_thread_num();
>>>>>         gmp_printf("Thread %d performed %.*Ff iterations of the loop.\n",
>>>>>                    thread_id, 10, a);
>>>>>     }
>>>>>
>>>>>     mpf_clears(a, b, 0);
>>>>>
>>>>> which compiled without errors but crashed at runtime (“program …
>>>>> stopped working during execution”).
>>>>>
>>>>> Is it at all possible to run it in parallel with MPIR – and is there a
>>>>> mistake in the implementation?
>>>>>
>>>>> Tests: It runs through without error if the “mpf_add” inside the
>>>>> “#pragma omp for” loop is removed and b is set before (rather than
>>>>> after) the “#pragma omp parallel” statement, but not if only the
>>>>> “mpf_add” call is removed and b is declared as above.
>>>>>
>>>>> I am using gcc and MinGW on a 64-bit (2 cores) Windows 7 machine.
>>>>>
>>>>>
>>>>> Thanks and best regards!
>>>>>
>>
