>>>> Probably should - looks like this may take more thought and probably
>>>> will be handled in discussions next week
>>>>
>>>>> On Feb 17, 2016, at 11:26 AM, Howard Pritchard wrote:
>>>>>
>>>>> Hi Folks,
So this still seems to be broken:
mca_btl_openib.so: undefined symbol: opal_memory_linux_malloc_set_alignment
I built with "--with-memory-manager=none".
Regards
--Nysal
On Tue, Feb 16, 2016 at 10:19 AM, Ralph Castain wrote:
> It is very easy to reproduce - configure with:
> enable_mem_debug=no
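For what it's worth, here is a minimal sketch of how a caller could treat that
hook as optional, assuming a dlsym-based lookup and an assumed prototype (this
is my own illustration, not the actual OMPI fix), so mca_btl_openib still loads
when the memory/linux component isn't built:

#define _GNU_SOURCE            /* for RTLD_DEFAULT on glibc */
#include <dlfcn.h>
#include <stddef.h>

/* assumed prototype for the optional hook; the real one may differ */
typedef void (*malloc_set_alignment_fn)(int use_memalign, size_t threshold);

static void maybe_set_malloc_alignment(void)
{
    /* look the symbol up among whatever is already loaded in the process
       instead of creating a hard link-time dependency on it */
    malloc_set_alignment_fn fn = (malloc_set_alignment_fn)
        dlsym(RTLD_DEFAULT, "opal_memory_linux_malloc_set_alignment");

    if (NULL != fn) {
        fn(1, 4096);           /* hypothetical arguments */
    }
    /* else: no memory manager built in, skip silently */
}

(Compile with -ldl; declaring the symbol weak would be another way to get a
similar effect.)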
In listen_thread():
194     while (pmix_server_globals.listen_thread_active) {
195         FD_ZERO(&readfds);
196         FD_SET(pmix_server_globals.listen_socket, &readfds);
197         max = pmix_server_globals.listen_socket;
Is it possible that pmix_server_globals.listen_thread_active can be false
[...] to fix it right away.
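To make the concern concrete, here is a minimal sketch of a listener loop where
the shutdown flag is read atomically and select() gets a timeout, so the thread
notices the flag going false even if no connection ever arrives (my own
illustration, not the pmix server code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/select.h>

static atomic_bool listen_thread_active = true;
static int listen_socket;                 /* assumed to be set up elsewhere */

static void *listen_thread(void *arg)
{
    (void)arg;
    while (atomic_load(&listen_thread_active)) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_socket, &readfds);
        int max = listen_socket;

        /* wake up once a second to re-check the shutdown flag */
        struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
        int rc = select(max + 1, &readfds, NULL, NULL, &tv);
        if (rc < 0) {
            break;                        /* select() error: bail out */
        }
        if (rc > 0 && FD_ISSET(listen_socket, &readfds)) {
            /* accept() and hand off the new connection here */
        }
    }
    return NULL;
}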
>
>
> > On Oct 6, 2015, at 11:17 AM, Nysal Jan K A wrote:
> >
> > In v1.8 there is an RTE barrier in finalize. OMPI_LAZY_WAIT_FOR_COMPLETION
> > waits for the barrier to complete. Internally opal_progress() is invoked.
> > In the master branch we call PMIX fence instead.
In v1.8 there is an RTE barrier in finalize. OMPI_LAZY_WAIT_FOR_COMPLETION
waits for the barrier to complete. Internally opal_progress() is invoked.
In the master branch we call PMIX fence instead. PMIX_WAIT_FOR_COMPLETION
seems to only call usleep. How will OMPI progress outstanding operations?
Regards
--Nysal
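As an aside, the difference being asked about boils down to something like the
following sketch - my own paraphrase of the two wait styles, not the actual
OMPI/PMIx macro definitions:

#include <unistd.h>

extern int opal_progress(void);          /* OMPI's progress engine entry point */

/* v1.8-style lazy wait: keeps driving progress while it spins */
#define LAZY_WAIT_FOR_COMPLETION(flag)   \
    do {                                 \
        while (flag) {                   \
            opal_progress();             \
        }                                \
    } while (0)

/* usleep-based wait: nothing in this loop progresses outstanding
   operations, which is the concern raised above */
#define SLEEP_WAIT_FOR_COMPLETION(flag)  \
    do {                                 \
        while (flag) {                   \
            usleep(100);                 \
        }                                \
    } while (0)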
Opened PR #260. It would be good to have that included in 1.8.5.
Regards
--Nysal
On Fri, Apr 24, 2015 at 10:22 PM, Ralph Castain wrote:
> Any last minute issues people need to report? Otherwise, this baby is
> going to ship
>
> Paul: I will include your README suggestions as they relate to 1.8.5.
>
Yeah, I remember this one. It's a bug in that specific version of the
compiler. I had reported it to the compiler team a couple of years back.
Quoting from the email I sent them:
The "stw r0,0(r31)" probably overwrites the previous stack pointer?
static inline int opal_atomic_cmpset_32(volatile int32_t *addr,
                                        int32_t oldval, int32_t newval)
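For illustration only (this is neither the OMPI implementation nor the reported
fix), an equivalent 32-bit compare-and-swap can be written with the compiler
builtin, which avoids hand-written inline assembly altogether:

#include <stdint.h>

/* returns non-zero if *addr equalled oldval and was replaced by newval */
static inline int my_atomic_cmpset_32(volatile int32_t *addr,
                                      int32_t oldval, int32_t newval)
{
    return __sync_bool_compare_and_swap(addr, oldval, newval);
}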
I opened a github issue to track this -
https://github.com/open-mpi/ompi/issues/383
--Nysal
On Fri, Feb 6, 2015 at 11:36 AM, Nysal Jan K A wrote:
> It seems the ompi_free_list_init() in libnbc_open() failed for some
> reason. That would explain why mca_coll_libnbc_component.active_requests
> is not initialized.
It seems the ompi_free_list_init() in libnbc_open() failed for some reason.
That would explain why mca_coll_libnbc_component.active_requests is not
initialized and hence crash in libnbc_close().
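Separately from the root cause, a defensive guard in the close path would avoid
the crash. A self-contained sketch with stand-in names (this is not the attached
patch, just an illustration of the guard):

#include <stdbool.h>
#include <stdlib.h>

/* hypothetical stand-ins for the component's free list */
typedef struct { void *items; } free_list_t;

static free_list_t active_requests;
static bool active_requests_initialized = false;

static int free_list_init(free_list_t *fl)
{
    fl->items = malloc(64);
    return (NULL != fl->items) ? 0 : -1;
}

static void free_list_destruct(free_list_t *fl)
{
    free(fl->items);
    fl->items = NULL;
}

int libnbc_open(void)
{
    if (0 != free_list_init(&active_requests)) {
        return -1;                      /* init failed: flag stays false */
    }
    active_requests_initialized = true;
    return 0;
}

int libnbc_close(void)
{
    /* only tear down what was actually initialized; destructing an
       uninitialized list is the crash described above */
    if (active_requests_initialized) {
        free_list_destruct(&active_requests);
        active_requests_initialized = false;
    }
    return 0;
}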
This might help, but still doesn't explain why the free list initialization
failed:
diff --git a/ompi/m