For any archive readers or otherwise interested parties:

Apparently Ubuntu 11.10 onwards includes Open MPI 1.4.3 with an ARM patch
set different from mine; that patch set lacks a functional
opal_sys_timer_get_cycles() implementation.
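
For anyone hitting the same symptom: the fix boils down to giving that
function a usable body. A rough sketch of the idea follows (this is an
illustration, not the actual Ubuntu or upstream patch; ARMv7 does not
expose its cycle counter to userspace by default, so a gettimeofday()
stand-in is used, which is fine for callers that only need an
increasing value):

    #include <stdint.h>
    #include <sys/time.h>

    typedef uint64_t opal_timer_t;  /* normally defined in opal's headers */

    /* Fallback "cycle" counter: report elapsed microseconds instead of
     * real CPU cycles, so callers at least see an increasing value. */
    static inline opal_timer_t opal_sys_timer_get_cycles(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return ((opal_timer_t) tv.tv_sec) * 1000000 +
               (opal_timer_t) tv.tv_usec;
    }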

There is an open bug being tracked through:
https://bugs.launchpad.net/ubuntu/+source/openmpi/+bug/949044
This may or may not be the cause of Juan's issue, but it prevented
helloworld from running to completion.
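
For completeness, the helloworld mentioned above was nothing more
elaborate than the usual init/print/finalize test; a minimal MPI
program along these lines (a sketch, not necessarily the exact file I
used) is enough to show the hang:

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal MPI hello world: exercises the startup and teardown
     * path that never completes when the timer is non-functional. */
    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }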

/
        Leif

> -----Original Message-----
> From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On
> Behalf Of Jeffrey Squyres
> Sent: 22 March 2012 18:20
> To: Open MPI Developers
> Subject: Re: [OMPI devel] MPI_Init_thread problem on ubuntu ARM (open-
> mpi 1.4.3)
> 
> We did not support ARM until Open MPI 1.5.x.
> 
> On Mar 21, 2012, at 7:07 AM, Juan Solano wrote:
> 
> >
> > Hello,
> >
> > I have a problem using Open MPI on my Linux system (pandaboard
> > running Ubuntu precise). A call to MPI_Init_thread with the
> > following parameters hangs:
> >
> >  MPI_Init_thread(0, 0, MPI_THREAD_MULTIPLE, &provided);
> >
> > It seems that we are stuck in this loop in function
> > opal_condition_wait():
> >
> > while (c->c_signaled == 0) {
> >    opal_progress();
> >
> >
> > this is the call stack:
> >
> > #0  opal_condition_wait (c=0x42528, m=0x42500) at
> > ../../../../../../opal/threads/condition.h:76
> > #1  0xb6d23124 in orte_rml_oob_send (peer=0xb6e40ae0, iov=0xbeffefa4,
> > count=1, tag=1, flags=16)
> >    at ../../../../../../orte/mca/rml/oob/rml_oob_send.c:153
> > #2  0xb6d2351a in orte_rml_oob_send_buffer (peer=0xb6e40ae0,
> > buffer=0xbeffefdc, tag=1, flags=0)
> >    at ../../../../../../orte/mca/rml/oob/rml_oob_send.c:269
> > #3  0xb6e2dca6 in orte_routed_base_register_sync (setup=true) at
> > ../../../../../orte/mca/routed/base/routed_base_register_sync.c:91
> > #4  0xb6d46274 in init_routes (job=3667329025, ndat=0x0) at
> > ../../../../../../orte/mca/routed/binomial/routed_binomial.c:890
> > #5  0xb6e1a088 in orte_ess_base_app_setup () at
> > ../../../../../orte/mca/ess/base/ess_base_std_app.c:150
> > #6  0xb6d2e630 in rte_init (flags=0 '\000') at
> > ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c:276
> > #7  0xb6e01404 in orte_init (flags=0 '\000') at
> > ../../../orte/runtime/orte_init.c:131
> > #8  0xb6f552dc in ompi_mpi_init (argc=0, argv=0x0, requested=0,
> > provided=0xbefff67c) at ../../../ompi/runtime/ompi_mpi_init.c:344
> > #9  0xb6f7c6f2 in PMPI_Init_thread (argc=0x0, argv=0x0, required=0,
> > provided=0xbefff67c) at pinit_thread.c:84
> > #10 0x00008572 in main () at test_lib.c:8
> >
> >
> > In function opal_condition_wait(), opal_using_threads() returns
> > false. Shouldn't this return true in this case, since we are
> > calling the initialization function with MPI_THREAD_MULTIPLE?
> >
> > The global opal_uses_threads is set by calling
> > opal_set_using_threads() from MPI_Init_thread(); however, this
> > happens further down in that function, and we never reach the
> > point at which it is set.
> >
> > Thanks,
> > Juan.
> > _______________________________________________
> > devel mailing list
> > de...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/devel
> 
> 
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> 
> _______________________________________________
> devel mailing list
> de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/devel



