[OMPI users] Confused on simple MPI/OpenMP program

2013-04-03 Thread Ed Blosch
Consider this Fortran program snippet:

   program test
   ! everybody except rank=0 exits.
   call mpi_init(ierr)
   call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)
   if (irank /= 0) then
      call mpi_finalize(ierr)
      stop
   endif
   ! rank 0 tries to set number of OpenMP threads to 4
   call
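
(For readers who want to try the pattern: a minimal compilable sketch follows. The final call is truncated in the archive; the omp_set_num_threads(4) line, the use statements, and the parallel region are my assumptions about what the snippet intends. Build with, e.g., mpif90 -fopenmp test.f90.)

   program test
     use mpi
     use omp_lib
     implicit none
     integer :: ierr, irank

     ! Everybody except rank 0 finalizes and exits.
     call mpi_init(ierr)
     call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)
     if (irank /= 0) then
        call mpi_finalize(ierr)
        stop
     endif

     ! Assumed continuation: rank 0 sets the OpenMP thread count to 4.
     call omp_set_num_threads(4)
     !$omp parallel
     print *, 'hello from thread', omp_get_thread_num()
     !$omp end parallel

     call mpi_finalize(ierr)
   end program test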

Re: [OMPI users] Segmentation fault with HPCC benchmark

2013-04-03 Thread Gus Correa
Hi Reza,

It is hard to guess with little information. Other things you could check:

1) Are you allowed to increase the stack size (say, by the sys admin in limits.conf)? If using a job queue system, does it limit the stack size somehow?
2) If you can compile and run the Open MPI examples
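
(For reference, raising the stack limit system-wide is usually done in /etc/security/limits.conf; the username and values below are only illustrative:)

   # /etc/security/limits.conf -- illustrative entries
   # <domain>  <type>  <item>  <value>
   reza        soft    stack   unlimited
   reza        hard    stack   unlimited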

Re: [OMPI users] memory per core/process

2013-04-03 Thread Ralph Castain
Here is a v1.6 port of what was committed to the trunk. Let me know if/how it works for you. The option you will want to use is:

   mpirun -mca opal_set_max_sys_limits stacksize:unlimited

or whatever number you want to give (see ulimit for the units). Note that you won't see any impact if you run
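
(To make the units concrete, two hedged invocations; ./a.out and the 8192 figure are placeholders, and stacksize follows ulimit's units, i.e. kilobytes:)

   # unlimited stack for every launched process
   mpirun -mca opal_set_max_sys_limits stacksize:unlimited -np 4 ./a.out

   # or an explicit value in ulimit units (kilobytes)
   mpirun -mca opal_set_max_sys_limits stacksize:8192 -np 4 ./a.out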

Re: [OMPI users] Segmentation fault with HPCC benchmark

2013-04-03 Thread Ralph Castain
I agree with Gus - check your stack size. This isn't occurring in OMPI itself, so I suspect it is in the system setup.

On Apr 3, 2013, at 10:17 AM, Reza Bakhshayeshi wrote:
> Thanks for your answers.
>
> @Ralph Castain:
> Do you mean what error I receive?
> It's the

Re: [OMPI users] Segmentation fault with HPCC benchmark

2013-04-03 Thread Reza Bakhshayeshi
Thanks for your answers.

@Ralph Castain:
Do you mean what error I receive? It's the output when I'm running the program:

   *** Process received signal ***
   Signal: Segmentation fault (11)
   Signal code: Address not mapped (1)
   Failing at address: 0x1b7f000
   [ 0]

Re: [OMPI users] FCA collectives disabled by default

2013-04-03 Thread Brock Palen
That would do it. Thanks! Now to make even the normal ones work

Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
bro...@umich.edu
(734) 936-1985

On Apr 3, 2013, at 10:31 AM, Ralph Castain wrote:
> Looking at the source code, it is because those other

Re: [OMPI users] Segmentation fault with HPCC benchmark

2013-04-03 Thread Gus Correa
Hi Reza,

Check the system stacksize first ('limit stacksize' or 'ulimit -s'). If it is small, you can try to increase it before you run the program. Say (tcsh):

   limit stacksize unlimited

or (bash):

   ulimit -s unlimited

I hope this helps,
Gus Correa

On 04/03/2013 10:29 AM, Ralph Castain wrote:
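
(Combining the check with the fix before a launch, reusing the hpcc invocation from the original report; note that with a hostfile the limit must also be raised on the remote instances, e.g. in their shell startup files, since each node's processes inherit that node's limits:)

   # bash: check, raise, then run
   ulimit -s              # show the current stack limit
   ulimit -s unlimited    # raise it for this shell and its children
   mpirun -np 2 --hostfile ./myhosts hpcc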

Re: [OMPI users] FCA collectives disabled by default

2013-04-03 Thread Ralph Castain
Looking at the source code, it is because those other collectives aren't implemented yet :-)

On Apr 2, 2013, at 12:07 PM, Brock Palen wrote:
> We are starting to play with FCA on our Mellanox-based IB fabric.
>
> I noticed from ompi_info that FCA support for a lot of
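
(For those checking their own builds: the coll/fca component's parameters and defaults can be listed with ompi_info. The parameter names below are recalled from the v1.6-era Mellanox FCA component and may differ between releases:)

   # list the FCA collective component's parameters and their defaults
   ompi_info --param coll fca

   # example: enable FCA and raise its priority at run time
   mpirun -mca coll_fca_enable 1 -mca coll_fca_priority 80 -np 2 ./a.out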

Re: [OMPI users] Segmentation fault with HPCC benchmark

2013-04-03 Thread Ralph Castain
Could you perhaps share the stack trace from the segfault? It's impossible to advise you on the problem without seeing it.

On Apr 3, 2013, at 5:28 AM, Reza Bakhshayeshi wrote:
> Hi
> I have installed the HPCC benchmark suite and Open MPI on private cloud
> instances.
>

[OMPI users] Segmentation fault with HPCC benchmark

2013-04-03 Thread Reza Bakhshayeshi
Hi,

I have installed the HPCC benchmark suite and Open MPI on private cloud instances. Unfortunately, I mostly get a segmentation fault error when I run it simultaneously on two or more instances with:

   mpirun -np 2 --hostfile ./myhosts hpcc

Everything is on Ubuntu server 12.04 (updated) and
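
(A hostfile such as ./myhosts typically lists one machine per line, optionally with a slot count; the hostnames below are placeholders:)

   # ./myhosts -- hypothetical contents
   instance1 slots=1
   instance2 slots=1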