Re: [OMPI users] Lustre hints via environment variables/runtime parameters

2012-12-03 Thread pascal . deveze
users-boun...@open-mpi.org wrote on 01/12/2012 14:47:09: > From: Eric Chamberland > To: us...@open-mpi.org > Date: 01/12/2012 14:47 > Subject: [OMPI users] Lustre hints via environment variables/runtime parameters > Sent by: users-boun...@open-mpi.org >

Re: [OMPI users] machine exited on signal 11 (Segmentation fault).

2012-04-19 Thread pascal . deveze
users-boun...@open-mpi.org wrote on 19/04/2012 12:42:44: > From: Rohan Deshpande > To: Open MPI Users > Date: 19/04/2012 12:44 > Subject: Re: [OMPI users] machine exited on signal 11 (Segmentation fault). > Sent by: users-boun...@open-mpi.org > >

Re: [OMPI users] machine exited on signal 11 (Segmentation fault).

2012-04-19 Thread pascal . deveze
users-boun...@open-mpi.org wrote on 19/04/2012 10:24:16: > From: Rohan Deshpande > To: Open MPI Users > Date: 19/04/2012 10:24 > Subject: Re: [OMPI users] machine exited on signal 11 (Segmentation fault). > Sent by: users-boun...@open-mpi.org > >

Re: [OMPI users] machine exited on signal 11 (Segmentation fault).

2012-04-19 Thread pascal . deveze
I do not see where you initialize the offset on the "Non-master tasks". This could be the problem. Pascal users-boun...@open-mpi.org wrote on 19/04/2012 09:18:31: > From: Rohan Deshpande > To: Open MPI Users > Date: 19/04/2012 09:18 > Subject: Re:
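For illustration only (the function name and the equal-sized contiguous layout are assumptions, not code from the thread): with a simple block decomposition, the offset that every task, master and non-master alike, must compute before its read or write is just rank times block size. Computing it as a 64-bit value also avoids overflow on large files:

```c
#include <stdint.h>

/* Byte offset at which task `rank` starts its I/O, assuming each task
 * owns `block_bytes` contiguous bytes of the file. Every task must
 * compute this itself before calling e.g. MPI_File_read_at(); a task
 * that leaves its offset uninitialized reads from a garbage position. */
int64_t task_offset(int rank, int64_t block_bytes)
{
    return (int64_t)rank * block_bytes;
}
```

The cast to `int64_t` before the multiplication matters: `rank * block_bytes` done in 32-bit arithmetic would overflow for large files.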

Re: [OMPI users] problem with MPI-IO at filesizes greater then the 32 Bit limit...

2011-09-05 Thread pascal . deveze
Hi, I am not sure I understand what you are doing. users-boun...@open-mpi.org wrote on 03/09/2011 11:05:04: > From: alibeck > To: Open MPI Users > Date: 03/09/2011 11:05 > Subject: [OMPI users] problem with MPI-IO at filesizes greater

Re: [OMPI users] Bindings not detected with slurm (srun)

2011-08-22 Thread pascal . deveze
users-boun...@open-mpi.org wrote on 18/08/2011 14:41:25: > From: Ralph Castain > To: Open MPI Users > Date: 18/08/2011 14:45 > Subject: Re: [OMPI users] Bindings not detected with slurm (srun) > Sent by: users-boun...@open-mpi.org > > Afraid I am

[OMPI users] Bindings not detected with slurm (srun)

2011-08-18 Thread pascal . deveze
Hi all, when Slurm is configured with the parameters TaskPlugin=task/affinity and TaskPluginParam=Cpusets, srun binds the processes by placing them into different cpusets, each containing a single core. E.g. "srun -N 2 -n 4" will create 2 cpusets in each of the two allocated nodes and
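The settings the post refers to, written out as a slurm.conf fragment (all other site-specific options omitted):

```ini
# Bind each task into its own cpuset, one core per task
TaskPlugin=task/affinity
TaskPluginParam=Cpusets
```

With this in place, `srun -N 2 -n 4` gives each of the four tasks its own single-core cpuset, two per node.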

Re: [OMPI users] File seeking with shared filepointer issues

2011-07-06 Thread pascal . deveze
the long (USA) weekend :> ==rob -- Rob Latham Mathematics and Computer Science Division Argonne National Lab, IL USA [attachment "shared_file_ptr_jumpshot.png" deleted by Pascal Deveze/FR/BULL] ___ users mailing list us...@open-mpi.org http://www.open-mpi.org/mailman/listinfo.cgi/users

Re: [OMPI users] File seeking with shared filepointer issues

2011-06-27 Thread pascal . deveze
Christian, Suppose you have N processes calling the first MPI_File_get_position_shared(). Some of them run faster and could execute the call to MPI_File_seek_shared() before all the others have got their file position. (Note that the "collective" primitive is not a synchronization. In
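One way to close that race, sketched below, is to add an explicit barrier between the two shared-pointer calls. This is an illustration of the synchronization idea, not code from the thread, and it needs an MPI environment (mpicc/mpirun) to build and run:

```c
#include <mpi.h>

/* Every rank reads the shared file pointer's position, then all ranks
 * synchronize before any of them is allowed to move the pointer.
 * The barrier is needed because collective MPI-IO calls are not,
 * by themselves, synchronization points. */
void get_then_seek(MPI_File fh, MPI_Offset new_pos)
{
    MPI_Offset pos;
    MPI_File_get_position_shared(fh, &pos);
    MPI_Barrier(MPI_COMM_WORLD);   /* everyone has `pos` before anyone seeks */
    MPI_File_seek_shared(fh, new_pos, MPI_SEEK_SET);
}
```

Without the barrier, a fast rank can reach MPI_File_seek_shared() and move the shared pointer while a slow rank is still inside MPI_File_get_position_shared(), so the slow rank observes the new position instead of the old one.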

Re: [OMPI users] Deadlock with mpi_init_thread + mpi_file_set_view

2011-04-04 Thread Pascal Deveze
Why don't you use the command "mpirun" to run your MPI program? Pascal fa...@email.com wrote: Pascal Deveze wrote: > Could you check that your program closes all MPI-IO files before calling MPI_Finalize? Yes, I checked that. All files should be closed. I've also written

Re: [OMPI users] Deadlock with mpi_init_thread + mpi_file_set_view

2011-04-04 Thread Pascal Deveze
Could you check that your program closes all MPI-IO files before calling MPI_Finalize? fa...@email.com wrote: > Even inside MPICH2, I have given little attention to thread safety and > the MPI-IO routines. In MPICH2, each MPI_File* function grabs the big > critical section lock -- not

Re: [OMPI users] printf and scanf problem of C code compiled with Open MPI

2011-03-30 Thread Pascal Deveze
Maybe this could solve your problem: just add \n to the string you want to display: printf("Please give N= \n"); Of course, this adds a line break, but the string is displayed. This works for me without the fflush(). On the other hand, do you really observe that the time of the scanf() and the time
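The alternative to the trailing \n is to flush the stream explicitly before blocking on input. A minimal helper (the function name is mine, not from the thread; it takes the stream as a parameter so the behavior can be checked on any FILE*):

```c
#include <stdio.h>

/* Print a prompt that has no trailing '\n' and flush it immediately,
 * so the text is guaranteed to appear before the program blocks in
 * scanf(). Without the flush, a buffered stdout may hold the prompt
 * back until the buffer fills or the program exits. */
void show_prompt(FILE *out, const char *msg)
{
    fputs(msg, out);
    fflush(out);
}
```

Typical use: `show_prompt(stdout, "Please give N= "); scanf("%d", &n);`. stdout is only line-buffered when connected to a terminal; when redirected it is fully buffered, which is why the prompt can seem to "disappear" in some runs but not others.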

Re: [OMPI users] mpi-io, fortran, going crazy... (ADENDA)

2010-11-17 Thread Pascal Deveze
wrote: On Wed, 17 Nov 2010, Pascal Deveze wrote: I think the limit for a write (and also for a read) is 2^31-1 (2G-1). In a C program, beyond this value, an integer becomes negative. I suppose this is also true in Fortran. The solution is to make a loop of writes (reads) of no more than
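The chunking loop the reply describes can be sketched in plain C. This uses POSIX write() rather than MPI-IO (the function name and the `max_slice` parameter are mine); the same pattern applies to any I/O call whose length argument is a signed 32-bit int and therefore tops out at 2^31-1 bytes per call:

```c
#include <stddef.h>
#include <unistd.h>

/* Write `count` bytes from `buf` to `fd` in slices of at most
 * `max_slice` bytes each (e.g. max_slice = INT_MAX), so no single
 * call's length overflows a signed 32-bit count argument. */
ssize_t chunked_write(int fd, const char *buf, size_t count, size_t max_slice)
{
    size_t done = 0;
    while (done < count) {
        size_t len = count - done;
        if (len > max_slice)
            len = max_slice;                 /* clamp this slice */
        ssize_t n = write(fd, buf + done, len);
        if (n < 0)
            return -1;                       /* propagate the error */
        done += (size_t)n;                   /* write() may be partial */
    }
    return (ssize_t)done;
}
```

Accumulating `done += n` rather than assuming `n == len` also handles short writes, which POSIX permits even below the 2 GiB limit.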