Re: [OMPI users] MPI_ERR_TRUNCATE: On Broadcast

2012-11-10 Thread Lim Jiew Meng
Thanks, it turns out, this was caused by an error earlier in the code,
resolved on StackOverflow
http://stackoverflow.com/questions/13290608/mpi-err-truncate-on-broadcast


On Fri, Nov 9, 2012 at 9:20 PM, Jeff Squyres  wrote:

> Offhand, your code looks fine.
>
> Can you send a small, self-contained example?
>
>
> On Nov 8, 2012, at 9:42 AM, Lim Jiew Meng wrote:
>
> > I have an int I intend to broadcast from root (rank == FIELD, with FIELD = 0):
> >
> >     int winner;
> >
> >     if (rank == FIELD) {
> >         winner = something;
> >     }
> >
> >     MPI_Barrier(MPI_COMM_WORLD);
> >     MPI_Bcast(&winner, 1, MPI_INT, FIELD, MPI_COMM_WORLD);
> >     MPI_Barrier(MPI_COMM_WORLD);
> >
> >     if (rank != FIELD) {
> >         cout << rank << " informed that winner is " << winner << endl;
> >     }
> > But it appears I get:
> >
> >     [JM:6892] *** An error occurred in MPI_Bcast
> >     [JM:6892] *** on communicator MPI_COMM_WORLD
> >     [JM:6892] *** MPI_ERR_TRUNCATE: message truncated
> >     [JM:6892] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
> > I found that the error goes away if I increase the count in the Bcast:
> >
> >     MPI_Bcast(&winner, NUMPROCS, MPI_INT, FIELD, MPI_COMM_WORLD);
> >
> > where NUMPROCS is the number of running processes (actually it seems I
> > just need it to be 2). Then it runs, but gives unexpected output ...
> >
> > 1 informed that winner is 103
> > 2 informed that winner is 103
> > 3 informed that winner is 103
> > 5 informed that winner is 103
> > 4 informed that winner is 103
> > When I cout the winner, it should be -1, not 103.
> >
> > What's wrong? In a simpler test, it appears to work:
> >
> >
> >     MPI_Init(NULL, NULL);
> >     MPI_Comm_size(MPI_COMM_WORLD, &numProcs);
> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >
> >     if (rank == 0) {
> >         srand(time(NULL));
> >         tmp = (rand() % 100) + 1;
> >         cout << "generated " << tmp << endl;
> >     }
> >
> >     MPI_Barrier(MPI_COMM_WORLD);
> >     MPI_Bcast(&tmp, 1, MPI_INT, 0, MPI_COMM_WORLD);
> >     MPI_Barrier(MPI_COMM_WORLD);
> >
> >     if (rank != 0) {
> >         cout << rank << " received " << tmp << endl;
> >     }
> >
> >
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
>


[OMPI users] Fwd: [Mpi-forum] New MPI-3.0 standard in hardcover - the green book

2012-11-10 Thread Jeff Squyres
FYI.  I already ordered one.  :-)

Begin forwarded message:

> From: Rolf Rabenseifner 
> Subject: [Mpi-forum] New MPI-3.0 standard in hardcover - the green book
> Date: November 10, 2012 12:39:31 PM EST
> To: Main MPI Forum mailing list 
> Reply-To: Main MPI Forum mailing list 
> 
> -
> Dear MPI Forum member, 
> If you want a copy, please come as soon as possible to our SC12 booth,
> HLRS booth 1036. I have only a limited number of books with me.
> Best regards
> Rolf
> -
> 
> Dear MPI user,
> 
> The new official MPI-3.0 standard (Sep. 21, 2012) in hardcover 
> can now be shipped anywhere in the world.
> 
> As a service (at cost) to users of the Message Passing Interface, 
> HLRS has printed the new Standard, Version 3.0 (852 pages),
> in hardcover. The price is only 19.50 Euro or 25 US-$.
> 
> You can find a picture of the book and the text of the standard at
> http://www.mpi-forum.org/docs/docs.html
> Selling & shipping of the book is done through
> 
>https://fs.hlrs.de/projects/par/mpi/mpi30/
> 
> 
> MPI-3.0 implementations will be available soon. 
> See Richard Graham "The MPI-3.0 Standard", slides 53-54, 
> presented at the EuroMPI, Sep 24-26, 2012 in Vienna:
> http://www.par.univie.ac.at/conference/eurompi2012/docs/s7t2.pdf
> 
> -
> See also at SC12, Salt Lake City, HLRS-booth 1036, Nov. 12-15 and 
> - MPICH BOF,   Tuesday   Nov. 13, 12:15-1:15pm, Room 155-B
> - OpenMPI BOF, Wednesday Nov. 14, 12:15-1:15pm, Room 155-B
> - MPI-3.0 BOF, Thursday  Nov. 15, 12:15-1:15pm, Room 355-A
> A limited number of the books will be available directly there. 
> -
> 
> 
> Details on MPI-3.0: 
> --- 
> MPI-3.0 is a new version of the MPI standard that better serves the
> needs of the parallel computing community for better platform and
> application support.
> 
> Major improvements are:
> - Improvements to the one-sided communication:
>-- New window allocation methods
>-- New RMA interfaces
>-- Special support for clusters of shared memory nodes
> - Nonblocking collectives
> - Scalable sparse collectives (neighbor communication in topologies)
> - Thread-safe message probe
> - New MPI Fortran bindings that are, for the first time, consistent
>   with the Fortran standard: the new mpi_f08 module
> - Enhancements for the existing Fortran mpi module
> - Non-collective communicator creation
> - Support for large counts (larger than 32 bit values)
> - Enhancements to MPI_Init, etc.
> - Additional new tools interface
> - The MPIR process acquisition interface (an extra document).
> Major removals are:
> - Some MPI-1.1 functionality that was deprecated since MPI-2.0.
> - For C++ programming, the C++ bindings were removed in favor 
>   of the C bindings. For this, several MPI-2.2 errata items 
>   were defined.
> 
> Best regards 
> Rolf Rabenseifner
> 
> PS: Picture of the book:
>http://www.mpi-forum.org/docs/images/mpi-report-3.0-2012-09-21-as-1book.jpg
> 
> -
> Dr. Rolf Rabenseifner .. . . . . . . . . . email rabenseif...@hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart .. . . . . . . . . fax : ++49(0)711/685-65832
> Head of Dpmt Parallel Computing .. .. www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany
> -
> 
> -- 
> Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseif...@hlrs.de
> High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
> University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832
> Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
> Nobelstr. 19, D-70550 Stuttgart, Germany . (Office: Allmandring 30)
> ___
> mpi-forum mailing list
> mpi-fo...@lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-forum


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/