On Wed, Mar 19, 2014 at 9:02 AM, Seufzer, William J. (LARC-D307)
<bill.seuf...@nasa.gov> wrote:
> Thanks Dan,
>
> Yes, I ran the job across 4 nodes (32 cores), and my log file showed a 
> randomized list of the integers 0 through 31. With other information from 
> PBS I could see the names of the 4 nodes that were allocated (I believe the 
> 32 processes were not all on one node).

Who knows, but it looks like Trilinos is okay.
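
For reference, here is a minimal sketch (not the actual script) of that kind
of per-rank check: each process prints its rank together with the node name
reported by MPI, so the rank-to-node distribution can be compared directly
with the PBS allocation.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    # Each process reports its rank and the node it is running on; gathering
    # these lines from the log shows how the ranks are spread over the nodes.
    print "rank %d of %d on node %s" % (comm.Get_rank(),
                                        comm.Get_size(),
                                        MPI.Get_processor_name())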

> Prior to this, I inserted lines from the fipy parallel example into my code 
> and got the expected result from both mpi4py and Epetra.
>
> from mpi4py import MPI
> from PyTrilinos import Epetra
>
> m4comm = MPI.COMM_WORLD
> epcomm = Epetra.PyComm()
>
> myRank = m4comm.Get_rank()
>
> mpi4py_info = "mpi4py: processor %d of %d" % (m4comm.Get_rank(),
>                                               m4comm.Get_size())
> trilinos_info = "PyTrilinos: processor %d of %d" % (epcomm.MyPID(),
>                                                     epcomm.NumProc())
> print " :: ".join((mpi4py_info, trilinos_info))
>
> I still have the "from mpi4py import MPI" line in my code and use the 
> myRank == 0 process to print out status as the program runs.
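
That rank-0 pattern looks roughly like the following sketch (the
time-stepping loop and step counter are hypothetical stand-ins for the real
solve loop):

    from mpi4py import MPI

    myRank = MPI.COMM_WORLD.Get_rank()

    for step in range(10):    # hypothetical loop standing in for the solve
        # ... advance the solution here ...
        if myRank == 0:
            # only the rank-0 process writes status, so the log is not
            # duplicated once per process
            print "completed step %d" % step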

Can you send me the script and the geo file? I'll try running it on
multiple nodes and check that it at least works for me, or try to
debug it.

Thanks.

-- 
Daniel Wheeler
_______________________________________________
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy