Timo,
 
> I fixed a different bug in step-32 related to n_dofs() a few minutes
> ago and now step-32 runs correctly for me.
> 
>> This fixed the compile errors. Now when I run 'make run', I get an MPI
>> error:
>>
>> Fatal error in MPI_Allreduce: Invalid MPI_Op, error stack:
>> MPI_Allreduce(773): MPI_Allreduce(sbuf=0x7fff52f15adf,
>> rbuf=0x7fff52f15ade, count=1, INVALID DATATYPE, op=0x1482dc0,
>> comm=0x84000000) failed
> 
> I don't get this error. We are using a custom MPI_Op in the code and
> that is triggering an error for you. Can you update deal.II and run
> step-32 in debug mode and see if the error is still there? If yes, can
> you tell me what MPI library and what version you are using (or try a
> different recent one)?

That error was generated by MPICH2, version 1.2.1p1.

I rebuilt everything (Trilinos and deal.II) against Open MPI, version
1.4.1. That made the MPI_Allreduce error disappear, but now I get this
instead:


deal.II/examples/step-32$ mpirun.openmpi -np 2 ./step-32
this is step-32. ref=5
**dof_setup ... 
***dof_distribute ... 0.9798 wall, max @0, min=0.9794 @1, avg=0.9796
Number of active cells: 12288 (on 6 levels)
Number of degrees of freedom: 162432 (99840+12672+49920)

***index_sets ... 0.0056 wall, max @1, min=0.0051 @0, avg=0.0054
***make_hanging_nodes_vel ... 0.0310 wall, max @0, min=0.0261 @1, avg=0.0286
***boundary_values_vel ... 0.2411 wall, max @1, min=0.2387 @0, avg=0.2399
***hanging_nodes_and_bv_temperature ... 0.1645 wall, max @0, min=0.1570 @1, avg=0.1607
***setup_stokes_matrix ... (both ranks crashed with the same backtrace;
their interleaved output is deduplicated below)

*** glibc detected *** ./step-32: free(): invalid pointer: 0x0000000002b61470 ***
======= Backtrace: =========
/lib/libc.so.6(+0x775b6)[0x7f06350cc5b6]
/lib/libc.so.6(cfree+0x73)[0x7f06350d2e83]
/trilinos/10.6.0/lib/libepetra.so(_ZN16Epetra_CrsMatrix15OptimizeStorageEv+0x62b)[0x7f0638f78f47]
/trilinos/10.6.0/lib/libepetra.so(_ZN16Epetra_CrsMatrix12FillCompleteERK10Epetra_MapS2_b+0x21f)[0x7f0638f780ed]
/trilinos/10.6.0/lib/libepetra.so(_ZN18Epetra_FECrsMatrix14GlobalAssembleERK10Epetra_MapS2_b+0x29b)[0x7f0638fa0e89]
./step-32(_ZN6dealii16TrilinosWrappers12SparseMatrix8compressEv+0x6f)[0x4ea171]
/deal.II/lib/libdeal_II.g.so.6.4.pre(_ZN6dealii16TrilinosWrappers12SparseMatrix6reinitERKNS0_15SparsityPatternE+0xe9)[0x7f66e7cc348d]
/deal.II/lib/libdeal_II.g.so.6.4.pre(_ZN6dealii16TrilinosWrappers17BlockSparseMatrix6reinitINS0_20BlockSparsityPatternEEEvRKT_+0xe8)[0x7f063ee7866e]
./step-32(_ZN21BoussinesqFlowProblemILi2EE19setup_stokes_matrixERKSt6vectorIN6dealii8IndexSetESaIS3_EE+0x1a0)[0x507906]
./step-32(_ZN21BoussinesqFlowProblemILi2EE10setup_dofsEv+0xf73)[0x4fb02b]
./step-32(_ZN21BoussinesqFlowProblemILi2EE3runEj+0x1b6)[0x4f1ce0]
./step-32(main+0x142)[0x4dda91]
./step-32[0x4dd3a9]

followed by a memory map. I'm rather lost as to what is going on here: a
colleague has been using deal.II (with PETSc) with the same MPICH2
library without any problems, and the only difference I can see is that
I'm using Trilinos instead of PETSc, which seems unrelated to the MPICH2
error.
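In case it helps narrow this down, here is how I am checking which MPI library each piece of the stack actually links against (a sketch — the library paths are the ones from my backtrace, and `mpicc -show` is the MPICH form; Open MPI's wrapper uses `--showme` instead):

```shell
# Which MPI library does the executable load at run time?
ldd ./step-32 | grep -i mpi

# Do the Trilinos and deal.II shared libraries agree with it?
ldd /trilinos/10.6.0/lib/libepetra.so | grep -i mpi
ldd /deal.II/lib/libdeal_II.g.so.6.4.pre | grep -i mpi

# Which compiler wrapper and MPI runtime are on the PATH?
mpicc -show            # MPICH-style wrappers; use 'mpicc --showme' for Open MPI
mpirun.openmpi --version
```

If the three `ldd` listings report different libmpi files, the binary mixes two MPI builds, and freeing memory allocated by the other library's allocator could produce exactly this kind of `free(): invalid pointer` crash.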
Thanks again for the help,

Jennifer
_______________________________________________
dealii mailing list http://poisson.dealii.org/mailman/listinfo/dealii
