Re: [deal.II] How to set material id with MPI

2019-08-22 Thread Phạm Ngọc Kiên
Hi colleagues, I have a question about parallel::distributed::Triangulation. When two cells share one edge but live in two different MPI processes, how can I choose only one of the two cells as the owner of the common edge? I think I have to set the material id for the cell in the first process, and then tell …
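
A minimal sketch of one possible tie-breaking rule (an assumption, not something settled in the thread): because the neighboring cell is visible on both ranks as a ghost cell, both processes can compare CellIds and agree that the cell with the smaller id sets the material id. In 2D a shared edge is a face, which is what the sketch iterates over; chosen_material_id is a placeholder:

  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned())
      for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
        if (!cell->at_boundary(f) &&
            cell->neighbor(f)->is_ghost() &&
            cell->id() < cell->neighbor(f)->id())
          cell->set_material_id(chosen_material_id); // this rank wins the tie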

Re: [deal.II] Re: Question about constraints

2019-08-22 Thread yuesu jin
Thank you very much, Daniel! I appreciate your help! On Wed, Aug 21, 2019 at 11:40 PM Daniel Arndt wrote: > Yuesu, > >> I am learning step-7, which uses the same AffineConstraints class; >> in step-6, we use the member function ::distribute_local_to_global to put >> the local matrix and rhs …
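
For reference, a minimal sketch of the step-6 pattern being discussed (assuming the usual cell_matrix, cell_rhs, and local_dof_indices objects of the assembly loop): distribute_local_to_global copies the local contributions into the global matrix and right-hand side while eliminating constrained degrees of freedom on the fly:

  // copy the cell contributions into the global objects, applying
  // hanging-node and boundary constraints in the same step
  constraints.distribute_local_to_global(cell_matrix,
                                         cell_rhs,
                                         local_dof_indices,
                                         system_matrix,
                                         system_rhs);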

Re: [deal.II] Installation with MPI and PETSc => hdf is insufficient. How to solve that issue? (my own approach didn't work)

2019-08-22 Thread Wolfgang Bangerth
On 8/22/19 11:30 AM, Tobias Neef wrote: > > The make install is still not working. > hdf5 is still insufficient. > It tells me afterwards that it found: HDF5_WITH_MPI = FALSE It is telling you that the HDF5 installation it found wasn't compiled with MPI enabled. But since presumably either deal.II …
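
Two possible ways out, sketched here as assumptions rather than anything the thread prescribes: point the configuration at an HDF5 build compiled with MPI support, or switch HDF5 off entirely (step-55 itself does not use it). The path below is a placeholder:

  # option 1: use an MPI-enabled HDF5 installation
  cmake -DHDF5_DIR=/path/to/mpi-enabled/hdf5 ..
  # option 2: configure deal.II without HDF5
  cmake -DDEAL_II_WITH_HDF5=OFF ..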

[deal.II] Installation with MPI and PETSc => hdf is insufficient. How to solve that issue? (my own approach didn't work)

2019-08-22 Thread Tobias Neef
Dear deal.II team, I want to learn tutorial step-55, and it is very difficult to install deal.II with the additional libraries. I opened deal.II-9.1.1/build/CMakeCache.txt and changed the following lines from OFF to ON: DEAL_II_WITH_MPI:BOOL=ON DEAL_II_WITH_PETSC:BOOL=ON and added the PETSc path …
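
For what it is worth, the usual route is to pass these options to cmake from a clean build directory rather than editing CMakeCache.txt by hand; a sketch with placeholder paths:

  cmake -DDEAL_II_WITH_MPI=ON \
        -DDEAL_II_WITH_PETSC=ON \
        -DPETSC_DIR=/path/to/petsc \
        -DPETSC_ARCH=arch-linux-c-opt \
        ../deal.II-9.1.1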

Re: [deal.II] getting components of Blockvector

2019-08-22 Thread Daniel Arndt
Gabriel, [...] > u_x.reinit(n_dofs_in_one_direction); > FEValues<dim> fe_values(fe, quadrature_formula, update_values); > std::vector<Tensor<1, dim>> velocity_values(n_q_points); > const FEValuesExtractors::Vector velocities(0); > > for (const auto &cell : dof_handler.active_cell_iterators()) …
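
A minimal sketch of how the quoted fragment is typically completed (assuming solution is the global vector): the extractor view hands back the dim-component velocity at every quadrature point of the current cell:

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit(cell);
      // fills velocity_values[q] with a Tensor<1,dim> holding the
      // velocity at quadrature point q
      fe_values[velocities].get_function_values(solution, velocity_values);
    }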

[deal.II] Re: getting components of Blockvector

2019-08-22 Thread Konrad
Hi Gabriel, I think tutorial step-22 (the part about renumbering dofs) discusses what you need. You could do that when setting up your system: std::vector<unsigned int> block_component(dim + 1, 0); block_component[dim] = 1; DoFRenumbering::component_wise(dof_handler, block_component); …
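
A minimal sketch of what the renumbering then buys you, following the step-22 pattern (deal.II 9.1 signatures assumed): once velocity and pressure dofs are numbered block-wise, each component lives in its own block of the BlockVector:

  std::vector<types::global_dof_index> dofs_per_block(2);
  DoFTools::count_dofs_per_block(dof_handler, dofs_per_block, block_component);
  BlockVector<double> solution;
  solution.reinit(dofs_per_block);
  // solution.block(0): all velocity dofs, solution.block(1): all pressure dofs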

Re: [deal.II] Distributing objects on cluster nodes according to distributed triangulation (MPI)

2019-08-22 Thread Daniel Arndt
Konrad, > I have a little conceptual question. Maybe it is dumb but I am new to MPI... > > Say I have a triangulation of type parallel::distributed::Triangulation that gets distributed over N nodes on a cluster. Each active cell should own (or point to) an object that makes up a (modified …

[deal.II] Distributing objects on cluster nodes according to distributed triangulation (MPI)

2019-08-22 Thread Konrad
Hi deal.II community, I have a little conceptual question. Maybe it is dumb but I am new to MPI... Say I have a triangulation of type parallel::distributed::Triangulation that gets distributed over N nodes on a cluster. Each active cell should own (or point to) an object that makes up a …
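
One common pattern for this, sketched purely as an assumption (MyCellObject is a hypothetical type, not something from the thread): let every rank build objects only for the cells it owns, keyed by the globally unique CellId:

  std::map<CellId, MyCellObject> cell_objects; // MyCellObject is hypothetical
  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned())
      cell_objects.emplace(cell->id(), MyCellObject(/* per-cell data */));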

[deal.II] getting components of Blockvector

2019-08-22 Thread Gabriel Peters
Hey everyone, I have the following problem: I have a classical velocity-pressure BlockVector "solution" and an FESystem (initialized via fe(FE_Q<dim>(degree + 1), dim, FE_Q<dim>(degree), 1)). For some testing I want to get the velocity components of the solution in separate vectors, i.e. a vector …