[deal.II] Are shell elements available in deal.II?

2018-03-05 Thread Yuxiang Wang
Hi, sorry for the spam. I tried to search but did not find an implementation of shell elements in deal.II. Since this is a commonly used element, I'd like to make sure that it's not just me missing it. Could you please help confirm? Best, Shawn

[deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-05 Thread Feimi Yu
Second update (sorry for so many updates): I changed my strategy to use the set(r, c, v) function to set the values, so that I can use the const iterators, and also called compress after every add: for (auto r = Abs_A_matrix->block(0, 0).local_range().first; r < Abs_A_matrix->block(0, 0).local_range().second; …
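A minimal sketch of the strategy described in this update, assuming block matrices of type PETScWrappers::MPI::BlockSparseMatrix with identical sparsity patterns (header names differ slightly between deal.II versions). The thread loops over Abs_A_matrix itself; the sketch below reads from system_matrix and writes into Abs_A_matrix, and calls compress() once at the end instead of after every write:

  #include <deal.II/lac/petsc_block_sparse_matrix.h>
  #include <cmath>

  void copy_abs_block00(const dealii::PETScWrappers::MPI::BlockSparseMatrix &system_matrix,
                        dealii::PETScWrappers::MPI::BlockSparseMatrix       &Abs_A_matrix)
  {
    // Visit only the locally owned rows of block(0,0) on each MPI process.
    const auto range = system_matrix.block(0, 0).local_range();
    for (auto r = range.first; r < range.second; ++r)
      // The PETSc wrappers expose only const iterators, so read through them...
      for (auto it = system_matrix.block(0, 0).begin(r);
           it != system_matrix.block(0, 0).end(r);
           ++it)
        // ...and write the absolute value back with set().
        Abs_A_matrix.block(0, 0).set(it->row(), it->column(), std::abs(it->value()));

    // A single compress() after all set() calls finalizes the matrix.
    Abs_A_matrix.block(0, 0).compress(dealii::VectorOperation::insert);
  }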

[deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-05 Thread Feimi Yu
An update: I tried to use the iteration below to iterate over the local entries. (The reason I use local_range() for only the (0, 0) block but an iterator for the entire block matrix is that I only need block(0, 0), and since the sparse matrix class does not have a non-const iterator, I have to call the local…
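A hedged illustration of the constraint described in this update, assuming the (0, 0) block is a PETScWrappers::MPI::SparseMatrix (header names differ between deal.II versions): the wrapper provides only const iterators, and local_range() is what restricts the loop to the locally owned rows.

  #include <deal.II/lac/petsc_sparse_matrix.h>

  void visit_locally_owned_entries(const dealii::PETScWrappers::MPI::SparseMatrix &A)
  {
    // local_range() returns the half-open interval [first, second) of rows
    // stored on this MPI process.
    const auto range = A.local_range();
    for (auto r = range.first; r < range.second; ++r)
      for (auto it = A.begin(r); it != A.end(r); ++it)
        {
          const auto v = it->value(); // reading through the const iterator works,
          (void)v;                    // but an assignment like it->value() = ... does not
                                      // compile, which is why the later update switches to set().
        }
  }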

Re: [deal.II] Re: step-22 partial boundary conditions

2018-03-05 Thread Jane Lee
Following this, note that using the stress as a tensor function produced the same results and problems (the same errors, too, as doing it with component_i), but I wouldn't have thought that would have made a difference anyway.

Re: [deal.II] Re: step-22 partial boundary conditions

2018-03-05 Thread Jane Lee
Hi Wolfgang, I believe the formula is correct. The cubic term comes from p = z^3 being the manufactured pressure solution, so in (pI - 2e) you get a z^3 term and indeed a linear term in the 2e portion. The code compiles and the error analysis is correct with Dirichlet conditions on the top and bottom…
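A hedged restatement of the term under discussion in LaTeX, assuming a step-22-style weak form in which the "partial" (open) boundary carries the natural traction term (pI - 2e(u))n; the manufactured velocity is not shown in the message, so only the structure of the terms is indicated:

  \[
    \int_{\Gamma_{\text{open}}} \varphi \cdot \bigl( p\,I - 2\,\varepsilon(u) \bigr)\, n \,\mathrm{d}s,
    \qquad p = z^3
    \;\Longrightarrow\;
    \bigl( p\,I - 2\,\varepsilon(u) \bigr)\, n \;=\; z^3\, n \;-\; 2\,\varepsilon(u)\, n,
  \]

  where the first term is cubic in z and, for a velocity whose gradient is linear in z, the second term is linear in z, matching the cubic and linear contributions mentioned above.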

[deal.II] Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-05 Thread Feimi Yu
Hi, I'm using the PETScWrappers to parallelize my code. In my preconditioner for the GMRES solver, there is one step that requires a matrix copied from the system matrix, with all the elements set to their absolute values. This was fine in serial because I could iterate over all the entries simply using…
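A minimal serial sketch of the step described here, assuming the serial matrix is a plain dealii::SparseMatrix<double> (the original serial code is not shown in the message, so the type is an assumption); the non-const entry iterator is what makes the in-place absolute value easy in serial:

  #include <deal.II/lac/sparse_matrix.h>
  #include <cmath>

  // Replace every stored entry of a (serial) deal.II sparse matrix by its
  // absolute value via the non-const iterator over all entries.
  void make_absolute(dealii::SparseMatrix<double> &abs_matrix)
  {
    for (auto it = abs_matrix.begin(); it != abs_matrix.end(); ++it)
      {
        const double v = it->value();
        it->value()    = std::abs(v);
      }
  }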