Hi,

this is not the answer to your question, but if I understand correctly, 
everything works fine with Trilinos and the only reason you need PETSc 
is to use MUMPS. If that's the case, instead of using Amesos_KLU with 
Trilinos, you can use SuperLU_dist 
(http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu_dist). It's not the 
simplest code to install, but SuperLU_dist is parallel, and the only thing 
you need to change in your code is the option in AdditionalData.
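
For concreteness, the change should look roughly like this (a sketch with 
hypothetical matrix/vector names, assuming you are already solving through 
TrilinosWrappers::SolverDirect; the solver_type string is the only change 
from Amesos_KLU):

    SolverControl solver_control;
    TrilinosWrappers::SolverDirect::AdditionalData data(
      /*output_solver_details=*/ false,
      /*solver_type=*/ "Amesos_Superludist");  // instead of "Amesos_Klu"
    TrilinosWrappers::SolverDirect direct_solver(solver_control, data);
    direct_solver.solve(system_matrix, solution, system_rhs);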

Best,

Bruno

On Thursday, February 9, 2017 at 1:19:03 PM UTC-5, Spencer Patty wrote:
>
>
> A problem I am working on results in a non-symmetric 4x4 block matrix 
> system, with the first block representing a vector-valued velocity and the 
> remaining 3 blocks scalar quantities that are all coupled.
>
> The FESystem is constructed as
>
>       FESystem<dim> (FESystem<dim>(FE_Q<dim>(parameters.degree_of_fe_velocity), dim), 1, // velocity
>                      FE_Q<dim>(parameters.degree_of_fe_velocity), 1,  // normal velocity
>                      FE_Q<dim>(parameters.degree_of_fe_velocity), 1,  // curvature
>                      FE_Q<dim>(parameters.degree_of_fe_velocity), 1   // willmore force
>                     )
>
>
> The other terms are needed for the physics of the problem I am working on, 
> but in the end all I really need is the velocity.  After solving for these 
> components, we extract the velocity component and create a new DoFHandler 
> consisting only of the velocity dofs, and pass those to a transport module 
> where they serve as the velocity field.  To accomplish this extraction, it 
> seems necessary to have the dofs separated block-wise so that we can 
> extract the first block and use it as is.  We thus apply the renumberings:
>
>
>       DoFRenumbering::hierarchical(*dof_handler_ptr);
>       DoFRenumbering::component_wise(*dof_handler_ptr, system_sub_blocks);
>
>
> where system_sub_blocks holds 0 for the first dim components and then 
> increases by one for each remaining component.  (Essentially this is the 
> same as a block-wise renumbering.)
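>
> (For reference, a minimal sketch of how that vector is built, assuming the 
> component ordering of the FESystem above:)
>
>       std::vector<unsigned int> system_sub_blocks(dim + 3, 0); // velocity components -> block 0
>       system_sub_blocks[dim]     = 1; // normal velocity
>       system_sub_blocks[dim + 1] = 2; // curvature
>       system_sub_blocks[dim + 2] = 3; // willmore force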
>
>
> Now, we have not yet been able to come up with a good preconditioner for 
> this system, so iterative methods all currently fail and I must resort to 
> direct methods.  I have found a way to construct the system not as a block 
> matrix but as a TrilinosWrappers::SparseMatrix that can be handed to 
> Amesos_KLU or the other direct solvers available through Trilinos, and it 
> works great.  Afterwards, we copy the non-block solution vector back to a 
> block vector for all the other parts, since it is only the solver that 
> needs a non-block system:
>
>
>       if (parameters.bUseBlockSystemMatrix == false)
>         {
>           system_hanging_node_constraints_and_bv_velocity.distribute(solution_notblock_lo);
>           solution_notblock_lr = solution_notblock_lo;
>
>           // copy solution_notblock_lo to solution_lo
>           IndexSet::ElementIterator iter = locally_owned_dofs_ptr->begin(),
>                                     end  = locally_owned_dofs_ptr->end();
>           for (; iter != end; ++iter)
>             solution_lo(*iter) = solution_notblock_lo(*iter);
>
>           // since we have inserted (set) the values of the solution_lo vector,
>           // we must now compress with the insert operation to be complete.
>           solution_lo.compress(VectorOperation::insert);
>         }
>
>       system_hanging_node_constraints_and_bv_velocity.distribute(solution_lo);
>       solution_lr = solution_lo;
>
>
>   This has worked well for us, but these Trilinos direct solvers are serial 
> only, and since we want to solve 2D and 3D systems we need something that 
> can handle larger problems.  Our next idea was to try MUMPS in parallel 
> through PETSc to see if it would expand the size of problems we can handle. 
> I have been able to rewrite the code base to use generic linear algebra so 
> it can switch between PETSc- and Trilinos-type vectors, matrices, and 
> solvers/preconditioners.  (That was a surprisingly large amount of work to 
> get all the interfaces used consistently.)  It works as expected for 
> problems which use the iterative solvers (still as block systems), but we 
> run into problems with the direct solvers.
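>
> (The switch itself follows the usual step-40 pattern; a minimal sketch, 
> with the macro name USE_PETSC_LA chosen as in that tutorial:)
>
>       #include <deal.II/lac/generic_linear_algebra.h>
>
>       namespace LA
>       {
>       #ifdef USE_PETSC_LA
>         using namespace dealii::LinearAlgebraPETSc;    // LA::MPI::Vector etc. map to PETSc...
>       #else
>         using namespace dealii::LinearAlgebraTrilinos; // ...or to Trilinos
>       #endif
>       }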
>
>
> It appears that for PETSc, the assumption that the locally owned dof 
> IndexSets are contiguous is really throwing a wrench in our plans for the 
> non-block system approach.  I have seen the other discussions where the 
> conclusion is essentially that it is not currently possible to bring PETSc 
> and the PETSc wrappers to a point where they could use non-contiguous 
> locally owned index sets.  I am okay with this, since I believe I have 
> found a way around it, but I am having trouble figuring out exactly how to 
> execute the plan.
>
>
> Thus, in the case that we are using the direct solvers, we do not apply the 
> block-wise renumbering, which leaves us with contiguous index sets; we 
> construct the non-block system and give it to MUMPS, and it solves the 
> system just fine.  I have tried with 1 and 2 processors so far, and it 
> returns a solution ready to be passed on to the transport module.  Now, 
> the extraction of the velocity component is the tricky part and the 
> subject of this question.
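>
> (For reference, the MUMPS call is the standard deal.II one; a sketch using 
> my variable names from above:)
>
>       SolverControl solver_control;
>       PETScWrappers::SparseDirectMUMPS mumps(solver_control, mpi_communicator);
>       mumps.set_symmetric_mode(false); // the system is non-symmetric
>       mumps.solve(system_matrix, solution_notblock_lo, system_rhs);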
>
>
> With the Trilinos system, the following code snippet allowed me to extract 
> the desired solution vector and a corresponding dof handler (where 
> fe_system_ptr is the full 4-block FESystem object):
>
>
>       // Sadly, we have no recourse except to construct a new dof_handler
>       // representing the velocity block and make sure it has the same ordering
>       // as the system dof_handler does.  So we give it the desired fe_system
>       // and renumber and return a shared pointer.  Since velocity is the first
>       // block, the numbers should be the same, so we can also return
>       // solution.block(u_block) without shifting indices.
>       std::shared_ptr<DoFHandler<dim> > get_dof_handler_velocity()
>       {
>         std::shared_ptr<DoFHandler<dim> > dof_velocity_ptr(
>           new DoFHandler<dim>(triangulation));
>         // give it the velocity block fe
>         dof_velocity_ptr->distribute_dofs(fe_system_ptr->base_element(0));
>
>         // order the same way as dof_handler_ptr in setup_system
>         DoFRenumbering::hierarchical(*dof_velocity_ptr);
>         // this might not actually do anything since it is essentially blockwise
>         DoFRenumbering::component_wise(*dof_velocity_ptr,
>                                        std::vector<unsigned int>(dim, 0));
>
>         return dof_velocity_ptr;
>       }
>
>       LinearAlgebra::Vector & get_velocity() { return solution_lr.block(0); }
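>
> (For context, the transport module consumes these roughly as follows; a 
> hypothetical sketch, with the quadrature being my own choice:)
>
>       std::shared_ptr<DoFHandler<dim> > dof_velocity_ptr = get_dof_handler_velocity();
>       QGauss<dim>   quadrature(parameters.degree_of_fe_velocity + 1);
>       FEValues<dim> fe_values(dof_velocity_ptr->get_fe(), quadrature, update_values);
>       std::vector<Vector<double> > velocity_values(quadrature.size(), Vector<double>(dim));
>
>       for (const auto &cell : dof_velocity_ptr->active_cell_iterators())
>         if (cell->is_locally_owned())
>           {
>             fe_values.reinit(cell);
>             // the call below is where the error reported next shows up
>             fe_values.get_function_values(get_velocity(), velocity_values);
>           }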
>
>
>
> When I don't reorder the dofs and use the above snippets for passing to 
> the transport module, the following error is triggered when I call 
> fe_values.get_function_values() on the velocity vector:
>
>
>
> An error occurred in line <1614> of file
> </Users/srobertp/software/dealii/include/deal.II/base/index_set.h> in function
>     IndexSet::size_type dealii::IndexSet::index_within_set(const size_type) const
> The violated condition was:
>     is_element(n) == true
> The name and call sequence of the exception was:
>     ExcIndexNotPresent (n)
> Additional Information:
> The global index 8060 is not an element of this set.
>
>
> In other words, it is again probably assuming that the dof indices are 
> contiguous, but in this case they are not, since we are now dealing with a 
> subset of the full contiguous index set, and without renumbering block-wise 
> that subset is not contiguous.
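>
> (A toy illustration of the situation, with made-up numbers:)
>
>       // without the block-wise renumbering, velocity dofs are interleaved
>       // with the scalar dofs, so the extracted subset has gaps
>       IndexSet velocity_dofs(12);
>       velocity_dofs.add_index(0);
>       velocity_dofs.add_index(4);
>       velocity_dofs.add_index(8);
>       // velocity_dofs.is_contiguous() is now false, and
>       // velocity_dofs.index_within_set(5) aborts with ExcIndexNotPresent,
>       // just like the error above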
>
>
> So, in this new case, what would be a good way to accomplish the desired 
> result?  I thought about creating a new full-system dof_handler and using 
> the DoFRenumbering::compute_cell_wise() function to get the std::vector map 
> of new global dof indices, then renumbering the new dof_handler and using 
> the map to copy over the solutions.  Then I could use the above snippets on 
> that new system.  Would I need a new FESystem to go with this as well?  If 
> so, that seems like a lot of extra memory and components to accomplish 
> this.  Am I missing something that would make it easier?  A sketch of the 
> copy I have in mind is below.
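>
> (Assuming new_indices[old] is the map returned by one of the 
> DoFRenumbering::compute_* functions, the copy would be, schematically:)
>
>       std::vector<types::global_dof_index> new_indices;
>       // ... fill new_indices with a DoFRenumbering::compute_* call ...
>
>       // copy the flat (non-block) solution into the block-ordered vector
>       for (IndexSet::ElementIterator it = locally_owned_dofs_ptr->begin();
>            it != locally_owned_dofs_ptr->end(); ++it)
>         solution_lo(new_indices[*it]) = solution_notblock_lo(*it);
>       solution_lo.compress(VectorOperation::insert);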
>
>
> Essentially, I am asking for help on how to convert my non-block-wise 
> renumbered system and solution into a renumbered system and solution, so 
> that I can extract the first block with an appropriate dof handler of its 
> own.
>
>
> Thanks in advance for any help!
>
