Hello,

I have a small question, and I think the issue lies here: are these two 
function calls equivalent?

ghosted_vector.reinit(
  /*locally owned indices*/,
  /*ghost indices only*/,
  mpi_communicator
);

ghosted_vector.reinit(
  /*locally owned indices*/,
  /*relevant indices (owned + ghost)*/,
  mpi_communicator
);

The program runs as expected in a serial execution. I am using the first 
version here, and maybe that is what causes the crash in a parallel 
execution.
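
Concretely, what I mean by the second version is something like the 
following untested sketch, reusing the cell_partitioner object from the 
MWE below and assuming IndexSet::add_indices() is the right way to merge 
the owned and ghost sets:

// untested sketch: build the "relevant" set as owned + ghost
// (IndexSet comes from <deal.II/base/index_set.h>)
IndexSet relevant_cells = cell_partitioner->locally_owned_range();
relevant_cells.add_indices(cell_partitioner->ghost_indices());

ghosted_vector.reinit(
  /*locally owned indices*/ cell_partitioner->locally_owned_range(),
  /*relevant indices*/ relevant_cells,
  mpi_communicator
);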

Any reference to a use case of 
p::d::Triangulation::global_active_cell_index_partitioner() would be very 
helpful! :)
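
The only alternative I could think of is to sidestep the PETSc vector and 
use LinearAlgebra::distributed::Vector, which, if I understand correctly, 
can be constructed directly from the partitioner. An untested sketch of 
what I have in mind:

// untested: let the deal.II distributed vector take its ghost layout
// directly from the cell partitioner
// (needs <deal.II/lac/la_parallel_vector.h>)
LinearAlgebra::distributed::Vector<double> cell_data(
  triang.global_active_cell_index_partitioner().lock()
);
// ... fill the locally owned entries, then exchange ghost values
cell_data.update_ghost_values();

Is that a reasonable use of this partitioner, or is the PETSc route 
preferable here?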

Thanks
On Friday, June 11, 2021 at 11:40:37 AM UTC+5:30 vachanpo...@gmail.com 
wrote:

> Hello,
>
> I am having an issue using DataOut for such a vector in a parallel run. 
> I am attaching an MWE that captures my problem: I am encountering a 
> segmentation fault (signal 11).
>
> #include <deal.II/base/mpi.h>
> #include <deal.II/base/utilities.h>
> #include <deal.II/distributed/tria.h>
> #include <deal.II/grid/tria_accessor.h>
> #include <deal.II/grid/grid_generator.h>
> #include <deal.II/lac/generic_linear_algebra.h>
> #include <deal.II/fe/fe_dgq.h>
> #include <deal.II/fe/mapping_q_generic.h>
> #include <deal.II/dofs/dof_handler.h>
> #include <deal.II/numerics/data_out.h>
>
> #include <fstream>
>
> using namespace dealii;
> namespace LA
> {
>   using namespace ::LinearAlgebraPETSc;
> }
>
> int main(int argc, char **argv)
> {
>   Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
>
>   constexpr int dim = 3;
>
>   MPI_Comm mpi_comm(MPI_COMM_WORLD);
>
>   parallel::distributed::Triangulation<dim> triang(mpi_comm);
>   GridGenerator::subdivided_hyper_cube(triang, 5);
>
>   const std::shared_ptr<const Utilities::MPI::Partitioner> cell_partitioner =
>     triang.global_active_cell_index_partitioner().lock();
>
>   LA::MPI::Vector vec(cell_partitioner->locally_owned_range(), mpi_comm);
>   LA::MPI::Vector gh_vec(
>     cell_partitioner->locally_owned_range(),
>     cell_partitioner->ghost_indices(),
>     mpi_comm
>   );
>
>   DoFHandler<dim> dof_handler(triang);
>   FE_DGQ<dim> fe(2);
>   dof_handler.distribute_dofs(fe);
>
>   for(auto &cell: dof_handler.active_cell_iterators()){
>     if(!cell->is_locally_owned()) continue;
>
>     vec[cell->global_active_cell_index()] = cell->global_active_cell_index();
>   }
>   vec.compress(VectorOperation::insert);
>
>   gh_vec = vec;
>
>   DataOut<dim> data_out;
>   DataOutBase::VtkFlags flags;
>   flags.write_higher_order_cells = true;
>   data_out.set_flags(flags);
>
>   data_out.attach_dof_handler(dof_handler);
>
>   data_out.add_data_vector(gh_vec, "x");
>
>   data_out.build_patches(
>     MappingQGeneric<dim>(2),
>     2,
>     DataOut<dim>::CurvedCellRegion::curved_inner_cells
>   );
>
>   std::ofstream proc_file(
>     "output" + Utilities::int_to_string(
>       Utilities::MPI::this_mpi_process(mpi_comm),
>       2
>     ) + ".vtu"
>   );
>   data_out.write_vtu(proc_file);
>   proc_file.close();
>
>   if(Utilities::MPI::this_mpi_process(mpi_comm) == 0){
>     std::vector<std::string> filenames;
>     for(int i=0; i<Utilities::MPI::n_mpi_processes(mpi_comm); i++){
>       filenames.emplace_back(
>         "output" + Utilities::int_to_string(i, 2) + ".vtu"
>       );
>     } // loop over processes
>
>     std::ofstream master_file("output.pvtu");
>     data_out.write_pvtu_record(master_file, filenames);
>     master_file.close();
>   }
>
>   return 0;
> }
>
>
> I suspect I am not partitioning the ghosted vector properly, which causes 
> a mismatch between what is available and what DataOut wants. Am I missing 
> any other step in between?
>
> Thanking in anticipation
>
> On Tue, 8 Jun 2021 at 19:32, vachan potluri <vachanpo...@gmail.com> wrote:
>
>> Thank you :).
>>
>> On Tue, 8 Jun, 2021, 19:24 Wolfgang Bangerth, <bang...@colostate.edu> 
>> wrote:
>>
>>> On 6/8/21 4:18 AM, vachanpo...@gmail.com wrote:
>>> > If I want to add such a vector to DataOut, will the regular 
>>> > DataOut::add_data_vector() work? Or is something else required to be 
>>> > done?
>>>
>>> Yes, DataOut::add_data_vector() can take two kinds of vectors:
>>> * Ones that have as many entries as there are DoFs
>>> * Ones that have as many entries as there are active cells
>>>
>>> Best
>>>   W.
>>>
>>> -- 
>>> ------------------------------------------------------------------------
>>> Wolfgang Bangerth          email:                 bang...@colostate.edu
>>>                             www: 
>>> http://www.math.colostate.edu/~bangerth/
>>>
