Re: [deal.II] Re: installing dealii on a cluster

2017-10-05 Thread Anna Avdeeva
Dear Daniel,

Thank you for your replies.
The same error happens with the automatically configured version, but at 64%:

[ 64%] Building CXX object source/CMakeFiles/deal_II.dir/dummy.cc.o
Linking CXX shared library ../lib/libdeal_II.so
collect2: error: ld terminated with signal 7 [Bus error], core dumped
make[2]: *** [lib/libdeal_II.so.9.0.0-pre] Error 1
make[1]: *** [source/CMakeFiles/deal_II.dir/all] Error 2
make: *** [all] Error 2

I have sent an e-mail to our sysadmins, but so far there has been no reply.
Let me know if you have any other ideas.

Anna

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [deal.II] Parallel implementation

2017-10-05 Thread Anna Avdeeva
Dear Wolfgang,

I do compute the solution at many points along the profile, and while Ex and 
Ey look reasonable, Ez does not. Is it possible that there is a problem in 
the point_value function for the z component of Nedelec elements?

Anna



Re: [deal.II] Parallel implementation

2017-10-05 Thread Anna Avdeeva
Dear Wolfgang,

Thank you for your replies.

I am still struggling with writing the solution to files.

I have two types of output: 1) vtu files from each processor for the whole 
mesh, and 2) a simple txt file with the values of the solution at the 
receiver locations.

1) For the first one I created a Postprocessor class that computes the 
quantities of the total electric and magnetic fields, and then I use

DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(locally_relevant_solution, postprocessor);
for (unsigned int i = 0; i < subdomain.size(); ++i)
  subdomain(i) = triangulation.locally_owned_subdomain();
data_out.add_data_vector(subdomain, "subdomain");
data_out.build_patches();

This seems to work, and I can display reasonable solutions in ParaView, 
which suggests that the computation of locally_relevant_solution is correct.

2) For the file with the solution values at the receiver locations, I think 
I followed the approach implemented in ASPECT.
So I go through all the receivers, look for the point value, and then check 
how many processors found this point; if more than one, I sum the values 
across all processors and divide by the number of processors the point was 
found at. I write these values into a
std::vector<Vector<double>> current_field_values(n_Rc, Vector<double>(2 * (dim + dim)));

To compute the point value I use the deal.II function 
VectorTools::point_value, and I suspect I am using it wrongly.
I call it in the following way:
VectorTools::point_value(dof_handler, locally_relevant_solution, 
receiver_locations[p], current_point_values[p]);

After this call I print the values to the screen and I see that the value 
of Ez at the surface of the earth is far from zero, while it should be 
very small; the vtu file produced with the help of the postprocessor also 
shows very small values there.

Do you have an idea what the correct call to the point_value function would 
be? Maybe I should not give it locally_relevant_solution as input, or 
something else is wrong.

locally_relevant_solution is a private LA::MPI::BlockVector of the main class.
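
For reference, a minimal sketch of the kind of guarded evaluation I understand 
ASPECT uses for one receiver (not my actual code; names such as found_here are 
placeholders):

// Evaluate the solution at receiver p on the ranks whose locally owned cells
// contain the point, then average the result over those ranks.
Vector<double> value(2 * (dim + dim));
unsigned int   found_here = 0;
try
  {
    VectorTools::point_value(dof_handler, locally_relevant_solution,
                             receiver_locations[p], value);
    found_here = 1;
  }
catch (const VectorTools::ExcPointNotAvailableHere &)
  {
    value = 0; // this rank does not own a cell containing the point
  }

const unsigned int n_found = Utilities::MPI::sum(found_here, mpi_communicator);
Utilities::MPI::sum(value, mpi_communicator, value);
if (n_found > 0)
  value /= n_found;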

Sorry for bothering you again with this.

Anna






 



Re: [deal.II] Re: installing dealii on a cluster

2017-10-04 Thread Anna Avdeeva
Dear Timo and Daniel,

I have tried to submit the job, but it stopped again at the same place:
 
[ 57%] Linking CXX shared library ../lib/libdeal_II.so

Attached is the error file.

Any ideas?

Anna



make_dealii.e1515274
Description: Binary data


[deal.II] Re: installing dealii on a cluster

2017-10-04 Thread Anna Avdeeva
Dear Daniel,

If I type "free -h" I get the following info:

              total        used        free      shared  buff/cache   available
Mem:           251G         44G         23G         11G        182G        164G
Swap:           11G        535M         11G

I can try to submit to the queue, rather than just using the head node. Do 
you think this might help?

Anna



[deal.II] installing dealii on a cluster

2017-10-04 Thread Anna Avdeeva
Dear all,

I am trying to install deal.II on a cluster at my University, so that I can 
run bigger jobs with parallel computations.

I am installing the library locally in my home directory.
Before configuring deal.II, I installed PETSc, p4est-2.0 and netcdf-cxx-4.2.
Other necessary libraries were already installed and I only had to load the 
modules.

I have managed to successfully configure deal.ii with the following command:

cmake ../dealii -DCMAKE_INSTALL_PREFIX=/home/a/aa682/dealii_install \
  -DDEAL_II_WITH_MPI=ON \
  -DCMAKE_C_COMPILER=/cm/shared/apps/intel/impi/2017.1.132/intel64/bin/mpicc \
  -DCMAKE_CXX_COMPILER=/cm/shared/apps/intel/impi/2017.1.132/intel64/bin/mpicxx \
  -DCMAKE_Fortran_COMPILER=/cm/shared/apps/intel/impi/2017.1.132/intel64/bin/mpif90 \
  -DDEAL_II_WITH_NETCDF=ON \
  -DNETCDF_LIBRARIES=/cm/shared/apps/netcdf/intel-mpi/4.4.1/lib/libnetcdf.so \
  -DDEAL_II_WITH_P4EST=ON -DP4EST_DIR=/home/a/aa682/Libraries/p4est-2.0/ \
  -DDEAL_II_WITH_HDF5=ON \
  -DHDF5_LIBRARIES=/cm/shared/apps/hdf5/intel-mpi/1.8.17/lib/libhdf5.so \
  -DHDF5_WITH_MPI=ON -DDEAL_II_WITH_PETSC=ON \
  -DPETSC_DIR=/home/a/aa682/Libraries/petsc -DPETSC_ARCH=arch-linux2-c-debug \
  -DDEAL_II_WITH_SLEPC=OFF \
  -DCMAKE_CXX_FLAGS=" -I/home/a/aa682/Libraries/netcdf-cxx-4.2/cxx"

But then, when typing make all, I always get the following error message:






[ 57%] Linking CXX shared library ../lib/libdeal_II.so
collect2: error: ld terminated with signal 7 [Bus error], core dumped
make[2]: *** [lib/libdeal_II.so.9.0.0-pre] Error 1
make[2]: *** Deleting file `lib/libdeal_II.so.9.0.0-pre'
make[1]: *** [source/CMakeFiles/deal_II.dir/all] Error 2
make: *** [all] Error 2

Did anybody else have a similar error? 
I would very much appreciate any help.

Thank you very much in advance,
Anna






Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-25 Thread Anna Avdeeva
Dear Timo,

Sorry, I did not mean to write to your personal e-mail; I just pressed the 
wrong reply button by mistake.
Thank you very much for your reply. I will check how I set mpi_communicator 
now. Hopefully I will find the problem.

I changed step-40 to use MUMPS instead of CG myself a couple of days ago, and 
it works. So at least I can be sure that I understand how to use MUMPS, and 
that PETSc with MUMPS was installed correctly.

Anna 



Re: [deal.II] Parallel implementation

2017-09-22 Thread Anna Avdeeva
Dear Wolfgang,

To compute the values of the electric field at the receivers I follow the 
strategy of the ASPECT code, as you suggested.
To do this I sum the current_point_values across processors and divide by 
the number of processors that contain point p, as follows:

// Reduce all collected values into local Vector
 Utilities::MPI::sum(current_point_values[p], mpi_communicator,
   current_point_values[p]);

 // Normalize in cases where points are claimed by multiple processors
 if (n_procs > 1)
  current_point_values[p] /= n_procs;

Now, to compute the magnetic field, I need to do a similar thing but with 
current_point_gradients[p], which is a std::vector<Tensor<1, dim>>(dim + dim).
Unfortunately, Utilities::MPI::sum does not exist for such a type.

Could you please advise me on the best way to do this for the gradients?
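
One possibility I am considering (only a rough, untested sketch, assuming the 
std::vector<double> overload of Utilities::MPI::sum can be used) is to flatten 
the Tensor<1,dim> entries into a plain array of doubles, sum that, and copy the 
result back:

// Flatten the gradients at point p into a plain array of doubles ...
std::vector<double> flat(current_point_gradients[p].size() * dim, 0.0);
for (unsigned int i = 0; i < current_point_gradients[p].size(); ++i)
  for (unsigned int d = 0; d < dim; ++d)
    flat[i * dim + d] = current_point_gradients[p][i][d];

// ... sum across all ranks ...
Utilities::MPI::sum(flat, mpi_communicator, flat);

// ... and copy back, normalizing as for the point values.
for (unsigned int i = 0; i < current_point_gradients[p].size(); ++i)
  for (unsigned int d = 0; d < dim; ++d)
    current_point_gradients[p][i][d] = flat[i * dim + d];
if (n_procs > 1)
  for (unsigned int i = 0; i < current_point_gradients[p].size(); ++i)
    current_point_gradients[p][i] /= n_procs;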

Thank you in advance,
Anna

 




Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-20 Thread Anna Avdeeva
Dear Timo,


The main reason I do this is that I do not understand how to reuse this 
decomposition in deal.ii.
I am relatively new to deal.ii and C++, and I have never used MUMPS before.


The way I set it up with SparseDirectUMFPACK was to use an InnerPreconditioner 
structure:


template <int>   // parameter name lost in the archive formatting
struct InnerPreconditioner;

template <>
struct InnerPreconditioner<0>
{
  typedef SparseDirectUMFPACK type;
};


and defined the inner preconditioner as:


// "solver_type" is a placeholder; the actual template argument of
// InnerPreconditioner was lost in the archive formatting.
matrix_B_preconditioner =
  std_cxx11::shared_ptr<typename LinearSolvers::InnerPreconditioner<solver_type>::type>(
    new typename LinearSolvers::InnerPreconditioner<solver_type>::type());

matrix_B_preconditioner->initialize(matrix_B.block(0, 0),
  typename LinearSolvers::InnerPreconditioner<solver_type>::type::AdditionalData());


I cannot simply substitute

typedef SparseDirectUMFPACK type;

with

typedef PETScWrappers::SparseDirectMUMPS type;

since PETScWrappers::SparseDirectMUMPS does not have a member function vmult.


I am having trouble understanding where I need to add this vmult function.

I would appreciate any advice on this. Maybe there is some example in 
deal.II that uses MUMPS to construct a preconditioner.
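
To make the question more concrete, this is roughly the kind of wrapper I 
imagine would be needed (only a sketch, assuming the PETSc branch of the LA 
namespace; the class name MUMPSInverse and its members are mine, and a new 
solver object is constructed on every call, so the factorization is redone 
each time):

// A small wrapper that gives PETScWrappers::SparseDirectMUMPS a vmult()
// interface, so it can be used where a preconditioner type is expected.
class MUMPSInverse : public Subscriptor
{
public:
  MUMPSInverse(const LA::MPI::SparseMatrix &B,
               const MPI_Comm              &mpi_communicator)
    : B(B), mpi_communicator(mpi_communicator)
  {}

  void vmult(LA::MPI::Vector &dst, const LA::MPI::Vector &src) const
  {
    SolverControl                    control;
    PETScWrappers::SparseDirectMUMPS solver(control, mpi_communicator);
    solver.set_symmetric_mode(true);
    solver.solve(B, dst, src); // factorizes B and solves B dst = src
  }

private:
  const LA::MPI::SparseMatrix &B;
  const MPI_Comm               mpi_communicator;
};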


Thank you very much in advance,

Anna



Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-19 Thread Anna Avdeeva
Dear Timo,

Because I do not understand how to reuse this decomposition in deal.ii.
I am relatively new to deal.ii and C++, and I have never used mumps before.

I would appreciate any advice on this. Maybe there is some example in deal.ii 
that use mumps to construct a preconditioner.

Thank you very much in advance,

Anna



Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-19 Thread Anna Avdeeva
Dear Timo, 

Thank you for your reply.

I am still having trouble implementing my code with MUMPS.

I will briefly describe the problem:

I am solving 2 systems of Maxwell's equations in the following way:

the systems are Ax1=b1 and Ax2=b2;
A is a sparse block symmetric matrix (C -M; -M -C).
I solve it with FGMRES preconditioned with Pa = (B 0; 0 B) = (C+M 0; 0 C+M).
To obtain inv(Pa) I need to obtain inv(B), and this is what I would like to 
achieve with MUMPS. However, to save time I want to compute inv(B) only 
once for both b1 and b2.

So I would like to reuse the LDLt factorization the second time I solve the system, with b2.

With SparseDirectUMFPACK I did the following:

1) I declared
// "solver_type" is a placeholder; the actual template argument of
// InnerPreconditioner was lost in the archive formatting.
std_cxx11::shared_ptr<typename LinearSolvers::InnerPreconditioner<solver_type>::type>
  matrix_B_preconditioner;
in my main class,
2) at the beginning of setup_dofs I had matrix_B_preconditioner.reset();
3) I assembled the system and rhs separately, and after assembling the 
system matrix I defined
matrix_B_preconditioner =
  std_cxx11::shared_ptr<typename LinearSolvers::InnerPreconditioner<solver_type>::type>(
    new typename LinearSolvers::InnerPreconditioner<solver_type>::type());
matrix_B_preconditioner->initialize(matrix_B.block(0, 0),
  typename LinearSolvers::InnerPreconditioner<solver_type>::type::AdditionalData());

4) when solving:
const LinearSolvers::Pa_Preconditioner<typename LinearSolvers::InnerPreconditioner<solver_type>::type>
  pa_preconditioner(*matrix_B_preconditioner);

Where
a) InnerPreconditioner was
template <int>   // parameter name lost in the archive formatting
struct InnerPreconditioner;
template <>
struct InnerPreconditioner<0>
{
  typedef SparseDirectUMFPACK type;
  //typedef PETScWrappers::SparseDirectMUMPS type;
};

b) Pa_Preconditioner was:
template <class PreconditionerB>
class Pa_Preconditioner : public Subscriptor
{
public:
  Pa_Preconditioner(const PreconditionerB &preconditioner_B);
  void vmult(LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const;
private:
  const PreconditionerB &preconditioner_B;
};

template <class PreconditionerB>
Pa_Preconditioner<PreconditionerB>::
Pa_Preconditioner(const PreconditionerB &preconditioner_B) :
  preconditioner_B(preconditioner_B)
{
}

template <class PreconditionerB>
void Pa_Preconditioner<PreconditionerB>::
vmult(LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const
{
  preconditioner_B.vmult(dst.block(0), src.block(0));
  preconditioner_B.vmult(dst.block(1), src.block(1));
}

As far as I understand, with MUMPS I have to change this strategy, modify 
the Pa_Preconditioner class to take the matrix B as input, and 
change Pa_Preconditioner::vmult to

template <class Btype>
Pa_Preconditioner<Btype>::
Pa_Preconditioner(const Btype &B, const MPI_Comm &mpi_communicator) :
  B(B),
  mpi_communicator(mpi_communicator)
{
}

template <class Btype>
void Pa_Preconditioner<Btype>::
vmult(LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const
{
  SolverControl cn;
  PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
  solver.set_symmetric_mode(true);
  solver.solve(B, dst.block(0), src.block(0));
  solver.solve(B, dst.block(1), src.block(1));
}

Does this look reasonable? I think this makes the program much less general 
than my previous version, where I could choose between direct and iterative 
solvers with the template parameter. I am also not sure that the LDLt 
decomposition of B is reused in this case.
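
One variation I am wondering about (again only a sketch; whether the LDLt 
factorization is actually reused between solve() calls depends on the deal.II 
MUMPS wrapper and would need to be checked) keeps the solver as a class member 
so that it is at least constructed only once:

template <class Btype>
class Pa_Preconditioner : public Subscriptor
{
public:
  Pa_Preconditioner(const Btype &B, const MPI_Comm &mpi_communicator)
    : B(B), solver_control(), solver(solver_control, mpi_communicator)
  {
    solver.set_symmetric_mode(true);
  }

  void vmult(LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const
  {
    solver.solve(B, dst.block(0), src.block(0));
    solver.solve(B, dst.block(1), src.block(1));
  }

private:
  const Btype &B;
  // mutable so that vmult() can remain const while the solver state changes:
  mutable SolverControl                    solver_control;
  mutable PETScWrappers::SparseDirectMUMPS solver;
};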

Any suggestions will be highly appreciated.

Thank you in advance, 
Anna




Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-19 Thread Anna Avdeeva
Dear Timo, 

Thank you for your reply.

I am still having trouble implementing my code with MUMPS.

I will briefly describe the problem:

I am solving 2 systems of Maxwell's equations in the following way:

the systems are Ax1=b1 and Ax2=b2;
A is a sparse block symmetric matrix (C -M; -M -C).
I solve it with FGMRES preconditioned with Pa = (B 0; 0 B) = (C+M 0; 0 C+M).
To obtain inv(Pa) I need to obtain inv(B), and this is what I would like to 
achieve with MUMPS. However, to save time I want to compute inv(B) only 
once for both b1 and b2.

So I would like to reuse the LDLt factorization the second time I solve the system, with b2.

With SparseDirectUMFPACK I did the following:

1) I declared
// "solver_type" is a placeholder; the actual template argument of
// InnerPreconditioner was lost in the archive formatting.
std_cxx11::shared_ptr<typename LinearSolvers::InnerPreconditioner<solver_type>::type>
  matrix_B_preconditioner;
in my main class,
2) at the beginning of setup_dofs I had matrix_B_preconditioner.reset();
3) I assembled the system and rhs separately, and after assembling the 
system matrix I defined
matrix_B_preconditioner =
  std_cxx11::shared_ptr<typename LinearSolvers::InnerPreconditioner<solver_type>::type>(
    new typename LinearSolvers::InnerPreconditioner<solver_type>::type());
matrix_B_preconditioner->initialize(matrix_B.block(0, 0),
  typename LinearSolvers::InnerPreconditioner<solver_type>::type::AdditionalData());

4) when solving:
const LinearSolvers::Pa_Preconditioner<typename LinearSolvers::InnerPreconditioner<solver_type>::type>
  pa_preconditioner(*matrix_B_preconditioner);

Where
a) InnerPreconditioner was
template <int>   // parameter name lost in the archive formatting
struct InnerPreconditioner;
template <>
struct InnerPreconditioner<0>
{
  typedef SparseDirectUMFPACK type;
  //typedef PETScWrappers::SparseDirectMUMPS type;
};

b) Pa_Preconditioner was:
template <class PreconditionerB>
class Pa_Preconditioner : public Subscriptor
{
public:
  Pa_Preconditioner(const PreconditionerB &preconditioner_B);
  void vmult(LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const;
private:
  const PreconditionerB &preconditioner_B;
};

template <class PreconditionerB>
Pa_Preconditioner<PreconditionerB>::
Pa_Preconditioner(const PreconditionerB &preconditioner_B) :
  preconditioner_B(preconditioner_B)
{
}

template <class PreconditionerB>
void Pa_Preconditioner<PreconditionerB>::
vmult(LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const
{
  preconditioner_B.vmult(dst.block(0), src.block(0));
  preconditioner_B.vmult(dst.block(1), src.block(1));
}

As far as I understand, with MUMPS I have to change this strategy, modify 
the Pa_Preconditioner class to take the matrix B as input, and 
change Pa_Preconditioner::vmult to

template <class Btype>
Pa_Preconditioner<Btype>::
Pa_Preconditioner(const Btype &B, const MPI_Comm &mpi_communicator) :
  B(B),
  mpi_communicator(mpi_communicator)
{
}

template <class Btype>
void Pa_Preconditioner<Btype>::
vmult(LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const
{
  SolverControl cn;
  PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
  solver.set_symmetric_mode(true);
  solver.solve(B, dst.block(0), src.block(0));
  solver.solve(B, dst.block(1), src.block(1));
}

Does this look reasonable? I think this makes the program much less general 
than my previous version, where I could choose between direct and iterative 
solvers with the template parameter.

Any suggestions will be highly appreciated.

Thank you in advance, 
Anna



Re: [deal.II] Parallel implementation

2017-09-13 Thread Anna Avdeeva
Dear Wolfgang,

Thank you very much for the link. I have followed their approach, but 
before I can check the result, I have to replace the solver I used with a 
parallel solver.
Anna 



[deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-13 Thread Anna Avdeeva
Dear All,

I have modified step-22 to solve a system of Maxwell's equations, and then 
used step-40 (and 55) to parallelize the code.
In the serial version I used

template <int>   // parameter name lost in the archive formatting
struct InnerPreconditioner;
template <>
struct InnerPreconditioner<0>
{
  typedef SparseDirectUMFPACK type;
};
template <>
struct InnerPreconditioner<1>
{
  // type of preconditioner for an iterative solver
};

This defined the type of preconditioner I need for the CG solver, in order to 
obtain yet another preconditioner for the FGMRES solution of my system.
Now, with the parallel implementation, I would like to use 
PETScWrappers::SparseDirectMUMPS instead of SparseDirectUMFPACK.

However I am getting the following error:

 error: ‘SparseDirectMUMPS’ in namespace ‘LA’ does not name a type
 typedef LA::SparseDirectMUMPS type;

Here LA is defined as 
namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !(defined(DEAL_II_WITH_TRILINOS) && 
defined(FORCE_USE_OF_TRILINOS))
using namespace dealii::LinearAlgebraPETSc;
#  define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
}

Is it possible to do something similar to step-22 with SparseDirectMUMPS, 
or do I have to rewrite the code and remove the InnerPreconditioner struct?
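
For what it is worth, my current understanding (which may be wrong) is that 
the MUMPS solver class lives in the PETScWrappers namespace and is simply not 
exported through the LA alias, so the typedef would have to name it fully, 
something like:

template <>
struct InnerPreconditioner<0>
{
  // qualify the name explicitly instead of going through LA
  // (requires deal.II configured with PETSc):
  typedef PETScWrappers::SparseDirectMUMPS type;
};

This would only fix the name lookup, though; the missing vmult() member 
discussed in the other thread would remain.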

Thank you very much in advance for all your help.

Anna
 










Re: [deal.II] Parallel implementation

2017-09-11 Thread Anna Avdeeva
Dear Wolfgang,

Yes, I would like to create an LA::MPI::BlockVector solution_at_receiver, 
then copy it to a
BlockVector<double> localized_solution_at_receiver(solution_at_receiver)
and output localized_solution_at_receiver only on processor 0.

I am having trouble initializing and filling solution_at_receiver.

If I simply write

solution_at_receiver.reinit(4);
for (unsigned int i = 0; i < 4; ++i)
  solution_at_receiver.block(i).reinit(n_Rc * dim);

I get an error message that there is no reinit function with such 
arguments.

Could you please tell me what the correct way is?
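
From reading the documentation, my guess (only a sketch, not tested) is that 
an LA::MPI::BlockVector is reinitialized from one IndexSet per block rather 
than from plain block sizes; for a small vector of receiver values one option 
might be to let rank 0 own all entries:

std::vector<IndexSet> receiver_partitioning(4);
for (unsigned int b = 0; b < 4; ++b)
  {
    receiver_partitioning[b] = IndexSet(n_Rc * dim);
    if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
      receiver_partitioning[b].add_range(0, n_Rc * dim);
  }
solution_at_receiver.reinit(receiver_partitioning, mpi_communicator);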

Thank you very much in advance.



[deal.II] Parallel implementation

2017-09-11 Thread Anna Avdeeva
Dear All,

I am not fluent with C++ and MPI and would very much appreciate your help. 

I am now following steps 40 and 55 to write a parallel implementation of the 
solution of the system of Maxwell's equations, and I have many questions, 
but let us start with the following:

After solving the system I have a solution vector distributed between 
different processors. I am using

parallel::distributed::Triangulation<dim> triangulation;
LA::MPI::BlockVector locally_relevant_solution;
and during solve I have
LA::MPI::BlockVector distributed_solution(owned_partitioning, 
mpi_communicator);
constraints.set_zero(distributed_solution);
...
constraints.distribute(distributed_solution);
locally_relevant_solution = distributed_solution;

Now I would like to create a file containing the solution vector and some 
function of the solution vector at a set of receiver locations.
If I had the solution vector on one processor I could easily create such a 
file (using VectorTools::point_value/point_gradient), but when the solution 
is distributed, I am not quite sure what the best way is.

Anna




[deal.II] combine error estimates from two different solutions

2017-09-07 Thread Anna Avdeeva
Dear All,

I am solving a system of Maxwell's equations for a geophysical application.
For my application I need to compute two solutions, for two different 
polarizations of the source field.
In other words, I am solving the system twice with different right-hand 
sides, and for each polarization I get a BlockVector<double> solution(2) 
containing the real and imaginary parts of the electric field.

For now, to refine the mesh, I use the KellyErrorEstimator, and in general it 
produces different error estimates for the two solutions.
So I can end up with two different meshes after the mesh refinement step. 
However, since the system matrix is the same, I would like to have a single 
mesh for both polarizations. So my question is: what is the best way to do this?

Do I need to combine the two solutions into one BlockVector<double> 
both_solutions(4), or can I somehow combine the error estimates?
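
If combining the estimates is acceptable, what I have in mind is something 
like the following (a sketch only; estimated_error_1/estimated_error_2 stand 
for the two per-cell Kelly estimates, and the refinement fractions are just 
the usual example values):

// Combine the two per-cell indicators, e.g. by taking the maximum per cell,
// and refine a single mesh from the combined vector.
Vector<float> combined_error(triangulation.n_active_cells());
for (unsigned int c = 0; c < combined_error.size(); ++c)
  combined_error(c) = std::max(estimated_error_1(c), estimated_error_2(c));

GridRefinement::refine_and_coarsen_fixed_number(triangulation,
                                                combined_error,
                                                0.3, 0.03);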

Thank you very much in advance,
Anna






[deal.II] Using Hypre AMS preconditioner in deal.ii

2017-09-05 Thread Anna Avdeeva
Hi All,

I am working on the implementation of a 3D electromagnetic forward solver for 
geophysical applications.
I am following the algorithm described in Grayver and Kolev, 2015. The 
system is solved with a preconditioned FGMRES method. However, the 
construction of the preconditioner is not straightforward, and so far I have 
used a direct solver for this. Unfortunately, the direct solver requires a 
lot of memory for a medium-size 3D problem, which is why I want (as suggested 
in Grayver) to use CG with the auxiliary-space preconditioner AMS instead. 
The implementation of AMS is readily available in the Hypre library, which 
should be accessible through PETSc.
I struggle to understand how to do this within deal.II. Is there a PETSc 
wrapper for this?

Any help will be greatly appreciated.

Thank you very much in advance,
Anna 



Re: [deal.II] Computing the curl of a solution vector field obtained from Nedelec elements at receiver locations

2017-08-30 Thread Anna Avdeeva
Thank you for the prompt reply.

I will implement the first option with VectorTools::point_value()/point_gradient().

Could you please also point out to me where I can find clarification on 
the following:
1) There are many versions of the VectorTools::point_value function. At the 
moment I use

void VectorTools::point_value(
  const DoFHandler<dim, spacedim> &dof, const VectorType &fe_function,
  const Point<spacedim> &point, Vector<typename VectorType::value_type> &value)

Will the result be better if I use a version with a mapping argument? (See 
the sketch at the end of this message.)

2) I am interested to know how exactly VectorTools::point_value()/ 
point_gradient() interpolate vector field values from the quadrature points 
to an arbitrary location. I suspect that only one cell is used?

3) For some of the applications I am interested in, the receivers are 
located at the interface between the air (or sea) and the ground. In other 
words, the receivers are on the boundary of a cell. Does this mean I should 
shift the receiver location slightly downwards, to ensure that it lies 
inside a cell and to get a more predictable result?

Finally, do you think using the PointValueHistory class is a better way?
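
Regarding question 1, the call with an explicit mapping that I have in mind 
would look roughly like this (a sketch; I assume the finite element has 2*dim 
components for the real and imaginary parts of E, and MappingQGeneric(1) 
corresponds to the default Q1 mapping, so for straight-sided cells the result 
should be identical):

const MappingQGeneric<dim> mapping(1); // a higher degree would matter only for curved cells
Vector<double> value(2 * dim);
VectorTools::point_value(mapping, dof_handler, solution,
                         receiver_locations[i], value);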

Thank you very much in advance,
Anna



[deal.II] Computing the curl of a solution vector field obtained from Nedelec elements at receiver locations

2017-08-29 Thread Anna Avdeeva
Hi All,

I have successfully computed the secondary electric field for the 3D case as 
described in the paper by Grayver, 2014. This field is declared as a 
BlockVector<double> solution, where the blocks correspond to the real and 
imaginary parts.
Now I would like to compute the total electric field E (secondary plus 
primary) and the magnetic field (coef * curl E).
1) I can use a Postprocessor class to do this:
template <int dim>
class Postprocessor : public DataPostprocessor<dim>
{
  const SmartPointer<...> eprimary;   // template argument lost in the archive formatting
public:
  Postprocessor(const SmartPointer<...> eprimary) :
    eprimary(eprimary)
  {}
  virtual
  void
  evaluate_vector_field( ...
  ...
};
But then I need to extract the values of the total fields at a set of 
receiver locations.
If I had the total fields in a form similar to solution, I could use
for (unsigned int i = 0; i < receiver_locations.size(); ++i)
  {
    VectorTools::point_value(dof_handler, solution,
                             receiver_locations[i],
                             solution_at_receiver); ...
  }
But since the total fields are computed with the help of the postprocessor, 
I do not know how to extract the values at the receivers.

2) The other option is to follow the advice by Ross Kynch and loop over 
cells, quadrature points and dofs to get the local contributions to the 
total electric and magnetic fields.
In this case I am not sure how I then get a global vector.

Please let me know what the best way is.
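
For option 1, what I have tried to sketch for myself for the 3D case is to 
evaluate the gradients at a receiver and form the curl componentwise (only a 
sketch; coef stands for the factor in front of curl E, and I assume the first 
dim solution components are the real part of E):

std::vector<Tensor<1, dim>> gradients(2 * dim);
VectorTools::point_gradient(dof_handler, solution,
                            receiver_locations[i], gradients);

// curl of the real part of E:
// (dEz/dy - dEy/dz, dEx/dz - dEz/dx, dEy/dx - dEx/dy)
Tensor<1, dim> curl_E_re;
curl_E_re[0] = gradients[2][1] - gradients[1][2];
curl_E_re[1] = gradients[0][2] - gradients[2][0];
curl_E_re[2] = gradients[1][0] - gradients[0][1];

const Tensor<1, dim> B_re = coef * curl_E_re; // magnetic field up to the factor coef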

I would very much appreciate your reply,
Anna



[deal.II] Re: problem adding two blocks of BlockSparseMatrix

2017-07-26 Thread Anna Avdeeva
Dear Jean-Paul,

I am new to deal.II and C++ programming and have never used linear operators, 
so I will have to study the documentation. My understanding so far is that 
when using LinearOperators I am not actually constructing the matrix, but 
only describing how such a matrix would act on a vector. Am I correct? Could 
you please let me know whether I will be able to invert the matrix 
represented by LinearOperators with a direct solver, or whether it would only 
be possible to use an iterative solver.
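
From the documentation, my current (possibly wrong) reading is that the 
combined operator is never assembled, so inverting it would go through an 
iterative solver via inverse_operator, roughly along these lines (a sketch 
only; it needs <deal.II/lac/linear_operator.h>, <deal.II/lac/solver_cg.h> and 
<deal.II/lac/precondition.h>, and the tolerance values are placeholders):

// A - B as an operator, without ever assembling the difference matrix:
const auto op_diff = linear_operator(system_matrix.block(0, 0)) -
                     linear_operator(system_matrix.block(0, 1));

SolverControl            control(1000, 1e-10);
SolverCG<Vector<double>> cg(control);
PreconditionIdentity     identity;
const auto op_inv = inverse_operator(op_diff, cg, identity);
// Applying op_inv to a vector then solves (block(0,0) - block(0,1)) x = v
// iteratively; a direct factorization would still need an explicitly
// stored matrix.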

Anna



[deal.II] Re: problem adding two blocks of BlockSparseMatrix

2017-07-26 Thread Anna Avdeeva
Thank you for the prompt reply. The inverse of the matrix addition will be 
used to construct a preconditioner for the solution of the original system 
with FGMRES.

On Wednesday, July 26, 2017 at 3:15:51 PM UTC+9, Jean-Paul Pelteret wrote:
>
> Dear Anna,
>
> So it appears that you're hitting this assertion 
> <https://github.com/dealii/dealii/blob/master/include/deal.II/lac/sparse_matrix.templates.h#L364>,
>  
> which is not checking that the sparsity pattern looks the same but rather 
> that both blocks reference the same sparsity pattern object 
> <https://github.com/dealii/dealii/blob/master/include/deal.II/lac/sparse_matrix.h#L1596>.
>  
> Since you build up your system as a block system, even if your 
> discretisations for E_re and E_im are the same, the different blocks of the 
> block sparse matrix each reference a different sparsity pattern (as stored 
> in the blocks in dsp).
>
> Depending on what you're wanting to do here, the LinearOperator 
> <https://www.dealii.org/8.5.0/doxygen/deal.II/classLinearOperator.html> 
> class *might* offer a work-around for this issue. What are you ultimately 
> going to do with the result of the matrix addition?
>
> Regards,
> Jean-Paul
>
> On Wednesday, July 26, 2017 at 5:14:11 AM UTC+2, Anna Avdeeva wrote:
>>
>> Dear All,
>>
>> I need to subtract two blocks of a BlockSparseMatrix<double> 
>> system_matrix.
>> I am trying to achieve this by 
>> creating SparseMatrix<double> tmp1, then 
>>   tmp1.copy_from(system_matrix.block(0,0));
>>   tmp1.add(-1.0, system_matrix.block(0,1));
>>
>> However, I am getting the following error message:
>> "When copying one sparse matrix into another, or when adding one sparse 
>> matrix to another, both matrices need to refer to the same sparsity 
>> pattern."
>>
>> Could you please help me to overcome this problem? 
>>
>> system_matrix has BlockSparsityPattern sparsity_pattern;
>>
>> which was obtained in the following way:
>>
>> BlockDynamicSparsityPattern dsp (2, 2);
>> dsp.block (0, 0).reinit (n_E_re, n_E_re);
>> dsp.block (1, 0).reinit (n_E_im, n_E_re);
>> dsp.block (0, 1).reinit (n_E_re, n_E_im);
>> dsp.block (1, 1).reinit (n_E_im, n_E_im);
>> dsp.collect_sizes ();
>> DoFTools::make_sparsity_pattern (dof_handler, dsp, constraints,
>> false);
>> sparsity_pattern.copy_from (dsp);
>>
>>
>> I have tried various ways of declaring tmp1, such as 
>> SparseMatrix<double> tmp1(system_matrix.block(0,0).get_sparsity_pattern())
>> or creating tmp1 as BlockSparseMatrix<double> tmp1(sparsity_pattern)
>>
>> Thank you in advance for your help. 
>>
>>
>>



[deal.II] Re: problem adding two blocks of BlockSparseMatrix

2017-07-26 Thread Anna Avdeeva
I have managed to subtract the two blocks by declaring
SparseMatrixEZ<double> tmp1;
Is there a more efficient way?


 



[deal.II] problem adding two blocks of BlockSparseMatrix

2017-07-25 Thread Anna Avdeeva
Dear All,

I need to subtract two blocks of a BlockSparseMatrix<double> system_matrix.
I am trying to achieve this by creating a SparseMatrix<double> tmp1, then
  tmp1.copy_from(system_matrix.block(0,0));
  tmp1.add(-1.0, system_matrix.block(0,1));

However, I am getting the following error message:
"When copying one sparse matrix into another, or when adding one sparse 
matrix to another, both matrices need to refer to the same sparsity 
pattern."

Could you please help me to overcome this problem? 

system_matrix has BlockSparsityPattern sparsity_pattern;

which was obtained in the following way:

BlockDynamicSparsityPattern dsp (2, 2);
dsp.block (0, 0).reinit (n_E_re, n_E_re);
dsp.block (1, 0).reinit (n_E_im, n_E_re);
dsp.block (0, 1).reinit (n_E_re, n_E_im);
dsp.block (1, 1).reinit (n_E_im, n_E_im);
dsp.collect_sizes ();
DoFTools::make_sparsity_pattern (dof_handler, dsp, constraints,
false);
sparsity_pattern.copy_from (dsp);


I have tried various ways of declaring tmp1, such as 
SparseMatrix<double> tmp1(system_matrix.block(0,0).get_sparsity_pattern())
or creating tmp1 as BlockSparseMatrix<double> tmp1(sparsity_pattern).

Thank you in advance for your help. 

