Re: [deal.II] Using DataOut with MappingCollection

2017-11-27 Thread Wolfgang Bangerth

On 11/27/2017 08:13 PM, Juan Carlos Araujo Cabarcas wrote:


/home/ju4nk4/Soft/dealii/include/deal.II/numerics/data_out.h:285:16: note: 
void dealii::DataOut<dim, DoFHandlerType>::build_patches(const 
dealii::Mapping<DoFHandlerType::dimension, DoFHandlerType::space_dimension>&, unsigned int, 
dealii::DataOut<dim, DoFHandlerType>::CurvedCellRegion) [with int dim = 2; DoFHandlerType = 
dealii::hp::DoFHandler<2, 2>]
virtual void build_patches (const Mapping<DoFHandlerType::dimension, DoFHandlerType::space_dimension> &mapping,

 ^
/home/ju4nk4/Soft/dealii/include/deal.II/numerics/data_out.h:285:16: note:   
no known conversion for argument 3 from ‘dealii::DataOut<2, 
dealii::DoFHandler<2, 2> >::CurvedCellRegion’ to ‘dealii::DataOut<2, 
dealii::hp::DoFHandler<2, 2> >::CurvedCellRegion’


Here's your key -- you need to say
  DataOut<dim, hp::DoFHandler<dim> >::curved_inner_cells
instead of
  DataOut<dim>::curved_inner_cells
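
Concretely, the call from your other message would then read roughly as 
follows (just a sketch; max_degree is the variable from your own code):

  data_out.build_patches (MappingQGeneric<dim>(max_degree), 8,
                          DataOut<dim, hp::DoFHandler<dim> >::curved_inner_cells);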

Error messages are your friend :-)

Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Error during p4est installation

2017-11-27 Thread Wolfgang Bangerth

On 11/27/2017 08:51 PM, feap...@gmail.com wrote:


I also get an error when I install p4est:
Error: Main header file missing
How can I solve this error in the p4est installation?


That is not enough information for us to tell you what the problem may be. Can 
you elaborate?


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



[deal.II] Re: Error during p4est installation

2017-11-27 Thread feapman
Dear all,

I also get an error when I install p4est:

Error: Main header file missing
How can I solve this error in the p4est installation?

Warm regards,
Yaakov


On Tuesday, December 9, 2014 at 9:51:34 AM UTC+1, Lev Karatun wrote:
>
> Hello!
>
> I'm trying to install p4est on a Linux cluster as described here: 
> http://www.dealii.org/developer/external-libs/p4est.html
> I'm executing the setup script, but I'm getting the following error:
>
> Build FAST version in 
> /home/r/russ/lkaratun/distrib/aspect/p4est-build/FAST
> configure: error: MPI C test failed
> Error: Error in configure
>
> I tested MPI by executing "mpirun -np 2 echo a", and got 2 a's as 
> intended. So I'm not quite sure how to get the script to see MPI.
>
> Could you please help me with it?
>
> Thanks in advance!
>
> Lev.
>



Re: [deal.II] Using DataOut with MappingCollection

2017-11-27 Thread Juan Carlos Araujo Cabarcas
Thanks for your reply. I did not know I could just pass the higher order 
mapping. If I omit DataOut<dim>::curved_inner_cells, then it works. 
However, I do want domains with curved inner cells. Following your 
suggestion, I do:

data_out.build_patches (MappingQGeneric<dim>(max_degree), 8,
                        DataOut<dim>::curved_inner_cells);

and get the errors:

/home/ju4nk4/jc/codes/adaptivity/disk_eigs/pFEM.cc: In instantiation of 
‘void Adaptive::LaplaceProblem::postprocess(unsigned int) [with int 
dim = 2]’:
/home/ju4nk4/jc/codes/adaptivity/disk_eigs/pFEM.cc:502:27:   required from 
‘void Adaptive::LaplaceProblem::run() [with int dim = 2]’
/home/ju4nk4/jc/codes/adaptivity/disk_eigs/pFEM.cc:529:28:   required from 
here
/home/ju4nk4/jc/codes/adaptivity/disk_eigs/pFEM.cc:429:7: error: no 
matching function for call to ‘dealii::DataOut<2, dealii::hp::DoFHandler<2, 
2> >::build_patches(dealii::MappingQGeneric<2, 2>, int, dealii::DataOut<2, 
dealii::DoFHandler<2, 2> >::CurvedCellRegion)’
   data_out.build_patches (MappingQGeneric<dim>(max_degree), 8, 
DataOut<dim>::curved_inner_cells);
   ^
/home/ju4nk4/jc/codes/adaptivity/disk_eigs/pFEM.cc:429:7: note: candidates 
are:
In file included from 
/home/ju4nk4/jc/codes/adaptivity/disk_eigs/pFEM.cc:58:0:
/home/ju4nk4/Soft/dealii/include/deal.II/numerics/data_out.h:252:16: note: 
void dealii::DataOut<dim, DoFHandlerType>::build_patches(unsigned int) 
[with int dim = 2; DoFHandlerType = dealii::hp::DoFHandler<2, 2>]
   virtual void build_patches (const unsigned int n_subdivisions = 0);
^
/home/ju4nk4/Soft/dealii/include/deal.II/numerics/data_out.h:252:16: 
note:   candidate expects 1 argument, 3 provided
/home/ju4nk4/Soft/dealii/include/deal.II/numerics/data_out.h:285:16: note: 
void dealii::DataOut<dim, DoFHandlerType>::build_patches(const 
dealii::Mapping<DoFHandlerType::dimension, DoFHandlerType::space_dimension>&, unsigned int, 
dealii::DataOut<dim, DoFHandlerType>::CurvedCellRegion) [with int dim = 2; DoFHandlerType = 
dealii::hp::DoFHandler<2, 2>]
   virtual void build_patches (const Mapping<DoFHandlerType::dimension, DoFHandlerType::space_dimension> &mapping,
^
/home/ju4nk4/Soft/dealii/include/deal.II/numerics/data_out.h:285:16: 
note:   no known conversion for argument 3 from ‘dealii::DataOut<2, 
dealii::DoFHandler<2, 2> >::CurvedCellRegion’ to ‘dealii::DataOut<2, 
dealii::hp::DoFHandler<2, 2> >::CurvedCellRegion’
make[2]: *** [CMakeFiles/pFEM.dir/pFEM.cc.o] Error 1
make[1]: *** [CMakeFiles/pFEM.dir/all] Error 2
make: *** [all] Error 2

Any ideas?


El lunes, 27 de noviembre de 2017, 12:45:04 (UTC-5), Wolfgang Bangerth 
escribió:
>
> On 11/17/2017 12:37 PM, Juan Carlos Araujo Cabarcas wrote: 
> > 
> > I would like to reproduce step-27 but with curved boundaries with the 
> > use of MappingCollection. 
> > Everything seems to work fine, but I noticed that data_out does not seem 
> > to be implemented for passing a MappingCollection. 
>
> Yes, that seems to be correct. There is even a @todo in the 
> documentation of the function that takes a mapping. 
>
>
> > In particular I would like to be able to use something like: 
> >    data_out.build_patches (mapping_collection, 8, 
> >                            DataOut<dim>::curved_inner_cells); 
> > 
> > Any hints on how to achieve this are greatly appreciated! 
>
> I suspect that -- unless you are on a very coarse mesh -- the difference 
> between the different mappings is not really visible in a 
> visualization. Could you just pass the higher order mapping to the 
> function, instead of the entire mapping collection? 
>
> That's not the "correct" approach, of course, but it's likely not going 
> to lead to visible differences. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>



[deal.II] Re: Relation between Solution Error Behavior and Polynomial Approximation Degree

2017-11-27 Thread Jaekwang Kim
No, I typically calculate the values myself and use MATLAB to draw the plots. 
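
For reference, deal.II can also assemble such convergence tables itself -- 
step-7 does this with the ConvergenceTable class. A minimal sketch, with 
placeholder variable names:

  ConvergenceTable convergence_table;
  // in each refinement cycle, after computing the error:
  convergence_table.add_value ("cells", triangulation.n_active_cells());
  convergence_table.add_value ("L2", L2_error);
  // after the last cycle, compute rates and print the table:
  convergence_table.set_scientific ("L2", true);
  convergence_table.evaluate_convergence_rates ("L2",
                                                ConvergenceTable::reduction_rate_log2);
  convergence_table.write_text (std::cout);

The text output can then be plotted in MATLAB or gnuplot as usual.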

Thanks,

Jaekwang 

On Monday, November 27, 2017 at 3:24:46 PM UTC-6, seven wrote:
>
> Hello Jaekwang,
>
> I am trying to generate some log-log plots, and wondering if you used the 
> functions in deal.ii to generate the figure. If not, what did you use?
>
> Thanks,
> Jiaqi
>
> On Thursday, September 29, 2016 at 11:41:48 AM UTC-4, Jaekwang Kim wrote:
>>
>> Hi all, I have a question on the error behavior of FEM. 
>>
>> I thought that the order of the error is O(h^p), where h is the mesh size 
>> and p is the polynomial degree we use in the approximation. 
>>
>> So I thought that if I plot the error against the number of cells on a 
>> log-log scale, then the graph will show a slope of -p. 
>> However, the error behaves a little differently from my expectation.
>>
>> For example, I use the step-7 tutorial program (which solves the Helmholtz 
>> equation and compares the FEM solution with the exact solution). 
>>
>> The error curve shows a steeper slope whenever I increase the polynomial 
>> degree of the approximation; however, the slope is not -p. 
>> I reached a slope of -3 when I used a fifth-degree polynomial 
>> approximation. 
>> You can check this behavior in the attached picture. 
>>
>> Until now, I have considered: 
>>
>> 1. The mapping degree (from reference cell to real cell), which is 
>> originally set to 1, but I used a higher-order mapping. 
>> 2. Instead of QGauss quadrature, I am using QGaussLobatto quadrature for 
>> all integration over cells. 
>> 3. The shape functions (again, I tried to use QGaussLobatto node points 
>> for these). 
>>
>> Is there anything else I need to fix, or was my initial prediction that 
>> the slope would be -p (i.e., that the error behaves like O(h^p)) wrong?
>>
>> Thank you all, as always!
>>
>> Jaekwang Kim  
>>
>



Re: [deal.II] step-22 partial boundary conditions

2017-11-27 Thread Wolfgang Bangerth

On 11/27/2017 01:16 PM, Jane Lee wrote:
I'm trying to apply some partial boundary conditions to the step-22 
Stokes problem. I can't seem to find much further help on this, and when 
I try to implement it, it solves but the solution is clearly 
unstable/blows up.


I am trying the basics before I impose inhomogeneous quantities, using 
no normal flux on the boundary, which constrains one component, and then 
allowing no tangential stresses either, which should constrain the other 
two. Can anyone spot where I'm going wrong?


I don't think you can do it that way -- this would constrain the normal 
component in terms of the tangential components, and then somehow try to 
find a coordinate system in which to constrain the tangential 
components, but I can completely see how this leads to circular 
dependencies and all sorts of other weirdness. If you want a zero 
boundary condition, then just impose zero for all components.
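
In step-22 terms that would look roughly like the following sketch (a 
sketch only, assuming the usual extractor 
FEValuesExtractors::Vector velocities(0) and boundary id 0):

  VectorTools::interpolate_boundary_values (dof_handler,
                                            0,
                                            ZeroFunction<dim>(dim+1),
                                            constraints,
                                            fe.component_mask(velocities));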


I'll add that *theoretically* things should work this way -- you are 
constraining all components. But *algorithmically*, I don't think that's 
a useful approach.


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



[deal.II] Re: Conjugate Gradient for Schur complement, serial vs parallel discrepancy in solution and effect of tolerance.

2017-11-27 Thread Dimitris Ntogkas
Hi Bruno,

Thanks for your quick response! You are right about the l2 vs max norm. 
However, the error is 1e-4 in the l2 norm too. Just a clarification to make 
sure I understand your response. I was indeed thinking of the condition 
number, that's why I checked it, but in my case the 1e-11 difference should 
lose up to 5 more digits, which would still be better than 1e-4. However, probably your 
point is that since I was using cg with tolerance of 1e-8, this is already 
a loss of accuracy that I did not take into account in the above 
calculation. Is this correct?

Thanks again,
Dimitris



On Monday, November 27, 2017 at 4:24:16 PM UTC-5, Bruno Turcksin wrote:
>
> Dimitris,
>
> The same implementation of CG will give you (slightly) different results 
> in serial and parallel because the round-off errors will be different. These 
> round-off errors will be amplified if you have a large condition number 
> (see https://en.wikipedia.org/wiki/Condition_number). So if you 
> precondition your system and the condition number decreases you can expect 
> better results. This explains why there is a difference between the serial 
> and the parallel run. Now about the maximum value of the change. I think 
> what you are doing is wrong. You are looking at the maximum difference, 
> i.e., at the L infinity norm but the tolerance is computed in the L2 norm. 
> A tolerance of 1e-8 in the L2 norm does not mean that you will also get a 
> tolerance of 1e-8 in the L infinity norm.
>
> Best,
>
> Bruno
>
> On Monday, November 27, 2017 at 3:31:07 PM UTC-5, Dimitris Ntogkas wrote:
>>
>> Dear all,
>>
>> I have a question with regards to the behavior of the Conjugate Gradient 
>> method in serial and parallel. I am using version 8.5 of dealii and I have 
>> a parallel implementation based on Trilinos. 
>> The system matrix is in a block format and sparse, with blocks A, B, B^T 
>> and 0. The right hand side has two blocks, f_0 and f_1 = 0. I am using a 
>> Schur complement similar to step 20 but in parallel to solve the system and 
>> I am facing an issue with the first step of the solve routine, where I use 
>> the conjugate gradient to solve for y_1. At this point I am not using any 
>> preconditioning. 
>>
>> I have exported and converted the matrices and vectors in appropriate 
>> format, so that I am able to work with them in Matlab too. When I compare 
>> the system matrix created serially and the one created in parallel (say 
>> mpirun -n2), their maximum difference in absolute value is of order 1e-11. 
>> The right hand sides created serially and in parallel are identical. 
>> However, the solution of the system with tolerance for CG 1e-8, has a 
>> maximum difference of order 1e-4. However, for this particular calculation 
>> the condition number of the Schur complement is of order 1e+5 (calculated 
>> in Matlab). Moreover, when I use Matlab to do the Schur solve with CG for 
>> those matrices and the same tolerance, the resulting solutions differ by an 
>> order of 1e-12. 
>>
>> The above discrepancy in the solution reduces by two orders if I make the 
>> tolerance for CG to be smaller, i.e. of order 1e-11, for both the serial 
>> and the parallel execution. 
>>
>> My question is why for this difference in the matrices and this condition 
>> number do I see such a difference in the solution? Could this be related to 
>> how CG is implemented in parallel and how the tolerance is guaranteed in 
>> parallel vs serially?
>>
>> Thanks,
>> Dimitris
>>
>>
>>



[deal.II] Re: Relation between Solution Error Behavior and Polynomial Approximation Degree

2017-11-27 Thread seven
Hello Jaekwang,

I am trying to generate some log-log plots, and wondering if you used the 
functions in deal.ii to generate the figure. If not, what did you use?

Thanks,
Jiaqi

On Thursday, September 29, 2016 at 11:41:48 AM UTC-4, Jaekwang Kim wrote:
>
> Hi all, I have a question on the error behavior of FEM. 
>
> I thought that the order of the error is O(h^p), where h is the mesh size 
> and p is the polynomial degree we use in the approximation. 
>
> So I thought that if I plot the error against the number of cells on a 
> log-log scale, then the graph will show a slope of -p. 
> However, the error behaves a little differently from my expectation.
>
> For example, I use the step-7 tutorial program (which solves the Helmholtz 
> equation and compares the FEM solution with the exact solution). 
>
> The error curve shows a steeper slope whenever I increase the polynomial 
> degree of the approximation; however, the slope is not -p. 
> I reached a slope of -3 when I used a fifth-degree polynomial 
> approximation. 
> You can check this behavior in the attached picture. 
>
> Until now, I have considered: 
>
> 1. The mapping degree (from reference cell to real cell), which is 
> originally set to 1, but I used a higher-order mapping. 
> 2. Instead of QGauss quadrature, I am using QGaussLobatto quadrature for 
> all integration over cells. 
> 3. The shape functions (again, I tried to use QGaussLobatto node points 
> for these). 
>
> Is there anything else I need to fix, or was my initial prediction that 
> the slope would be -p (i.e., that the error behaves like O(h^p)) wrong?
>
> Thank you all, as always!
>
> Jaekwang Kim  
>



[deal.II] Re: Conjugate Gradient for Schur complement, serial vs parallel discrepancy in solution and effect of tolerance.

2017-11-27 Thread Bruno Turcksin
Dimitris,

The same implementation of CG will give you (slightly) different results in 
serial and parallel because the round-off errors will be different. These 
round-off errors will be amplified if you have a large condition number 
(see https://en.wikipedia.org/wiki/Condition_number). So if you 
precondition your system and the condition number decreases you can expect 
better results. This explains why there is a difference between the serial 
and the parallel run. Now about the maximum value of the change. I think 
what you are doing is wrong. You are looking at the maximum difference, 
i.e., at the L infinity norm but the tolerance is computed in the L2 norm. 
A tolerance of 1e-8 in the L2 norm does not mean that you will also get a 
tolerance of 1e-8 in the L infinity norm.
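
A rough rule of thumb (a standard perturbation estimate, nothing specific to 
deal.II) is that the relative difference you can expect between two computed 
solutions is about the condition number times the solver tolerance:

  ||x_1 - x_2|| / ||x||  <~  kappa(S) * tol  ~  1e+5 * 1e-8  =  1e-3

so a difference of order 1e-4 between the serial and the parallel solution is 
entirely plausible for your numbers.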

Best,

Bruno

On Monday, November 27, 2017 at 3:31:07 PM UTC-5, Dimitris Ntogkas wrote:
>
> Dear all,
>
> I have a question with regards to the behavior of the Conjugate Gradient 
> method in serial and parallel. I am using version 8.5 of dealii and I have 
> a parallel implementation based on Trilinos. 
> The system matrix is in a block format and sparse, with blocks A, B, B^T 
> and 0. The right hand side has two blocks, f_0 and f_1 = 0. I am using a 
> Schur complement similar to step 20 but in parallel to solve the system and 
> I am facing an issue with the first step of the solve routine, where I use 
> the conjugate gradient to solve for y_1. At this point I am not using any 
> preconditioning. 
>
> I have exported and converted the matrices and vectors in appropriate 
> format, so that I am able to work with them in Matlab too. When I compare 
> the system matrix created serially and the one created in parallel (say 
> mpirun -n2), their maximum difference in absolute value is of order 1e-11. 
> The right hand sides created serially and in parallel are identical. 
> However, the solution of the system with tolerance for CG 1e-8, has a 
> maximum difference of order 1e-4. However, for this particular calculation 
> the condition number of the Schur complement is of order 1e+5 (calculated 
> in Matlab). Moreover, when I use Matlab to do the Schur solve with CG for 
> those matrices and the same tolerance, the resulting solutions differ by an 
> order of 1e-12. 
>
> The above discrepancy in the solution reduces by two orders if I make the 
> tolerance for CG to be smaller, i.e. of order 1e-11, for both the serial 
> and the parallel execution. 
>
> My question is why for this difference in the matrices and this condition 
> number do I see such a difference in the solution? Could this be related to 
> how CG is implemented in parallel and how the tolerance is guaranteed in 
> parallel vs serially?
>
> Thanks,
> Dimitris
>
>
>



[deal.II] Conjugate Gradient for Schur complement, serial vs parallel discrepancy in solution and effect of tolerance.

2017-11-27 Thread Dimitris Ntogkas
Dear all,

I have a question with regards to the behavior of the Conjugate Gradient 
method in serial and parallel. I am using version 8.5 of dealii and I have 
a parallel implementation based on Trilinos. 
The system matrix is in a block format and sparse, with blocks A, B, B^T 
and 0. The right hand side has two blocks, f_0 and f_1 = 0. I am using a 
Schur complement similar to step 20 but in parallel to solve the system and 
I am facing an issue with the first step of the solve routine, where I use 
the conjugate gradient to solve for y_1. At this point I am not using any 
preconditioning. 

I have exported and converted the matrices and vectors in appropriate 
format, so that I am able to work with them in Matlab too. When I compare 
the system matrix created serially and the one created in parallel (say 
mpirun -n2), their maximum difference in absolute value is of order 1e-11. 
The right hand sides created serially and in parallel are identical. 
However, the solution of the system with tolerance for CG 1e-8, has a 
maximum difference of order 1e-4. However, for this particular calculation 
the condition number of the Schur complement is of order 1e+5 (calculated 
in Matlab). Moreover, when I use Matlab to do the Schur solve with CG for 
those matrices and the same tolerance, the resulting solutions differ by an 
order of 1e-12. 

The above discrepancy in the solution reduces by two orders if I make the 
tolerance for CG to be smaller, i.e. of order 1e-11, for both the serial 
and the parallel execution. 

My question is why for this difference in the matrices and this condition 
number do I see such a difference in the solution? Could this be related to 
how CG is implemented in parallel and how the tolerance is guaranteed in 
parallel vs serially?

Thanks,
Dimitris




[deal.II] step-22 partial boundary conditions

2017-11-27 Thread Jane Lee
I'm trying to apply some partial boundary conditions to the step-22 Stokes 
problem. I can't seem to find much further help on this, and when I try to 
implement it, it solves but the solution is clearly unstable/blows up. 

I am trying the basics before I impose inhomogeneous quantities, using 
no normal flux on the boundary, which constrains one component, and then 
allowing no tangential stresses either, which should constrain the other two. 
Can anyone spot where I'm going wrong? I'm unsure whether I'm just using a 
very silly test case (the same as step-22) for the conditions, or whether 
I'm imposing things incorrectly. 

I am doing:

std::set<types::boundary_id> all_boundaries;
all_boundaries.insert (0);
VectorTools::compute_no_normal_flux_constraints (dof_handler, 0,
                                                 all_boundaries,
                                                 constraints);

then

typename FunctionMap<dim>::type tang_map;
ZeroFunction<dim> tang_stress (2);
tang_map[0] = &tang_stress;
VectorTools::compute_nonzero_tangential_flux_constraints (dof_handler, 0,
                                                          all_boundaries,
                                                          tang_map,
                                                          constraints);

I believe these are all the conditions you need. 

thanks a lot



Re: [deal.II] Using DataOut with MappingCollection

2017-11-27 Thread Wolfgang Bangerth

On 11/17/2017 12:37 PM, Juan Carlos Araujo Cabarcas wrote:


I would like to reproduce step-27 but with curved boundaries with the 
use of MappingCollection.
Everything seems to work fine, but I noticed that data_out does not seem 
to be implemented for passing a MappingCollection.


Yes, that seems to be correct. There is even a @todo in the 
documentation of the function that takes a mapping.




In particular I would like to be able to use something like:
   data_out.build_patches (mapping_collection, 8, 
                           DataOut<dim>::curved_inner_cells);


Any hints on how to achieve this are greatly appreciated!


I suspect that -- unless you are on a very coarse mesh -- the difference 
between the different mappings is not really visible in a 
visualization. Could you just pass the higher order mapping to the 
function, instead of the entire mapping collection?


That's not the "correct" approach, of course, but it's likely not going 
to lead to visible differences.


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Problem while installing deal.II (version 8.0) on a cluster running CentOS 6.7.

2017-11-27 Thread Wolfgang Bangerth

On 11/27/2017 10:03 AM, Sunder Dasika wrote:


   file cannot create directory: /usr/local/common/scripts.  Maybe need
   administrative privileges.
Call Stack (most recent call first):
   cmake_install.cmake:37 (INCLUDE)

Most probably I need administrative privileges which I don't have 
currently. I will contact the system administrator regarding this.


Most of us install deal.II into a location inside our home directories, 
rather than in /usr/local. See the readme.html file for this, using the 
CMAKE_INSTALL_PREFIX variable that you pass to cmake.
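
For example, something along these lines (both paths are just placeholders):

  cmake -DCMAKE_INSTALL_PREFIX=$HOME/deal.II-install -DDEAL_II_WITH_NETCDF=OFF ../deal.II
  make install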


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Problem while installing deal.II (version 8.0) on a cluster running CentOS 6.7.

2017-11-27 Thread Sunder Dasika
Thanks a lot for the help. 'cmake -D DEAL_II_WITH_NETCDF=OFF' worked. The
error was successfully eliminated. At the end of 'make' I got the following
message:

Scanning dependencies of target deal_II
[100%] Building CXX object source/CMakeFiles/deal_II.dir/base/dummy.cc.o
Linking CXX shared library libdeal_II.so
[100%] Built target deal_II


However at the end of 'make install' I get this error

-- Install configuration: "DebugRelease"
CMake Error at cmake/scripts/cmake_install.cmake:42 (FILE):
  file cannot create directory: /usr/local/common/scripts.  Maybe need
  administrative privileges.
Call Stack (most recent call first):
  cmake_install.cmake:37 (INCLUDE)

Most probably I need administrative privileges which I don't have
currently. I will contact the system administrator regarding this.



On 27 November 2017 at 19:53, Timo Heister  wrote:

> > Do you know how this version of NetCDF was installed? Do you need it? If
> > not, can you uninstall it?
>
> or do
>
> cmake -D DEAL_II_WITH_NETCDF=OFF
>
> --
> Timo Heister
> http://www.math.clemson.edu/~heister/
>
> --
> The deal.II project is located at http://www.dealii.org/
> For mailing list/forum options, see https://groups.google.com/d/
> forum/dealii?hl=en
> ---
> You received this message because you are subscribed to a topic in the
> Google Groups "deal.II User Group" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/
> topic/dealii/1E6F-q5gfWA/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> dealii+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [deal.II] error in SolverFGMRES related destructor

2017-11-27 Thread Wolfgang Bangerth


Seb,

of course I am willing to share my code. You can find it in the file 
attached. The parameter file is configured such that the 
GrowingVectorMemory error occurs.


Thanks. I've tried this with the current development version of deal.II 
and the error disappears. I'm pretty sure I know why this is so -- the 
solver does not converge, so it throws an exception and that led to a 
vector that had been allocated not being freed (because we bypass the 
memory_pool.free(...) call due to the exception). In the next step, the 
memory pool object is being destroyed and complains that a vector that 
had been allocated had not been freed, and that's why you get that error 
before anything else. (Anything else = the convergence error.)


I fixed this a while back in the development version, though. Probably here:
  https://github.com/dealii/dealii/pull/4953

So with the current version, I only get to see the error about 
non-convergence:


An error occurred in line <1052> of file 
 
in function
void dealii::SolverGMRES<VectorType>::solve(const MatrixType&, 
VectorType&, const VectorType&, const PreconditionerType&) [with 
MatrixType = dealii::SparseMatrix<double>; PreconditionerType = 
dealii::SparseILU<double>; VectorType = dealii::Vector<double>]

The violated condition was:
iteration_state == SolverControl::success
Additional information:
Iterative method reported convergence failure in step 1000. The residual 
in the last step was 0.0018243.


[...]


I think the error is due to a 
convergence failure of SolverGMRES inside the method 
BlockSchurPreconditioner::vmult. In this method the convection-diffusion 
system ((0,0)-block) is solved with GMRES and ILU-preconditioning.


I investigated the behaviour of the preconditioner further. If the 
Reynolds number is decreased to, say, 100, the iterative solver for the 
convection-diffusion system converges. I am not an expert, but does 
ILU-preconditioning not work for larger Reynolds numbers? I thought ILU 
is robust (and expensive) but it should be a first good choice.


No -- at least not with the default settings. For high-Re cases, you 
need to fill more off-diagonal entries in the ILU for it to be good. 
There is a recent discussion on exactly this issue on the mailing list.
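
In code, "filling more off-diagonal entries" looks roughly like the 
following sketch (the matrix name is a placeholder, and useful parameter 
values are problem dependent):

  SparseILU<double>                 preconditioner;
  SparseILU<double>::AdditionalData data;
  data.extra_off_diagonals = 2;   // allow more fill-in than the matrix sparsity
  preconditioner.initialize (convection_diffusion_matrix, data);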



As a second approach, I used a direct solver (SparseDirectUMFPACK) for 
the convection-diffusion matrix like in step-57. In this case, the issue 
with GMRES and ILU do not occur. Then, the FGMRES method converges for 
moderate Reynolds numbers of Re=200. However, for Re=400 convergence is 
not achieved anymore. I guess this is due to the bad approximation of 
the Schur complement by the pressure mass matrix. In step-57, using the 
pressure mass matrix somehow also works when solving the system for 
higher Reynolds numbers. Is this due to the Augmented Lagrange approach?


Probably.

Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Problem while installing deal.II (version 8.0) on a cluster running CentOS 6.7.

2017-11-27 Thread Timo Heister
> Do you know how this version of NetCDF was installed? Do you need it? If
> not, can you uninstall it?

or do

cmake -D DEAL_II_WITH_NETCDF=OFF

-- 
Timo Heister
http://www.math.clemson.edu/~heister/



Re: [deal.II] Problem while installing deal.II (version 8.0) on a cluster running CentOS 6.7.

2017-11-27 Thread Guido Kanschat



> On 27. Nov 2017, at 10:35, Sunder Dasika  wrote:
> 
> Thank you for the response. I will contact our system administrator and reply 
> as soon as possible.  

While doing so: your gcc is more than six years old. Maybe you can convince 
your administrators to update the system to something more modern and 
consistent? Given all the improvements in compiler optimization, everybody 
would profit.

Best, Guido



Re: [deal.II] Problem while installing deal.II (version 8.0) on a cluster running CentOS 6.7.

2017-11-27 Thread Sunder Dasika
Thank you for the response. I will contact our system administrator and 
reply as soon as possible.  

On Monday, 27 November 2017 05:00:57 UTC+5:30, Wolfgang Bangerth wrote:
>
> On 11/26/2017 12:35 PM, Sunder Dasika wrote: 
> > I am trying to install deal.II on a cluster running CentOS 6.7. I have 
> tried 
> > all versions starting from version 8.0. I always get the same error at 
> the end 
> > of building: 
> > 
> > /usr/bin/ld: /usr/local/lib/libnetcdf_c++.a(netcdf.o): relocation 
> R_X86_64_32S 
> > against `vtable for NcTypedComponent' can not be used when making a 
> shared 
> > object; recompile with -fPIC 
> > /usr/local/lib/libnetcdf_c++.a: could not read symbols: Bad value 
>
> Do you know how this version of NetCDF was installed? Do you need it? If 
> not, 
> can you uninstall it? 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>
