[deal.II] Re: component mask and Dirichlet bc application

2017-05-31 Thread Jean-Paul Pelteret
Dear Alberto,

In my opinion both of the approaches that you've outlined are plausible 
ways of implementing Dirichlet constraints, but there are a couple of key 
differences that I can quickly outline. I'll refer to the approach listed 
in your first post as method 1 and the split approach in your second post 
as method 2.

Method 1:
- You need to create a union of component masks for the field components to 
be constrained. You can do this using ComponentMask::operator|, and there 
are examples of its use in the tutorials. So presumably

> ComponentMask displacements_mask = fe.component_mask (displacements);

would change to something like

> const ComponentMask displacements_and_concentration_mask = 
> fe.component_mask (displacements) |  fe.component_mask (concentration); 

- You need to implement the definition of all constraints for a boundary 
into a single class, i.e. for the function

> IncrementalDirichletBoundaryValues<dim>::vector_value (const Point<dim> &p,
> Vector<double> &return_value) const

all of the constrained components (as selected via the component mask) of 
return_value need to be sensibly defined.
- With this monolithic approach (and unless you do something strange), you 
should never have conflicting definitions of a constraint on a boundary.

Method 2:
- You are first imposing one set of constraints, and then a second. This is 
now a sequential operation rather than a monolithic one as used in method 1.
- You need only one std::map<types::global_dof_index, double> 
boundary_values: sequentially interpolate all of the Dirichlet constraint 
definitions into the map and apply them once. This is more computationally 
efficient (you only modify the linear system once).
- Applying multiple boundary conditions to subsets of DoFs on a boundary 
introduces the possibility of overwriting constraints (e.g. by accidentally 
selecting the same component of a ComponentMask twice). So you could 
introduce a bug where the second constraint definition dominates the first. 
There is a warning on this point in the documentation of 
VectorTools::interpolate_boundary_values. This would normally only require 
consideration for DoFs shared between adjacent Dirichlet boundaries, but 
now some extra care must be taken in this situation.

If it makes any difference, my personal preference is to use method 2. With 
this approach you can define interesting boundary conditions once and then 
easily mix and match which are applied to which components of different 
boundaries. But, of course, opinions would differ based on design 
philosophies and (perhaps) mathematical rigour related to the theory on 
boundary value problems.

Best,
J-P


[deal.II] Re: dealii, installation error with mpi

2017-05-31 Thread Jean-Paul Pelteret
Dear Jaekwang,

A hint to what the problem might be is in the error message. These lines

--   HDF5_LIBRARIES: 
> /usr/lib64/libhdf5_hl.so;/usr/lib64/libhdf5.so;/usr/local/mpi/mvapich2/2.2/intel/17.0/lib/libmpi.so
> -- Insufficient hdf5 installation found: hdf5 has to be configured with 
> the same MPI configuration as deal.II.


indicate that HDF5 was built against the mvapich2 MPI library 
(specifically, the one configured with the Intel compiler). Presumably 
deal.II is automatically detecting the other MPI library

> 4) openmpi/1.4-gcc

that's in your path, which is what conflicts with the library that HDF5 
links against. You'd be able to confirm this by looking further back in the 
configuration logs. 

What you should try is to explicitly tell deal.II which MPI library to use 
(i.e. mvapich2) by passing the following flags:

> -DDEAL_II_WITH_MPI:BOOL=ON \
> -DCMAKE_C_COMPILER=<path-to-mvapich2-mpicc> \
> -DCMAKE_CXX_COMPILER=<path-to-mvapich2-mpicxx> \
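
Put together, the configure call might look something like the following. 
This is only a sketch: the mpicc/mpicxx wrapper paths are assumptions 
inferred from the libmpi.so path in your log, so adjust them to your 
cluster.

```shell
# Point deal.II at the same MPI (mvapich2 + Intel) that HDF5 was built with.
# MPI_HOME is an assumption based on /usr/local/mpi/mvapich2/2.2/intel/17.0
# appearing in the HDF5_LIBRARIES line of the configure log.
MPI_HOME=/usr/local/mpi/mvapich2/2.2/intel/17.0
cmake -DCMAKE_INSTALL_PREFIX=~/Programs/dealii \
      -DDEAL_II_WITH_MPI:BOOL=ON \
      -DCMAKE_C_COMPILER="$MPI_HOME/bin/mpicc" \
      -DCMAKE_CXX_COMPILER="$MPI_HOME/bin/mpicxx" \
      ~/Programs/dealii-8.5.0
```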

 
I hope that this helps you resolve your issue!

Regards,
Jean-Paul


-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[deal.II] Re: component mask and Dirichlet bc application

2017-05-31 Thread Alberto Salvadori
I am adding here an attempt I made. It seems to work, but since this was 
based more on intuition than on full understanding, I would appreciate your 
comments.
So, this is what I did: basically, I created two Dirichlet boundary 
conditions and two masks, and I applied the conditions in sequence, like this:


  // Dirichlet bcs
  // -------------

  PETScWrappers::MPI::Vector tmp (locally_owned_dofs, mpi_communicator);

  // Dirichlet bc for displacements
  {
    std::map<types::global_dof_index, double> boundary_values;

    ComponentMask displacements_mask = fe.component_mask (displacements);

    VectorTools::interpolate_boundary_values (dof_handler,
                                              0,
                                              IncrementalDirichletBoundaryValues<dim> ( TIDM, NRit, GPDofs ),
                                              boundary_values,
                                              displacements_mask);

    MatrixTools::apply_boundary_values (boundary_values,
                                        system_matrix,
                                        tmp,
                                        system_rhs,
                                        false);
  }

  // Dirichlet bc for concentrations
  {
    std::map<types::global_dof_index, double> boundary_values;

    ComponentMask concentration_mask = fe.component_mask (concentration);

    VectorTools::interpolate_boundary_values (dof_handler,
                                              0,
                                              IncrementalDirichletBoundaryValuesForConcentration<dim> ( TIDM, NRit, GPDofs ),
                                              boundary_values,
                                              concentration_mask);

    // note: reuse the outer tmp vector; a second local declaration here
    // would shadow it, and the first block's values would be lost
    MatrixTools::apply_boundary_values (boundary_values,
                                        system_matrix,
                                        tmp,
                                        system_rhs,
                                        false);
  }

  incremental_displacement = tmp;


Within the class IncrementalDirichletBoundaryValuesForConcentration
I have implemented something like:

template <int dim>
void
IncrementalDirichletBoundaryValuesForConcentration<dim>::
vector_value (const Point<dim> &p,
              Vector<double>   &return_value) const
{
  static const unsigned int c_component = dim + 2;

  return_value (c_component) = ... ;
}


Does it make sense?

Thanks
Alberto


[deal.II] component mask and Dirichlet bc application

2017-05-31 Thread Alberto Salvadori
Dear all,

I would appreciate your help in understanding how component masks and 
Dirichlet bc application work. 

I am implementing a SmallStrainDiffusionMechanicalProblem class, with 4 
fields: displacements, pressure, dilatation, and concentration. 

template <int dim>
class SmallStrainDiffusionMechanicalProblem
{
...

  // dofs definition and dofs block enumeration
  //  dim = displacements dofs
  //  1 = p
  //  1 = J
  //  1 = c ( interstitial concentration )
  const unsigned int GPDofs = dim + 3;

  static const unsigned int first_u_component = 0;
  static const unsigned int p_component = dim;
  static const unsigned int J_component = dim + 1;
  static const unsigned int c_component = dim + 2;

  enum
  {
    u_dof = 0,  // displacement block ( dim components )
    p_dof = 1,  // pressure block ( one component )
    J_dof = 2,  // dilatation block ( one component )
    c_dof = 3   // concentration block ( one component )
  };

...
};


In a former implementation that did not include the concentration field, in 
order to impose bcs on the solution vector incremental_displacement, I 
adapted some code from the tutorials, like this:


  const FEValuesExtractors::Vector displacements (first_u_component);
  const FEValuesExtractors::Scalar pressure (p_component);
  const FEValuesExtractors::Scalar dilatation (J_component);
  const FEValuesExtractors::Scalar concentration (c_component);

  ...

  std::map<types::global_dof_index, double> boundary_values;

  ComponentMask displacements_mask = fe.component_mask (displacements);

  VectorTools::interpolate_boundary_values (dof_handler,
                                            0,
                                            IncrementalDirichletBoundaryValues<dim> ( TIDM, NRit, GPDofs ),
                                            boundary_values,
                                            displacements_mask);

  PETScWrappers::MPI::Vector tmp (locally_owned_dofs, mpi_communicator);

  MatrixTools::apply_boundary_values (boundary_values,
                                      system_matrix,
                                      tmp,
                                      system_rhs,
                                      false);

  incremental_displacement = tmp;

I would now like to impose Dirichlet bcs on both displacements and 
concentrations, assuming that the Dirichlet boundary is the same for both 
fields. I assume I therefore need to change the mask. Can it be done like 
this?

ComponentMask displacements_mask = fe.component_mask (displacements, 
concentrations);

or should two masks be defined, or something else?

Many thanks!
Alberto


-- 

Privacy notice: http://www.unibs.it/node/8155



[deal.II] dealii, installation error with mpi

2017-05-31 Thread Jaekwang Kim
Hi all, 

I was trying to install deal.II with MPI on a cluster, but I ran into an error. 

-- Include /home/jk12/Programs/dealii-8.5.0/cmake/configure/configure_hdf5.cmake
-- Found HDF5_INCLUDE_DIR
-- Found HDF5_LIBRARY
-- Found HDF5_HL_LIBRARY
-- Found HDF5_PUBCONF
--   HDF5_LIBRARIES: /usr/lib64/libhdf5_hl.so;/usr/lib64/libhdf5.so;/usr/local/mpi/mvapich2/2.2/intel/17.0/lib/libmpi.so
--   HDF5_INCLUDE_DIRS: /usr/include
--   HDF5_USER_INCLUDE_DIRS: /usr/include
-- Found HDF5
-- Insufficient hdf5 installation found: hdf5 has to be configured with the same MPI configuration as deal.II.
-- DEAL_II_WITH_HDF5 has unmet external dependencies.
CMake Error at cmake/macros/macro_configure_feature.cmake:112 (MESSAGE):

  Could not find the hdf5 library!

  Insufficient hdf5 installation found!

  hdf5 has to be configured with the same MPI configuration as deal.II, but
  found:

    DEAL_II_WITH_MPI = ON
    HDF5_WITH_MPI    = FALSE

  Please ensure that a suitable hdf5 library is installed on your computer.

  If the library is not at a default location, either provide some hints for
  autodetection,

    $ HDF5_DIR="..." cmake <...>
    $ cmake -DHDF5_DIR="..." <...>

  or set the relevant variables by hand in ccmake.

Call Stack (most recent call first):
  cmake/macros/macro_configure_feature.cmake:268 (FEATURE_ERROR_MESSAGE)
  cmake/configure/configure_hdf5.cmake:48 (CONFIGURE_FEATURE)
  cmake/macros/macro_verbose_include.cmake:19 (INCLUDE)
  CMakeLists.txt:124 (VERBOSE_INCLUDE)

-- Configuring incomplete, errors occurred!

while I am using the following modules:

  1) torque/6.0.2   4) openmpi/1.4-gcc           7) git/1.7                   10) valgrind/3.10.1
  2) moab/9.0.2     5) gcc/4.9.2                 8) intel/17.0                11) petsc/3.7.5
  3) env/taub       6) cmake/3.6.2               9) mvapich2/2.2-intel-17.0   12) h5utils/1.12

I do have hdf5 in h5utils, so I had configured before as

cmake -DCMAKE_INSTALL_PREFIX=~/Programs/dealii -DHDF5_DIR="/usr/local/h5utils-1.12/"

What might be the problem? 


Thank you 



Re: [deal.II] Re: Problem with postprocessor

2017-05-31 Thread Thomas Wick



On 05/31/2017 05:00 PM, 'Seyed Ali Mohseni' via deal.II User Group wrote:

> The only thing I changed a bit is the integration from quadratic to
> linear, but I am not sure if setting quadrature_formula(2) alone is
> enough for linear shape functions. You also had face_quadrature_formula
> or this Lobatto integration.
> Would you be so kind and explain what lines I have to change in order
> to obtain linear shape functions and not quadratic like set in the
> original version?
> How many changes and the code would help a lot.
> Just to be sure I did it correct.
>
> Thank you.
>
> Best,
> Seyed Ali

In

https://github.com/tjhei/cracks/blob/master/cracks.cc

line 982, you need to change the "degree", but it should already be "1".

And the quadrature formula (what you mentioned above) should be 
sufficiently high; "2" should do the job.

Best,
Thomas


--
++++
Dr. Thomas Wick
Maitre de conferences / Assistant Professor

Centre de Mathematiques Appliquees (CMAP)
Ecole Polytechnique
91128 Palaiseau cedex, France

Email:  thomas.w...@cmap.polytechnique.fr
Phone:  0033 1 69 33 4579
www:http://www.cmap.polytechnique.fr/~wick/
++++
--



Re: [deal.II] Re: Problem with postprocessor

2017-05-31 Thread Thomas Wick



On 05/31/2017 05:00 PM, 'Seyed Ali Mohseni' via deal.II User Group wrote:

> I did another trick: increasing G_c has the same effect to achieve
> purely elastic behavior. So, I chose a high enough G_c value and it
> works!
> I obtain the same results and the duh values grow correctly.

Excellent. This is indeed also possible. Very good idea.

> Still I cannot understand fully why, when cracking is initiated, there
> is no increase of the strains except the strain in the loaded direction.
> Maybe you are right, I try to compare a bigger example, but I already
> compared two phase-field codes.
> I have my own version with staggered scheme and it was not identical
> to your results after crack initiation.
> As mentioned before, in the elastic regime everything is identical and
> fine.

In my experience there can be large differences between staggered, 
quasi-monolithic (Heister/Wick) and fully monolithic schemes (I have some 
recent studies on this).

Moreover, sometimes it makes a difference how you impose the 
irreversibility condition: via strain history or penalization.

> The only thing I changed a bit is the integration from quadratic to
> linear, but I am not sure if setting quadrature_formula(2) alone is
> enough for linear shape functions. You also had
> face_quadrature_formula or this Lobatto integration.
> Would you be so kind and explain what lines I have to change in order
> to obtain linear shape functions and not quadratic like set in the
> original version?
> How many changes and the code would help a lot.
> Just to be sure I did it correct.

You need to change the degree argument in FE_Q( ). But in our code it 
should already be linear, if I remember correctly.

Best regards,

Thomas





[deal.II] Re: Problem with postprocessor

2017-05-31 Thread 'Seyed Ali Mohseni' via deal.II User Group
I did another trick: increasing G_c has the same effect of achieving purely 
elastic behavior. So I chose a high enough G_c value and it works!
I obtain the same results and the duh values grow correctly. 

Still, I cannot fully understand why, once cracking is initiated, there is 
no increase of the strains except the strain in the loaded direction.
Maybe you are right; I will try to compare a bigger example, but I have 
already compared two phase-field codes.
I have my own version with a staggered scheme, and it was not identical to 
your results after crack initiation.
As mentioned before, in the elastic regime everything is identical and fine.

The only thing I changed a bit is the integration from quadratic to linear, 
but I am not sure if setting quadrature_formula(2) alone is enough for 
linear shape functions. You also had face_quadrature_formula or this 
Lobatto integration.
Would you be so kind as to explain what lines I have to change in order to 
obtain linear shape functions instead of the quadratic ones set in the 
original version?
Knowing how many changes are needed, along with the code, would help a lot, 
just to be sure I did it correctly.

Thank you.

Best,
Seyed Ali 



Re: [deal.II] Re: Problem with postprocessor

2017-05-31 Thread Thomas Wick


> I hope you could explain to me what could cause such a behavior in your
> implementation, that deal.II gives different values for duh although
> the same postprocessor implementation is being used.
>
> Is it because of the coupled formulation?

I really would think so. What you could do is to disable the phase-field 
variable, such that you have in both codes really only elasticity and a 
fair comparison.

How do you do this? You could erase in our code (Heister/Wick) all the 
phase-field (pf) appearances in the solid mechanics equations. Then the 
phase-field values do not enter any more; it is these values that lead to 
the differing results once the fracture develops.

Best,
Thomas

> Thank you for your help so far.
>
> Kind regards,
> Seyed Ali





[deal.II] Re: Problem with postprocessor

2017-05-31 Thread 'Seyed Ali Mohseni' via deal.II User Group
Hi again,

Nice words, indeed. Thank you. I will follow your advice :)

But to get back to the topic: unfortunately, you misunderstood my problem. 
The issue is with the displacement gradient values stored in "duh", which 
are computed by deal.II and given as input for the postprocessing tasks to 
be written.
I compared the output of solution gradients "duh" for both codes (my own 
solid_mechanics code and Thomas' code) using the same implementation for 
the postprocessor. 
The fascinating thing is that my code works and gives correct results in my 
benchmark, since I validated it against another FE program of our own to be 
100 % sure.
But the code written by Thomas and Timo somehow doesn't give correct duh 
values.
In the first steps both codes agree, but the duh values won't increase 
according to the loading; only the "yy" component in the loading direction 
increases.
This is strange.
I am not completely familiar with Thomas' code; that's why I posted here. 
Otherwise, debugging my own code is what I do daily.

I hope you could explain to me what could cause such a behavior in your 
implementation, that deal.II gives different values for duh although the 
same postprocessor implementation is being used.
Is it because of the coupled formulation?

Thank you for your help so far.

Kind regards,
Seyed Ali 



Re: [deal.II] Re: get errors when installing dealii on openSUSE Leap 42.1 by using candi

2017-05-31 Thread Bruno Turcksin
Tuanny,

we need to see the log file to know why it is failing. There is also a
good chance that zlib is already installed on the cluster, so you could
comment out the line once:zlib (line 21) in
candi/deal.II-toolchain/platforms/supported/linux_cluster.platform and
try again.
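
For instance, the commenting-out could be done like this (a sketch: the 
file path and the once:zlib spelling are taken from the message above, and 
the line number may differ in your checkout):

```shell
# Comment out the zlib entry so candi skips building it and uses the
# system-provided zlib instead.
platform="candi/deal.II-toolchain/platforms/supported/linux_cluster.platform"
sed -i.bak 's/^once:zlib/#once:zlib/' "$platform"
grep '^#once:zlib' "$platform"   # verify the entry is now commented out
```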

Best,

Bruno

2017-05-31 6:55 GMT-04:00 Tuanny Cajuhi:
> Dear all,
>
> I am trying to install dealii on a cluster using
> ./candi.sh
> --platform=./deal.II-toolchain/platforms/supported/linux_cluster.platform
>
> and I get the same error as reported here:
> Building zlib 1.2.8
> Compiler error reporting is too harsh for ./configure (perhaps remove
> -Werror).
> ** ./configure aborting.
> Failure with exit status: 1
> Exit message: There was a problem building zlib 1.2.8.
>
> Could you please help me?
>
> Thank you!
>
> Best regards,
> Tuanny
>
>
>
> On Thursday, June 9, 2016 at 10:49:30 AM UTC+2, Roc Wang wrote:
>>
>> Hello,
>>
>> I am trying to install dealii on openSUSE Leap 42.1 by using candi
>> downloaded from https://github.com/dealii/candi?files=1. However, I got an
>> error when it compiled zlib-1.2.8. The error info is as follows:
>>
>> Fetching zlib 1.2.8
>> Verifying zlib-1.2.8.tar.gz
>> zlib-1.2.8.tar.gz: OK
>> zlib-1.2.8.tar.gz already downloaded and verified.
>> Unpacking zlib-1.2.8.tar.gz
>> Building zlib 1.2.8
>> Compiler error reporting is too harsh for ./configure (perhaps remove
>> -Werror).
>> ** ./configure aborting.
>> Failure with exit status: 1
>> Exit message: There was a problem building zlib 1.2.8.
>>
>> I tried to install zlib in openSUSE from the repository, but libz1
>> providing zlib was already installed. The info is:
>>
>> Loading repository data...
>> Reading installed packages...
>> 'zlib' not found in package names. Trying capabilities.
>> 'libz1' providing 'zlib' is already installed.
>> Resolving package dependencies…
>>
>>
>>
>> Nothing to do.
>>
>> Should I remove the -Werror flag in ./configure, and how? Could someone
>> please help me with this? Thanks!!
>



Re: [deal.II] Re: Problem with postprocessor

2017-05-31 Thread Thomas Wick

Hi Toby,

I appreciate your additional comments.

Thanks and best,

Thomas

On 05/31/2017 02:16 PM, Tobi Young wrote:
I'm going to jump in with one of my random comments uninvited. 
Hopefully you don't mind. :-)



I already run the example with uniform mesh, hence global
refinement with 0 refinement cycles.
The problem is, a specimen with 100 or 1000 elements is difficult
to check due to the terminal output being flooded.


I think what Thomas is kindly trying to point out is that a single 
cell is a useless test case for almost all practical problems. It seems 
your case is one such example.

Try 16 cells. Look at the data and then try 32 cells and compare. 
Maybe write a procedure that will look at the data for you and put 
useful results into a file for you to look at? Can you plot results 
with gnuplot (for example)?


Machines are stupid, they only can do what you tell them to do! So, 
why not reserve a day or so to calmly write the algorithms needed to 
extract the data you want in a way you can visualise?


You need to do your analysis in some way. You can not expect a machine 
to do things for you. You have to instruct her what it is you want to 
be done. Write the code...   :-)


That is scientific computing.

I could just output the result for one element, but then wheres
the difference.


Big difference in numerical analysis of this kind! ;-)

I am not being unkind - though, maybe its time to get dirty, write 
some code, and look at the numbers, in order to figure out where the 
problem lies. If you can do that, there are alot of dealii users and 
developers that will help you out, and you'll get there in the end. :-)


Best,
   Toby





--
++++
Dr. Thomas Wick
Maitre de conferences / Assistant Professor

Centre de Mathematiques Appliquees (CMAP)
Ecole Polytechnique
91128 Palaiseau cedex, France

Email:  thomas.w...@cmap.polytechnique.fr
Phone:  0033 1 69 33 4579
www:    http://www.cmap.polytechnique.fr/~wick/
++++
--



Re: [deal.II] Re: Problem with postprocessor

2017-05-31 Thread Tobi Young
I'm going to jump in with one of my random comments uninvited. Hopefully
you don't mind. :-)

> I already run the example with uniform mesh, hence global refinement with 0
> refinement cycles.
> The problem is, a specimen with 100 or 1000 elements is difficult to check
> due to the terminal output being flooded.

I think what Thomas is kindly trying to point out is that a single cell is
a useless test case for almost all practical problems.

It seems your case is one of many examples.

Try 16 cells. Look at the data and then try 32 cells and compare. Maybe
write a procedure that will look at the data for you and put useful results
into a file for you to look at? Can you plot the results with gnuplot (for
example)?

Machines are stupid; they can only do what you tell them to do! So, why not
reserve a day or so to calmly write the algorithms needed to extract the
data you want in a way you can visualise?

You need to do your analysis in some way. You cannot expect a machine to
do things for you. You have to instruct it what you want done.
Write the code...   :-)

That is scientific computing.

> I could just output the result for one element, but then where's the
> difference.

Big difference in numerical analysis of this kind! ;-)

I am not being unkind - though maybe it's time to get your hands dirty,
write some code, and look at the numbers, in order to figure out where the
problem lies. If you can do that, there are a lot of deal.II users and
developers who will help you out, and you'll get there in the end. :-)

Best,
   Toby



Re: [deal.II] Re: Problem with postprocessor

2017-05-31 Thread Thomas Wick



On 05/31/2017 01:31 PM, 'Seyed Ali Mohseni' via deal.II User Group wrote:

> Dear Thomas,
>
> I already run the example with uniform mesh, hence global refinement
> with 0 refinement cycles.
> The problem is, a specimen with 100 or 1000 elements is difficult to
> check due to the terminal output being flooded.

This I understand, but you could write everything into a file ...

> I could just output the result for one element, but then where's the
> difference.

See below.

> Don't you ever do some simple FE patch examples?


Never, because the theory of finite elements tells you that the
discretization error is so huge that any result is nearly meaningless.


What I do when I am interested in stress values or the stress tensor -
though I am not sure whether this will help you - is to compute typical
quantities of interest.

That is not only a point-wise stress, but for example a line integral:

\int_{part of the boundary} \sigma \cdot n \, ds.

Or indeed you compute the stress at a specific point in the domain,
but really using more than one element.

Many examples and benchmarks for elasticity and plasticity, and also
quantities of interest (goal functionals), are given in the book:

http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0471496502.html


For instance, two examples from this book with point values of stresses
(e.g., \sigma_yy), again on more than one element, are provided in the
deal.II-based DOpElib library; see pages 42 - 45 of

http://wwwopt.mathematik.tu-darmstadt.de/dopelib/description_full.pdf


Best, Thomas




Kind regards,
S. A. Mohseni





[deal.II] Re: Problem with postprocessor

2017-05-31 Thread 'Seyed Ali Mohseni' via deal.II User Group
Dear Thomas,

I already run the example with uniform mesh, hence global refinement with 0
refinement cycles.
The problem is, a specimen with 100 or 1000 elements is difficult to check
due to the terminal output being flooded.
I could just output the result for one element, but then where's the
difference.
Don't you ever do some simple FE patch examples?

Kind regards,
S. A. Mohseni



Re: [deal.II] Problem with postprocessor

2017-05-31 Thread Thomas Wick

Dear S. A. Mohseni,

On 05/31/2017 01:11 PM, 'Seyed Ali Mohseni' via deal.II User Group wrote:

> Dear Thomas Wick, Dear Timo Heister,
>
> I wrote an additional postprocessor in your existing phase-field code
> to allow postprocessing of strain, stress or elastic energy.
> Unfortunately, it seems like the STRAIN_XX and STRAIN_XY are not
> increasing in each step while the STRAIN_YY increases correctly.

This is difficult to say and heavily depends on the loading conditions.
The stress has different components, but not all of them increase for a
specific loading.



> The numerical example is a simple 2d cube of length and width 1.0
> consisting of 1 linear element only. The material properties are the
> same as in the Miehe tension example. The specimen is loaded exactly
> the same way as in the tension experiment.
>
> Of course, I am aware of the fact that the element size is totally
> wrong for a phase-field simulation. I am merely trying to check the
> strain in a purely elastic case before crack initiation.

I stay with my previous opinion from some weeks ago: it does not make
sense to run an FE simulation on only one element!
Where is the problem with having 100 or 1000 elements? This simulation
(see step-3 for the Laplace equation) would also run for only seconds.




> Is it due to the predictor-corrector nature or something which causes
> this problem?

This is easy to check: just disable predictor-corrector and run with
uniform mesh refinement; then you will see whether predictor-corrector
causes the problem.

But anyhow: when you work on only one element, there is no
predictor-corrector, because otherwise you would have > 1 elements.




Best regards,

Thomas W.




[deal.II] Problem with postprocessor

2017-05-31 Thread 'Seyed Ali Mohseni' via deal.II User Group
Dear Thomas Wick, Dear Timo Heister,

I wrote an additional postprocessor in your existing phase-field code to
allow postprocessing of strain, stress or elastic energy. Unfortunately, it
seems like the STRAIN_XX and STRAIN_XY are not increasing in each step
while the STRAIN_YY increases correctly.
The numerical example is a simple 2d cube of length and width 1.0
consisting of 1 linear element only. The material properties are the same
as in the Miehe tension example. The specimen is loaded exactly the same
way as in the tension experiment.

Of course, I am aware of the fact that the element size is totally wrong
for a phase-field simulation. I am merely trying to check the strain in a
purely elastic case before crack initiation.

Is it due to the predictor-corrector nature or something else which causes
this problem?

The following postprocessor implementation works correctly in my
solid_mechanics code written in deal.II:

template <int dim>
class Postprocessor : public DataPostprocessor<dim>
{
public:

  Postprocessor ();

  void compute_derived_quantities_vector (
    const std::vector<Vector<double> > &uh,
    const std::vector<std::vector<Tensor<1, dim> > > &duh,
    const std::vector<std::vector<Tensor<2, dim> > > &dduh,
    const std::vector<Point<dim> > &normals,
    const std::vector<Point<dim> > &evaluation_points,
    std::vector<Vector<double> > &computed_quantities) const;

  virtual std::vector<std::string> get_names () const;

  virtual std::vector<DataComponentInterpretation::DataComponentInterpretation>
  get_data_component_interpretation () const;

  virtual UpdateFlags get_needed_update_flags () const;

private:

  void print_tensor (const Tensor<2, dim> &tensor, const char *name) const;
};

--

template <int dim>
Postprocessor<dim>::Postprocessor ()
{
}

--

template <int dim>
void Postprocessor<dim>::compute_derived_quantities_vector (
  const std::vector<Vector<double> > &uh,
  const std::vector<std::vector<Tensor<1, dim> > > &duh,
  const std::vector<std::vector<Tensor<2, dim> > > &/*dduh*/,
  const std::vector<Point<dim> > &/*normals*/,
  const std::vector<Point<dim> > &evaluation_points,
  std::vector<Vector<double> > &computed_quantities) const
{
  // TODO: Postprocessing has not been optimized for 3D yet.

  // Number of quadrature points (interior)
  const unsigned int n_q_points = uh.size();

  // CHECK: Evaluation points
  const std::vector<Point<dim> > EP = evaluation_points;

  // std::cout << "\nEVALUATION POINTS\n" << std::endl;
  // for (unsigned int i = 0; i < n_q_points; ++i)
  // {
  //   for (unsigned int j = 0; j < dim; ++j)
  //     std::cout << "  " << EP[i][j] << " ";
  //   std::cout << std::endl;
  // }
  // std::cout << std::endl;

  // Constitutive matrix
  SymmetricTensor<4, dim> C =
    Tensors::get_elasticity_tensor<dim>(FractureMechanics::public_lambda,
                                        FractureMechanics::public_mu);

  for (unsigned int q = 0; q < n_q_points; ++q)
  {
    std::cout << "\n - in EVALUATION point " << q + 1 << " - \n" << std::endl;

    Tensor<2, dim> grad_u;
    SymmetricTensor<2, dim> eps, sigma;
    double eps_ii, eps_ij, sigma_ii, sigma_ij;

    // ===[ STRAINS ]===
    for (unsigned int i = 0; i < dim; ++i)
    {
      const unsigned int j = i + 1;

      grad_u[i] = duh[q][i];

      std::cout << "DUH 0: " << duh[q][0] << std::endl;
      std::cout << "DUH 1: " << duh[q][1] << std::endl;

      eps = symmetrize(grad_u); // = 0.5 * (grad_u + transpose(grad_u))

      eps_ii = eps[i][i];

      if (j < dim)
        eps_ij = eps[i][j];
      // eps_ij = 2.0 * eps[i][j];

      // std::cout << "STRAIN " << i << i << ": " << eps_ii << std::endl;
      // std::cout << "STRAIN " << i << j << ": " << eps_ij << std::endl;

      computed_quantities[q](i) = eps_ii;
      computed_quantities[q](j) = eps_ij;
    }

    // ===[ STRESSES ]===
    for (unsigned int i = 0; i < dim; ++i)
    {
      const unsigned int j = i + 1;

      sigma = C * eps;

      sigma_ii = sigma[i][i];

      if (j < dim)
        sigma_ij = sigma[i][j];

      computed_quantities[q](dim + i + 1) = sigma_ii;
      computed_quantities[q](dim + j + 1) = sigma_ij;
    }

    // ===[ ELASTIC ENERGY ]===
    // double psi = 0.5 * FractureMechanics::public_lambda * trace(eps) * trace(eps)
    //              + FractureMechanics::public_mu * eps * eps;
    double psi = 0.5 * (eps[0][0] * sigma[0][0] + eps[1][1] * sigma[1][1]
                        + 2.0 * eps[0][1] * sigma[0][1]);
    // double psi = 0.5 * scalar_product(sigma, eps);

    // std::cout << std::endl << "DISPLACEMENT GRADIENT" << std::endl;
    // for (unsigned int i = 0; i < dim; ++i)
    // {
    //   for (unsigned int j = 0; j < dim; ++j)
    //     std::cout << " " << std::setprecision(6) << std::fixed
    //               << grad_u[i][j] << "  ";
    //   std::cout << std::endl;
    // }
    // std::cout << std::endl;

    // print_tensor(eps, "STRAIN TENSOR");
    // print_tensor(sigma, "STRESS TENSOR");

    computed_quantities[q](dim * 2 + 2) = psi;
  }
}

--


[deal.II] Re: get errors when installing dealii on openSUSE Leap 42.1 by using candi

2017-05-31 Thread Tuanny Cajuhi
Dear all,

I am trying to install dealii on a cluster using
./candi.sh 
--platform=./deal.II-toolchain/platforms/supported/linux_cluster.platform

and I get the same error as reported here:

Building zlib 1.2.8
Compiler error reporting is too harsh for ./configure (perhaps remove -Werror).
./configure aborting.
Failure with exit status: 1
Exit message: There was a problem building zlib 1.2.8.

Could you please help me?

Thank you!
Best regards,
Tuanny 



On Thursday, June 9, 2016 at 10:49:30 AM UTC+2, Roc Wang wrote:
>
> Hello,
>
> I am trying to install dealii on openSUSE Leap 42.1 by using candi
> downloaded from https://github.com/dealii/candi?files=1. However, I got an
> error when it compiled zlib-1.2.8. The error info is as follows:
>
> Fetching zlib 1.2.8
> Verifying zlib-1.2.8.tar.gz
> zlib-1.2.8.tar.gz: OK
> zlib-1.2.8.tar.gz already downloaded and verified.
> Unpacking zlib-1.2.8.tar.gz
> Building zlib 1.2.8
> Compiler error reporting is too harsh for ./configure (perhaps remove -Werror).
> ./configure aborting.
> Failure with exit status: 1
> Exit message: There was a problem building zlib 1.2.8.
>
> I tried to install zlib in openSUSE from the repository, but 'libz1'
> providing zlib is already installed. The info is:
>
> Loading repository data...
> Reading installed packages...
> 'zlib' not found in package names. Trying capabilities.
> 'libz1' providing 'zlib' is already installed.
> Resolving package dependencies...
>
> Nothing to do.
>
> Should I remove the -Werror flag in ./configure, and if so, how? Please can
> someone help me with this? Thanks!!
