[deal.II] Re: Hanging node constraints and periodic constraints together causing an issue

2018-01-23 Thread Sambit Das
Hello Dr. Arndt,

The above fix resolved the issue in the minimal example. Thanks a lot for 
providing the fix.

Best,
Sambit

On Tuesday, January 23, 2018 at 6:16:02 AM UTC-6, Daniel Arndt wrote:
>
> Sambit,
>
> Please try if https://github.com/dealii/dealii/pull/5779 fixes the issue 
> for you.
>
> Best,
> Daniel
>  
>
> Am Dienstag, 16. Januar 2018 22:06:55 UTC+1 schrieb Sambit Das:
>>
>> Thank you, Dr. Arndt.
>>
>> Best,
>> Sambit
>>
>> On Tuesday, January 16, 2018 at 11:16:08 AM UTC-6, Daniel Arndt wrote:
>>>
>>> Sambit,
>>>
>>> I created an issue at https://github.com/dealii/dealii/issues/5741 with 
>>> a modified version of your example.
>>>
>>> Best,
>>> Daniel
>>>
>>



Re: [deal.II] Re: Parallel DoFRenumbering does not work on mac

2018-01-23 Thread Jie Cheng
Hi Wolfgang

Here is the program. I did nothing but add a
DoFRenumbering::Cuthill_McKee(dof_handler) call at line 251. I tried to debug
with lldb but did not gain any useful information. I will check the video
again to make sure I was doing it right.
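
For reference, the change is roughly the following, i.e. a renumbering call
right after the DoFs are distributed in setup_system() (a sketch from memory;
the exact context in the attached file may differ):

  #include <deal.II/dofs/dof_renumbering.h>

  // ... inside LaplaceProblem<dim>::setup_system() of step-40 ...
  dof_handler.distribute_dofs(fe);

  // The only modification: renumber the locally owned DoFs with Cuthill-McKee.
  DoFRenumbering::Cuthill_McKee(dof_handler);

  locally_owned_dofs = dof_handler.locally_owned_dofs();
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);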

Thank you very much!
Jie



On Tue, Jan 23, 2018 at 11:28 PM Wolfgang Bangerth 
wrote:

> On 01/23/2018 01:16 PM, Jie Cheng wrote:
> > These are just warnings -- what happens if you run the executable?
> >
> >
> > If I do not modify step-40.cc, it runs fine both in serial and parallel.
> After
> > I added DoFRenumbering in it, it crashes in parallel. Does
> DoFRenumbering have
> > any dependency?
>
> Not that I know of. Remind me which function in that namespace you are using?
>
>
> > As I posted in previous messages, the debugger does not help
> > much. The same problem happens on another mac in my lab which has Mac OS
> > 10.13.2 installed.
> This is certainly strange. I wouldn't know what might cause this, and
> without
> having a backtrace of where things go wrong, it's just really hard to tell.
>
> If you send me the program again, I can try to run it on my linux boxes to
> see
> whether it's something I can reproduce. Otherwise, have you tried the
> technique for running a parallel program in a debugger I discuss in one of
> my
> video lectures?
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



step-40.cc
Description: Binary data


Re: [deal.II] Re: Parallel DoFRenumbering does not work on mac

2018-01-23 Thread Wolfgang Bangerth

On 01/23/2018 01:16 PM, Jie Cheng wrote:

These are just warnings -- what happens if you run the executable?


If I do not modify step-40.cc, it runs fine both in serial and parallel. After 
I added DoFRenumbering in it, it crashes in parallel. Does DoFRenumbering have 
any dependency?


Not that I know of. Remind me which function in that namespace you are using?


As I posted in previous messages, the debugger does not help 
much. The same problem happens on another mac in my lab which has Mac OS 
10.13.2 installed.
This is certainly strange. I wouldn't know what might cause this, and without 
having a backtrace of where things go wrong, it's just really hard to tell.


If you send me the program again, I can try to run it on my linux boxes to see 
whether it's something I can reproduce. Otherwise, have you tried the 
technique for running a parallel program in a debugger I discuss in one of my 
video lectures?


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: installation error

2018-01-23 Thread Wolfgang Bangerth

On 01/23/2018 02:13 PM, Bruno Turcksin wrote:


mypath/dealii/source/lac/scalapack.cc:243:91: error: there are no
arguments to ‘MPI_Comm_create_group’ that depend on a template parameter,
so a declaration of ‘MPI_Comm_create_group’ must be available [-fpermissive]
ierr = MPI_Comm_create_group(MPI_COMM_WORLD, group_union, 5,
_communicator_union);

Do you need scalapack? If you don't, just add DEAL_II_WITH_SCALAPACK=OFF


Out of curiosity, can you also send the detailed.log file from your 
build directory?


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Run function on one node, make result accessible for all nodes

2018-01-23 Thread Wolfgang Bangerth

On 01/23/2018 01:59 PM, 'Maxi Miller' via deal.II User Group wrote:
Assuming I want to use either MPI_Bcast or a function from Boost::MPI, what do 
I have to do to initialize them, and what is already done by deal.II?


If you initialize the MPI system as we do, for example, in the main() function 
of step-40, then everything should already be set up.
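
In other words (just a sketch, not a complete program; the broadcast value and
the computation on rank 0 are placeholders):

  #include <deal.II/base/mpi.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
    // Same initialization as in step-40's main(); this calls MPI_Init for us.
    dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

    double value = 0.;
    if (dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
      value = 42.;   // stands in for whatever rank 0 computes

    // No further setup is needed before calling raw MPI functions:
    MPI_Bcast(&value, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    return 0;
  }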


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



[deal.II] Re: installation error

2018-01-23 Thread Bruno Turcksin
Juan Carlos

On Tuesday, January 23, 2018 at 3:12:20 PM UTC-5, Juan Carlos Araujo 
Cabarcas wrote:
>
> [ 50%] Building CXX object 
> source/fe/CMakeFiles/obj_fe_debug.dir/fe_poly.cc.o
> mypath/dealii/source/lac/scalapack.cc: In member function ‘void 
> dealii::ScaLAPACKMatrix<NumberType>::copy_to(dealii::ScaLAPACKMatrix<NumberType>&) const’:
> mypath/dealii/source/lac/scalapack.cc:243:91: error: there are no 
> arguments to ‘MPI_Comm_create_group’ that depend on a template parameter, 
> so a declaration of ‘MPI_Comm_create_group’ must be available [-fpermissive]
>ierr = MPI_Comm_create_group(MPI_COMM_WORLD, group_union, 5, 
> _communicator_union);
>
Do you need scalapack? If you don't, just add DEAL_II_WITH_SCALAPACK=OFF.

> Last time I had errors, it was due to the Trilinos version, but I updated 
> that one to the version currently in use.
> Any ideas for resolving the error?
>
> Additionally, how can I set from GIT the last stable (tested) version?
>
Every commit is tested, see 
https://cdash.kyomu.43-1.org/index.php?project=deal.II. It happens that we 
break something, but it's pretty rare, so you should be fine using master.

Best,

Bruno



Re: [deal.II] Re: Run function on one node, make result accessible for all nodes

2018-01-23 Thread 'Maxi Miller' via deal.II User Group
Assuming I want to use either MPI_Bcast or a function from Boost::MPI, what 
do I have to do to initialize them, and what is already done by deal.II?
Thanks!

Am Dienstag, 23. Januar 2018 19:18:27 UTC+1 schrieb Wolfgang Bangerth:
>
> On 01/23/2018 06:40 AM, 'Maxi Miller' via deal.II User Group wrote: 
> > So, a solution would be calling MPI_Bcast() after every call in the 
> if()-loop 
> > in the run()-function? Thanks! 
>
> Yes. After each if-statement, process 0 has to broadcast the information 
> it 
> has computed to all of the processors that have not computed it. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



Re: [deal.II] Re: Parallel DoFRenumbering does not work on mac

2018-01-23 Thread Jie Cheng
Hi Wolfgang
 

> These are just warnings -- what happens if you run the executable? 
>

If I do not modify step-40.cc, it runs fine both in serial and in parallel. 
After I add DoFRenumbering to it, it crashes in parallel. Does 
DoFRenumbering have any dependency? As I posted in previous messages, the 
debugger does not help much. The same problem happens on another Mac in my 
lab which has Mac OS 10.13.2 installed. I wonder whether anyone can 
reproduce this problem on Mac OS X 10.13.2?

Thanks
Jie



[deal.II] installation error

2018-01-23 Thread Juan Carlos Araujo Cabarcas
Dear all,

I am trying to install deal.II from the GIT repository with the following 
features:

petsc_ver='3.6.0';
trilinos_ver='12.4.2';

git clone https://github.com/dealii/dealii.git dealii

cmake \
-DTRILINOS_DIR=${install_dir}/trilinos-${trilinos_ver} \
-DP4EST_DIR=${install_dir}/FAST \
-DDEAL_II_WITH_METIS=ON \
-DMETIS_DIR=$METIS_DIR \
-DDEAL_II_WITH_MPI=ON \
-DDEAL_II_WITH_THREADS=OFF \
-DDEAL_II_WITH_UMFPACK=ON \
-DDEAL_II_WITH_LAPACK=ON \
-DDEAL_II_WITH_PETSC=ON \
-DPETSC_ARCH=$PETSC_ARCH \
-DPETSC_DIR=$PETSC_DIR \
-DDEAL_II_WITH_SLEPC=ON \
-DSLEPC_DIR=$SLEPC_DIR \
-DDEAL_II_WITH_P4EST=ON \
-DDEAL_II_WITH_ARPACK=ON \
-DDEAL_II_WITH_TRILINOS=ON \
-DCMAKE_INSTALL_PREFIX=${install_dir}/dealii ${install_dir}/dealii;

make -j${virtual_proc} install;

I get the following installation error:

[ 50%] Building CXX object source/fe/CMakeFiles/obj_fe_debug.dir/fe_poly.cc.o
mypath/dealii/source/lac/scalapack.cc: In member function ‘void 
dealii::ScaLAPACKMatrix<NumberType>::copy_to(dealii::ScaLAPACKMatrix<NumberType>&) const’:
mypath/dealii/source/lac/scalapack.cc:243:91: error: there are no arguments 
to ‘MPI_Comm_create_group’ that depend on a template parameter, so a 
declaration of ‘MPI_Comm_create_group’ must be available [-fpermissive]
   ierr = MPI_Comm_create_group(MPI_COMM_WORLD, group_union, 5, 
_communicator_union);
mypath/dealii/source/lac/scalapack.cc:243:91: note: (if you use 
‘-fpermissive’, G++ will accept your code, but allowing the use of an 
undeclared name is deprecated)
mypath/dealii/source/lac/scalapack.cc: In instantiation of ‘void 
dealii::ScaLAPACKMatrix<NumberType>::copy_to(dealii::ScaLAPACKMatrix<NumberType>&) 
const [with NumberType = double]’:
mypath/dealii/source/lac/scalapack.cc:760:16:   required from here
mypath/dealii/source/lac/scalapack.cc:243:91: error: 
‘MPI_Comm_create_group’ was not declared in this scope
mypath/dealii/source/lac/scalapack.cc: In instantiation of ‘void 
dealii::ScaLAPACKMatrix<NumberType>::copy_to(dealii::ScaLAPACKMatrix<NumberType>&) 
const [with NumberType = float]’:
mypath/dealii/source/lac/scalapack.cc:761:16:   required from here
mypath/dealii/source/lac/scalapack.cc:243:91: error: 
‘MPI_Comm_create_group’ was not declared in this scope
[ 50%] Building CXX object source/grid/CMakeFiles/obj_grid_debug.dir/grid_reordering.cc.o
make[2]: *** [source/lac/CMakeFiles/obj_lac_debug.dir/scalapack.cc.o] Error 1
make[1]: *** [source/lac/CMakeFiles/obj_lac_debug.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs

Last time I had errors, it was due to the Trilinos version, but I updated 
that one to the version currently in use.
Any ideas for resolving the error?

Additionally, how can I set from GIT the last stable (tested) version?

Thanks in advance,
Juan Carlos Araújo, 
Umeå Universitet



Re: [deal.II] Re: Usage of the laplace-matrix in example 23

2018-01-23 Thread Wolfgang Bangerth

On 01/23/2018 10:35 AM, Dulcimer0909 wrote:


If I go ahead and replace the code so that it does a cell-by-cell assembly, I 
am a bit lost on how I would store the old solution (U^(n-1)) for each cell 
and retrieve it during the assembly of the right-hand side.


Dulcimer -- can you elaborate? It's not clear to me how the two are related. 
In the thread you comment on, the question is about how to assemble the 
matrices, but these matrices do not actually depend on the previous solution. 
In any case, the program of course stores the old solution and so any code you 
write will have access to it.


So it's not entirely clear to me what you mean in your question :-(
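
Just in case it helps: if you do end up writing a cell-by-cell assembly of the 
right hand side, the usual pattern is to evaluate the previous solution at the 
quadrature points via FEValues -- a rough sketch along the lines of step-23 
(variable names are assumed, not taken from your code):

  std::vector<double> old_solution_values(n_q_points);

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit(cell);
      cell_rhs = 0;

      // Evaluate U^(n-1) at the quadrature points of this cell; no per-cell
      // storage is needed, the global vector old_solution_u is enough.
      fe_values.get_function_values(old_solution_u, old_solution_values);

      for (unsigned int q = 0; q < n_q_points; ++q)
        for (unsigned int i = 0; i < dofs_per_cell; ++i)
          cell_rhs(i) += fe_values.shape_value(i, q) *
                         old_solution_values[q] *
                         fe_values.JxW(q);

      cell->get_dof_indices(local_dof_indices);
      // ... copy cell_rhs into the global right hand side vector ...
    }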

Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Deal.ii installation problem:- Step 40 runtime error

2018-01-23 Thread RAJAT ARORA
Hello Professor,

Thanks for the reply.
I had been struggling with this issue for 5 days. I raised a ticket on the 
XSEDE forum on 18th Jan. The technical team believed everything was fine on 
their side and advised me to reinstall with some different modules loaded; 
they also suspected a mix of different MPI components.

I actually tried every possible combination (over 20) of modules. When 
nothing worked, I finally decided to seek help from this forum. I was 
hesitant at first, as I was fairly sure the problem was not with the library 
or its installation but with the system itself; I just needed some guidance 
to confirm that.

However, I finally received an email yesterday (after posting this) saying 
that they had made a security update on the system last week which caused 
the issue. After they corrected that change, everything works fine just 
using candi.

Thanks a lot for the help. I really appreciate it.

I am still posting the answers to the questions you asked:

1. The crash always happened at a different position when run with different 
numbers of processors on a single node.

2. Using multiple nodes gave the same error.


On Monday, January 22, 2018 at 11:09:41 PM UTC-5, Wolfgang Bangerth wrote:
>
> On 01/22/2018 08:48 AM, RAJAT ARORA wrote: 
> > 
> > Running with PETSc on 2 MPI rank(s)... 
> > Cycle 0: 
> > Number of active cells:   1024 
> > Number of degrees of freedom: 4225 
> > Solved in 10 iterations. 
> > 
> > 
> > 
> > +---------------------------------------------+------------+------------+
> > | Total wallclock time elapsed since start    |     0.222s |            |
> > |                                             |            |            |
> > | Section                         | no. calls |  wall time | % of total |
> > +---------------------------------+-----------+------------+------------+
> > | assembly                        |         1 |     0.026s |        12% |
> > | output                          |         1 |    0.0448s |        20% |
> > | setup                           |         1 |    0.0599s |        27% |
> > | solve                           |         1 |    0.0176s |       7.9% |
> > +---------------------------------+-----------+------------+------------+
> > 
> > 
> > Cycle 1: 
> > Number of active cells:   1960 
> > Number of degrees of freedom: 8421 
> > r001.pvt.bridges.psc.edu.27927Assertion failure at 
> > /nfs/site/home/phcvs2/gitrepo/ifs-all/Ofed_Delta/rpmbuild/BUILD/libpsm2-10.3.3/ptl_am/ptl.c:152:
> > nbytes == req->recv_msglen 
> > r001.pvt.bridges.psc.edu.27927step-40: Reading from remote process' 
> > memory failed. Disabling CMA support 
> > [r001:27927] *** Process received signal *** 
>
> These error messages suggest that the first cycle actually worked. So your 
> MPI 
> installation is not completely broken apparently. 
>
> Is the error message reproducible? Is it always in the same place and with 
> the 
> same message? When you run two processes, are they on separate machines or 
> on 
> the same one? 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



Re: [deal.II] Re: Parallel DoFRenumbering does not work on mac

2018-01-23 Thread Wolfgang Bangerth

On 01/22/2018 09:17 PM, Jie Cheng wrote:
I've reinstalled MPICH and did a clean build of p4est, petsc and dealii, but this 
problem still exists. At the linking stage of building dealii, I got warnings:


[526/579] Linking CXX shared library lib/libdeal_II.9.0.0-pre.dylib
ld: warning: could not create compact unwind for _dgehrd_: stack subq 
instruction is too different from dwarf stack size


These are just warnings -- what happens if you run the executable?

The function referenced here is a LAPACK function. It looks like it has been 
compiled for a different system than the one you're on. But the error message 
you show doesn't give away how this library got onto your system, so I don't 
know what to suggest.


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Run function on one node, make result accessible for all nodes

2018-01-23 Thread Wolfgang Bangerth

On 01/23/2018 06:40 AM, 'Maxi Miller' via deal.II User Group wrote:
So, a solution would be calling MPI_Bcast() after every call in the if()-loop 
in the run()-function? Thanks!


Yes. After each if-statement, process 0 has to broadcast the information it 
has computed to all of the processors that have not computed it.
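
Schematically (a sketch only; compute_something() and mpi_communicator stand in 
for whatever your program actually uses):

  double result = 0.;
  if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
    result = compute_something();   // the work only rank 0 does

  // Every process now receives rank 0's value; on rank 0 this is a no-op.
  MPI_Bcast(&result, 1, MPI_DOUBLE, 0, mpi_communicator);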


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



[deal.II] Re: Usage of the laplace-matrix in example 23

2018-01-23 Thread Dulcimer0909
Hello all,

An additional question regarding this thread:

If I go ahead and replace the code so that it does a cell-by-cell assembly, 
I am a bit lost on how I would store the old solution (U^(n-1)) for each 
cell and retrieve it during the assembly of the right-hand side.

I'd be grateful if anyone could help.

Dulcimer

On Tuesday, August 22, 2017 at 4:50:03 PM UTC+2, Daniel Arndt wrote:
>
> Maxi,
>
> citing from the documentation of step-23[1]:
> "In a very similar vein, we are also too lazy to write the code to 
> assemble mass and Laplace matrices, although it would have only taken 
> copying the relevant code from any number of previous tutorial programs. 
> Rather, we want to focus on the things that are truly new to this program 
> and therefore use the MatrixCreator::create_mass_matrix and 
> MatrixCreator::create_laplace_matrix functions."
> It comes in handy that we have an easy way to get the required matrices 
> without having to write the assembly loops ourselves.
> Multiplying the finite element vector old_solution_u by the Laplace matrix 
> is the same as assembling the right-hand side (\nabla phi, \nabla 
> old_solution_u).
>
> The point in this example is that you don't have to assemble the Laplace 
> matrix anew for each time step. How you do this is totally up to you. In 
> MatrixCreator::create_laplace_matrix we apply some more optimizations, 
> so this might be faster than writing the assembly loop yourself. In the 
> end, you should have a lot of time steps, and assembling this matrix once 
> should be negligible in comparison to the time you spend in the solver.
> Hence, my advice would be to use whatever is easier for you.
>
> Best,
> Daniel
>
> [1] https://www.dealii.org/8.5.0/doxygen/deal.II/step_23.html#Includefiles
>
>
> Am Freitag, 18. August 2017 17:26:37 UTC+2 schrieb Maxi Miller:
>>
>> I am trying to develop my own code based on example 23 (\partial_t U = 
>> i/(2k)\nabla^2 U), and am trying to understand the purpose of the 
>> Laplace matrix. Why is it used to multiply on both sides, in 
>> comparison with the weak formulation (\nabla\phi_i,\nabla\phi_j) and the 
>> assembly of the usual cell matrix? Assuming I have to split my U into a real 
>> and an imaginary part, is the approach with the Laplace matrix better, or 
>> the split into the weak formulation and the accompanying matrices on both 
>> sides?
>>
>



[deal.II] Using compute_nonzero_normal_flux_constraints and local refinement

2018-01-23 Thread markus . loewenstein1990
Hello, my name is Markus.

Last week I started my first project with deal.II. I used step-8 
(linear elasticity) as a starting point, built my own grid_geometry, and it 
worked fine. Then I added new boundary conditions:

    hanging_node_constraints.condense (system_matrix);
    hanging_node_constraints.condense (system_rhs);

    // std::map<types::global_dof_index, double> boundary_values;
    // VectorTools::interpolate_boundary_values (dof_handler,
    //                                           0,
    //                                           ZeroFunction<3>(3),
    //                                           boundary_values);
    // MatrixTools::apply_boundary_values (boundary_values,
    //                                     system_matrix,
    //                                     solution,
    //                                     system_rhs);

    normal_boundary_fixation_constraints.clear ();
    DoFTools::make_hanging_node_constraints (dof_handler,
                                             normal_boundary_fixation_constraints);
    VectorTools::compute_no_normal_flux_constraints (dof_handler,
                                                     0,
                                                     hub_cell_fixation_id,
                                                     normal_boundary_fixation_constraints);
    normal_boundary_fixation_constraints.close ();
    normal_boundary_fixation_constraints.condense (system_matrix, system_rhs);

    normal_boundary_gab_constraints.clear ();
    DoFTools::make_hanging_node_constraints (dof_handler,
                                             normal_boundary_gab_constraints);

    const BoundaryValues<3> normal_boundary_value;
    typename FunctionMap<3>::type normal_boundary_functions;
    normal_boundary_functions[1] = &normal_boundary_value;

    VectorTools::compute_nonzero_normal_flux_constraints (dof_handler,
                                                          0,
                                                          hub_cell_contact_id,
                                                          normal_boundary_functions,
                                                          normal_boundary_gab_constraints);
    normal_boundary_gab_constraints.close ();
    normal_boundary_gab_constraints.condense (system_matrix, system_rhs);


and later I distributed my solution:

normal_boundary_fixation_constraints.distribute(solution);
normal_boundary_gab_constraints.distribute(solution);
hanging_node_constraints.distribute (solution);

I have two versions of my grid: a simple one, without any refinement, where 
it works fine. But if I use the grid with a refinement on the boundary 
(nonzero normal flux), the program still runs/converges, but the calls

   normal_boundary_fixation_constraints.distribute(solution);
   normal_boundary_gab_constraints.distribute(solution);

have no effect. (So there must be something rotten.) I know that the 
documentation says:

When combining adaptively refined meshes with hanging node constraints and 
boundary conditions like from the current function within one ConstraintMatrix 
object, the hanging node constraints should always be set first, and then 
the boundary conditions since boundary conditions are not set in the second 
operation on degrees of freedom that are already constrained. This makes 
sure that the discretization remains conforming as is needed. See the 
discussion on conflicting constraints in the module on Constraints on 
degrees of freedom.

But I think I am doing this. What am I misunderstanding?
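
For completeness, this is how I read the documented ordering -- everything 
collected into a single ConstraintMatrix, hanging nodes first (a sketch, not 
taken verbatim from my code):

  ConstraintMatrix constraints;

  // Hanging node constraints first ...
  DoFTools::make_hanging_node_constraints (dof_handler, constraints);

  // ... then the (nonzero) normal-flux boundary constraints into the same
  // object, so that already-constrained DoFs are left untouched.
  VectorTools::compute_nonzero_normal_flux_constraints (dof_handler,
                                                        0,
                                                        hub_cell_contact_id,
                                                        normal_boundary_functions,
                                                        constraints);
  constraints.close ();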

Greetings
Markus



[deal.II] Re: Run function on one node, make result accessible for all nodes

2018-01-23 Thread 'Maxi Miller' via deal.II User Group
So, a solution would be calling MPI_Bcast() after every call in the 
if()-loop in the run()-function? Thanks!

Am Dienstag, 23. Januar 2018 14:31:58 UTC+1 schrieb Bruno Turcksin:
>
> Hi,
>
> On Tuesday, January 23, 2018 at 7:53:16 AM UTC-5, Maxi Miller wrote:
>
>> But now it looks as if only the first node gets the result of the 
>> calculations, but the others do not, instead defaulting to the default 
>> values of the calculation function when not initialized. Is there a way I 
>> can run the functions module.propagate_pulse() and 
>> module.prepare_interpolation() still only from the first node, but get 
>> the result to all other nodes? Or do I have to run the function on every 
>> node (i.e. remove the if()-clauses)? How should I then handle the fact that 
>> my custom function can benefit from being run on multiple nodes at once?
>>
> Since you only do the calculations on one process, there is no way for the 
> other processes to know the results. So you have to choose: run the code 
> for every rank (remove the if), or use MPI_Bcast to broadcast the result of 
> the calculation to all the other processes. If you want to run the function 
> on multiple nodes, you will need to deal with raw MPI yourself.
>
> Best,
>
> Bruno
>



[deal.II] Re: Run function on one node, make result accessible for all nodes

2018-01-23 Thread Bruno Turcksin
Hi,

On Tuesday, January 23, 2018 at 7:53:16 AM UTC-5, Maxi Miller wrote:

> But now it looks as if only the first node gets the result of the 
> calculations, but the others do not, instead defaulting to the default 
> values of the calculation function when not initialized. Is there a way I 
> can run the functions module.propagate_pulse() and 
> module.prepare_interpolation() still only from the first node, but get 
> the result to all other nodes? Or do I have to run the function on every 
> node (i.e. remove the if()-clauses)? How should I then handle the fact that 
> my custom function can benefit from being run on multiple nodes at once?
>
Since you only do the calculations on one process, there is no way for the 
other processes to know the results. So you have to choose: run the code 
for every rank (remove the if), or use MPI_Bcast to broadcast the result of 
the calculation to all the other processes. If you want to run the function 
on multiple nodes, you will need to deal with raw MPI yourself.
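
For the second option, a sketch (assuming the result is a std::vector<double> 
and that mpi_communicator is whatever communicator your program uses; adapt 
the MPI datatype to whatever module.propagate_pulse() really returns):

  std::vector<double> result;
  if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
    result = module.propagate_pulse();   // only rank 0 does the work

  // Broadcast the size first so the other ranks can allocate, then the data.
  unsigned long n = result.size();
  MPI_Bcast(&n, 1, MPI_UNSIGNED_LONG, 0, mpi_communicator);
  result.resize(n);
  MPI_Bcast(result.data(), static_cast<int>(n), MPI_DOUBLE, 0, mpi_communicator);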

Best,

Bruno



[deal.II] Re: Hanging node constraints and periodic constraints together causing an issue

2018-01-23 Thread Daniel Arndt
Sambit,

Please try if https://github.com/dealii/dealii/pull/5779 fixes the issue 
for you.

Best,
Daniel
 

Am Dienstag, 16. Januar 2018 22:06:55 UTC+1 schrieb Sambit Das:
>
> Thank you, Dr. Arndt.
>
> Best,
> Sambit
>
> On Tuesday, January 16, 2018 at 11:16:08 AM UTC-6, Daniel Arndt wrote:
>>
>> Sambit,
>>
>> I created an issue at https://github.com/dealii/dealii/issues/5741 with 
>> a modified version of your example.
>>
>> Best,
>> Daniel
>>
>



[deal.II] Compile trilinos with Intel MKL

2018-01-23 Thread Mark Ma
Hello everyone,

For this discussion, I just want to post a working setup for clusters 
that use Intel MKL instead of BLAS and LAPACK. Here is the script, 
"trilinos_setup.sh":

mkdir build
cd build

cmake \
  -DTrilinos_ENABLE_Amesos=ON \
  -DTrilinos_ENABLE_Epetra=ON \
  -DTrilinos_ENABLE_Ifpack=ON \
  -DTrilinos_ENABLE_AztecOO=ON \
  -DTrilinos_ENABLE_Sacado=ON \
  -DTrilinos_ENABLE_Teuchos=ON \
  -DTrilinos_ENABLE_MueLu=ON \
  -DTrilinos_ENABLE_ML=ON \
  -DTrilinos_VERBOSE_CONFIGURE=OFF \
  -DTPL_ENABLE_MPI=ON \
  -DTPL_ENABLE_BLAS:BOOL=ON \
  -DTPL_ENABLE_LAPACK:BOOL=ON \
  -DBLAS_LIBRARY_NAMES="mkl_intel_lp64;mkl_sequential;mkl_core" \
  -DBLAS_LIBRARY_DIRS="/opt/software/common/intel/compilers_and_libraries_2017.0.098/linux/mkl/lib/intel64" \
  -DLAPACK_LIBRARY_NAMES="" \
  -DLAPACK_LIBRARY_DIRS="/opt/software/common/intel/compilers_and_libraries_2017.0.098/linux/mkl/lib/intel64" \
  -DBUILD_SHARED_LIBS=ON \
  -DCMAKE_VERBOSE_MAKEFILE=OFF \
  -DCMAKE_BUILD_TYPE=RELEASE \
  -DCMAKE_INSTALL_PREFIX:PATH=$HOME/installed_prog/trilinos \
  ../

make install

Best,
Mark

Reference:
[1]. https://trilinos.org/pipermail/trilinos-users/2011-May/002502.html
