Re: [deal.II] Installation error on Haswell nodes on Cori at NERSC (failed AVX512 support)

2019-07-12 Thread Stephen DeWitt
Dear Martin,
Thank you for your response. I deleted everything in my build directory and 
then it worked. I thought that deleting the CMakeCache.txt file and the 
CMakeFiles directory would be enough (in other cases it has been), but 
evidently it wasn't here.
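For reference, a sketch of the fully clean reconfigure that worked (the source and install paths are placeholders; the flag is one of the two mentioned below):

```shell
# Wipe the build directory entirely: deleting only CMakeCache.txt and
# CMakeFiles/ can leave behind generated helper binaries (such as
# expand_instantiations) built with the old flags.
rm -rf build && mkdir build && cd build

# Placeholder paths; -xCORE-AVX2 targets Haswell (AVX2, no AVX512).
cmake \
  -DCMAKE_CXX_FLAGS="-xCORE-AVX2" \
  -DCMAKE_INSTALL_PREFIX="$HOME/dealii-9.0-install" \
  ../dealii-9.0.0

make -j8 install
```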

Thanks again,
Steve

On Wednesday, July 10, 2019 at 10:04:32 AM UTC-4, Martin Kronbichler wrote:
>
> Dear Steve,
>
> From what I can see, the failure is in the expand_instantiations script of 
> deal.II, which is compiled as part of deal.II. It uses slightly different 
> flags than the full install, but assuming that you passed either -xHASWELL 
> or -xCORE-AVX2 to CMAKE_CXX_FLAGS, it should not generate that code. Before 
> we look into the flags used for compilation, a basic question: did you start 
> with a clean build directory, without any files left over from a more 
> advanced architecture with AVX512 support?
>
> Best,
> Martin
> On 10.07.19 15:56, Stephen DeWitt wrote:
>
> Hello, 
> I'm trying to install deal.II on the Haswell nodes on Cori at NERSC using 
> the Intel compiler. I'm using deal.II version 9.0, because support for a 
> few of the function calls I make was dropped in v9.1 and I haven't had a 
> chance to modify those sections of the code. In my CMake command, I'm adding 
> either the -xHASWELL or -xCORE-AVX2 flags to get the right level of 
> vectorization for this architecture (AVX). The CMake output is what I 
> expect:
>
> -- Performing Test DEAL_II_HAVE_SSE2
> -- Performing Test DEAL_II_HAVE_SSE2 - Success
> -- Performing Test DEAL_II_HAVE_AVX
> -- Performing Test DEAL_II_HAVE_AVX - Success
> -- Performing Test DEAL_II_HAVE_AVX512
> -- Performing Test DEAL_II_HAVE_AVX512 - Failed
> -- Performing Test DEAL_II_HAVE_OPENMP_SIMD
> -- Performing Test DEAL_II_HAVE_OPENMP_SIMD - Success
>
> However, when I try to compile I get this error:
>
> [ 32%] Built target obj_umfpack_DL_STORE_release
> [ 34%] Built target obj_amd_global_release
> [ 35%] Built target obj_amd_long_release
> [ 36%] Built target obj_amd_int_release
> [ 37%] Built target obj_muparser_release
> [ 37%] Built target obj_sundials_inst
> Scanning dependencies of target obj_sundials_release
> [ 37%] Building CXX object 
> source/sundials/CMakeFiles/obj_sundials_release.dir/arkode.cc.o
> [ 37%] Building CXX object 
> source/sundials/CMakeFiles/obj_sundials_release.dir/ida.cc.o
> [ 37%] Building CXX object 
> source/sundials/CMakeFiles/obj_sundials_release.dir/copy.cc.o
> [ 38%] Building CXX object 
> source/sundials/CMakeFiles/obj_sundials_release.dir/kinsol.cc.o
> [ 38%] Built target obj_sundials_release
> [ 38%] Generating data_out_dof_data.inst
>
> Please verify that both the operating system and the processor support 
> Intel(R) AVX512F, ADX, AVX512ER, AVX512PF and AVX512CD instructions.
>
> source/numerics/CMakeFiles/obj_numerics_inst.dir/build.make:91: recipe for 
> target 'source/numerics/data_out_dof_data.inst' failed
> make[2]: *** [source/numerics/data_out_dof_data.inst] Error 1
> CMakeFiles/Makefile2:1860: recipe for target 
> 'source/numerics/CMakeFiles/obj_numerics_inst.dir/all' failed
> make[1]: *** [source/numerics/CMakeFiles/obj_numerics_inst.dir/all] Error 2
> Makefile:129: recipe for target 'all' failed
> make: *** [all] Error 2
>
> It seems to me that it is looking for AVX512 support and generating an 
> error when it doesn't find it. But why would it look for AVX512 support if 
> DEAL_II_HAVE_AVX512 failed? I haven't had this issue when installing on 
> other machines that support AVX but not AVX512.
>
> Thanks for any insight you have,
> Steve

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/4064831c-921d-4cd0-9fde-a726db3619d2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[deal.II] Installation error on Haswell nodes on Cori at NERSC (failed AVX512 support)

2019-07-10 Thread Stephen DeWitt
Hello,
I'm trying to install deal.II on the Haswell nodes on Cori at NERSC using 
the Intel compiler. I'm using deal.II version 9.0, because support for a 
few of the function calls I make was dropped in v9.1 and I haven't had a 
chance to modify those sections of the code. In my CMake command, I'm adding 
either the -xHASWELL or -xCORE-AVX2 flags to get the right level of 
vectorization for this architecture (AVX). The CMake output is what I 
expect:

-- Performing Test DEAL_II_HAVE_SSE2
-- Performing Test DEAL_II_HAVE_SSE2 - Success
-- Performing Test DEAL_II_HAVE_AVX
-- Performing Test DEAL_II_HAVE_AVX - Success
-- Performing Test DEAL_II_HAVE_AVX512
-- Performing Test DEAL_II_HAVE_AVX512 - Failed
-- Performing Test DEAL_II_HAVE_OPENMP_SIMD
-- Performing Test DEAL_II_HAVE_OPENMP_SIMD - Success

However, when I try to compile I get this error:

[ 32%] Built target obj_umfpack_DL_STORE_release
[ 34%] Built target obj_amd_global_release
[ 35%] Built target obj_amd_long_release
[ 36%] Built target obj_amd_int_release
[ 37%] Built target obj_muparser_release
[ 37%] Built target obj_sundials_inst
Scanning dependencies of target obj_sundials_release
[ 37%] Building CXX object 
source/sundials/CMakeFiles/obj_sundials_release.dir/arkode.cc.o
[ 37%] Building CXX object 
source/sundials/CMakeFiles/obj_sundials_release.dir/ida.cc.o
[ 37%] Building CXX object 
source/sundials/CMakeFiles/obj_sundials_release.dir/copy.cc.o
[ 38%] Building CXX object 
source/sundials/CMakeFiles/obj_sundials_release.dir/kinsol.cc.o
[ 38%] Built target obj_sundials_release
[ 38%] Generating data_out_dof_data.inst

Please verify that both the operating system and the processor support 
Intel(R) AVX512F, ADX, AVX512ER, AVX512PF and AVX512CD instructions.

source/numerics/CMakeFiles/obj_numerics_inst.dir/build.make:91: recipe for 
target 'source/numerics/data_out_dof_data.inst' failed
make[2]: *** [source/numerics/data_out_dof_data.inst] Error 1
CMakeFiles/Makefile2:1860: recipe for target 
'source/numerics/CMakeFiles/obj_numerics_inst.dir/all' failed
make[1]: *** [source/numerics/CMakeFiles/obj_numerics_inst.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

It seems to me that it is looking for AVX512 support and generating an 
error when it doesn't find it. But why would it look for AVX512 support if 
DEAL_II_HAVE_AVX512 failed? I haven't had this issue when installing on 
other machines that support AVX but not AVX512.
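In case it helps diagnose this, the hardware capabilities can be checked directly on the node (a generic Linux check, nothing deal.II-specific):

```shell
# List the AVX-related CPU flags: on Haswell, expect avx and avx2
# but no avx512* entries.
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep '^avx'
```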

Thanks for any insight you have,
Steve



[deal.II] Re: deal.II User and Developer Workshop: August 6-9, 2019

2019-06-20 Thread Stephen DeWitt
Dear Wolfgang and Timo,
Would it be possible to record some or all of the presentations at the 
workshop? Due to a conflict I won't be able to attend, but I would still be 
very interested in hearing the talks/discussions at the workshop.

Best,
Steve

On Wednesday, April 10, 2019 at 12:18:43 AM UTC-4, Wolfgang Bangerth wrote:
>
>
> Dear deal.II users and developers, 
>
> The 
> Seventh deal.II Users' and Developers' workshop 
> will be held August 6-9, 2019 in Fort Collins, Colorado, USA. The 
> intent of this workshop is to discuss applications and tools using 
> deal.II, as well as future directions of the library itself. 
>
> We invite proposals for talks and discussion topics by users and 
> existing developers in the following areas: 
>   - Use cases and applications of the library. 
>   - What users think would be useful directions for the library to go 
> into, 
> things that are missing, and possibly getting people together who can 
> help 
> implement those parts. 
>   - Newer parts of the library (e.g., geometry descriptions, large 
> parallel 
> computations, etc.) and how these could help in your programs. 
> A significant part of the time will be set aside for "hackathon"- or "code 
> jam"-style sessions where people can ask questions, work on their codes 
> with others, and receive help from experienced users. 
>
> Registration: 
> For workshop and registration information, see 
>https://dealii.org/workshop-2019/ 
>
> Deadline for registration: May 5, 2019. Participation is capped at 
> around 60. 
>
> Travel support: 
> A limited amount of (domestic) travel support is available courtesy of the 
> National Science Foundation, and will be given on a first come first serve 
> basis, with preference to undergraduate and graduate students. If you need 
> support, please state so in your registration. 
>
> The organizers 
>Wolfgang Bangerth (Colorado State University, bang...@colostate.edu 
> ) 
>Timo Heister (University of Utah, hei...@sci.utah.edu ) 
>
> -- 
> Wolfgang Bangerth  email: bang...@colostate.edu 
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



[deal.II] Number of DOFs after refinement depends on number of MPI processes (p:d:Triangulation)

2019-06-20 Thread Stephen DeWitt
Hello,
I'm running matrix free calculations using a 
parallel::distributed::Triangulation (loosely based on step-48). One thing 
that I've noticed is that the final number of DOFs at the end of a 
simulation varies slightly depending on the number of MPI processes. For 
example, one run on 16 cores finished with 2,988,700 DOFs and nominally the 
same run on 512 cores finished with 2,988,864. Is this to be expected? My 
working assumption has been that there are a number of heuristics that go 
into "smoothing" the refinement of the mesh and that some/all of these only 
operate on the local part of the mesh. Is that the case? Or should I be 
concerned that I have a subtle issue in my code? Nothing changes 
qualitatively between simulations with different numbers of cores, but of 
course there is a very small difference in the solutions due to the 
different meshes.

Thanks,
Steve




[deal.II] Role of CUDAWrappers::MatrixFree::Data and CUDAWrappers::SharedData in CUDAWrappers::MatrixFree:cell_loop

2019-02-06 Thread Stephen DeWitt
Hello all,
Following up from a previous post about where the GPU capabilities are, 
I've been going through the CUDA matrix-free matrix-vector product tests as 
Bruno suggested. For the most part I've been able to make sense of them and 
have found parallels between what is going on there and in step-48.

The one area that I don't really understand for 
matrix_free_matrix_vector_01.cu is how information flows into the 
HelmholtzOperator::operator() method. Inside the vmult method in 
MatrixFreeTest, a HelmholtzOperator object is created and then passed into 
the CUDAWrappers::MatrixFree cell_loop method along with two CUDA vectors. 
However, when I look at the HelmholtzOperator::operator() definition, it 
takes CUDAWrappers::MatrixFree::Data * and CUDAWrappers::SharedData * types 
as inputs. Do the CUDAWrappers::MatrixFree internals take care of building 
objects with those types in a way that I can just ignore? I thought from 
the previous discussion that the user had to package what they needed into 
CUDAWrappers::MatrixFree::Data * and CUDAWrappers::SharedData * objects, but 
maybe I misunderstood. Obviously it would be great if this happened in the 
background.

On a related topic, I can see that the signature for 
HelmholtzOperator::operator() is notably different from 
SineGordonOperation::local_apply in step-48. Where in the documentation 
should I be looking for the requirements on the functor passed into 
CUDAWrappers::MatrixFree::cell_loop? I'm reaching a dead end at the 
documentation entry for CUDAWrappers::MatrixFree::cell_loop. I can just copy 
the form from HelmholtzOperator::operator(), but I'm worried that I could 
get myself into trouble if I don't quite understand it.

Thanks in advance,
Steve DeWitt



Re: [deal.II] Re: Status of matrix-free CUDA support

2018-11-06 Thread Stephen DeWitt
Bruno and Martin,
Thank you for your responses. I'll try to get a CUDA version of step-48 up 
and running.

Thanks!
Steve

On Monday, November 5, 2018 at 3:18:25 PM UTC-5, Martin Kronbichler wrote:
>
> Steve,
>
> Just to add on what Bruno said: Explicit time integration typically 
> requires only a subset of what you need for running an iterative solver (no 
> decision making like in convergence test of conjugate gradient, no 
> preconditioner), so running that should be pretty straight-forward.
>
> Best,
> Martin
> On 05.11.18 21:06, Bruno Turcksin wrote:
>
> Steve
>
> On Monday, October 29, 2018 at 10:43:48 AM UTC-4, Stephen DeWitt wrote: 
>>
>> So far I've read through the "CUDA Support" issue 
>> <https://github.com/dealii/dealii/issues/7037>, the "Roadmap for 
>> inclusion of GPU implementation of matrix free in Deal.II" issue 
>> <https://github.com/dealii/dealii/issues/2351>, the Doxygen 
>> documentation for classes in the CUDAWrappers namespace 
>> <https://www.dealii.org/9.0.0/doxygen/deal.II/group__CUDAWrappers.html>, 
>> and the manuscript by Karl Ljungkvist 
>> <http://scs.org/wp-content/uploads/2017/06/4_Final_Manuscript-1.pdf>. 
>> Are there any other pages I should be looking at?
>>
> No that's pretty much it. 
>
> My understanding from these pages is that deal.II has partial support for 
>> using CUDA with matrix free calculations. Currently, calculations can be 
>> done with scalar variables (but not vector variables) and adaptive meshes.
>>
> That's correct with the weird exception that you cannot impose constraints 
> (and thus have hanging nodes) if dim equals 2 and fe_degree is even. There 
> is a bug in that case but we don't know why.
>
> A few (somewhat inter-related) questions:
>> 1). Do all of the tools exist to create a GPU version of step-48? Has 
>> anyone done so?
>>
> I think so, but nobody has tried, so I am not 100% sure. Note that we don't 
> have MPI support yet. 
>
>> 2). What exactly would be involved in creating a GPU version of step-48? 
>> Is it just changing the CPU Vector, MatrixFree, and FEEvaluation classes to 
>> their GPU counterparts, plus packaging some data (plus local apply 
>> functions?) into a CUDAWrappers::MatrixFree< dim, Number >::Data 
>> <https://www.dealii.org/9.0.0/doxygen/deal.II/structCUDAWrappers_1_1MatrixFree_1_1Data.html>
>>  struct?
>>
> Yes, pretty much. You can take a look at the matrix_free_matrix_vector 
> tests here <https://github.com/dealii/dealii/tree/master/tests/cuda>. 
> These tests are doing a matrix-vector multiplication using a Helmholtz 
> operator. As you can see here 
> <https://github.com/dealii/dealii/blob/master/tests/cuda/matrix_vector_mf.h> 
> the main difference between the GPU and the CPU code is that the body of 
> the loop over the quadrature points needs to be encapsulated in a functor 
> when using the GPU. The tests are based on the CPU tests matrix_vector here 
> <https://github.com/dealii/dealii/tree/master/tests/matrix_free>. So you 
> should have a good idea of the modifications required to use the GPU code.
>
>> 3). Most of the discussions seemed to revolve around linear solves. For 
>> something like step-48 with explicit updates, will the current paradigm 
>> work well? Or would that require shuttling data between the GPU and CPU 
>> every time step, causing too much overhead? (I know that in general GPUs 
>> can work very well for explicit codes.)
>>
> That should work fine. Most people are interested in using matrix-free 
> with a solver but I don't see a reason why it would be slow in step 48. The 
> communication between the CPU and the GPU should be limited in step-48.
>
> Best,
>
> Bruno



[deal.II] Re: Status of matrix-free CUDA support

2018-11-05 Thread Stephen DeWitt
I thought that I'd re-up this thread, since it didn't get any responses.

I apologize if the questions are a bit broad. Typically, I spend a bit of 
time trying to get some code working myself before posting a question here, 
but I don't currently have a machine with an NVIDIA GPU. If the CUDA support 
for deal.II is at the point where our code could take advantage of it, I'd 
be able to justify the investment to get access to a compatible machine.

Thanks!
Steve

On Monday, October 29, 2018 at 10:43:48 AM UTC-4, Stephen DeWitt wrote:
>
> Hello all,
> I'm interested to know what the status is for using CUDA with matrix free 
> calculations. We have a PRISMS-PF user who is interested in GPU 
> calculations, and I'd like to get a better idea of what would be involved 
> in adding CUDA support on our end.
>
> So far I've read through the "CUDA Support" issue 
> <https://github.com/dealii/dealii/issues/7037>, the "Roadmap for 
> inclusion of GPU implementation of matrix free in Deal.II" issue 
> <https://github.com/dealii/dealii/issues/2351>, the Doxygen documentation 
> for classes in the CUDAWrappers namespace 
> <https://www.dealii.org/9.0.0/doxygen/deal.II/group__CUDAWrappers.html>, 
> and the manuscript by Karl Ljungkvist 
> <http://scs.org/wp-content/uploads/2017/06/4_Final_Manuscript-1.pdf>. Are 
> there any other pages I should be looking at?
>
> My understanding from these pages is that deal.II has partial support for 
> using CUDA with matrix free calculations. Currently, calculations can be 
> done with scalar variables (but not vector variables) and adaptive meshes.
>
> A few (somewhat inter-related) questions:
> 1). Do all of the tools exist to create a GPU version of step-48? Has 
> anyone done so?
> 2). What exactly would be involved in creating a GPU version of step-48? 
> Is it just changing the CPU Vector, MatrixFree, and FEEvaluation classes to 
> their GPU counterparts, plus packaging some data (plus local apply 
> functions?) into a CUDAWrappers::MatrixFree< dim, Number >::Data 
> <https://www.dealii.org/9.0.0/doxygen/deal.II/structCUDAWrappers_1_1MatrixFree_1_1Data.html>
>  struct?
> 3). Most of the discussions seemed to revolve around linear solves. For 
> something like step-48 with explicit updates, will the current paradigm 
> work well? Or would that require shuttling data between the GPU and CPU 
> every time step, causing too much overhead? (I know that in general GPUs 
> can work very well for explicit codes.)
>
> Thanks!
> Steve
>
>
>



Re: [deal.II] How to restart a time dependent code from its breakpoint?

2018-11-05 Thread Stephen DeWitt
Dear Chucui,
The file in the production code for ASPECT with checkpoint/restart is here:
https://github.com/bangerth/aspect/blob/master/source/simulator/checkpoint_restart.cc

If a second example would be helpful, we implemented checkpoint/restart in 
PRISMS-PF using a similar approach:
https://github.com/prisms-center/phaseField/blob/master/src/matrixfree/checkpoint.cc

Best,
Steve

On Monday, November 5, 2018 at 4:01:06 AM UTC-5, Jean-Paul Pelteret wrote:
>
> Dear Chucui,
>
> I don’t use this functionality myself, so I can only point you to the 
> test suite where its functionality is tested. Here is one example where a 
> distributed triangulation is saved and loaded, and here is one where a 
> distributed triangulation + solution is saved and loaded with a different 
> number of MPI processes.
>
> Best,
> Jean-Paul
>
> On 05 Nov 2018, at 09:13, chucu...@gmail.com  wrote:
>
> Dear Jean-Paul,
>
> Thanks for your reply! I am interested in the serialization capabilities; 
> can you give me a small example or some code to show me how they work? The 
> ASPECT library is entirely new to me, and I cannot find the example that 
> shows the serialization capabilities, so I need some help. Also, are the 
> serialization capabilities a fundamental feature of C++, or do they only 
> work in certain projects such as the ASPECT library you mentioned above? 
> Thank you very much!
>
> Best,
> Chucui
>
> On Monday, November 5, 2018 at 3:36:59 PM UTC+8, Jean-Paul Pelteret wrote:
>>
>> Dear Chucui,
>>
>> The Triangulation, DoFHandler and Vector classes support serialization, 
>> that is to say that they have built-in save() / load() functions. The 
>> GridIn::read_vtk() function can supposedly construct a triangulation from 
>> a solution vector, but I’m not sure if this is working at the moment (see 
>> this GitHub issue). I don’t think that there exists a mechanism to read in 
>> the numerical data from a VTK file as of yet. So I would suggest that you 
>> utilise the serialization capabilities, which should allow a complete 
>> restart of a simulation from any timestep. I understand that this is what 
>> the ASPECT library does.
>>
>> Although it does not directly help you restart a simulation using VTK 
>> data, I hope that you find this information useful.
>>
>> Best,
>> Jean-Paul
>>
>>
>>



[deal.II] Status of matrix-free CUDA support

2018-10-29 Thread Stephen DeWitt
Hello all,
I'm interested to know what the status is for using CUDA with matrix free 
calculations. We have a PRISMS-PF user who is interested in GPU 
calculations, and I'd like to get a better idea of what would be involved 
in adding CUDA support on our end.

So far I've read through the "CUDA Support" issue 
<https://github.com/dealii/dealii/issues/7037>, the "Roadmap for 
inclusion of GPU implementation of matrix free in Deal.II" issue 
<https://github.com/dealii/dealii/issues/2351>, the Doxygen documentation 
for classes in the CUDAWrappers namespace 
<https://www.dealii.org/9.0.0/doxygen/deal.II/group__CUDAWrappers.html>, 
and the manuscript by Karl Ljungkvist 
<http://scs.org/wp-content/uploads/2017/06/4_Final_Manuscript-1.pdf>. Are 
there any other pages I should be looking at?

My understanding from these pages is that deal.II has partial support for 
using CUDA with matrix free calculations. Currently, calculations can be 
done with scalar variables (but not vector variables) and adaptive meshes.

A few (somewhat inter-related) questions:
1). Do all of the tools exist to create a GPU version of step-48? Has 
anyone done so?
2). What exactly would be involved in creating a GPU version of step-48? Is 
it just changing the CPU Vector, MatrixFree, and FEEvaluation classes to 
their GPU counterparts, plus packaging some data (plus local apply 
functions?) into a CUDAWrappers::MatrixFree< dim, Number >::Data 
<https://www.dealii.org/9.0.0/doxygen/deal.II/structCUDAWrappers_1_1MatrixFree_1_1Data.html> 
struct?
3). Most of the discussions seemed to revolve around linear solves. For 
something like step-48 with explicit updates, will the current paradigm 
work well? Or would that require shuttling data between the GPU and CPU 
every time step, causing too much overhead? (I know that in general GPUs 
can work very well for explicit codes.)

Thanks!
Steve




Re: [deal.II] Installation error using Ubuntu 17.10 package (missing libsmumps.so)

2018-04-13 Thread Stephen DeWitt
Hi Matthias,
Thanks! I'll pass along the advice to fetch and build the development 
version and otherwise wait for the 18.04 release.

Best,
Steve

On Thursday, April 12, 2018 at 4:36:53 PM UTC-4, Matthias Maier wrote:
>
> Status update: 
>
>  - We missed the opportunity to do a rebuild for 17.10 [1] 
>  - In the upcoming 18.04 release everything should be fine again 
>(release should be at the end of this month). 
>
> Best, 
> Matthias 
>
> [1] https://bugs.launchpad.net/ubuntu/+source/deal.ii/+bug/1729454 
>



Re: [deal.II] Installation error using Ubuntu 17.10 package (missing libsmumps.so)

2018-04-12 Thread Stephen DeWitt
Hi Matthias,
Ok, that makes sense. 

I think I have both libdeal.ii and libdeal.ii-dev, at least that's what 
apt-get tells me if I try to install libdeal.ii-dev separately, although I 
haven't figured out how to use the dev version rather than the 
auto-detected release version.

Thanks,
Steve

On Thursday, April 12, 2018 at 3:02:51 PM UTC-4, Matthias Maier wrote:
>
> Hi there, 
>
> The problem is that deal.II's CMake system records the full link 
> interface of all shared libraries - if a library changes location, the 
> package has to be rebuilt (in Ubuntu). If this is indeed the case, I will 
> ask for this to happen. 
>
> Did you install just the library libdeal.ii, or also the development 
> package libdeal.ii-dev? 
>
> Best, 
> Matthias 
>
>
>
> On Thu, Apr 12, 2018, at 13:19 CDT, Stephen DeWitt wrote: 
>
> > Hello all, 
> > A PRISMS-PF user recently posted to our forum 
> > <https://groups.google.com/forum/#!topic/prisms-pf-users/t1TTaKbm4H0> 
> that 
> > he was having an issue with the deal.ii package he got through apt-get 
> on 
> > Ubuntu 17.10. The issue seems to be related to a missing shared object 
> file 
> > for MUMPS. I booted up a Ubuntu virtual machine on VirtualBox and saw 
> the 
> > same error. 
> > 
> > What I did: 
> > 
> >1. Installed Ubuntu 17.10 on the VM 
> >2. Updated apt 
> >3. Retrieved the deal.ii package ($ sudo apt-get install libdeal.ii) 
> >4. Found the deal.ii example directory to test the installation and 
> >copied it into my home directory 
> >5. Ran CMake for step-1 
> >6. Attempted to compile step-1 (both "make release" and "make debug" 
> had 
> >the same outcome) 
> > 
> > The result: 
> > 
> > <https://lh3.googleusercontent.com/-CCdJka7yFUU/Ws-hCuOODDI/BAc/kvG048lCyeYJeocApENLE-toFvoR2hkoQCLcBGAs/s1600/terminal_window.tiff>
> > 
> > I checked the '/usr/lib/' directory and found a number of MUMPS shared 
> > objects (libesmumps-5.1.so, libesmumps.a, libesmumps.so, 
> > libptesmumps-5.1.so, libptesmumps.a, and libptesmumps.so), but not 
> > "libsmumps.so". I thought that maybe there was a file naming error and 
> > renamed a copy of "libesmumps.so" to "libsmumps.so", but then I got a 
> > similar error with a different MUMPS shared object. After a few rounds 
> of 
> > that, I gave up. 
> > 
> > Does anyone know what the issue is and how to solve it? 
> > 
> > Thanks, 
> > Steve 
>



[deal.II] Installation error using Ubuntu 17.10 package (missing libsmumps.so)

2018-04-12 Thread Stephen DeWitt
Hello all,
A PRISMS-PF user recently posted to our forum 
(https://groups.google.com/forum/#!topic/prisms-pf-users/t1TTaKbm4H0) that 
he was having an issue with the deal.ii package he got through apt-get on 
Ubuntu 17.10. The issue seems to be related to a missing shared object file 
for MUMPS. I booted up an Ubuntu virtual machine on VirtualBox and saw the 
same error.

What I did:

   1. Installed Ubuntu 17.10 on the VM
   2. Updated apt
   3. Retrieved the deal.ii package ($ sudo apt-get install libdeal.ii)
   4. Found the deal.ii example directory to test the installation and 
   copied it into my home directory
   5. Ran CMake for step-1
   6. Attempted to compile step-1 (both "make release" and "make debug" had 
   the same outcome)

The result:

[Screenshot attached: terminal_window.tiff, showing the build error.]
I checked the '/usr/lib/' directory and found a number of MUMPS shared 
objects (libesmumps-5.1.so, libesmumps.a, libesmumps.so, 
libptesmumps-5.1.so, libptesmumps.a, and libptesmumps.so), but not 
"libsmumps.so". I thought that maybe there was a file naming error and 
renamed a copy of "libesmumps.so" to "libsmumps.so", but then I got a 
similar error with a different MUMPS shared object. After a few rounds of 
that, I gave up.

Does anyone know what the issue is and how to solve it? 

Thanks,
Steve

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[deal.II] Possible bug in the Jacobi preconditioner for MatrixFreeOperators::LaplaceOperator for vector fields

2017-10-27 Thread Stephen DeWitt
Hello,
I've been trying to refactor my code to use the new MatrixFreeOperators, 
but I've run into a problem trying to use the Jacobi preconditioner with 
the MatrixFreeOperators::LaplaceOperator for a vector-valued variable. 

In short, I wrote a small test code to solve a simple 2D Poisson problem. For a 
scalar field, it works well both when using the identity matrix 
preconditioner and the Jacobi preconditioner. However, for a vector field, 
it works when using the identity matrix preconditioner but won't converge 
with the Jacobi preconditioner.

Aside from all of the standard book-keeping (creating an FESystem with 
multiple components, applying vector-valued ICs and BCs, etc.), my only 
change between the scalar case and the vector case is the template 
parameter for the MatrixFreeOperators::LaplaceOperator object.

Digging into it, I found that approximately every other element of the 
diagonal vector was zero, excluding a few that I believe are from the BCs 
(that is, in "compute_diagonal()", half of the elements in 
"inverse_diagonal_vector" are zero after the cell loop but before the 
reciprocal is taken). In the scalar case, all of the diagonal elements are 
non-zero, as one would expect. The zero elements of the diagonal seem to 
come from the "local_diagonal_cell" method, where "phi.dofs_per_cell" and 
"phi.tensor_dofs_per_cell" don't change between the scalar and vector cases 
(both are 4 here, with linear elements), whereas I'd expect the dofs per 
cell in the vector case to be the number of components times the dofs per 
cell in the scalar case.

If I multiply "phi.dofs_per_cell" and "phi.tensor_dofs_per_cell" by the 
number of components everywhere in the "local_diagonal_cell" method, the 
preconditioner works (see the "* n_components" factors in the code at the 
bottom of the post).

So, does anyone have an idea of what's going on here? Is this a bug, or is 
there an extra step needed to use MatrixFreeOperators with a vector field, 
one that my edit is merely compensating for?

Thanks,
Steve

template <int dim, int fe_degree, int n_q_points_1d, int n_components, typename VectorType>
  void
  CustomLaplaceOperator<dim, fe_degree, n_q_points_1d, n_components, VectorType>::
  local_diagonal_cell (const MatrixFree<dim, typename dealii::MatrixFreeOperators::Base<dim, VectorType>::value_type> &data,
                       VectorType &dst,
                       const unsigned int &,
                       const std::pair<unsigned int, unsigned int> &cell_range) const
  {
    typedef typename
    dealii::MatrixFreeOperators::Base<dim, VectorType>::value_type
    Number;
    FEEvaluation<dim, fe_degree, n_q_points_1d, n_components, Number> phi (data,
      this->selected_rows[0]);

    for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
      {
        phi.reinit (cell);
        VectorizedArray<Number> local_diagonal_vector[phi.tensor_dofs_per_cell * n_components];
        for (unsigned int i = 0; i < phi.dofs_per_cell * n_components; ++i)
          {
            for (unsigned int j = 0; j < phi.dofs_per_cell * n_components; ++j)
              phi.begin_dof_values()[j] = VectorizedArray<Number>();
            phi.begin_dof_values()[i] = 1.;
            do_operation_on_cell(phi, cell);
            local_diagonal_vector[i] = phi.begin_dof_values()[i];
          }
        for (unsigned int i = 0; i < phi.dofs_per_cell * n_components; ++i)
          phi.begin_dof_values()[i] = local_diagonal_vector[i];
        phi.distribute_local_to_global (dst);
      }
  }


[deal.II] Re: Research faculty opening at the University of Michigan

2017-10-24 Thread Stephen DeWitt
Sorry to anyone who attempted to follow the link in the previous post and 
landed on an empty page. The opening number changed, so the appropriate 
link is now:
http://careers.umich.edu/job_detail/149216/asst_res_scientist

Best,
Steve

On Monday, October 23, 2017 at 4:43:18 PM UTC-4, Stephen DeWitt wrote:
>
> Hello, 
> We have a research faculty opening at the University of Michigan that may 
> be of interest to people on this list. 
>
> The PRISMS Center at the University of Michigan is looking to hire an 
> Assistant Research Scientist (a junior research faculty position) to 
> continue development on PRISMS-Plasticity 
> <https://github.com/prisms-center/plasticity>, a deal.II-based code for 
> crystal and continuum plasticity. PRISMS-Plasticity is one of the four 
> major simulation codes being developed by the PRISMS Center (three of which 
> are deal.II based). The job flyer is attached to this message, and the 
> application link is given below. We are hoping to hire someone with 
> significant deal.II experience, so please look over the flyer and consider 
> applying. Please feel free to email me at stvd...@umich.edu with any 
> questions.
>
> http://careers.umich.edu/job_detail/148101/asst_res_scientist 
>
> Best, 
>
> Steve DeWitt 
> Asst. Research Scientist
> PRISMS Center
> Dept. of Materials Science and Engineering
> University of Michigan
>



[deal.II] Research faculty opening at the University of Michigan

2017-10-23 Thread Stephen DeWitt
Hello, 
We have a research faculty opening at the University of Michigan that may 
be of interest to people on this list. 

The PRISMS Center at the University of Michigan is looking to hire an 
Assistant Research Scientist (a junior research faculty position) to 
continue development on PRISMS-Plasticity 
, a deal.II-based code for 
crystal and continuum plasticity. PRISMS-Plasticity is one of the four 
major simulation codes being developed by the PRISMS Center (three of which 
are deal.II based). The job flyer is attached to this message, and the 
application link is given below. We are hoping to hire someone with 
significant deal.II experience, so please look over the flyer and consider 
applying. Please feel free to email me at stvd...@umich.edu with any 
questions.

http://careers.umich.edu/job_detail/148101/asst_res_scientist 

Best, 

Steve DeWitt 
Asst. Research Scientist
PRISMS Center
Dept. of Materials Science and Engineering
University of Michigan



Computational Materials Scientist PRISMS Plasticity Posting.pdf
Description: Adobe PDF document


Re: [deal.II] Artifacts at the refinement level boundary for Step 48

2017-09-08 Thread Stephen DeWitt
Dear Martin,
Thank you for your quick reply.

I tried as you suggested, and set the initial condition by computing a 
right hand side, and it lowered the difference between the initial 
condition and the first time step by about a factor of 5. So it helped, but 
didn't make a qualitative change.

While doing that test, I realized that I was wrong in my first post where I 
said that the artifacts are only present between the initial condition and 
the first time step -- I missed that the field is only output every 200 
time steps. After changing it to output every time step, it turns out 
that the solution changes by up to 1e-8 between time steps for the first 40 
time steps or so before settling down.

Assuming that this is just intrinsic error in the approximation of the mass 
matrix, given how you said that the implementation has been repeatedly 
verified, it must just be that I'm being too aggressive in choosing where 
to coarsen my mesh. 

Thank you for sending me that paper -- I'll read through it to better 
understand the approximations that go into the diagonal mass matrix.
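For reference, the usual row-sum lumping sketch (which may differ in detail from the scheme analyzed in the paper) is:

```latex
% Consistent mass matrix and its row-sum lumped approximation.
% Using the partition of unity \sum_j \varphi_j \equiv 1:
M_{ij} = \int_\Omega \varphi_i \, \varphi_j \, dx,
\qquad
M^{\text{lump}}_{ii} = \sum_j M_{ij} = \int_\Omega \varphi_i \, dx .
```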

Thanks again,
Steve




On Friday, September 8, 2017 at 1:03:48 PM UTC-4, Martin Kronbichler wrote:
>
> Dear Stephen, 
>
> At hanging nodes, there is definitely going to be a larger error due to 
> the approximation of the diagonal mass matrix. I do not remember the 
> exact details but to get a diagonal mass matrix you need to assume an 
> interpolation in addition to the approximation leading to the mass 
> lumping. If I remember correctly my co-worker Katharina Kormann 
> describes this in her paper: 
> https://doi.org/10.4208/cicp.101214.021015a 
>
> So in general, I would assume that a change of 1e-8 as you have in your 
> picture is simply the approximation error that occurs when going from 
> the initially interpolated solution to the new mesh. I would assume that 
> it goes away if you compute an "approximate" L2 projection where you 
> compute a right hand side (v, u_init) for test functions v and the 
> initial field u_init from ExactSolution of step-48 rather than 
> VectorTools::interpolate. 
>
> Have you tried that? 
>
> If the change is larger in other cases as you describe, I would be more 
> worried. Have you tried to investigate more? Does the projection instead 
> of interpolation help in that case, too? 
>
> The implementation should be correct as far as I can tell - we have 
> verified it on a large series of applications. In my experience, the 
> most tricky thing to get right regarding constraints and boundary 
> conditions is for nonlinear problems or implicit solvers with 
> inhomogeneous boundary data, but that appears to not be the case here... 
>
> Best, 
> Martin 
>
>
>



[deal.II] Artifacts at the refinement level boundary for Step 48

2017-09-08 Thread Stephen DeWitt
Hello,
I've been working on a phase field code using the matrix free approach from 
Step 48. When I use adaptive meshing, I notice some artifacts at the 
boundary between levels of refinement and I'm trying to understand their 
origin.

For example, if I make the following modification to the "local_apply" 
function in Step 48, the solution should remain unchanged from time step to 
time step:
template <int dim, int fe_degree>
  void SineGordonOperation<dim, fe_degree>::
  local_apply (const MatrixFree<dim> &data,
               parallel::distributed::Vector<double> &dst,
               const std::vector<parallel::distributed::Vector<double>*> &src,
               const std::pair<unsigned int, unsigned int> &cell_range) const
  {
    AssertDimension (src.size(), 2);
    FEEvaluation<dim, fe_degree> current (data), old (data);
    for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
      {
        current.reinit (cell);
        old.reinit (cell);
        current.read_dof_values (*src[0]);
        old.read_dof_values (*src[1]);
        current.evaluate (true, true, false);
        old.evaluate (true, false, false);

        for (unsigned int q = 0; q < current.n_q_points; ++q)
          {
            const VectorizedArray<double> current_value = current.get_value(q);
            const VectorizedArray<double> old_value = old.get_value(q);

            // ORIGINAL
            // current.submit_value (2.*current_value - old_value -
            //                       delta_t_sqr * std::sin(current_value), q);
            // current.submit_gradient (- delta_t_sqr *
            //                          current.get_gradient(q), q);

            // CHANGES
            current.submit_value (current_value, q);
            current.submit_gradient (- delta_t_sqr *
                                     current.get_gradient(q)*0.0, q);
          }

        current.integrate (true, true);
        current.distribute_local_to_global (dst);
      }
  }
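In exact arithmetic, this modification reduces each time step to a pure mass-matrix solve (a sketch, assuming the assembled right-hand side is (v, u^n) and M is the mass matrix):

```latex
% With submit_value(u^n) and a zeroed gradient contribution, each step is
M \, u^{n+1} = M \, u^{n}
\;\Longrightarrow\;
u^{n+1} = u^{n},
% provided M^{-1} is applied exactly. Any observed change therefore
% measures the error of the approximate (diagonal) mass-matrix inverse.
```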

However, there is a non-trivial change in the solution (same plot with and 
without the mesh):

[Screenshots attached: step-48_artifact.tiff and step-48_artifact_n0_mesh.tiff]

Has anyone noticed this before? In this example, the magnitude of the 
artifacts is low, but in some of my other tests, the magnitude has been on 
the order of the "real" change in the solution. When doing implicit time 
stepping (still in the matrix free framework), the artifacts don't show up, 
which suggests that they aren't just a consequence of having an adapted 
mesh. In the Step 48 description, it's noted that the inverse of the mass 
matrix isn't computed exactly. Could the artifacts be related to that?

For Step 48, the artifacts only appear from the initial time step to the 
first time step (in my application, I'm getting the artifacts every time 
step and I'm still in the midst of finding what the source of that 
difference is).

Thank you for any insight you can give.

Best,
Steve

Stephen DeWitt
Asst. Research Scientist
PRISMS Center
Dept. of Materials Science and Engineering
University of Michigan



Re: [deal.II] imposing symmetric boundary conditions on the axis of symmetry of a 2D domain

2017-03-28 Thread Stephen DeWitt
In that case, you should just be able to use zero-flux boundary conditions 
along the mirror plane. The way I typically picture it is that if you have 
the same thing happening on either side of a mirror plane, you can't have 
in-flow or out-flow. 
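In equations, the mirror argument above amounts to the following (a sketch for a generic field \phi; the same holds for u):

```latex
% Mirror symmetry across the plane x = 0 means \phi(-x, y) = \phi(x, y),
% so the normal (x-) derivative is an odd function of x and must vanish
% on the plane -- i.e. the homogeneous (zero-flux) Neumann condition:
\left.\frac{\partial \phi}{\partial x}\right|_{x=0} = 0,
\qquad
\left.\frac{\partial u}{\partial x}\right|_{x=0} = 0 .
```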

On a side note, depending on how far along you are on your code 
development, we have a general phase field code built on deal.II called 
PRISMS-PF, that you might be interested in. Our GitHub repo is here 
, and feel free to email me at 
stvd...@umich.edu if you have any questions about it.


Steve

On Tuesday, March 28, 2017 at 5:03:55 AM UTC-4, Tongzhao Gong wrote:
>
> Hi Bangerth,
>   Thanks for your answering. Here I have a 2D domain and the 
> phase-field model is complicated, so I have simplify the equations just 
> like as follow: 
>
>
> 
>
>
>   For both variables \phi and u, zero-flux Neumann boundary conditions 
> are imposed. Owing to the symmetry of the domain, shown in the following 
> picture, I want to compute on only half of the domain and mirror the 
> results to the other half. However, I have no idea how to impose the 
> symmetry boundary condition on the y-axis if I only use half of the whole 
> domain in the simulation. For example, if the whole domain is a square 
> centered at the origin, I would compute only on the subdomain satisfying 
> x > 0, and the symmetry boundary condition would be imposed on the 
> boundary x = 0.
>
>
> 
>
>
>
> On Tuesday, March 28, 2017 at 2:38:20 AM UTC+8, Wolfgang Bangerth wrote:
>>
>> On 03/26/2017 11:52 PM, tzgo...@gmail.com wrote: 
>> >   Recently I have been working on the phase-field simulation of dendritic 
>> > growth with deal.II, and to improve computational efficiency I want to 
>> > carry out the simulation on only half of the domain, due to the symmetry 
>> > of the problem with respect to the vertical axis. So how do I impose such 
>> > symmetry boundary conditions on the axis of symmetry of a 2D 
>> > computational domain? 
>>
>> That depends on the equation and on what you actually want to do. Do you 
>> have a 2d domain and only want to simulate half of it, or a 3d 
>> axisymmetric domain for which you only want to simulate in r-z space? 
>>
>> Best 
>>   W. 
>>
>> -- 
>>  
>> Wolfgang Bangerth  email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>



[deal.II] Re: Installation using Spack fails during 'ncurses' installation

2017-03-17 Thread Stephen DeWitt
Bruno, Jean-Paul, and Denis,
Thank you all for your responses. As suggested, I'll send a message to the 
Spack forum and/or open an issue on their GitHub site. In the meantime, 
I'll continue debugging the manual PETSc installation to see if I can get 
that to work. I think the filesystem I'm trying to install this to is a bit 
of an edge case, in that I have neither root access nor many useful 
preinstalled libraries.

@Denis,
Thanks for taking the time to redo your Spack installations, I appreciate 
it. 

I want to close by saying that I appreciate the effort you have put into making 
deal.II easy/easier to install, between the Mac packages, Spack, candi 
(although, unfortunately, I haven't had much luck with that either), and 
all of the detailed installation documentation.

Thanks,
Steve

On Friday, March 17, 2017 at 5:15:42 AM UTC-4, Denis Davydov wrote:
>
> P.P.S. Just to be sure, I wiped my installations of Spack on an 
> Ubuntu 16.04 + gcc 5.4.0 PC and a CentOS 7 + gcc 4.8.5 cluster and 
> reinstalled everything from scratch using 
> commit 1124bdc99ee84c26201c40536d9b04dac74d7f6a (this is the current HEAD 
> in Spack; I updated dealii's wiki to mention it).
>
> I still think it's good to report cluster-specific issues on Spack's 
> github page so that they are (eventually) fixed.
>
> Regards,
> Denis.
>
> On Thursday, March 16, 2017 at 6:43:13 PM UTC+1, Bruno Turcksin wrote:
>>
>> Steve,
>>
>> I have tried to use spack several times on two clusters and it never 
>> worked for me (but it works fine on my own machine). I usually have to 
>> patch a bunch of things and at the end, I still have problems when I load 
>> the modules. I find it a lot easier to install everything manually. You can 
>> also try candi, there is an option to build deal.II on a cluster.
>>
>> Best,
>>
>> Bruno
>>
>> On Thursday, March 16, 2017 at 12:19:36 PM UTC-4, Stephen DeWitt wrote:
>>>
>>> Hello,
>>> I'm trying to install dealii on a shared AFS file system. Originally, I 
>>> tried to install everything manually, which I had done successfully on our 
>>> HPC cluster, but ran into a PETSc link error and decided to try Spack.
>>>
>>> I followed the instructions on the deal.II wiki, but installation 
>>> stopped during the "install" phase for "ncurses". The packages "bzip2" and 
>>> "muparser" installed without a problem.
>>>
>>> Here's the error message:
>>>
>>>  
>>> 100.0%
>>>
>>> *==>* Staging archive: /afs/
>>> umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/var/spack/stage/ncurses-6.0-4wkexyzgaxdwrfs6wqje2ppcm5di263m/ncurses-6.0.tar.gz
>>>
>>> *==>* Created stage in /afs/
>>> umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/var/spack/stage/ncurses-6.0-4wkexyzgaxdwrfs6wqje2ppcm5di263m
>>>
>>> *==>* Applied patch sed_pgi.patch
>>>
>>> *==>* Ran patch() for ncurses
>>>
>>> *==>* Building ncurses [AutotoolsPackage]
>>>
>>> *==>* Executing phase : 'autoreconf'
>>>
>>> *==>* Executing phase : 'configure'
>>>
>>> *==>* Executing phase : 'build'
>>>
>>> *==>* Executing phase : 'install'
>>>
>>> *==>* Error: ProcessError: Command exited with status 2:
>>>
>>> 'make' '-j2' 'install'
>>>
>>> /afs/
>>> umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/lib/spack/spack/build_systems/autotools.py:282,
>>>  
>>> in install:
>>>
>>>  277  def install(self, spec, prefix):
>>>
>>>  278  """Makes the install targets specified by
>>>
>>>  279  :py:attr:``~.AutotoolsPackage.install_targets``
>>>
>>>  280  """
>>>
>>>  281  with working_dir(self.build_directory):
>>>
>>>   >> 282  inspect.getmodule(self).make(*self.install_targets)
>>>
>>>
>>> See build log for details:
>>>
>>>   /tmp/stvdwtt/spack-stage/spack-stage-7oak9b/ncurses-6.0/spack-build.out
>>>
>>> I went to the build log (which is quite long), and see several errors 
>>> like this:
>>>
>>> cd ../lib && (ln -s -f libpanel.so.6.0 libpanel.so.6; ln -s -f 
>>> libp

[deal.II] Installation using Spack fails during 'ncurses' installation

2017-03-16 Thread Stephen DeWitt
Hello,
I'm trying to install dealii on a shared AFS file system. Originally, I 
tried to install everything manually, which I had done successfully on our 
HPC cluster, but ran into a PETSc link error and decided to try Spack.

I followed the instructions on the deal.II wiki, but installation stopped 
during the "install" phase for "ncurses". The packages "bzip2" and 
"muparser" installed without a problem.

Here's the error message:

 
100.0%

*==>* Staging archive: 
/afs/umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/var/spack/stage/ncurses-6.0-4wkexyzgaxdwrfs6wqje2ppcm5di263m/ncurses-6.0.tar.gz

*==>* Created stage in 
/afs/umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/var/spack/stage/ncurses-6.0-4wkexyzgaxdwrfs6wqje2ppcm5di263m

*==>* Applied patch sed_pgi.patch

*==>* Ran patch() for ncurses

*==>* Building ncurses [AutotoolsPackage]

*==>* Executing phase : 'autoreconf'

*==>* Executing phase : 'configure'

*==>* Executing phase : 'build'

*==>* Executing phase : 'install'

*==>* Error: ProcessError: Command exited with status 2:

'make' '-j2' 'install'

/afs/umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/lib/spack/spack/build_systems/autotools.py:282,
 
in install:

 277  def install(self, spec, prefix):

 278  """Makes the install targets specified by

 279  :py:attr:``~.AutotoolsPackage.install_targets``

 280  """

 281  with working_dir(self.build_directory):

  >> 282  inspect.getmodule(self).make(*self.install_targets)


See build log for details:

  /tmp/stvdwtt/spack-stage/spack-stage-7oak9b/ncurses-6.0/spack-build.out

I went to the build log (which is quite long), and see several errors like 
this:

cd ../lib && (ln -s -f libpanel.so.6.0 libpanel.so.6; ln -s -f 
libpanel.so.6 libpanel.so; )

/usr/bin/ld: total time in link: 0.021996

/usr/bin/ld: data size 29224512

cd 
/afs/umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/opt/spack/linux-rhel6-x86_64/gcc-4.4.7/ncurses-6.0-4wkexyzgaxdwrfs6wqje2ppcm5di263m/lib
 
&& (ln -s -f libpanel.so.6.0 libpanel.so.6; ln -s -f libpanel.so.6 
libpanel.so; )

test -z "" && /sbin/ldconfig

/sbin/ldconfig: Can't create temporary cache file /etc/ld.so.cache~: 
Permission denied

make[1]: 
[/afs/umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/opt/spack/linux-rhel6-x86_64/gcc-4.4.7/ncurses-6.0-4wkexyzgaxdwrfs6wqje2ppcm5di263m/lib/libpanel.so.6.0]
 
Error 1 (ignored)

The last few lines of the build log (which I'm not sure are relevant) are:

Running sh 
/afs/umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/var/spack/stage/ncurses-6.0-4wkexyzgaxdwrfs6wqje2ppcm5di263m/ncurses-6.0/misc/shlib
 
tic to install 
/afs/umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/opt/spack/linux-rhel6-x86_64/gcc-4.4.7/ncurses-6.0-4wkexyzgaxdwrfs6wqje2ppcm5di263m/share/terminfo
 
...


You may see messages regarding extended capabilities, e.g., AX.

These are extended terminal capabilities which are compiled

using

tic -x

If you have ncurses 4.2 applications, you should read the INSTALL

document, and install the terminfo without the -x option.


** creating form.pc

** creating ncurses++.pc

touch pc-files

/bin/sh -c 'for name in *.pc; do /usr/bin/install -c -m 644 $name 
/afs/umich.edu/user/s/t/stvdwtt/Public/PRISMS/software/spack/spack/opt/spack/linux-rhel6-x86_64/gcc-4.4.7/ncurses-6.0-4wkexyzgaxdwrfs6wqje2ppcm5di263m/lib/pkgconfig/$name;
 
done'

Judging from the error, I'm assuming that it's a permissions issue. I 
double-checked that my $SPACK_ROOT environment variable is set correctly. 
The first line of the build log includes a '--prefix=' statement, which 
correctly picks up the $SPACK_ROOT path.

Does anyone have ideas on what the problem is? Digging around the Spack 
documentation didn't turn up anything.

Thanks!
Steve



Re: [deal.II] Segfault in EvaluatorTensorProduct::apply when AVX is enabled

2017-02-23 Thread Stephen DeWitt
Hi Martin,
Thanks! I tried with the updated deal.II version and it worked like a 
charm. Thanks for the help and the bugfix!

Best,
Steve

On Thursday, February 23, 2017 at 1:55:30 AM UTC-5, Martin Kronbichler 
wrote:
>
> Dear Stephen,
>
> The first problem you are seeing is a bug in AlignedVector::push_back. We 
> merged a pull request last night, 
> https://github.com/dealii/dealii/pull/3993, which should fix this issue. 
> The problem was that we incorrectly invoked placement new with default 
> constructor rather than the copy constructor.
>
> Regarding your second approach: I am not sure what prevents this from 
> working. I have tried a similar approach and it does work in principle, at 
> least with the latest developer version, apart from the fact that you 
> actually access invalid memory, since you only call reserve(1) and never 
> extend the size to one.
>
> Could you please try with the updated deal.II version and report back in 
> case you have further problems?
>
> Thanks!
>
> Best,
> Martin
>
> On 02/20/2017 11:01 PM, Stephen DeWitt wrote:
>
> Dear Martin, 
> Thank you, that makes sense. 
>
> If I could trouble you with one more question -- I'm having a hard time 
> using AlignedVector, and I can't find any examples of its use online.
>
> I'd like to build the AlignedVector with a series of push backs, but I get 
> compiler errors if I try this:
>
> dealii::AlignedVector<typeScalar> scalar_vars;
> typeScalar var1(data, 0);
> scalar_vars.push_back(var1);
>
> tellling me that:
> /Applications/deal.II.app/Contents/Resources/include/deal.II/base/aligned_vector.h:640:21:
>  
> error: no matching constructor for initialization of 
> 'dealii::FEEvaluation<2, 1, 2, 1, double>' 
> new (_end_data) T;
>
> and:
> /Applications/deal.II.app/Contents/Resources/include/deal.II/base/aligned_vector.h:641:16:
>  
> error: object of type 'dealii::FEEvaluation<2, 1, 2, 1, double>' cannot be 
> assigned because its copy assignment operator is implicitly deleted 
> *_end_data++ = in_data; 
>
> I also tried:
>
> dealii::AlignedVector<typeScalar> scalar_vars1;
> scalar_vars1.reserve(1);
> typeScalar var1(data, 0);
> scalar_vars1[0] = var1;
>
> but still got the "error: object of type 'dealii::FEEvaluation<2, 1, 2, 1, 
> double>' cannot be assigned because its copy assignment operator is 
> implicitly deleted" error.
>
> How should I be making my AlignedVector?
>
> Thanks,
> Steve
>
>
>
> On Monday, February 20, 2017 at 3:23:09 PM UTC-5, Martin Kronbichler 
> wrote: 
>>
>> Dear Stephen,
>>
>> The problem is data alignment: You create an std::vector 
>> that internally arranges its data under the assumption that the start 
>> address of FEEvaluation is divisible by 32 (the length of the 
>> vectorization). If you put an FEEvaluation object on the stack, the 
>> compiler will automatically do it right. However, inside an std::vector it 
>> would be up to the std::vector to ensure this, but on usual x86-64 machines 
>> it only aligns to 16 byte boundaries. This is also why SSE2 works because 
>> it only needs 16-byte alignment.
>>
>> The solution is to use AlignedVector scalar_vars instead of 
>> std::vector. The alternative is to wait until the pull request 3980 (
>> https://github.com/dealii/dealii/pull/3980) gets merged, grab the newest 
>> developer version because with that we will start using an external scratch 
>> data array that always has the correct alignment.
>>
>> Best,
>> Martin
>>
>> On 20.02.2017 20:50, Stephen DeWitt wrote:
>>
>> Hello, 
>> I recently realized that I should be using "-march=native" flag for 
>> optimal performance of matrix-free codes. The application that I've been 
>> using works fine with just SSE2, but with AVX enabled I'm getting a 
>> segfault. Step-48 works fine, so I don't think it is an installation issue.
>>
>> The function where it occurs is similar to the "local_apply" function in 
>> step 48:
>> template 
>> void getRHS(const MatrixFree &data, 
>>std::vector*> &dst, 
>>const std::vector*> 
>> &src, 
>>const std::pair &cell_range) const{ 
>>
>>   //initialize FEEvaulation objects 
>>   std::vector scalar_vars; 
>>
>>   for (unsigned int i=0; i>   typeScalar var(data, i); 
>>   scalar_vars.push_back(var); 
>>   } 
>>
>>   //loop over cells 
>>   for (unsigned int cell=cell_range.first; cell> ){ 
>>
>>  // Initialize, read DOFs

Re: [deal.II] Segfault in EvaluatorTensorProduct::apply when AVX is enabled

2017-02-20 Thread Stephen DeWitt
Dear Martin,
Thank you, that makes sense. 

If I could trouble you with one more question -- I'm having a hard time 
using AlignedVector, and I can't find any examples of its use online.

I'd like to build the AlignedVector with a series of push backs, but I get 
compiler errors if I try this:

dealii::AlignedVector<typeScalar> scalar_vars;
typeScalar var1(data, 0);
scalar_vars.push_back(var1);

tellling me that:
/Applications/deal.II.app/Contents/Resources/include/deal.II/base/aligned_vector.h:640:21:
 
error: no matching constructor for initialization of 
'dealii::FEEvaluation<2, 1, 2, 1, double>' 
new (_end_data) T;

and:
/Applications/deal.II.app/Contents/Resources/include/deal.II/base/aligned_vector.h:641:16:
 
error: object of type 'dealii::FEEvaluation<2, 1, 2, 1, double>' cannot be 
assigned because its copy assignment operator is implicitly deleted 
*_end_data++ = in_data;

I also tried:

dealii::AlignedVector<typeScalar> scalar_vars1;
scalar_vars1.reserve(1);
typeScalar var1(data, 0);
scalar_vars1[0] = var1;

but still got the "error: object of type 'dealii::FEEvaluation<2, 1, 2, 1, 
double>' cannot be assigned because its copy assignment operator is 
implicitly deleted" error.

How should I be making my AlignedVector?

Thanks,
Steve



On Monday, February 20, 2017 at 3:23:09 PM UTC-5, Martin Kronbichler wrote:
>
> Dear Stephen,
>
> The problem is data alignment: You create an std::vector 
> that internally arranges its data under the assumption that the start 
> address of FEEvaluation is divisible by 32 (the length of the 
> vectorization). If you put an FEEvaluation object on the stack, the 
> compiler will automatically do it right. However, inside an std::vector it 
> would be up to the std::vector to ensure this, but on usual x86-64 machines 
> it only aligns to 16 byte boundaries. This is also why SSE2 works because 
> it only needs 16-byte alignment.
>
> The solution is to use AlignedVector<typeScalar> scalar_vars instead of 
> std::vector<typeScalar>. The alternative is to wait until the pull request 3980 (
> https://github.com/dealii/dealii/pull/3980) gets merged, grab the newest 
> developer version because with that we will start using an external scratch 
> data array that always has the correct alignment.
>
> Best,
> Martin
>
> On 20.02.2017 20:50, Stephen DeWitt wrote:
>
> Hello, 
> I recently realized that I should be using "-march=native" flag for 
> optimal performance of matrix-free codes. The application that I've been 
> using works fine with just SSE2, but with AVX enabled I'm getting a 
> segfault. Step-48 works fine, so I don't think it is an installation issue.
>
> The function where it occurs is similar to the "local_apply" function in 
> step 48:
> template <int dim>
> void getRHS(const MatrixFree<dim,double> &data, 
>std::vector<vectorType*> &dst, 
>const std::vector<vectorType*> &
> src, 
>const std::pair<unsigned int,unsigned int> &cell_range) const{ 
>
>   //initialize FEEvaluation objects 
>   std::vector<typeScalar> scalar_vars; 
>
>   for (unsigned int i=0; i<num_var; i++){ 
>   typeScalar var(data, i); 
>   scalar_vars.push_back(var); 
>   } 
>
>   //loop over cells 
>   for (unsigned int cell=cell_range.first; cell<cell_range.second; ++cell){ 
>
>  // Initialize, read DOFs, and set evaluation flags  
>  scalar_vars[varInfoListRHS[i].index].reinit(cell); 
>  scalar_vars[varInfoListRHS[i].index].read_dof_values_plain(*src[
> varInfoListRHS[i].global_var_index]); 
>  scalar_vars[varInfoListRHS[i].index].evaluate(need_value[i], 
> need_gradient[i], need_hessian[i]);   // <--- segfault happens here!
>
>   }
>
>   unsigned int num_q_points; 
>   num_q_points = scalar_vars[0].n_q_points; 
>
>   //loop over quadrature points 
>   for (unsigned int q=0; q<num_q_points; ++q){ 
>   (etc.)
>
>
> The segfault happens during the "evaluate" call. GDB tells me that it 
> happens on line 5478 of /include/deal.II/matrix_free/fe_evaluation.h, in 
> EvaluatorTensorProduct::apply:
> xp[i] = in[stride*i] - in[stride*(mm-1-i)]; 
>
> Using the debugger to step through EvaluatorTensorProduct::apply, nothing 
> seems obviously wrong. As expected, all of the vectorized arrays are four 
> doubles long. The line above evaluates to xp[0]=in[0]-in[1].
>
> Has anyone else had this issue? Does anyone have ideas what the problem 
> could be, or what I should be looking for?
>
> Thanks!
> Steve
>
> System: Cluster running CentOS 7, with Intel Xeon E5-2670 processors, GCC 
> v5.4.0
>
>
>
>
> -- 
> The deal.II project is located at http://www.dealii.org/
> For mailing list/forum options, see 
> https://groups.google.com/d/forum/dealii?hl=en
> --- 
> You received this message because you are subscribed to the Google Groups 
> "deal.II User Group" group.
> To unsubscribe from this group and stop receiving emails from it, send an email 
> to dealii+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



[deal.II] deal.II binary package won't open on Mac OS

2017-01-23 Thread Stephen DeWitt
Hello all,
I've recently seen some issues opening the version 8.4.1 Mac OS binary 
package. After downloading the .dmg file and copying the application to the 
Applications folder, double-clicking it doesn't launch the terminal window 
like it should. The "verifying..." dialog pops up and then the deal.II.app 
freezes. If I open the application in Finder with "Show Package Contents" 
and open "/Contents/Resources/bin/dealii-terminal" everything is fine. Both 
computers I saw this on had OS X El Capitan (unfortunately it was at an 
event where I was helping users install deal.II, so I don't have those 
computers handy anymore). On my own laptop (also running El Capitan), it 
works fine.

Has anyone else had a similar issue? Any ideas on what the problem could 
be? Two different users on two different machines had the same issue.

Thanks!



[deal.II] Re: How to apply boundary values for a particular point on the boundary instead of the whole boundary surface

2016-10-22 Thread Stephen DeWitt


In case that link dies, here's the snippet of code:


// Set constraints to pin the solution if there are no Dirichlet BCs for a 
// component of a variable in an elliptic equation
template <int dim>
void MatrixFreePDE<dim>::setRigidBodyModeConstraints( std::vector<int> 
rigidBodyModeComponents, ConstraintMatrix * constraints, DoFHandler<dim>* 
dof_handler){

if ( rigidBodyModeComponents.size() > 0 ){

    // Choose the point where the constraint will be placed. Must be the 
    // coordinates of a vertex.
    dealii::Point<dim> target_point(0,0);

    unsigned int vertices_per_cell=GeometryInfo<dim>::vertices_per_cell;

    // Loop over each locally owned cell
    typename DoFHandler<dim>::active_cell_iterator cell = 
    dof_handler->begin_active(), endc = dof_handler->end();

    for (; cell!=endc; ++cell){
        if (cell->is_locally_owned()){
            for (unsigned int i=0; i<vertices_per_cell; ++i){
                if (target_point.distance(cell->vertex(i)) < 
                        1e-2 * cell->diameter()){

                    // Loop through the list of components with 
                    // rigid body modes and add an inhomogeneous constraint for each
                    for (unsigned int component_num = 0; 
                            component_num < rigidBodyModeComponents.size(); component_num++){
                        unsigned int 
                        nodeID=cell->vertex_dof_index(i,component_num);
                        constraints->add_line(nodeID);
                        constraints->set_inhomogeneity(nodeID,0.0);
                    }
                }
            }
        }
    }
}
}





On Saturday, October 22, 2016 at 6:49:41 AM UTC-4, Stephen DeWitt wrote:
>
> Hi Hamed,
> I was in the same boat a few weeks ago. After reading through some of the 
> other posts on this list (which Jean-Paul linked), I wrote the following 
> method that you and maybe others will find useful.
>
> It takes in a vector containing the list of components of a field that 
> need point constraints (rigidBodyModeComponents) and adds a constraint 
> where needed:
>
> https://github.com/prisms-center/phaseField/blob/next/src/matrixfree/boundaryConditions.cc
> (lines 43-75)
>
> Cheers!
> Steve
>
>

[deal.II] Re: How to apply boundary values for a particular point on the boundary instead of the whole boundary surface

2016-10-22 Thread Stephen DeWitt
Hi Hamed,
I was in the same boat a few weeks ago. After reading through some of the 
other posts on this list (which Jean-Paul linked), I wrote the following 
method that you and maybe others will find useful.

It takes in a vector containing the list of components of a field that need 
point constraints (rigidBodyModeComponents) and adds a constraint where 
needed:
https://github.com/prisms-center/phaseField/blob/next/src/matrixfree/boundaryConditions.cc
(lines 43-75)

Cheers!
Steve


On Thursday, October 20, 2016 at 7:38:19 PM UTC-4, Hamed Babaei wrote:
>
> Hi friends,
>
> For an elastic problem, I am going to apply zero boundary displacements 
> for three specific points on the center of -x, -y and -z planes of a cubic 
> domain.
> I have already done this but for the boundary surface not a boundary point 
> (the same as incremental_boundary_displacement in step-18). 
> The following is what I wrote to do so which doesn't work properly. In 
> fact, it fixes the whole -x, -y and -z surfaces, not just for the three 
> points on them that I intended. 
>
>   template <int dim>
>   class BoundaryCondition :  public Function<dim>
>   {
>   public:
>  BoundaryCondition (const int boundary_id);
> virtual void vector_value (const Point<dim> &p,
> Vector<double>   &values) const;
> virtual void vector_value_list (const std::vector<Point<dim> > 
> &points,
>std::vector<Vector<double> >   
> &value_list) const;
>   private:
> const int boundary_id;
>
>   };
>   template <int dim>
>   BoundaryCondition<dim>::BoundaryCondition (const int boundary_id)
> :
> Function<dim> (dim),
> boundary_id(boundary_id)
>
>   {}
>   template <int dim>
>   inline
>   void
> BoundaryCondition<dim>::vector_value (const Point<dim> &p,
> Vector<double>   &values) const
>   {
> Assert (values.size() == dim,
> ExcDimensionMismatch (values.size(), dim));
>
> Point<dim> point_x;
> point_x(1) = 5;
> point_x(2) = 5;
>
> Point<dim> point_y;
> point_y(0) = 5;
> point_y(2) = 5;
>
> Point<dim> point_z;
> point_z(0) = 5;
> point_z(1) = 5;
>
>
> if  (boundary_id ==0 && ((p-point_x).norm_square() 
> <(0.5e-9)*(0.5e-9)))
>values(0) = 0;
> else if (boundary_id ==2 && ((p-point_y).norm_square() < 
> (0.5e-9)*(0.5e-9)))
>values(1)= 0;
> else if (boundary_id ==4 && ((p-point_z).norm_square() < 
> (0.5e-9)*(0.5e-9)))
>values(2)= 0;
>
>   }
>   template <int dim>
>   void
> BoundaryCondition<dim>::vector_value_list (const 
> std::vector<Point<dim> > &points,
>  std::vector<Vector<double> > 
>   &value_list) const
>   {
> const unsigned int n_points = points.size();
> Assert (value_list.size() == n_points,
> ExcDimensionMismatch (value_list.size(), n_points));
> for (unsigned int p=0; p<n_points; ++p)
> BoundaryCondition<dim>::vector_value (points[p],
> value_list[p]);
>   }
>
> ...and in the constraint I have 
> added VectorTools::interpolate_boundary_values for every boundary plane; 
> for example, for the -x plane it looks like:
>
>
> VectorTools::interpolate_boundary_values(dof_handler,
>  boundary_id,
> 
>  BoundaryCondition<dim> (boundary_id),
>  constraints,
> 
>  fe.component_mask(x_displacement));
>
>
> I don't know why it doesn't recognize the if condition "  if 
>  (boundary_id ==0 && ((p-point_x).norm_square() <(0.5e-9)*(0.5e-9)))" !!!
>
> I was wondering if you know where I am making mistake, or if there is any 
> step in which this boundary condition has applied.
>
> Thanks,
> Hamed
>



[deal.II] Re: a few questions about matrix-free

2016-09-29 Thread Stephen DeWitt
Hi Denis,
I'm sure there is someone more knowledgable than me, but I'll take a crack 
at a couple of these:

(2/3) Maybe I'm misunderstanding your circumstances, but if all you need is 
different update flags for different fields, you can do that with a single 
MatrixFree object and set different flags for each field with the 
FEEvaluation::evaluate method. 
I'm not quite sure what you mean with regards to solving a problem in a 
staggered way, so maybe you do actually need multiple MatrixFree 
objects.

(5) That's how I do it and it certainly works, although if there's a more 
elegant way built into the code, I'd be interested to hear it.

Best,
Steve

On Thursday, September 29, 2016 at 7:25:47 AM UTC-4, Denis Davydov wrote:
>
> Dear all,
>
> I have accumulated a few matrix-free related questions which I would like 
> to ask.
>
> (1) FEEvaluation has read_dof_values (const std::vector< VectorType > 
> &src, const unsigned int first_index=0),
> but I don't see a function to get the values of each DoF vector at 
> quadrature points.
>
> (2) For non-linear problems solved in staggered way one has several 
> matrix-free operators which need to exchange
> some data on quadrature points. Each of those operators has a separate 
> MatrixFree object assigned as they generally
> may require different update flags (values, gradients, etc). In order to 
> loop over cells during update of quadrature point
> data one needs to use the MatrixFree object together with FEEvaluation. So the 
> question is whether this loop (over cells/quadrature points) is 
> guaranteed to be the same for slightly different MatrixFree data objects.
>
> Supposedly if 
>
> MPI_Comm mpi_communicator
> TasksParallelScheme tasks_parallel_scheme
> unsigned int   tasks_block_size
>
> as well as DoFHandler, Constraints and Quadrature 
> are the same among all the MatrixFree objects, then the loop over block of 
> local cells and quadrature points 
> is guaranteed to be the same, right?
>
> As an example, say one operator needs shape values only, another -- only 
> gradients, finally in order to initialize quadrature point data i 
> need the locations of quadrature points in real space. I suppose in this 
> case I can keep and use an auxiliary MatrixFree object 
> to setup quadrature point data granted that all three objects use the same 
> mpi_comm, task parallel scheme, block size,
> dof_handler, constraints and quadrature.
>
> (3) It appears that vectors which are used with matrix-free need to be 
> initialized with MatrixFree::initialize_dof_vector(). 
> I suppose if the only difference between two different MatrixFree objects 
> is UpdateFlags, it does not matter which one to use for initialization?
> Supposedly only mpi_communicator and DoFHandler influence the result.
>
> (4) LinearAlgebra::distributed::Vector seems to be different from 
> those in Trilinos and PETSc in that it can be used both 
> during solution of SLAE (writing into) and for evaluation of FE fields 
> (i.e. error estimator). That is, it behaves both as fully distributed 
> vector and the one with
> ghost entries. 
>
> My understanding is the following should work:
>
> // solve SLAE
> constraints.distribute (solution);
> solution.update_ghost_values();
> // do something like evaluate solution at quadrature points
> // solve SLAE again with the same vector
> constraints.distribute (solution);
> ...
>
> (5) I guess if evaluation of a function with VectorizedArray has some 
> branches (i.e., if x > 0 ... else ...), then this part has to be evaluated 
> manually without SIMD
> by looping over VectorizedArray::n_array_elements. At least that's 
> what I would expect.
>
> Regards,
> Denis.
>
