Re: [petsc-users] configure error on Titan with Intel

2015-05-12 Thread Mark Adams
Notes: ./configure ran fine and detected -lhwloc in some standard system install location. Under normal circumstances it couldn't just disappear for a different example. I configured in an interactive shell, so on a compute node. I tried to 'make ex56' on a login node, as usual. So I am

Re: [petsc-users] configure error on Titan with Intel

2015-05-12 Thread Satish Balay
On Tue, 12 May 2015, Mark Adams wrote: Notes: ./configure ran fine and detected -lhwloc in some standard system install location. Under normal circumstances it couldn't just disappear for a different example. I configured in an interactive shell, so on a compute node. I tried to

Re: [petsc-users] Is matrix analysis available in PETSc or external package?

2015-05-12 Thread Barry Smith
On May 11, 2015, at 7:10 PM, Danyang Su danyang...@gmail.com wrote: Hi All, I recently have some time-dependent cases that have difficulty in convergence. It needs a lot of linear iterations during a specific time, e.g., more than 100 linear iterations for every Newton iteration. In

Re: [petsc-users] Is matrix analysis available in PETSc or external package?

2015-05-12 Thread Danyang Su
On 15-05-11 07:19 PM, Hong wrote: Danyang: I recently have some time-dependent cases that have difficulty in convergence. It needs a lot of linear iterations during a specific time, e.g., more than 100 linear iterations for every Newton iteration. In PETSc parallel version,

Re: [petsc-users] Is matrix analysis available in PETSc or external package?

2015-05-12 Thread Danyang Su
On 15-05-12 11:13 AM, Barry Smith wrote: On May 11, 2015, at 7:10 PM, Danyang Su danyang...@gmail.com wrote: Hi All, I recently have some time-dependent cases that have difficulty in convergence. It needs a lot of linear iterations during a specific time, e.g., more than 100 linear
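
A side note not taken from the truncated replies above: one common way to probe why a solve needs so many linear iterations is to let PETSc estimate the extreme singular values of the preconditioned operator during a GMRES solve. A minimal sketch, assuming a KSP that is otherwise already set up (EstimateConditioning is a made-up helper name):

#include <petscksp.h>

/* Hypothetical helper: estimate the conditioning of the preconditioned
   operator by tracking extreme singular values during the solve. */
static PetscErrorCode EstimateConditioning(KSP ksp, Vec b, Vec x)
{
  PetscReal      emax, emin;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPSetComputeSingularValues(ksp, PETSC_TRUE);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPComputeExtremeSingularValues(ksp, &emax, &emin);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "sigma_max/sigma_min ~ %g\n",
                     (double)(emax/emin));CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The same information is available at run time with -ksp_monitor_singular_value, without changing the code.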

[petsc-users] segfault on MatCreateSNESMF ?

2015-05-12 Thread Mark Lohry
I'm getting a segfault when trying to set up a matrix-free solver: Program received signal SIGSEGV, Segmentation fault. 0x2be7b7de in MatCreateSNESMF (snes=0x0, J=0x7fffc340) at /home/mlohry/dev/petsc-3.5.3/src/snes/mf/snesmfj.c:151 151 if (snes->vec_func) { (gdb) bt #0

Re: [petsc-users] segfault on MatCreateSNESMF ?

2015-05-12 Thread Mark Lohry
sorry, disregard that. I had forgotten to do TSCreate earlier. On 05/12/2015 04:38 PM, Mark Lohry wrote: I'm getting a segfault when trying to set up a matrix-free solver: Program received signal SIGSEGV, Segmentation fault. 0x2be7b7de in MatCreateSNESMF (snes=0x0, J=0x7fffc340)
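
For reference, the crash above is MatCreateSNESMF() dereferencing a NULL solver handle (snes=0x0): the solver object has to exist, and know its residual layout, before the matrix-free Jacobian is created. A minimal SNES-level sketch of that ordering (Mark's code goes through TS; FormFunction and the vector size here are placeholders):

#include <petscsnes.h>

/* Trivial placeholder residual: F(x) = x */
static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
{
  PetscErrorCode ierr;
  ierr = VecCopy(x, f);CHKERRQ(ierr);
  return 0;
}

int main(int argc, char **argv)
{
  SNES           snes;
  Vec            r;
  Mat            J;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = SNESCreate(PETSC_COMM_WORLD, &snes);CHKERRQ(ierr);          /* solver exists ...          */
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &r);CHKERRQ(ierr);
  ierr = SNESSetFunction(snes, r, FormFunction, NULL);CHKERRQ(ierr); /* ... and knows its layout   */
  ierr = MatCreateSNESMF(snes, &J);CHKERRQ(ierr);                    /* now safe: snes is not NULL */
  ierr = SNESSetJacobian(snes, J, J, MatMFFDComputeJacobian, NULL);CHKERRQ(ierr);
  ierr = MatDestroy(&J);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  ierr = SNESDestroy(&snes);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}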

[petsc-users] Hypre + openMP

2015-05-12 Thread Michele Rosso
Hi, is it possible to use the openmp capabilities of hypre via petsc? Thanks, Michele

Re: [petsc-users] Hypre + openMP

2015-05-12 Thread Barry Smith
You could compile hypre yourself with the OpenMP feature turned on and then configure PETSc to use that version of hypre; of course the rest of the PETSc code would not utilize the OpenMP threads. Barry. BTW: I don't know of any evidence that using hypre with OpenMP is superior to using
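
A sketch of the PETSc side of what Barry describes, assuming hypre was built separately with its OpenMP option enabled and PETSc was configured against that install (e.g. via --with-hypre-dir; the solver choice below is illustrative). Only the hypre kernels would use the OpenMP threads:

#include <petscksp.h>

/* Select hypre BoomerAMG on an existing KSP. PETSc itself stays MPI-only;
   any threading happens inside the (OpenMP-enabled) hypre library. */
PetscErrorCode UseBoomerAMG(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCHYPRE);CHKERRQ(ierr);
  ierr = PCHYPRESetType(pc, "boomeramg");CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Equivalently, -pc_type hypre -pc_hypre_type boomeramg on the command line, with OMP_NUM_THREADS controlling the thread count inside hypre.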

Re: [petsc-users] Hypre + openMP

2015-05-12 Thread Matthew Knepley
On Tue, May 12, 2015 at 11:41 PM, Michele Rosso mro...@uci.edu wrote: Barry, thanks for your answer. The reason I'm asking is that multigrid limits the number of MPI tasks I can use for a given grid, since k multigrid levels require at least 2^(k-1) grid nodes per direction. I was wondering

Re: [petsc-users] Hypre + openMP

2015-05-12 Thread Michele Rosso
Thanks Matt, Could you give me an example of not-native partitioning? I have a cubic domain and a 3D domain decomposition. I use dmda3d to create the partitioning. Thanks, Michele On May 12, 2015 9:51 PM, Matthew Knepley knep...@gmail.com wrote: On Tue, May 12, 2015 at 11:41 PM, Michele Rosso

Re: [petsc-users] Hypre + openMP

2015-05-12 Thread Michele Rosso
Hi Jed, I see what you mean. I am using geometric multigrid and therefore I have a limit on the number of processors I can use. I was looking for an alternative and found hypre, but I did not realize that since it's algebraic multigrid there is no such limitation. Anyway, let's say I want to keep

Re: [petsc-users] Hypre + openMP

2015-05-12 Thread Jed Brown
Michele Rosso mro...@uci.edu writes: Thanks Matt, Could you give me an example of not-native partitioning? I have a cubic domain and a 3D domain decomposition. I use dmda3d to create the partitioning. Are you talking about geometric multigrid (AMG coarsening is typically irregular)? Anyway,

Re: [petsc-users] Hypre + openMP

2015-05-12 Thread Michele Rosso
Barry, thanks for your answer. The reason I'm asking is that multigrid limits the number of MPI tasks I can use for a given grid, since k multigrid levels require at least 2^(k-1) grid nodes per direction. I was wondering if using OpenMP together with MPI could help circumventing the problem. If
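
A worked instance of the constraint Michele describes, reading "2^(k-1) grid nodes per direction" as a per-rank requirement (the numbers below are illustrative, not from the thread):

% N = global nodes per direction, k = geometric multigrid levels,
% coarsening by a factor of 2 between levels.
\[
  n_{\mathrm{local}} \ge 2^{\,k-1}
  \quad\Longrightarrow\quad
  p_{\max} = \frac{N}{2^{\,k-1}} \ \text{MPI ranks per direction.}
\]
% Example: N = 512 and k = 7 give n_local >= 64, so at most 512/64 = 8 ranks
% per direction, i.e. 8^3 = 512 ranks for a 512^3 grid.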

Re: [petsc-users] Hypre + openMP

2015-05-12 Thread Matthew Knepley
On Wed, May 13, 2015 at 12:01 AM, Michele Rosso mro...@uci.edu wrote: Thanks Matt, Could you give me an example of not-native partitioning? I have a cubic domain and a 3D domain decomposition. I use dmda3d to create the partitioning. naive, and I was talking about coarse grids. Coarse grids
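
For reference, a minimal sketch of the kind of setup under discussion: a DMDA-managed 3D grid handed to KSP, with geometric multigrid selected at run time. The grid size and level count are placeholders, not Michele's actual configuration; the point is only that the feasible -pc_mg_levels depends on how many nodes per direction each rank owns after the DMDA decomposition.

#include <petscksp.h>
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da;
  KSP            ksp;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* 129^3 vertices; PETSc chooses the 3D process decomposition (PETSC_DECIDE). */
  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR, 129, 129, 129,
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetDM(ksp, da);CHKERRQ(ierr);
  /* Run with e.g. -pc_type mg -pc_mg_levels 5 (plus KSPSetComputeOperators()
     for an actual solve); the number of usable levels is limited by the
     per-rank grid size. */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}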