Notes: ./configure ran fine and detected -lhwloc in some standard system
install location; under normal circumstances it couldn't just disappear for
a different example.
I configured in an interactive shell, so on a compute node. I tried to
'make ex56' on a login node, as usual. So I am
On May 11, 2015, at 7:10 PM, Danyang Su danyang...@gmail.com wrote:
Hi All,
I recently have some time-dependent cases that have difficulty in
convergence. They need a lot of linear iterations during a specific period of time,
e.g., more than 100 linear iterations for every Newton iteration. In PETSc parallel version,
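(Not advice from the thread, just a generic starting point for this kind of report.) When every Newton step suddenly needs 100+ linear iterations, the usual first move is to make the solver report what it is doing, e.g.

    -snes_monitor -ksp_monitor_true_residual -ksp_converged_reason -ksp_view

and then experiment with stronger preconditioners, for instance

    -pc_type asm -sub_pc_type ilu     (overlapping additive Schwarz)
    -pc_type gamg                     (PETSc's algebraic multigrid)

All of these are standard PETSc run-time options; whether they help depends on the problem.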
I'm getting a segfault when trying to set up a matrix-free solver:
Program received signal SIGSEGV, Segmentation fault.
0x2be7b7de in MatCreateSNESMF (snes=0x0, J=0x7fffc340)
at /home/mlohry/dev/petsc-3.5.3/src/snes/mf/snesmfj.c:151
151 if (snes->vec_func) {
(gdb) bt
#0
sorry, disregard that. I had forgotten to do TSCreate earlier.
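For anyone hitting the same trace: the crash comes from handing a NULL SNES to MatCreateSNESMF(), which then dereferences snes->vec_func at snesmfj.c:151. A minimal sketch of the intended ordering when working through TS (my reconstruction, not Mark's actual code; error checking omitted):

    #include <petscts.h>

    int main(int argc, char **argv)
    {
      TS   ts;
      SNES snes;
      Mat  J;

      PetscInitialize(&argc, &argv, NULL, NULL);
      TSCreate(PETSC_COMM_WORLD, &ts);   /* the TS must exist first ...             */
      TSGetSNES(ts, &snes);              /* ... so its SNES is a valid object       */
      MatCreateSNESMF(snes, &J);         /* matrix-free Jacobian bound to that SNES */
      /* ... set the residual with TSSetIFunction()/TSSetRHSFunction(), attach J via
         SNESSetJacobian(), then TSSolve(); or simply run with -snes_mf ... */
      MatDestroy(&J);
      TSDestroy(&ts);
      PetscFinalize();
      return 0;
    }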
Hi,
Is it possible to use the OpenMP capabilities of hypre via PETSc?
Thanks,
Michele
You could compile hypre yourself with the OpenMP feature turned on and then
configure PETSc to use that version of hypre; of course the rest of the PETSc
code would not utilize the OpenMP threads.
Barry
BTW: I don't know of any evidence that using hypre with OpenMP is superior to
using
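A rough sketch of what Barry describes, with placeholder paths and versions (and assuming I remember hypre's configure switch correctly; it is hypre's option, not PETSc's):

    # build hypre itself with OpenMP enabled
    cd hypre-2.x/src
    ./configure --with-openmp --prefix=$HOME/hypre-omp
    make install

    # point PETSc's configure at that install
    cd $PETSC_DIR
    ./configure --with-hypre-dir=$HOME/hypre-omp ...

At run time OMP_NUM_THREADS controls the threads inside hypre; as Barry says, the rest of PETSc stays pure MPI.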
Thanks Matt,
Could you give me an example of not-native partitioning? I have a cubic
domain and a 3D domain decomposition. I use dmda3d to create the
partitioning.
Thanks,
Michele
On May 12, 2015 9:51 PM, Matthew Knepley knep...@gmail.com wrote:
On Tue, May 12, 2015 at 11:41 PM, Michele Rosso
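As an aside, a minimal sketch (my own example, not Michele's code) of the kind of setup being described: a cubic domain on a 3D-decomposed DMDA with geometric multigrid driven through the DM. PETSc 3.5-era calls; error checking omitted.

    #include <petscdmda.h>
    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      DM  da;
      KSP ksp;

      PetscInitialize(&argc, &argv, NULL, NULL);
      /* 129^3 grid, 1 dof, stencil width 1; PETSC_DECIDE lets PETSc choose the
         3D processor decomposition */
      DMDACreate3d(PETSC_COMM_WORLD,
                   DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                   DMDA_STENCIL_STAR, 129, 129, 129,
                   PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                   1, 1, NULL, NULL, NULL, &da);
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetDM(ksp, da);        /* hand the DMDA to the solver so -pc_type mg can coarsen it */
      KSPSetFromOptions(ksp);   /* e.g. -pc_type mg -pc_mg_levels 5 */
      /* ... KSPSetComputeOperators()/KSPSetComputeRHS(), KSPSolve() ... */
      KSPDestroy(&ksp);
      DMDestroy(&da);
      PetscFinalize();
      return 0;
    }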
Hi Jed,
I see what you mean. I am using geometric multigrid and therefore I have a
limit on the number of processors I can use. I was looking for an
alternative and found hypre, but I did not realize that since it's
algebraic multigrid there is no such limitation.
Anyway, let's say I want to keep
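For reference (my summary, not advice given in the thread): once PETSc is built with hypre, its algebraic multigrid can be selected entirely from the command line,

    -pc_type hypre -pc_hypre_type boomeramg

and PETSc's own AMG is available as -pc_type gamg. Because AMG builds its own coarse problems, neither inherits the 2^(k-1) points-per-direction limit of the geometric hierarchy.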
Michele Rosso mro...@uci.edu writes:
Are you talking about geometric multigrid (AMG coarsening is typically
irregular)? Anyway,
Barry,
thanks for your answer. The reason I'm asking is that multigrid limits the
number of MPI tasks I can use for a given grid, since k multigrid levels
require at least 2^(k-1) grid nodes per direction. I was wondering if using
OpenMP together with MPI could help circumvent the problem. If
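A rough illustration of the constraint Michele describes, with made-up numbers: on a grid with 256 points per direction, k = 7 geometric levels coarsened by factors of 2 leave a coarsest grid of 256 / 2^(7-1) = 4 points per direction. If every MPI rank must own at least one point of that coarsest grid, no more than 4 ranks per direction, i.e. 4^3 = 64 ranks, can be used, no matter how fine the original grid is. Fewer levels relax the limit but leave a larger coarse problem to solve.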
On Wed, May 13, 2015 at 12:01 AM, Michele Rosso mro...@uci.edu wrote:
naive, and I was talking about coarse grids. Coarse grids