[petsc-users] How to specify different MPI communication patterns.

2024-05-21 Thread Randall Mackie
Dear PETSc team, A few years ago we were having some issues with MPI communications with large numbers of processes and subcomms; see this thread: https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2020-April/040976.html

[petsc-users] valgrind errors

2023-12-12 Thread Randall Mackie
It now seems to me that petsc+mpich is no longer valgrind clean, or I am doing something wrong. A simple program:

    Program test
    #include "petsc/finclude/petscsys.h"
      use petscsys
      PetscInt :: ierr
      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      call PetscFinalize(ierr)
    end program

Re: [petsc-users] Better solver and preconditioner to use multiple GPU

2023-11-09 Thread Randall Mackie
Hi Ramoni, All EM induction methods solved numerically, e.g. with finite differences, are already difficult because of the null space of the curl-curl equations, and adding air layers on top of your model introduces another singularity. These have been dealt with in the past by adding in some

Re: [petsc-users] PETSc on GCC

2022-08-04 Thread Randall Mackie
Hi Simon, I think you might actually need CFLAGS=’-std=gnu99’ Randy M. > On Aug 4, 2022, at 1:51 PM, Satish Balay via petsc-users > wrote: > >> Configure Options: --configModules=PETSc.Configure >> --optionsModule=config.compilerOptions CFLAGS=-std=c99 --with-cc=mpicc >> --with-cxx=g++ --d

Re: [petsc-users] MatPreallocator

2022-07-29 Thread Randall Mackie
> local mappings. > > Barry > > >> On Jul 29, 2022, at 5:33 PM, Randall Mackie <rlmackie...@gmail.com> wrote: >> >> Why do I have to set the block size to 3? I’m not trying to create a block >> matrix. >> >> In fact j

Re: [petsc-users] MatPreallocator

2022-07-29 Thread Randall Mackie
without the MatPreallocator routines. Randy > On Jul 29, 2022, at 2:23 PM, Barry Smith wrote: > > > I'm hoping that it is as simple as that you did not set the block size of > your newly created matrix to 3? > > > >> On Jul 29, 2022, at 4:04 PM

Re: [petsc-users] MatPreallocator

2022-07-29 Thread Randall Mackie
> On Jul 28, 2022, at 2:49 PM, Barry Smith <bsm...@petsc.dev> wrote: I am not sure what you are asking exactly but I think so, so long as you have called MatSetLocalToGlobalMapping() and the "stencil" idea makes sense for your discretization. Barry > On Jul 28, 2022, at 5

[petsc-users] MatPreallocator

2022-07-28 Thread Randall Mackie
Dear PETSc users: Can one use a MatPreallocator and then call MatPreallocatorPreallocate if using MatStencil routines (which seem to call MatSetValuesLocal)? Thanks, Randy

Re: [petsc-users] PETSc / AMRex

2022-07-18 Thread Randall Mackie
> On Jul 15, 2022, at 12:26 PM, Matthew Knepley wrote: > > On Fri, Jul 15, 2022 at 2:12 PM Randall Mackie <rlmackie...@gmail.com> wrote: > > >> On Jul 15, 2022, at 11:58 AM, Matthew Knepley <knep...@gmail.com> wrote: >>

Re: [petsc-users] PETSc / AMRex

2022-07-15 Thread Randall Mackie
> On Jul 15, 2022, at 12:40 PM, Jed Brown wrote: > > Matthew Knepley writes: > >>> I currently set up a 3D DMDA using a box stencil and a stencil width of 2. >>> The i,j,k coordinates refer both to the cell (where there is a physical >>> value assigned) and to the 3 edges of the cell at the

Re: [petsc-users] PETSc / AMRex

2022-07-15 Thread Randall Mackie
> On Jul 15, 2022, at 11:58 AM, Matthew Knepley wrote: > > On Fri, Jul 15, 2022 at 1:46 PM Randall Mackie <rlmackie...@gmail.com> wrote: >> On Jul 15, 2022, at 11:20 AM, Matthew Knepley <knep...@gmail.com> wrote: >> >>

Re: [petsc-users] PETSc / AMRex

2022-07-15 Thread Randall Mackie
> On Jul 15, 2022, at 11:20 AM, Matthew Knepley wrote: > > On Fri, Jul 15, 2022 at 11:01 AM Randall Mackie <rlmackie...@gmail.com> wrote: > I am also interested in converting my DMDA code to DMPlex so that I can use > OcTree grids. > > Is there a simp

Re: [petsc-users] PETSc / AMRex

2022-07-15 Thread Randall Mackie
I am also interested in converting my DMDA code to DMPlex so that I can use OcTree grids. Is there a simple example that would show how to do a box grid in DMPlex, or more information about how to convert a DMDA grid to DMPlex? Thanks, Randy > On Jun 21, 2022, at 10:57 AM, Mark Adams wrote: >

Re: [petsc-users] strumpack in ilu mode

2022-05-25 Thread Randall Mackie
https://gitlab.com/petsc/petsc/-/merge_requests/4543/ and try its branch. > It has more features and may provide more of what you need. > > Barry > > >> On May 23, 2022, at 1:59 PM, Randall Mackie <rlmackie...@gmail.com> wrote: >> >> Dear PETSc

[petsc-users] strumpack in ilu mode

2022-05-23 Thread Randall Mackie
Dear PETSc team: I am trying to use Strumpack in ILU mode, which is supposed to activate its low-rank approximation as described on the man page: https://petsc.org/release/docs/manualpages/Mat/MATSOLVERSSTRUMPACK.html an

Re: [petsc-users] error with version 3.17.1

2022-05-03 Thread Randall Mackie
> On May 3, 2022, at 12:37 PM, Mark Adams wrote: > > >> Are you saying that now you have to explicitly set each 3x3 dense block, >> even if they are not used and that was not the case before? > > That was always the case before, you may have misinterpreted the meaning of a > Mat block size?

Re: [petsc-users] error with version 3.17.1

2022-05-03 Thread Randall Mackie
block, even if they are not used and that was not the case before? Randy > On May 3, 2022, at 10:09 AM, Pierre Jolivet wrote: > > > >> On 3 May 2022, at 6:54 PM, Randall Mackie <rlmackie...@gmail.com> wrote: >> >> Hi Pierre, >>

Re: [petsc-users] error with version 3.17.1

2022-05-03 Thread Randall Mackie
nt block size for the column and row > distributions, see MatSetBlockSizes(). > > Thanks, > Pierre > >> On 3 May 2022, at 5:39 PM, Randall Mackie <rlmackie...@gmail.com> wrote: >> >> Dear PETSc team: >> >> A part of our code that has worked

[petsc-users] error with version 3.17.1

2022-05-03 Thread Randall Mackie
Dear PETSc team: A part of our code that has worked for years with previous versions is now failing with the latest version 3.17.1, on the KSP solve, with the following error: [0]PETSC ERROR: - Error Message -- [0]PE

Re: [petsc-users] DMDA with 0 in lx, ly, lz ... or how to create a DMDA for a subregion

2022-01-25 Thread Randall Mackie
Take a look at these posts from last year and see if they will help you at least get a slice: https://lists.mcs.anl.gov/pipermail/petsc-users/2021-January/043037.html https://lists.mcs.anl.gov/pipermail/petsc-users/2021-

Re: [petsc-users] Concatenating DM vectors

2021-08-11 Thread Randall Mackie
Hi Alfredo, Take a look at VecStrideGather and VecStrideScatter… maybe these are what you want? https://petsc.org/release/docs/manualpages/Vec/VecStrideGather.html https://petsc.org/release/docs/manualpages/Vec/VecStrideScatt

[petsc-users] valgrind suppression file

2021-06-07 Thread Randall Mackie
It seems that starting with petsc-3.15.0 the command petscmpiexec, when invoking Valgrind, now looks for a Valgrind suppression file, but there is no such file (at least in petsc-3.15.0.tar.gz from which I compiled the la

Re: [petsc-users] Fortran initialization and XXXDestroy

2021-02-02 Thread Randall Mackie
Hi Mark, I don’t know what the XGC code is, but the way I do this in my Fortran code is that I initialize all objects I later want to destroy, for example:

    mat11 = PETSC_NULL_MAT
    vec1  = PETSC_NULL_VEC

etc. Then I check and destroy like:

    if (mat11 /= PETSC_NULL_MAT) call MatDestroy(mat11, ierr)

etc.
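
A compact sketch of this pattern, assuming PETSc’s Fortran interface (the guards make cleanup safe whether or not an object was ever created):

    Mat :: mat11
    Vec :: vec1
    PetscErrorCode :: ierr

    mat11 = PETSC_NULL_MAT
    vec1  = PETSC_NULL_VEC
    ! ... mat11 and/or vec1 may or may not get created here ...
    if (mat11 /= PETSC_NULL_MAT) call MatDestroy(mat11, ierr)
    if (vec1  /= PETSC_NULL_VEC) call VecDestroy(vec1, ierr)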

Re: [petsc-users] Convert a 3D DMDA sub-vector to a natural 2D vector

2021-01-23 Thread Randall Mackie
Several years ago I asked a similar question and this is what Barry told me: >> Randy, Take a look at DMDAGetRay() in src/dm/impls/da/dasub.c (now called DMDACreateRay()); this takes a row or column from a 2d DMDA. You can use the same kind of approach to get a slice from a 3d DMDA

Re: [petsc-users] trouble compiling MPICH on cluster

2020-12-16 Thread Randall Mackie
'--enable-fast=no' > '--enable-error-messages=all' '--enable-g=meminit' > > > So a manual build with equivalent options [with the above fix - i.e CC=gcc > CXX=g++ FC=gfortran F77=gfortran] should also provide equivalent [valgrind > clean] MPICH. > >

[petsc-users] Is there a way to estimate total memory prior to solve

2020-12-10 Thread Randall Mackie
Dear PETSc users: While I can calculate the amount of memory for any vector arrays I allocate inside my code (and probably get pretty close for any matrices), what I don’t know how to estimate is how much memory PETSc’s internal iterative solvers will take. Is there some way to get a reasonable estima

Re: [petsc-users] using real and complex together

2020-09-28 Thread Randall Mackie
Sam, you can solve a complex matrix using a real version of PETSc by doubling the size of your matrix and splitting out the real/imaginary parts. See this paper: https://epubs.siam.org/doi/abs/10.1137/S1064827500372262?mobileUi=0
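
For reference, the "doubled" system is one of the equivalent real formulations compared in that paper: writing A = Ar + i*Ai, x = xr + i*xi, b = br + i*bi, the complex system Ax = b becomes the real system

    [ Ar  -Ai ] [ xr ]   [ br ]
    [ Ai   Ar ] [ xi ] = [ bi ]

which has twice the dimension and can be assembled and solved with a real-arithmetic PETSc build.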

Re: [petsc-users] question about creating a block matrix

2020-08-11 Thread Randall Mackie
> On Aug 10, 2020, at 9:00 PM, Jed Brown wrote: > > Randall Mackie writes: > >> Dear PETSc users - >> >> I am trying to create a block matrix but it is not clear to me what is the >> right way to do this. >> >> First, I create 2 sparse matri

[petsc-users] question about creating a block matrix

2020-08-10 Thread Randall Mackie
Dear PETSc users - I am trying to create a block matrix but it is not clear to me what is the right way to do this. First, I create 2 sparse matrices J1 and J2 using two different DMDAs. Then I compute the products J1^T J1, and J2^T J2, which are different sized matrices. Since the matrices ar
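
A hedged sketch of one possible approach (not necessarily what this thread settled on): put the two products on the diagonal of a MATNEST matrix. All names are illustrative.

    Mat :: JtJ1, JtJ2, B
    Mat :: blocks(4)
    PetscErrorCode :: ierr

    blocks(1) = JtJ1            ! (1,1) block: J1^T J1
    blocks(2) = PETSC_NULL_MAT  ! (1,2) block left empty
    blocks(3) = PETSC_NULL_MAT  ! (2,1) block left empty
    blocks(4) = JtJ2            ! (2,2) block: J2^T J2
    call MatCreateNest(PETSC_COMM_WORLD, 2, PETSC_NULL_IS, 2, PETSC_NULL_IS, blocks, B, ierr)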

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-30 Thread Randall Mackie
tiple times. > > BTW, if no objection, I'd like to add your excellent example to petsc repo. > > Thanks > --Junchao Zhang > > > On Fri, Apr 24, 2020 at 5:32 PM Randall Mackie <rlmackie...@gmail.com> wrote: > Hi Junchao, > > I te

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-27 Thread Randall Mackie
worked fine. >Please try it in your environment and let me know the result. Since the > failure is random, you may need to run multiple times. > > BTW, if no objection, I'd like to add your excellent example to petsc repo. > >Thanks > --Junchao Zhang > > > On

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-24 Thread Randall Mackie
applicable, we can provide options in petsc to carry out the communication in phases to avoid flooding the network (though it is better done by MPI). Thanks. --Junchao Zhang On Fri, Apr 17, 2020 at 10:47 AM Randall Mackie <rlmackie...@gmail.com> wrote: Hi Junchao, Thank you for your efforts. We t

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-17 Thread Randall Mackie
his task, see case 3 in > https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c > BTW, it is good to use petsc master so we are on the same page. > --Junchao Zhang > > > On Wed, A

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-15 Thread Randall Mackie
me to debug? --Junchao Zhang On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie <rlmackie...@gmail.com> wrote: Hi Junchao, We have tried your two suggestions but the problem remains. And the problem seems to be on the MPI_Isend line 117 in PetscGatherMessageLengths and not MPI_AllReduce. We have now trie

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-14 Thread Randall Mackie
etsc option -build_twosided allreduce, which is a workaround for Intel > MPI_Ibarrier bugs we met. > Thanks. > --Junchao Zhang > > > On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie <rlmackie...@gmail.com> wrote: > Dear PETSc users, > > We are trying

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-13 Thread Randall Mackie
anks. > --Junchao Zhang > > > On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie <rlmackie...@gmail.com> wrote: > Dear PETSc users, > > We are trying to understand an issue that has come up in running our code on > a large cloud cluster with a large nu

[petsc-users] MPI error for large number of processes and subcomms

2020-04-13 Thread Randall Mackie
Dear PETSc users, We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. The run

[petsc-users] Question on VecScatter options

2020-04-10 Thread Randall Mackie
The VecScatter man page says that the default VecScatter type uses PetscSF, and that one can use PetscSF options to control the communication. PetscSFCreate lists 3 different types, including MPI-3 options. So I’m wondering: is it enough to just add, for example, -sf_type neighbor to the list
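
For concreteness, the option mentioned would be passed like any other PETSc option; whether it reaches every internal VecScatter is exactly the question being asked (program name illustrative):

    mpiexec -n 8 ./myapp -sf_type neighbor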

Re: [petsc-users] duplicate PETSC options

2020-03-30 Thread Randall Mackie
Hi Matt, Yes I just submitted an issue. Thanks very much. Randy M. > On Mar 30, 2020, at 9:15 AM, Matthew Knepley wrote: > > On Mon, Mar 30, 2020 at 11:46 AM Randall Mackie <rlmackie...@gmail.com> wrote: > When PETSc reads in a list of options (like PetscOpt

[petsc-users] duplicate PETSC options

2020-03-30 Thread Randall Mackie
When PETSc reads in a list of options (like PetscOptionsGetReal, etc), we have noticed that if there are duplicate entries, PETSc takes the last one entered as the option to use. This can happen if the user didn’t notice there were two lines with the same options name (but different values
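
A hypothetical options file illustrating the behavior described: with both lines present, the solve silently runs with rtol 1.0e-3, because the last occurrence wins.

    -ksp_rtol 1.0e-8
    ...
    -ksp_rtol 1.0e-3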

[petsc-users] PETSc 3.12 with .f90 files

2019-10-29 Thread Randall Mackie via petsc-users
Dear PETSc users: In our code, we have one or two small .f90 files that are part of the software, and they have always compiled without any issues with previous versions of PETSc, using standard PETSc make files. However, starting with PETSc 3.12, they no longer compile. Was there some reason

[petsc-users] call to PetscViewerAsciiOpen

2018-12-21 Thread Randall Mackie via petsc-users
The attached simple test program (when compiled in debug mode with gfortran and mpich) throws a lot of valgrind errors on the call to PetscViewerAsciiOpen, as well as the call to MPI_Comm_split. I can’t figure out if it is a problem in the code, or a bug somewhere else. Any advice is appreciate

Re: [petsc-users] Problem compiling with 64bit PETSc

2018-09-05 Thread Randall Mackie
You can use PetscMPIInt for integers in MPI calls. Check petscsys.h for definitions of all of these. Randy > On Sep 5, 2018, at 8:56 PM, TAY wee-beng wrote: > > Hi, > > My code has some problems now after converting to 64bit indices. > > After debugging, I realised that I'm using: > > call

Re: [petsc-users] Problem compiling with 64bit PETSc

2018-08-30 Thread Randall Mackie
Don’t forget that not all integers passed into PETSc routines are 64 bit. For example, the error codes when called from Fortran should be defined as PetscErrorCode and not PetscInt. It’s really good to get into the habit of correctly declaring all PETSc variables according to the web pages, so th
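
A minimal sketch of the declaration habit being recommended here and in the reply above, assuming PETSc’s Fortran types:

    PetscInt       :: n      ! 64-bit when configured --with-64-bit-indices
    PetscMPIInt    :: rank   ! plain MPI integer, always 32-bit
    PetscErrorCode :: ierr   ! error codes are NOT PetscInt
    PetscScalar    :: alpha  ! real or complex, depending on configuration

    call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)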

Re: [petsc-users] Problems with DMDAVecGetArrayF90 + Intel

2018-05-15 Thread Randall Mackie
>> Again, there is clearly a bug here, but it helps to localize the problem as >> much as possible. > >>>>>> On Thu, 5 Apr 2018, Randall Mackie wrote: > >>>>> so I assume this is an Intel bug, but before we submit a bug >>>>> report I wan

Re: [petsc-users] configuration option for PETSc on cluster

2018-04-24 Thread Randall Mackie
I haven’t followed this thread completely, but have you sourced the Intel mklvars.sh file? I do the following, for example, source …/intel_2018/mkl/bin/mklvars.sh intel64 Then in our configure it’s simply: --with-blas-lapack-dir=…/intel_2018/mkl and this works fine and picks up all the right I

Re: [petsc-users] Problems with DMDAVecGetArrayF90 + Intel

2018-04-12 Thread Randall Mackie
> On Apr 12, 2018, at 3:47 PM, Satish Balay wrote: > > On Thu, 12 Apr 2018, Victor Eijkhout wrote: > >> >> >> On Apr 12, 2018, at 3:30 PM, Satish Balay >> mailto:ba...@mcs.anl.gov>> wrote: >> >> I can reproudce this issue with 'icc (ICC) 18.0.2 20180210'. Is there a >> newer version? >> >

Re: [petsc-users] Problems with DMDAVecGetArrayF90 + Intel

2018-04-11 Thread Randall Mackie
. But if your machine supports AVX-512, it is >> definitely beneficial to use AVX-512. >> >> Hong (Mr.) >> >>> On Apr 5, 2018, at 10:03 AM, Randall Mackie wrote: >>> >>> Dear PETSc users, >>> >>> I’m curious if anyone else ex

Re: [petsc-users] Problems with DMDAVecGetArrayF90 + Intel

2018-04-11 Thread Randall Mackie
ecific issue. Is that incorrect? > > Hardware details weren't mentioned in this thread. > >> Again, there is clearly a bug here, but it helps to localize the problem as >> much as possible. > >>>>>> On Thu, 5 Apr 2018, Randall Mackie wrote:

Re: [petsc-users] PETSc 3.9 release

2018-04-10 Thread Randall Mackie
Zhang > ilya > Jaroslaw Piwonski > Jean Philippe François > Jed Brown > Jin Chen > Jørgen Dokken > Jose E. Roman > Karl Rupp > Keith Lindsay > "Klaij, Christiaan" > Lawrence Mitchell > Lisandro Dalcin > Manav Bhatia > Marco Schauer > Marius Buerkl

[petsc-users] Problems with DMDAVecGetArrayF90 + Intel

2018-04-05 Thread Randall Mackie
Dear PETSc users, I’m curious if anyone else experiences problems using DMDAVecGetArrayF90 in conjunction with Intel compilers? We have had many problems (typically 11 SEGV segmentation violations) when PETSc is compiled in optimize mode (with various combinations of options). These same codes r

[petsc-users] valgrind errors on VecScatterCreateToAll

2018-01-18 Thread Randall Mackie
The very simple attached program throws lots of valgrind errors. I am using petsc 3.8.3, compiled with the following options:

    ./configure \
      --with-debugging=1 \
      --with-fortran=1 \
      --download-mpich=../mpich-3.3a2.tar.gz \

The makefile, run file, and valgrind output are also attached. Randy M.

[petsc-users] Intel MKL

2017-11-20 Thread Randall Mackie
Dear PETSc team: On upgrading to version 3.8, we have discovered an inconsistency in the python configuration scripts for using Intel MKL for BLAS/LAPACK. It seems that these options were changed between 3.7 and 3.8: Version 3.8: --with-blaslapack-lib=libsunperf.a --with-blas-lib=libblas.a --wi

Re: [petsc-users] MatSetValues dropping non-local entries

2016-08-24 Thread Randall Mackie
M, Matthew Knepley wrote: >> >> On Wed, Aug 24, 2016 at 5:01 PM, Randall Mackie >> wrote: >> I already create my own matrix with the appropriate size and layout. The >> problem seems to be the local to global mapping from >> DMGetLocalToGlobalMapping, whic

Re: [petsc-users] MatSetValues dropping non-local entries

2016-08-24 Thread Randall Mackie
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetStencil.html, i.e. MatSetStencil(). Randy > On Aug 24, 2016, at 2:52 PM, Barry Smith wrote: > > >> On Aug 24, 2016, at 4:45 PM, Randall Mackie wrote: >> >> Well, I only need this particular matr

Re: [petsc-users] MatSetValues dropping non-local entries

2016-08-24 Thread Randall Mackie
the DMDA? Randy > On Aug 24, 2016, at 2:39 PM, Barry Smith wrote: > > >> On Aug 24, 2016, at 4:27 PM, Randall Mackie wrote: >> >> I’ve run into a situation where MatSetValues seems to be dropping non-local >> entries. Most of the entries that are set

[petsc-users] MatSetValues dropping non-local entries

2016-08-24 Thread Randall Mackie
I’ve run into a situation where MatSetValues seems to be dropping non-local entries. Most of the entries that are set are local, but a few are possibly non-local, and are only maximum a few grid points off the local part of the grid. Specifically, I get the local to global mapping, and the indi

[petsc-users] Hypre - Euclid

2016-08-17 Thread Randall Mackie
It seems that Euclid is not available as a Hypre PC unless it is called as part of BoomerAMG. However, there are many older posts that mention -pc_hypre_type euclid, so I’m wondering why, or if there is some other way to access the parallel ILU(k) of Euclid? Thanks, Randy
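
A hedged sketch of the "via BoomerAMG" route alluded to above; the option names assume the current PETSc/hypre interface and should be checked against -help output:

    mpiexec -n 4 ./myapp -pc_type hypre -pc_hypre_type boomeramg \
        -pc_hypre_boomeramg_smooth_type Euclid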

[petsc-users] MatSolve_SeqAIJ_NaturalOrdering

2016-04-22 Thread Randall Mackie
After profiling our code, we have found that most of the time is spent in MatSolve_SeqAIJ_NaturalOrdering, which upon inspection is just doing simple forward and backward solves of already factored ILU matrices. We think that we should be able to see improvement by replacing these with optimize
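
For context, a sketch (not PETSc’s actual code) of what such a routine does: a forward solve with the unit lower factor L followed by a backward solve with U, here for a CSR matrix whose diag array points at U’s diagonal entries.

    subroutine ilu_solve(n, rowptr, diag, col, val, b, x, y)
      implicit none
      integer, intent(in)           :: n, rowptr(n+1), diag(n), col(*)
      double precision, intent(in)  :: val(*), b(n)
      double precision, intent(out) :: x(n), y(n)
      integer :: i, k
      double precision :: s
      do i = 1, n                        ! forward solve L y = b (unit diagonal)
        s = b(i)
        do k = rowptr(i), diag(i) - 1
          s = s - val(k)*y(col(k))
        end do
        y(i) = s
      end do
      do i = n, 1, -1                    ! backward solve U x = y
        s = y(i)
        do k = diag(i) + 1, rowptr(i+1) - 1
          s = s - val(k)*x(col(k))
        end do
        x(i) = s / val(diag(i))
      end do
    end subroutine ilu_solve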

Re: [petsc-users] question about VecScatter from one global vector to another

2016-02-19 Thread Randall Mackie
t 4:39 PM, Matthew Knepley wrote: > > On Fri, Feb 19, 2016 at 6:33 PM, Randall Mackie <rlmackie...@gmail.com> wrote: > I am trying to do a VecScatter of a subset of elements from a global vector > on one DMDA to a global vector on a different DMDA (different sized DMD

[petsc-users] question about VecScatter from one global vector to another

2016-02-19 Thread Randall Mackie
I am trying to do a VecScatter of a subset of elements from a global vector on one DMDA to a global vector on a different DMDA (different sized DMDAs). I thought what made sense was to create a parallel IS using the local to global mapping obtained from the two DMDAs so that the local portion of
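
A hedged sketch of the general mechanism (the index arrays idx_from/idx_to are illustrative; building them correctly from the two DMDAs’ local-to-global mappings is the hard part being discussed):

    IS             :: is_from, is_to
    VecScatter     :: ctx
    PetscErrorCode :: ierr

    call ISCreateGeneral(PETSC_COMM_WORLD, nloc, idx_from, PETSC_COPY_VALUES, is_from, ierr)
    call ISCreateGeneral(PETSC_COMM_WORLD, nloc, idx_to,   PETSC_COPY_VALUES, is_to,   ierr)
    call VecScatterCreate(xglob1, is_from, xglob2, is_to, ctx, ierr)
    call VecScatterBegin(ctx, xglob1, xglob2, INSERT_VALUES, SCATTER_FORWARD, ierr)
    call VecScatterEnd(ctx, xglob1, xglob2, INSERT_VALUES, SCATTER_FORWARD, ierr)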

Re: [petsc-users] memory scalable AO

2016-02-18 Thread Randall Mackie
it recompile the file zdaindexf.c (in fact that file >> and daindexf.c should be the only files that changed and hence the only >> files that got recompiled). >> >> Barry >> >> >> >>> On Feb 17, 2016, at 11:41 PM, Randall Mackie wrote: >

Re: [petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
PM, Satish Balay wrote: > > Attached are the modified src/dm/impls/da/ftn-auto/daindexf.c and > src/dm/impls/da/ftn-custom/zdaindexf.c files. > > Satish > > On Wed, 17 Feb 2016, Jed Brown wrote: > >> Randall Mackie writes: >> >>> this leads to the er

Re: [petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
this leads to the error ‘bin/maint/generatefortranstubs.py’ …No such file. There is no maint directory under bin. Randy > On Feb 17, 2016, at 7:05 PM, Jed Brown wrote: > > Randall Mackie writes: >> So it seems to be ignoring the dmdasetaotype_ in >> /src/dm/impls/da/f

Re: [petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
ce it is there in the debugger you can make sure it is the > right function and step until it crashes to see what I have done wrong. > > Barry > >> On Feb 17, 2016, at 7:29 PM, Randall Mackie wrote: >> >> Unfortunately I am getting exactly the same result.

Re: [petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
wrote: > > > Here is patch. > > If it works for you I'll put it in maint and master tomorrow. > > Barry > >> On Feb 17, 2016, at 3:46 PM, Randall Mackie wrote: >> >> The attached test program demonstrates the problem. When I run it,

Re: [petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
ote: > > > Should be ok. Do you have implicit none and the correct include files so > AOMEMORYSCALABLE is defined? > > I think you need to run in the debugger next to track why this happens. > > Barry > >> On Feb 17, 2016, at 11:33 AM, Randall Mackie wrot

[petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
What is the correct way to set the AO for a DMDA to be the memory scalable version? I have tried this:

    call DMDASetAOType(da,AOMEMORYSCALABLE,ierr)
    call DMDAGetAO(da,ao,ierr)

The code compiles fine, but I simply get a Segmentation Violation when I run it: [3]PETSC ERROR: -

[petsc-users] possible to save direct solver factorization for later use

2015-12-09 Thread Randall Mackie
Is it possible to save a direct solver factorization (like MUMPS) for use in later parts of a code? In general, I’m thinking of a scenario like this:

    mat A - do lots of solves using MUMPS
    mat B - do lots of solves using MUMPS
    do other stuff
    mat A - do lots of solves using MUMPS
    mat
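
One standard way to get this behavior (a sketch based on general PETSc semantics, not on this thread’s answer): keep one KSP per matrix alive; the factorization is computed on the first solve and reused until the operator changes. The solver-type call uses the current API name, which may differ in older releases.

    call KSPCreate(PETSC_COMM_WORLD, kspA, ierr)
    call KSPSetOperators(kspA, A, A, ierr)
    call KSPSetType(kspA, KSPPREONLY, ierr)
    call KSPGetPC(kspA, pc, ierr)
    call PCSetType(pc, PCLU, ierr)
    call PCFactorSetMatSolverType(pc, MATSOLVERMUMPS, ierr)
    ! ... set up kspB for matrix B the same way ...
    call KSPSolve(kspA, b1, x1, ierr)   ! factors A on the first solve
    call KSPSolve(kspB, b2, x2, ierr)   ! factors B; A's factors stay in kspA
    call KSPSolve(kspA, b3, x3, ierr)   ! reuses A's factorization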

[petsc-users] running applications with 64 bit indices

2015-11-27 Thread Randall Mackie
I’ve been struggling to get an application running, which was compiled with 64 bit indices. It runs fine locally on my laptop with a petsc-downloaded mpich (and is Valgrind clean). On our cluster, with Intel MPI, it crashes immediately. When I say immediately, I put a goto end of program right

Re: [petsc-users] question about MPI_Bcast and 64-bit-indices

2015-11-27 Thread Randall Mackie
Thanks Barry and Jose. > On Nov 27, 2015, at 10:27 AM, Barry Smith wrote: > > > Use MPIU_INTEGER for Fortran > > >> On Nov 27, 2015, at 12:09 PM, Jose E. Roman wrote: >> >> >>> On 27 Nov 2015, at 19:00, Randall Mackie >>> wrote

[petsc-users] question about MPI_Bcast and 64-bit-indices

2015-11-27 Thread Randall Mackie
If my program is compiled using 64-bit-indices, and I have an integer variable defined as PetscInt, what is the right way to broadcast that using MPI_Bcast? I currently have: call MPI_Bcast(n, 1, MPI_INTEGER, … which is the right way to do it for regular integers, but what do I use in place of
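
The answer given in the follow-up above is MPIU_INTEGER, which always matches PetscInt regardless of the 64-bit-indices setting:

    PetscInt       :: n
    PetscErrorCode :: ierr

    call MPI_Bcast(n, 1, MPIU_INTEGER, 0, PETSC_COMM_WORLD, ierr)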

Re: [petsc-users] problem compiling

2015-11-26 Thread Randall Mackie
> On Thu, 26 Nov 2015, Randall Mackie wrote: > >> I was trying to recompile PETSc using superlu_dist on a linux system, and I >> had configure download the necessary packages, and the configure went fine. >> >> Compilation bombed out with the error message:

[petsc-users] problem compiling

2015-11-26 Thread Randall Mackie
I was trying to recompile PETSc using superlu_dist on a linux system, and I had configure download the necessary packages, and the configure went fine. Compilation bombed out with the error message: /usr/bin/ld: cannot find -ldat This came immediately after the ztaolinesearch compile in the make

Re: [petsc-users] problem using 64-bit-indices

2015-11-17 Thread Randall Mackie
meters > > Satish > > On Tue, 17 Nov 2015, Randall Mackie wrote: > >> I ran into a problem yesterday where a call to DMDACreate3d gave an error >> message about the size being too big and that I should use >> --with-64-bit-indices. >> >> So I reco

[petsc-users] problem using 64-bit-indices

2015-11-17 Thread Randall Mackie
I ran into a problem yesterday where a call to DMDACreate3d gave an error message about the size being too big and that I should use --with-64-bit-indices. So I recompiled PETSc (latest version, 3.6.2) with that option, but when I recompiled my code, I found the following errors: call VecGetA

[petsc-users] strange FPE divide by zero

2015-09-14 Thread Randall Mackie
I’ve run into a strange error, which is that when I compile my Fortran code with -ffpe-trap=invalid it bombs out and gives the backtrace below. If I don’t include the ffpe-trap switch, the code runs fine and gives the expected results. I’ve even run the code through Valgrind, and no issues were

Re: [petsc-users] Question about GAMG and memory use

2015-03-06 Thread Randall Mackie
> On Mar 5, 2015, at 2:41 PM, Barry Smith wrote: > > > Randy, > > I've not been able to reproduce this; let us know if get to the point of > having something we can run and debug. > > Barry > >> On Mar 4, 2015, at 7:45 PM, Randall Mackie wrote

Re: [petsc-users] Question about GAMG and memory use

2015-03-05 Thread Randall Mackie
> On Mar 4, 2015, at 7:30 PM, Barry Smith wrote: > > >> On Mar 4, 2015, at 7:45 PM, Randall Mackie wrote: >> >> In my application, I am repeatedly calling KSPSolve with the following >> options: >> >> -ksp_type gmres \ >> -pc_type gamg

[petsc-users] Question about GAMG and memory use

2015-03-04 Thread Randall Mackie
In my application, I am repeatedly calling KSPSolve with the following options:

    -ksp_type gmres \
    -pc_type gamg \
    -pc_gamg_type agg \
    -pc_gamg_agg_nsmooths 1

Each call is after the matrix and right-hand side have been updated. This works well in the sense that it solves the system in a reasona

[petsc-users] include/finclude/petscsysdef.h and daimag

2014-09-29 Thread Randall Mackie
I recently ran into an issue with include/finclude/petscsysdef.h and the definition of PetscImaginaryPart, which is defined as daimag(a) in the case PETSC_MISSING_DREAL is not defined. 1) As far as I know, daimag is not a valid Fortran intrinsic, and I suspect that here you want dimag.

Re: [petsc-users] valgrind error messages with PETSc V3.5.1

2014-08-04 Thread Randall Mackie
} \ > } > > in include/petsc-private/fortranimpl.h and recompile the PETSc libraries does > the problem go away? > > Let me know and I’ll fix the code. > > Thanks > > Barry > > > > On Aug 2, 2014, at 2:20 PM, Randall Mackie wrote: >

[petsc-users] valgrind error messages with PETSc V3.5.1

2014-08-02 Thread Randall Mackie
The attached small program, basically a call to PetscPrintf, gives the following valgrind errors: [rmackie ~/tst_petsc_problem] ./cmd_test

    ==24812== Invalid read of size 1
    ==24812==    at 0x4C2E500: __GI_strncpy (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
    ==24812==    by 0x4EF5B4D:

[petsc-users] inserting vector into row of dense matrix

2014-04-23 Thread Randall Mackie
I have a 3D rectangular grid over which I am doing some computations. I have created a 3D DA for that grid, and I can put the results of the computations into a global vector obtained via DMGetGlobalVector. Now, I need to do these computations for several realizations, and I want to keep all th

Re: [petsc-users] printing from some processors to a file

2013-11-27 Thread Randall Mackie
TSc > https://bitbucket.org/petsc/petsc/wiki/Home > >Because it changes APIs I cannot merge it into the current release > (though it doesn’t change APIs for your code). > > Thanks for reporting the problem, > >Barry > On Nov 26, 2013, at 3:45 PM, Randall Macki

[petsc-users] printing from some processors to a file

2013-11-26 Thread Randall Mackie
I am trying to print a character string to a file from one or more processors in a communicator. I thought that I could do this using PetscViewerASCIISynchronizedPrintf, but it prints to the screen instead of to the file opened as a viewer. The attached simple program illustrates the issue. If I re
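
For reference, in current PETSc the synchronized-print sequence that routes output to the viewer’s file looks roughly like the following sketch; the Push/Pop calls appear to be the API change the follow-up above refers to.

    call PetscViewerASCIIOpen(PETSC_COMM_WORLD, 'out.txt', viewer, ierr)
    call PetscViewerASCIIPushSynchronized(viewer, ierr)
    call PetscViewerASCIISynchronizedPrintf(viewer, 'hello from some ranks'//char(10), ierr)
    call PetscViewerFlush(viewer, ierr)
    call PetscViewerASCIIPopSynchronized(viewer, ierr)
    call PetscViewerDestroy(viewer, ierr)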

Re: [petsc-users] VecScatter from slice of global vector to sequential vector + AO errors

2013-10-03 Thread Randall Mackie
On Oct 3, 2013, at 12:24 PM, Matthew Knepley wrote: > On Thu, Oct 3, 2013 at 2:07 PM, Randall Mackie wrote: > I am trying to create a VecScatter that will scatter all the values from a > horizontal slice of a 3D DMDA global vector (with dof=3) to a sequential > vector on every p

[petsc-users] VecScatter from slice of global vector to sequential vector + AO errors

2013-10-03 Thread Randall Mackie
I am trying to create a VecScatter that will scatter all the values from a horizontal slice of a 3D DMDA global vector (with dof=3) to a sequential vector on every processor. So far I have been unsuccessful, most likely because I don't completely understand how to get the appropriate IS to do this.

Re: [petsc-users] MATMPIBAIJ

2013-09-19 Thread Randall Mackie
I'm not sure I understand your statement about matrix shells being time consuming to implement. They are easy, see this example: http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex14f.F.html Randy M. On Sep 19, 2013, at 12:01 PM, Reza Yaghmaie wrote: > > Thanks Mat
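
A hedged Fortran sketch in the spirit of that example (context argument omitted; names and sizes illustrative, with local sizes mloc/nloc and global sizes M/N):

    external MyMult
    Mat :: S
    PetscErrorCode :: ierr

    call MatCreateShell(PETSC_COMM_WORLD, mloc, nloc, M, N, PETSC_NULL_INTEGER, S, ierr)
    call MatShellSetOperation(S, MATOP_MULT, MyMult, ierr)

    ! where MyMult computes y = A*x with the signature:
    !   subroutine MyMult(S, x, y, ierr)
    !     Mat :: S
    !     Vec :: x, y
    !     PetscErrorCode :: ierr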

[petsc-users] possible error

2013-03-14 Thread Randall Mackie
Nachiket, I've run into a similar situation before, where my code ran fine in debug mode, bombed out with a segmentation violation in optimized mode, and yet I couldn't find anything in valgrind. In my case, I eventually tracked it down to some variable not being broadcast to all processors, and so

[petsc-users] output after updating to petsc-3.3-p4 from p3

2012-11-26 Thread Randall Mackie
I just updated from patch level 3 to patch level 4, and after recompiling and rerunning, I am getting output like the following:

    [0] Total space allocated 1091174672 bytes
    [0] 48 bytes PetscObjectComposedDataIncreaseReal() line 170 in /home/MackieR/PETSc/petsc-3.3-p4/src/sys/objects/state.c

[petsc-users] Problem with -pc_type gamg

2012-09-14 Thread Randall Mackie
to test this with the GAMG preconditioner, without having to do too much work on interpolation operators, etc. Randy On Sep 14, 2012, at 11:07 AM, Mark F. Adams wrote: > I just pushed a fix in petsc-dev that should fix the problem. > > Mark > > On Sep 14, 2012, at 1:58 PM,

[petsc-users] Problem with -pc_type gamg

2012-09-14 Thread Randall Mackie
Thanks, I will try and let you know. Randy On Sep 14, 2012, at 11:07 AM, "Mark F. Adams" wrote: > I just pushed a fix in petsc-dev that should fix the problem. > > Mark > > On Sep 14, 2012, at 1:58 PM, Randall Mackie wrote: > >> Hi Mark, >> >

[petsc-users] Problem with -pc_type gamg

2012-09-14 Thread Randall Mackie
s code. > > Mark > > On Sep 14, 2012, at 12:28 PM, Randall Mackie wrote: > >> For quite some time I've been solving my problems using BCGS with ASM and >> that works quite well. >> I was curious to try gamg, but when I try, I get error messages about

[petsc-users] Problem with -pc_type gamg

2012-09-14 Thread Randall Mackie
For quite some time I've been solving my problems using BCGS with ASM and that works quite well. I was curious to try gamg, but when I try, I get error messages about a new nonzero causing a malloc (see error message below). What is strange is that in my code, I specifically turn this off with:

[petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran

2012-07-10 Thread Randall Mackie
On Jul 10, 2012, at 11:29 AM, Jed Brown wrote: > On Tue, Jul 10, 2012 at 1:22 PM, TAY wee-beng wrote: > Do you mean DMDAVecGetArrayDOFF90 ? I tried to compile but it gives the error > during linking: > > 1>dm_test2d.obj : error LNK2019: unresolved external symbol > DMDAVECGETARRAYDOFF90 refer

[petsc-users] ksp_monitor_true_residual_norm

2012-04-19 Thread Randall Mackie
:02, Randall Mackie > wrote: > I am still calling from Fortran, AND using my original Fortran > ShellKSPMonitor that calls KSPMonitorTrueResidual. > Seems to work okay now. > > Did you create a PetscViewer context? I don't think it can work correctly > passing PETSC_NULL

[petsc-users] ksp_monitor_true_residual_norm

2012-04-19 Thread Randall Mackie
Jed, I am still calling from Fortran, AND using my original Fortran ShellKSPMonitor that calls KSPMonitorTrueResidual. Seems to work okay now. Thanks again, Randy On Apr 19, 2012, at 12:00 PM, Jed Brown wrote: > On Thu, Apr 19, 2012 at 11:53, Randall Mackie > wrote: > It is wo

[petsc-users] ksp_monitor_true_residual_norm

2012-04-19 Thread Randall Mackie
? > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/05b5b9325f55 > > On Thu, Apr 19, 2012 at 10:12, Randall Mackie > wrote: > Hi Matt and Barry, > > I tried this again, but this time I used a c subroutine like Barry suggested, > which is this: > > #include "petsc

[petsc-users] ksp_monitor_true_residual_norm

2012-04-19 Thread Randall Mackie
wrote: > It looks like this null object check was missing from the Fortran bindings. > Can you try with this patch included? > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/05b5b9325f55 > > On Thu, Apr 19, 2012 at 10:12, Randall Mackie > wrote: > Hi Matt and Barry, >
