[petsc-users] How to specify different MPI communication patterns.

2024-05-21 Thread Randall Mackie
Dear PETSc team, A few years ago we were having some issues with MPI communications with large numbers of processes and subcomms, see this thread here:

[petsc-users] valgrind errors

2023-12-12 Thread Randall Mackie
It now seems to me that petsc+mpich is no longer valgrind clean, or I am doing something wrong. A simple program: Program test #include "petsc/finclude/petscsys.h" use petscsys PetscInt :: ierr call PetscInitialize(PETSC_NULL_CHARACTER,ierr) call PetscFinalize(ierr) end program

Re: [petsc-users] Better solver and preconditioner to use multiple GPU

2023-11-09 Thread Randall Mackie
Hi Ramoni, All EM induction methods solved numerically like finite differences are already difficult because of the null space of the curl-curl equations, and adding air layers on top of your model also introduces another singularity. These have been dealt with in the past by adding in some

Re: [petsc-users] PETSc on GCC

2022-08-04 Thread Randall Mackie
Hi Simon, I think you might actually need CFLAGS=’-std=gnu99’ Randy M. > On Aug 4, 2022, at 1:51 PM, Satish Balay via petsc-users > wrote: > >> Configure Options: --configModules=PETSc.Configure >> --optionsModule=config.compilerOptions CFLAGS=-std=c99 --with-cc=mpicc >> --with-cxx=g++

Re: [petsc-users] MatPrealloctor

2022-07-29 Thread Randall Mackie
> local mappings. > > Barry > > >> On Jul 29, 2022, at 5:33 PM, Randall Mackie > <mailto:rlmackie...@gmail.com>> wrote: >> >> Why do I have to set the block size to 3? I’m not trying to create a block >> matrix. >> >> In fact j

Re: [petsc-users] MatPrealloctor

2022-07-29 Thread Randall Mackie
without the MatPreallocator routines. Randy > On Jul 29, 2022, at 2:23 PM, Barry Smith wrote: > > > I'm hoping that it is as simple as that you did not set the block size of > your newly created matrix to 3? > > > >> On Jul 29, 2022, at 4:04 PM, Ran

Re: [petsc-users] MatPrealloctor

2022-07-29 Thread Randall Mackie
On Jul 28, 2022, at 2:49 PM, Barry Smith <bsm...@petsc.dev> wrote: I am not sure what you are asking exactly but I think so, so long as you have called MatSetLocalToGlobalMapping() and the "stencil" idea makes sense for your discretization. Barry On Jul 28, at 5

[petsc-users] MatPrealloctor

2022-07-28 Thread Randall Mackie
Dear PETSc users: Can one use a MatPreallocator and then call MatPreAlloctorPreallocate if using MatStencil routines (which seem to call MatSetValuesLocal). Thanks, Randy

Re: [petsc-users] PETSc / AMRex

2022-07-18 Thread Randall Mackie
> On Jul 15, 2022, at 12:26 PM, Matthew Knepley wrote: > > On Fri, Jul 15, 2022 at 2:12 PM Randall Mackie <mailto:rlmackie...@gmail.com>> wrote: > > >> On Jul 15, 2022, at 11:58 AM, Matthew Knepley > <mailto:knep...@gmail.com>> wrote: >> &

Re: [petsc-users] PETSc / AMRex

2022-07-15 Thread Randall Mackie
> On Jul 15, 2022, at 12:40 PM, Jed Brown wrote: > > Matthew Knepley writes: > >>> I currently set up a 3D DMDA using a box stencil and a stencil width of 2. >>> The i,j,k coordinates refer both to the cell (where there is a physical >>> value assigned) and to the 3 edges of the cell at the

Re: [petsc-users] PETSc / AMRex

2022-07-15 Thread Randall Mackie
> On Jul 15, 2022, at 11:58 AM, Matthew Knepley wrote: > > On Fri, Jul 15, 2022 at 1:46 PM Randall Mackie <mailto:rlmackie...@gmail.com>> wrote: >> On Jul 15, 2022, at 11:20 AM, Matthew Knepley > <mailto:knep...@gmail.com>> wrote: >> >>

Re: [petsc-users] PETSc / AMRex

2022-07-15 Thread Randall Mackie
> On Jul 15, 2022, at 11:20 AM, Matthew Knepley wrote: > > On Fri, Jul 15, 2022 at 11:01 AM Randall Mackie <mailto:rlmackie...@gmail.com>> wrote: > I am also interested in converting my DMDA code to DMPlex so that I can use > OcTree grids. > > Is there a simp

Re: [petsc-users] PETSc / AMRex

2022-07-15 Thread Randall Mackie
I am also interested in converting my DMDA code to DMPlex so that I can use OcTree grids. Is there a simple example that would show how to do a box grid in DMPlex, or more information about how to convert a DMDA grid to DMPlex? Thanks, Randy > On Jun 21, 2022, at 10:57 AM, Mark Adams wrote:

Re: [petsc-users] strumpack in ilu mode

2022-05-25 Thread Randall Mackie
ps://gitlab.com/petsc/petsc/-/merge_requests/4543/> and try its branch. > It has more features and may provide more of what you need. > > Barry > > >> On May 23, 2022, at 1:59 PM, Randall Mackie > <mailto:rlmackie...@gmail.com>> wrote: >> >> Dear PE

[petsc-users] strumpack in ilu mode

2022-05-23 Thread Randall Mackie
Dear PETSc team: I am trying to use Strumpack in ILU mode, which is supposed to activate its low-rank approximation as described on the man page: https://petsc.org/release/docs/manualpages/Mat/MATSOLVERSSTRUMPACK.html

Re: [petsc-users] error with version 3.17.1

2022-05-03 Thread Randall Mackie
> On May 3, 2022, at 12:37 PM, Mark Adams wrote: > > >> Are you saying that now you have to explicitly set each 3x3 dense block, >> even if they are not used and that was not the case before? > > That was always the case before, you may have misinterpreted the meaning of a > Mat block size?

Re: [petsc-users] error with version 3.17.1

2022-05-03 Thread Randall Mackie
block, even if they are not used and that was not the case before? Randy > On May 3, 2022, at 10:09 AM, Pierre Jolivet wrote: > > > >> On 3 May 2022, at 6:54 PM, Randall Mackie > <mailto:rlmackie...@gmail.com>> wrote: >> >> Hi Pierre, >>

Re: [petsc-users] error with version 3.17.1

2022-05-03 Thread Randall Mackie
nt block size for the column and row > distributions, see MatSetBlockSizes(). > > Thanks, > Pierre > >> On 3 May 2022, at 5:39 PM, Randall Mackie > <mailto:rlmackie...@gmail.com>> wrote: >> >> Dear PETSc team: >> >> A part of our code that has worked

[petsc-users] error with version 3.17.1

2022-05-03 Thread Randall Mackie
Dear PETSc team: A part of our code that has worked for years and previous versions is now failing with the latest version 3.17.1, on the KSP solve with the following error: [0]PETSC ERROR: - Error Message --

Re: [petsc-users] DMDA with 0 in lx, ly, lz ... or how to create a DMDA for a subregion

2022-01-25 Thread Randall Mackie
Take a look at these posts from last year and see if they will help you at least get a slice: https://lists.mcs.anl.gov/pipermail/petsc-users/2021-January/043037.html

Re: [petsc-users] Concatenating DM vectors

2021-08-11 Thread Randall Mackie
Hi Alfredo Take a look at VecStrideGather and VecStrideScatter….maybe these are what you want? https://petsc.org/release/docs/manualpages/Vec/VecStrideGather.html

[petsc-users] val grind suppression file

2021-06-07 Thread Randall Mackie
It seems that starting with petsc-3.15.0 the command petscmpiexec, when invoking Valgrind, now looks for a Valgrind suppression file, but there is no such file (at least in petsc-3.15.0.tar.gz from which I compiled the

Re: [petsc-users] Fortran initialization and XXXDestroy

2021-02-02 Thread Randall Mackie
Hi Mark, I don’t know what the XGC code is, but the way I do this in my Fortran code is that I initialize all objects I later want to destroy, for example: mat11=PETSC_NULL_MAT vec1=PETSC_NULL_VEC etc Then I check and destroy like: if (mat11 /= PETSC_NULL_MAT) call MatDestroy(mat11, ierr)

Re: [petsc-users] Convert a 3D DMDA sub-vector to a natural 2D vector

2021-01-23 Thread Randall Mackie
Several years ago I asked a similar question and this is what Barry told me: >> Randy, Take a look at DMDAGetRay() in src/dm/impls/da/dasub.c (now called DMDACreateRay()) this takes a row or column from a 2d DMDA. You can use the same kind of approach to get a slice from a 3d

Re: [petsc-users] trouble compiling MPICH on cluster

2020-12-16 Thread Randall Mackie
e CC=gcc > CXX=g++ FC=gfortran F77=gfortran] should also provide equivalent [valgrind > clean] MPICH. > > Satish > > > > On Wed, 16 Dec 2020, Randall Mackie wrote: > >> Dear PETSc team: >> >> I am trying to compile a debug-mpich version of PETSc on a new re

[petsc-users] Is there a way to estimate total memory prior to solve

2020-12-10 Thread Randall Mackie
Dear PETSc users: While I can calculate the amount of memory any vector arrays I allocate inside my code (and probably get pretty close to any matrices), what I don’t know how to estimate is how much memory internal PETSc iterative solvers will take. Is there some way to get a reasonable

Re: [petsc-users] using real and complex together

2020-09-28 Thread Randall Mackie
Sam, you can solve a complex matrix using a real version of PETSc by doubling the size of your matrix and splitting out the real/imaginary parts. See this paper: https://epubs.siam.org/doi/abs/10.1137/S1064827500372262?mobileUi=0
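The real-equivalent formulation mentioned in that reply can be sketched with NumPy. This is purely illustrative (the cited paper is about preconditioning such doubled systems; PETSc itself is not involved here): a complex system Ax = b is rewritten as a real system twice the size, with blocks [Re(A) -Im(A); Im(A) Re(A)].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

x = np.linalg.solve(A, b)  # reference complex solve

# Doubled real system:
#   [Re(A) -Im(A)] [Re(x)]   [Re(b)]
#   [Im(A)  Re(A)] [Im(x)] = [Im(b)]
K = np.block([[A.real, -A.imag],
              [A.imag,  A.real]])
rhs = np.concatenate([b.real, b.imag])
y = np.linalg.solve(K, rhs)
x_from_real_form = y[:n] + 1j * y[n:]
```

Reassembling the two halves of y recovers the complex solution, which is the whole point of the trick.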

Re: [petsc-users] question about creating a block matrix

2020-08-11 Thread Randall Mackie
> On Aug 10, 2020, at 9:00 PM, Jed Brown wrote: > > Randall Mackie writes: > >> Dear PETSc users - >> >> I am trying to create a block matrix but it is not clear to me what is the >> right way to do this. >> >> First, I create 2 spa

[petsc-users] question about creating a block matrix

2020-08-10 Thread Randall Mackie
Dear PETSc users - I am trying to create a block matrix but it is not clear to me what is the right way to do this. First, I create 2 sparse matrices J1 and J2 using two different DMDAs. Then I compute the products J1^T J1, and J2^T J2, which are different sized matrices. Since the matrices

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-30 Thread Randall Mackie
tiple times. > > BTW, if no objection, I'd like to add your excellent example to petsc repo. > >Thanks > --Junchao Zhang > > > On Fri, Apr 24, 2020 at 5:32 PM Randall Mackie <mailto:rlmackie...@gmail.com>> wrote: > Hi Junchao, > > I tested

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-27 Thread Randall Mackie
worked fine. >Please try it in your environment and let me know the result. Since the > failure is random, you may need to run multiple times. > > BTW, if no objection, I'd like to add your excellent example to petsc repo. > >Thanks > --Junchao Zhang > > > On Fri,

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-24 Thread Randall Mackie
applicable, we can provide options in petsc to carry out the communication in phases to avoid flooding the network (though it is better done by MPI). Thanks. --Junchao Zhang On Fri, Apr 17, 2020 at 10:47 AM Randall Mackie <rlmackie...@gmail.com> wrote: Hi Junchao, Thank you for your efforts. W

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-17 Thread Randall Mackie
ask, see case 3 in > https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c > <https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vscat/tests/ex9.c> > BTW, it is good to use petsc master so we are on the same page. > --Junchao Zhang > > > On Wed, Apr 15

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-15 Thread Randall Mackie
me to debug? --Junchao Zhang On Tue, Apr 14, 2020 at 12:13 PM Randall Mackie <rlmackie...@gmail.com> wrote: Hi Junchao, We have tried your two suggestions but the problem remains. And the problem seems to be on the MPI_Isend line 117 in PetscGatherMessageLengths and not MPI_AllReduce. We have now trie

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-14 Thread Randall Mackie
ption -build_twosided allreduce, which is a workaround for Intel > MPI_Ibarrier bugs we met. >Thanks. > --Junchao Zhang > > > On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie <mailto:rlmackie...@gmail.com>> wrote: > Dear PETSc users, > > We are trying to underst

Re: [petsc-users] MPI error for large number of processes and subcomms

2020-04-13 Thread Randall Mackie
--Junchao Zhang > > > On Mon, Apr 13, 2020 at 10:38 AM Randall Mackie <mailto:rlmackie...@gmail.com>> wrote: > Dear PETSc users, > > We are trying to understand an issue that has come up in running our code on > a large cloud cluster with a large number of pr

[petsc-users] MPI error for large number of processes and subcomms

2020-04-13 Thread Randall Mackie
Dear PETSc users, We are trying to understand an issue that has come up in running our code on a large cloud cluster with a large number of processes and subcomms. This is code that we use daily on multiple clusters without problems, and that runs valgrind clean for small test problems. The

[petsc-users] Question on VecScatter options

2020-04-10 Thread Randall Mackie
The VecScatter man page says that the default vecscatter type uses PetscSF, and that one can use PetscSF options to control the communication. PetscSFCreate lists 3 different types, including MPI-3 options. So I’m wondering is it enough to just add, for example, -sf_type neighbor to the list

Re: [petsc-users] duplicate PETSC options

2020-03-30 Thread Randall Mackie
Hi Matt, Yes I just submitted an issue. Thanks very much. Randy M. > On Mar 30, 2020, at 9:15 AM, Matthew Knepley wrote: > > On Mon, Mar 30, 2020 at 11:46 AM Randall Mackie <mailto:rlmackie...@gmail.com>> wrote: > When PETSc reads in a list of options (like PetscOpt

[petsc-users] duplicate PETSC options

2020-03-30 Thread Randall Mackie
When PETSc reads in a list of options (like PetscOptionsGetReal, etc.), we have noticed that if there are duplicate entries, PETSc takes the last one entered as the option to use. This can happen if the user didn't notice there were two lines with the same option name (but different values
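The last-entry-wins behavior described in that thread is the same behavior you get when folding flag/value pairs into a dict, where a later key silently overwrites an earlier one. A plain-Python sketch (the option names are just examples, not taken from the thread):

```python
# Command-line style options; note -ksp_rtol appears twice
argv = ["-ksp_rtol", "1e-6", "-pc_type", "asm", "-ksp_rtol", "1e-10"]

opts = {}
for flag, val in zip(argv[0::2], argv[1::2]):
    opts[flag] = val  # a later duplicate silently overwrites the earlier value
```

After the loop, opts["-ksp_rtol"] is "1e-10": the first value is gone without any warning, which is exactly the kind of silent duplication the original post was asking about.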

[petsc-users] PETSc 3.12 with .f90 files

2019-10-29 Thread Randall Mackie via petsc-users
Dear PETSc users: In our code, we have one or two small .f90 files that are part of the software, and they have always compiled without any issues with previous versions of PETSc, using standard PETSc make files. However, starting with PETSc 3.12, they no longer compile. Was there some

[petsc-users] call to PetscViewerAsciiOpen

2018-12-21 Thread Randall Mackie via petsc-users
The attached simple test program (when compiled in debug mode with gfortran and mpich) throws a lot of valgrind errors on the call to PetscViewerAsciiOpen, as well as the call to MPI_Comm_split. I can’t figure out if it is a problem in the code, or a bug somewhere else. Any advice is

Re: [petsc-users] Problem compiling with 64bit PETSc

2018-09-05 Thread Randall Mackie
You can use PetscMPIInt for integers in MPI calls. Check petscsys.h for definitions of all of these. Randy > On Sep 5, 2018, at 8:56 PM, TAY wee-beng wrote: > > Hi, > > My code has some problems now after converting to 64bit indices. > > After debugging, I realised that I'm using: > >

Re: [petsc-users] Problem compiling with 64bit PETSc

2018-08-30 Thread Randall Mackie
Don’t forget that not all integers passed into PETSc routines are 64 bit. For example, the error codes when called from Fortran should be defined as PetscErrorCode and not PetscInt. It’s really good to get into the habit of correctly declaring all PETSc variables according to the web pages, so

Re: [petsc-users] Problems with DMDAVecGetArrayF90 + Intel

2018-05-15 Thread Randall Mackie
s clearly a bug here, but it helps to localize the problem as >> much as possible. > >>>>>> On Thu, 5 Apr 2018, Randall Mackie wrote: > >>>>> so I assume this is an Intel bug, but before we submit a bug >>>>> report I wanted to see if any

Re: [petsc-users] configuration option for PETSc on cluster

2018-04-24 Thread Randall Mackie
I haven’t followed this thread completely, but have you sourced the Intel mklvars.sh file? I do the following, for example, source …/intel_2018/mkl/bin/mklvars.sh intel64 Then in our configure it’s simply: --with-blas-lapack-dir=…/intel_2018/mkl and this works fine and picks up all the right

Re: [petsc-users] Problems with DMDAVecGetArrayF90 + Intel

2018-04-12 Thread Randall Mackie
> On Apr 12, 2018, at 3:47 PM, Satish Balay wrote: > > On Thu, 12 Apr 2018, Victor Eijkhout wrote: > >> >> >> On Apr 12, 2018, at 3:30 PM, Satish Balay >> > wrote: >> >> I can reproudce this issue with 'icc (ICC) 18.0.2

Re: [petsc-users] Problems with DMDAVecGetArrayF90 + Intel

2018-04-11 Thread Randall Mackie
hat incorrect? > > Hardware details weren't mentioned in this thread. > >> Again, there is clearly a bug here, but it helps to localize the problem as >> much as possible. > >>>>>> On Thu, 5 Apr 2018, Randall Mackie wrote: > >>>>> so

[petsc-users] Problems with DMDAVecGetArrayF90 + Intel

2018-04-05 Thread Randall Mackie
Dear PETSc users, I’m curious if anyone else experiences problems using DMDAVecGetArrayF90 in conjunction with Intel compilers? We have had many problems (typically 11 SEGV segmentation violations) when PETSc is compiled in optimize mode (with various combinations of options). These same codes

[petsc-users] valgrind errors on vecscattercreatetoAll

2018-01-18 Thread Randall Mackie
The very simple attached program throws lots of valgrind errors. I am using PETSc 3.8.3, compiled with the following options: ./configure \ --with-debugging=1 \ --with-fortran=1 \ --download-mpich=../mpich-3.3a2.tar.gz \ The makefile, run file, and valgrind output are also attached. Randy M.

[petsc-users] Intel MKL

2017-11-20 Thread Randall Mackie
Dear PETSc team: On upgrading to version 3.8, we have discovered an inconsistency in the python configuration scripts for using Intel MKL for BLAS/LAPACK. It seems that these options were changed between 3.7 and 3.8: Version 3.8: --with-blaslapack-lib=libsunperf.a --with-blas-lib=libblas.a

Re: [petsc-users] MatSetValues dropping non-local entries

2016-08-24 Thread Randall Mackie
gt; On Aug 24, 2016, at 5:23 PM, Matthew Knepley <knep...@gmail.com> wrote: >> >> On Wed, Aug 24, 2016 at 5:01 PM, Randall Mackie <rlmackie...@gmail.com> >> wrote: >> I already create my own matrix with the appropriate size and layout. The >> problem seems to b

Re: [petsc-users] MatSetValues dropping non-local entries

2016-08-24 Thread Randall Mackie
ttp://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetStencil.html#MatSetStencil>(). Randy > On Aug 24, 2016, at 2:52 PM, Barry Smith <bsm...@mcs.anl.gov> wrote: > > >> On Aug 24, 2016, at 4:45 PM, Randall Mackie <rlmackie...@gmail.com> wrote: >>

Re: [petsc-users] MatSetValues dropping non-local entries

2016-08-24 Thread Randall Mackie
of the DMDA? Randy > On Aug 24, 2016, at 2:39 PM, Barry Smith <bsm...@mcs.anl.gov> wrote: > > >> On Aug 24, 2016, at 4:27 PM, Randall Mackie <rlmackie...@gmail.com> wrote: >> >> I’ve run into a situation where MatSetValues seems to be dropping non

[petsc-users] MatSetValues dropping non-local entries

2016-08-24 Thread Randall Mackie
I’ve run into a situation where MatSetValues seems to be dropping non-local entries. Most of the entries that are set are local, but a few are possibly non-local, and are only maximum a few grid points off the local part of the grid. Specifically, I get the local to global mapping, and the

[petsc-users] Hypre - Euclid

2016-08-17 Thread Randall Mackie
It seems that Euclid is not available as a Hypre PC unless it is called as part of BoomerAMG. However, there are many older posts that mention -pc_hypre_type euclid, so I’m wondering why, or if there is some other way to access the parallel ILU(k) of Euclid? Thanks, Randy

[petsc-users] MatSolve_SeqAIJ_NaturalOrdering

2016-04-22 Thread Randall Mackie
After profiling our code, we have found that most of the time is spent in MatSolve_SeqAIJ_NaturalOrdering, which upon inspection is just doing simple forward and backward solves of already factored ILU matrices. We think that we should be able to see improvement by replacing these with
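The "simple forward and backward solves" that MatSolve_SeqAIJ_NaturalOrdering performs on the factored ILU matrix are just triangular substitutions. A dense NumPy sketch of the two solves (PETSc's actual kernel works on sparse AIJ storage, so this is only the algorithmic idea):

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b, with L unit lower triangular (like an ILU L factor)."""
    y = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        y[i] = b[i] - L[i, :i] @ y[:i]
    return y

def backward_sub(U, y):
    """Solve U x = y, with U upper triangular (like an ILU U factor)."""
    n = len(y)
    x = np.zeros_like(y, dtype=float)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Toy factored system: A = L @ U
L = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.25, 0.5, 1.0]])
U = np.array([[4.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x = backward_sub(U, forward_sub(L, b))
```

Each solve is a single sweep over the rows, which is why these kernels are memory-bandwidth bound and a natural target for the optimization the post is asking about.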

Re: [petsc-users] question about VecScatter from one global vector to another

2016-02-19 Thread Randall Mackie
t 4:39 PM, Matthew Knepley <knep...@gmail.com> wrote: > > On Fri, Feb 19, 2016 at 6:33 PM, Randall Mackie <rlmackie...@gmail.com > <mailto:rlmackie...@gmail.com>> wrote: > I am trying to do a VecScatter of a subset of elements from a global vector > on one DMDA to

[petsc-users] question about VecScatter from one global vector to another

2016-02-19 Thread Randall Mackie
I am trying to do a VecScatter of a subset of elements from a global vector on one DMDA to a global vector on a different DMDA (different sized DMDAs). I thought what made sense was to create a parallel IS using the local to global mapping obtained from the two DMDAs so that the local portion

Re: [petsc-users] memory scalable AO

2016-02-18 Thread Randall Mackie
d the patch did it recompile the file zdaindexf.c (in fact that file >> and daindexf.c should be the only files that changed and hence the only >> files that got recompiled). >> >> Barry >> >> >> >>> On Feb 17, 2016, at 11:41 PM, R

Re: [petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
PM, Satish Balay <ba...@mcs.anl.gov> wrote: > > Attached are the modified src/dm/impls/da/ftn-auto/daindexf.c and > src/dm/impls/da/ftn-custom/zdaindexf.c files. > > Satish > > On Wed, 17 Feb 2016, Jed Brown wrote: > >> Randall Mackie <rlmackie...@gmail.co

Re: [petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
this leads to the error ‘bin/maint/generatefortranstubs.py’ …No such file. there is no maint directory under bin. Randy > On Feb 17, 2016, at 7:05 PM, Jed Brown <j...@jedbrown.org> wrote: > > Randall Mackie <rlmackie...@gmail.com> writes: >> So it seems to be

Re: [petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
n > dmdasetaotype_ once it is there in the debugger you can make sure it is the > right function and step until it crashes to see what I have done wrong. > > Barry > >> On Feb 17, 2016, at 7:29 PM, Randall Mackie <rlmackie...@gmail.com> wrote: >> >> Unfor

Re: [petsc-users] memory scalable AO

2016-02-17 Thread Randall Mackie
lt;bsm...@mcs.anl.gov> wrote: > > > Should be ok. Do you have implicit none and the correct include files so > AOMEMORYSCALABLE is defined? > > I think you need to run in the debugger next to track why this happens. > > Barry > >> On Feb 17, 2016, at 11:33

[petsc-users] possible to save direct solver factorization for later use

2015-12-09 Thread Randall Mackie
Is it possible to save a direct solver factorization (like Mumps) for use in later parts of a code? In general, I’m thinking of a scenario like this: mat A - do lots of solves using Mumps mat B - do lots of solves using Mumps do other stuff mat A - do lots of solves using Mumps mat
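In PETSc the usual answer to this scenario is to keep the KSP/PC objects (and hence the factored Mat) alive between the groups of solves rather than destroying them. The underlying factor-once/solve-many idea can be sketched densely in NumPy; this uses a Cholesky factor as a stand-in for a MUMPS factorization (illustration only, not the PETSc API):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # SPD stand-in for "mat A" in the scenario

# Factor once -- the expensive step, analogous to the numeric factorization
chol = np.linalg.cholesky(A)  # A = chol @ chol.T

def solve_with_factor(chol, b):
    # Two cheap triangular solves reuse the stored factor
    y = np.linalg.solve(chol, b)
    return np.linalg.solve(chol.T, y)

# "Do lots of solves" against the same saved factorization
bs = [rng.standard_normal(n) for _ in range(3)]
xs = [solve_with_factor(chol, b) for b in bs]
```

As long as the factor object stays in scope, each new right-hand side costs only the triangular solves, which is the saving the question is after.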

Re: [petsc-users] question about MPI_Bcast and 64-bit-indices

2015-11-27 Thread Randall Mackie
Thanks Barry and Jose. > On Nov 27, 2015, at 10:27 AM, Barry Smith <bsm...@mcs.anl.gov> wrote: > > > Use MPIU_INTEGER for Fortran > > >> On Nov 27, 2015, at 12:09 PM, Jose E. Roman <jro...@dsic.upv.es> wrote: >> >> >>> El 27 nov

[petsc-users] question about MPI_Bcast and 64-bit-indices

2015-11-27 Thread Randall Mackie
If my program is compiled using 64-bit-indices, and I have an integer variable defined as PetscInt, what is the right way to broadcast that using MPI_Bcast? I currently have: call MPI_Bcast(n, 1, MPI_INTEGER, … which is the right way to do it for regular integers, but what do I use in place

[petsc-users] running applications with 64 bit indices

2015-11-27 Thread Randall Mackie
I’ve been struggling to get an application running, which was compiled with 64 bit indices. It runs fine locally on my laptop with a petsc-downloaded mpich (and is Valgrind clean). On our cluster, with Intel MPI, it crashes immediately. When I say immediately, I put a goto end of program

[petsc-users] problem compiling

2015-11-26 Thread Randall Mackie
I was trying to recompile PETSc using superlu_dist on a linux system, and I had configure download the necessary packages, and the configure went fine. Compilation bombed out with the error message: /usr/bin/ld cannot find -ldat this came immediately after the ztaulinesearch compile in the

Re: [petsc-users] problem compiling

2015-11-26 Thread Randall Mackie
build. > > Satish > > On Thu, 26 Nov 2015, Randall Mackie wrote: > >> I was trying to recompile PETSc using superlu_dist on a linux system, and I >> had configure download the necessary packages, and the configure went fine. >> >> Compilation bombed out wit

[petsc-users] problem using 64-bit-indices

2015-11-17 Thread Randall Mackie
I ran into a problem yesterday where a call to DMDACreate3d gave an error message about the size being too big and that I should use --with-64-bit-indices. So I recompiled PETSc (latest version, 3.6.2) with that option, but when I recompiled my code, I found the following errors: call

Re: [petsc-users] problem using 64-bit-indices

2015-11-17 Thread Randall Mackie
r parameters > > Satish > > On Tue, 17 Nov 2015, Randall Mackie wrote: > >> I ran into a problem yesterday where a call to DMDACreate3d gave an error >> message about the size being too big and that I should use >> --with-64-bit-indices. >> >> So I

[petsc-users] strange FPE divide by zero

2015-09-14 Thread Randall Mackie
I’ve run into a strange error, which is that when I compile my Fortran code with -ffpe-trap=invalid it bombs out and gives the backtrace below. If I don’t include the ffpe-trap switch, the code runs fine and gives the expected results. I’ve even run the code through Valgrind, and no issues were

Re: [petsc-users] Question about GAMG and memory use

2015-03-06 Thread Randall Mackie
...@mcs.anl.gov wrote: Randy, I've not been able to reproduce this; let us know if get to the point of having something we can run and debug. Barry On Mar 4, 2015, at 7:45 PM, Randall Mackie rlmackie...@gmail.com wrote: In my application, I am repeatedly calling KSPSolve with the following

Re: [petsc-users] Question about GAMG and memory use

2015-03-05 Thread Randall Mackie
On Mar 4, 2015, at 7:30 PM, Barry Smith bsm...@mcs.anl.gov wrote: On Mar 4, 2015, at 7:45 PM, Randall Mackie rlmackie...@gmail.com wrote: In my application, I am repeatedly calling KSPSolve with the following options: -ksp_type gmres \ -pc_type gamg \ -pc_gamg_type agg

[petsc-users] Question about GAMG and memory use

2015-03-04 Thread Randall Mackie
In my application, I am repeatedly calling KSPSolve with the following options: -ksp_type gmres \ -pc_type gamg \ -pc_gamg_type agg \ -pc_gamg_agg_nsmooths 1\ each call is after the matrix and right hand side have been updated. This works well in the sense that it solves the system in a

[petsc-users] include/finclude/petscsysdef.h and daimag

2014-09-29 Thread Randall Mackie
I recently ran into an issue with include/finclude/petscsysdef.h and the definition of PetscImaginaryPart, which is defined as daimag(a) in the case PETSC_MISSING_DREAL is not defined. 1) As far as I know, daimag is not a valid fortran statement, and I suspect that here you might want dimag.

Re: [petsc-users] valgrind error messages with PETSc V3.5.1

2014-08-04 Thread Randall Mackie
does the problem go away? Let me know and I’ll fix the code. Thanks Barry On Aug 2, 2014, at 2:20 PM, Randall Mackie rlmackie...@gmail.com wrote: The attached small program, basically a call to PetscPrintf, gives the following valgrind errors: [rmackie

[petsc-users] valgrind error messages with PETSc V3.5.1

2014-08-02 Thread Randall Mackie
The attached small program, basically a call to PetscPrintf, gives the following valgrind errors: [rmackie ~/tst_petsc_problem] ./cmd_test ==24812== Invalid read of size 1 ==24812==at 0x4C2E500: __GI_strncpy (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so) ==24812==by 0x4EF5B4D:

[petsc-users] inserting vector into row of dense matrix

2014-04-23 Thread Randall Mackie
I have a 3D rectangular grid over which I am doing some computations. I have created a 3D DA for that grid, and I can put the results of the computations into a global vector obtained via DMGetGlobalVector. Now, I need to do these computations for several realizations, and I want to keep all

Re: [petsc-users] printing from some processors to a file

2013-11-27 Thread Randall Mackie
https://bitbucket.org/petsc/petsc/wiki/Home Because it changes APIs I cannot merge it into the current release (though it doesn’t change APIs for your code). Thanks for reporting the problem, Barry On Nov 26, 2013, at 3:45 PM, Randall Mackie rlmackie...@gmail.com wrote: I am

[petsc-users] printing from some processors to a file

2013-11-26 Thread Randall Mackie
I am trying to print a character string to a file from one or more processors in a communicator. I thought that I could do this using PetscViewerASCIISynchronizedPrintf, but it prints to the screen instead of to the file opened as a viewer. The attached simple program illustrates the issue. If I

Re: [petsc-users] VecScatter from slice of global vector to sequential vector + AO errors

2013-10-03 Thread Randall Mackie
On Oct 3, 2013, at 12:24 PM, Matthew Knepley knep...@gmail.com wrote: On Thu, Oct 3, 2013 at 2:07 PM, Randall Mackie rlmackie...@gmail.com wrote: I am trying to create a VecScatter that will scatter all the values from a horizontal slice of a 3D DMDA global vector (with dof=3

Re: [petsc-users] MATMPIBAIJ

2013-09-19 Thread Randall Mackie
I'm not sure I understand your statement about matrix shells being time consuming to implement. They are easy, see this example: http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex14f.F.html Randy M. On Sep 19, 2013, at 12:01 PM, Reza Yaghmaie

[petsc-users] possible error

2013-03-14 Thread Randall Mackie
Nachiket, I've run into a similar situation before, where my code ran fine in debug mode, bombed out with segmentation violation in optimize mode, and yet I couldn't find anything in valgrind. In my case, I eventually tracked it down to some variable not being broadcast to all processors, and

[petsc-users] output after updating to petsc-3.3-p4 from p3

2012-11-26 Thread Randall Mackie
I just updated from patch level 3 to patch level 4, and after recompiling and rerunning, I am getting output like the following: [0]Total space allocated 1091174672 bytes [ 0]48 bytes PetscObjectComposedDataIncreaseReal() line 170 in /home/MackieR/PETSc/petsc-3.3-p4/src/sys/objects/state.c

[petsc-users] Problem with -pc_type gamg

2012-09-14 Thread Randall Mackie
For quite some time I've been solving my problems using BCGS with ASM and that works quite well. I was curious to try gamg, but when I try, I get error messages about a new nonzero causing a malloc (see error message below). What is strange is that in my code, I specifically turn this off with:

[petsc-users] Problem with -pc_type gamg

2012-09-14 Thread Randall Mackie
PM, Randall Mackie rlmackie862 at gmail.com wrote: For quite some time I've been solving my problems using BCGS with ASM and that works quite well. I was curious to try gamg, but when I try, I get error messages about a new nonzero causing a malloc (see error message below). What is strange

[petsc-users] Problem with -pc_type gamg

2012-09-14 Thread Randall Mackie
Thanks, I will try and let you know. Randy On Sep 14, 2012, at 11:07 AM, Mark F. Adams mark.adams at columbia.edu wrote: I just pushed a fix in petsc-dev that should fix the problem. Mark On Sep 14, 2012, at 1:58 PM, Randall Mackie rlmackie862 at gmail.com wrote: Hi Mark, Yes

[petsc-users] Problem with -pc_type gamg

2012-09-14 Thread Randall Mackie
PM, Randall Mackie rlmackie862 at gmail.com wrote: Hi Mark, Yes, the answer to your question is that the first entry in the matrix has a 1 on the diagonal due to boundary conditions. I will try your suggestion and see if it works, or if you improve on this, I can always try petsc-dev

[petsc-users] Declaring struct to represent field for dof 1 for DM in Fortran

2012-07-10 Thread Randall Mackie
On Jul 10, 2012, at 11:29 AM, Jed Brown wrote: On Tue, Jul 10, 2012 at 1:22 PM, TAY wee-beng zonexo at gmail.com wrote: Do you mean DMDAVecGetArrayDOFF90 ? I tried to compile but it gives the error during linking: 1dm_test2d.obj : error LNK2019: unresolved external symbol

[petsc-users] ksp_monitor_true_residual_norm

2012-04-19 Thread Randall Mackie
monitor above for the MyKSPMonitor of that example. I can send you the modified code and c subroutine to test if you want. Thanks, Randy On Tue, Apr 17, 2012 at 10:39 AM, Matthew Knepley knepley at gmail.com wrote: On Tue, Apr 17, 2012 at 1:27 PM, Randall Mackie rlmackie862 at gmail.comwrote

[petsc-users] ksp_monitor_true_residual_norm

2012-04-19 Thread Randall Mackie
wrote: It looks like this null object check was missing from the Fortran bindings. Can you try with this patch included? http://petsc.cs.iit.edu/petsc/petsc-dev/rev/05b5b9325f55 On Thu, Apr 19, 2012 at 10:12, Randall Mackie rlmackie862 at gmail.com wrote: Hi Matt and Barry, I tried

[petsc-users] ksp_monitor_true_residual_norm

2012-04-19 Thread Randall Mackie
://petsc.cs.iit.edu/petsc/petsc-dev/rev/05b5b9325f55 On Thu, Apr 19, 2012 at 10:12, Randall Mackie rlmackie862 at gmail.com wrote: Hi Matt and Barry, I tried this again, but this time I used a c subroutine like Barry suggested, which is this: #include petsc.h PetscErrorCode shellkspmonitor_

[petsc-users] ksp_monitor_true_residual_norm

2012-04-19 Thread Randall Mackie
Jed, I am still calling from Fortran, AND using my original Fortran ShellKSPMonitor that calls KSPMonitorTrueResidual. Seems to work okay now. Thanks again, Randy On Apr 19, 2012, at 12:00 PM, Jed Brown wrote: On Thu, Apr 19, 2012 at 11:53, Randall Mackie rlmackie862 at gmail.com wrote

[petsc-users] ksp_monitor_true_residual_norm

2012-04-19 Thread Randall Mackie
, Randall Mackie rlmackie862 at gmail.com wrote: I am still calling from Fortran, AND using my original Fortran ShellKSPMonitor that calls KSPMonitorTrueResidual. Seems to work okay now. Did you create a PetscViewer context? I don't think it can work correctly passing PETSC_NULL_OBJECT

[petsc-users] ksp_monitor_true_residual_norm

2012-04-17 Thread Randall Mackie
,myKSPMonitorTrueResidualNorm,PETSC_VIEWER_STDOUT,0);CHKERRQ(ierr); On Apr 13, 2012, at 6:52 PM, Randall Mackie wrote: In using ksp_monitor_true_residual_norm, is it possible to change how often this information is printed out? That is, instead of every iteration, say I only want to see it every 10 or 20

[petsc-users] ksp_monitor_true_residual_norm

2012-04-17 Thread Randall Mackie
Hi Matt, I'm afraid it didn't work by passing in dummy=0. Randy On Apr 17, 2012, at 10:39 AM, Matthew Knepley wrote: On Tue, Apr 17, 2012 at 1:27 PM, Randall Mackie rlmackie862 at gmail.com wrote: Hi Barry, I've tried implementing this in Fortran, following ex2f.F in /src/ksp/ksp

[petsc-users] ksp_monitor_true_residual_norm

2012-04-13 Thread Randall Mackie
In using ksp_monitor_true_residual_norm, is it possible to change how often this information is printed out? That is, instead of every iteration, say I only want to see it every 10 or 20 iterations. Is there an easy way to do this, other than creating my own monitor and doing it myself? Thanks,
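The "create my own monitor" route mentioned at the end of the post amounts to registering a callback (via KSPMonitorSet) that only reports every n-th iteration. The filtering logic itself is trivial and can be shown in plain Python (the callback signature and output format here are made up for illustration; they are not the PETSc interface):

```python
def make_every_nth_monitor(n, emit):
    """Build a monitor callback that reports only every n-th iteration."""
    def monitor(it, rnorm):
        if it % n == 0:
            emit(f"{it} KSP Residual norm {rnorm:.6e}")
    return monitor

lines = []
mon = make_every_nth_monitor(10, lines.append)
for it in range(25):           # stand-in for 25 KSP iterations
    mon(it, 1.0 / (it + 1))    # stand-in for a shrinking residual norm
```

Only iterations 0, 10, and 20 are reported; everything else is silently skipped, which is the behavior the question asks for.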

[petsc-users] CUDA with complex number

2012-01-23 Thread Randall Mackie
I looked into this last year and this was the response from the PETSc group and I don't know if this has changed: = Looks like it is not so trivial as I had made it out to be. Perhaps if you ask on petsc-dev at mcs.anl.gov there may be people who want this
