Re: [petsc-users] Diagnosing Convergence Issue in Fieldsplit Problem

2024-05-23 Thread Jed Brown
Barry Smith writes: > Unfortunately it cannot automatically because -pc_fieldsplit_detect_saddle_point just grabs part of the matrix (having no concept of "what part") and so doesn't know to grab the null space information.

Re: [petsc-users] How to specify different MPI communication patterns.

2024-05-21 Thread Jed Brown
Randall Mackie writes: > Dear PETSc team, > > A few years ago we were having some issues with MPI communications with large numbers of processes and subcomms, see this thread here: > > https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2020-April/040976.

Re: [petsc-users] DMPlex periodic face coordinates

2024-05-15 Thread Jed Brown
Matteo Semplice writes: >> The way that periodic coordinates work is that it stores a DG >> coordinate field by cell. Faces default back to the vertices. You >> could think about also

Re: [petsc-dev] Error creating DMPlex from CGNS file

2024-04-28 Thread Jed Brown
Thanks for fixing this issue. To contribute, the standard practice is to make a fork and push there. https://gitlab.com/petsc/petsc/-/forks/new

Re: [petsc-users] About recent changes in GAMG

2024-04-24 Thread Jed Brown
Ashish Patel writes: > Hi Jed, > VmRss is on a higher side and seems to match what PetscMallocGetMaximumUsage is reporting. HugetlbPages was 0 for me. > > Mark, running without the near nullspace also

Re: [petsc-users] About recent changes in GAMG

2024-04-18 Thread Jed Brown
Mark Adams writes: >>> Yea, my interpretation of these methods is also that "PetscMemoryGetMaximumUsage" should be >= "PetscMallocGetMaximumUsage". >>> But you are seeing the opposite.

Re: [petsc-users] Correct way to set/track global numberings in DMPlex?

2024-04-03 Thread Jed Brown
Matthew Knepley writes: >> I'm developing routines that will read/write CGNS files to DMPlex and vice >> versa. >> One of the recurring challenges is the bookkeeping of global numbering for

Re: [petsc-users] using custom matrix vector multiplication

2024-03-28 Thread Jed Brown
Interfaces like KSPSetOperators (https://petsc.org/main/manualpages/KSP/KSPSetOperators/) have Amat and Pmat arguments.
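
A minimal sketch of that pattern (MyMatMult is a hypothetical shell callback and Pmat is assumed assembled elsewhere; not code from the thread): Amat defines the operator action for the Krylov method, while Pmat is what the preconditioner is built from.

    #include <petscksp.h>

    extern PetscErrorCode MyMatMult(Mat, Vec, Vec); /* hypothetical: applies A */

    static PetscErrorCode SolveWithShell(MPI_Comm comm, PetscInt n, PetscInt N, void *ctx, Mat Pmat, Vec b, Vec x)
    {
      Mat Amat;
      KSP ksp;

      PetscFunctionBeginUser;
      PetscCall(MatCreateShell(comm, n, n, N, N, ctx, &Amat));
      PetscCall(MatShellSetOperation(Amat, MATOP_MULT, (void (*)(void))MyMatMult));
      PetscCall(KSPCreate(comm, &ksp));
      PetscCall(KSPSetOperators(ksp, Amat, Pmat)); /* Amat for the solve, Pmat for the PC */
      PetscCall(KSPSetFromOptions(ksp));
      PetscCall(KSPSolve(ksp, b, x));
      PetscCall(KSPDestroy(&ksp));
      PetscCall(MatDestroy(&Amat));
      PetscFunctionReturn(PETSC_SUCCESS);
    }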

Re: [petsc-users] Fortran interfaces: Google Summer of Code project?

2024-03-23 Thread Jed Brown
s online, I'll post on linkedin but > ideally we can motivate someone who is already known. > > best regards, > Martin > > On Thu, 2024-03-21 at 23:13 -0600, Jed Brown wrote: >> Barry Smith writes: >> >> > > We already have the generated ftn-a

Re: [petsc-users] Fortran interfaces: Google Summer of Code project?

2024-03-21 Thread Jed Brown
Barry Smith writes: >> We already have the generated ftn-auto-interfaces/*.h90. The INTERFACE keyword could be replaced with CONTAINS (making these definitions instead of just interfaces), and then the bodies

Re: [petsc-users] Fortran interfaces: Google Summer of Code project?

2024-03-21 Thread Jed Brown
Barry Smith writes: > In my limited understanding of the Fortran iso_c_binding, if we do not provide an equivalent Fortran stub (the user calls) that uses the iso_c_binding to call PETSc C code, then when the user

Re: [petsc-users] Fortran interfaces: Google Summer of Code project?

2024-03-21 Thread Jed Brown
Barry Smith writes: > We've always had some tension between adding new features to bfort vs developing an entirely new tool (for example in Python (maybe calling a little LLVM to help parse the C function), for maybe

Re: [petsc-users] 'Preconditioning' with lower-order method

2024-03-03 Thread Jed Brown
If you're having PETSc use coloring and have confirmed that the stencil is sufficient, then it would be nonsmoothness (again, consider the limiter you've chosen) preventing quadratic convergence (assuming that doesn't kick in eventually). Note

Re: [petsc-users] FW: 'Preconditioning' with lower-order method

2024-03-03 Thread Jed Brown
One option is to form the preconditioner using the FV1 method, which is sparser and satisfies h-ellipticity, while using FV2 for the residual and (optionally) for matrix-free operator application. FV1 is a highly diffusive method so in a sense,
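
In SNES terms the suggestion might look like the following sketch (FormFunctionFV2 and FormJacobianFV1 are hypothetical user callbacks, not code from the thread); running with -snes_mf_operator applies the FV2 Jacobian matrix-free while the sparser FV1 matrix builds the preconditioner.

    #include <petscsnes.h>

    extern PetscErrorCode FormFunctionFV2(SNES, Vec, Vec, void *);      /* hypothetical FV2 residual */
    extern PetscErrorCode FormJacobianFV1(SNES, Vec, Mat, Mat, void *); /* hypothetical FV1 assembly */

    static PetscErrorCode SetupFVSolver(SNES snes, Vec r, Mat Pmat_fv1, void *user)
    {
      PetscFunctionBeginUser;
      PetscCall(SNESSetFunction(snes, r, FormFunctionFV2, user));
      PetscCall(SNESSetJacobian(snes, Pmat_fv1, Pmat_fv1, FormJacobianFV1, user));
      PetscCall(SNESSetFromOptions(snes)); /* -snes_mf_operator: FV2 action by differencing */
      PetscFunctionReturn(PETSC_SUCCESS);
    }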

Re: [petsc-users] Parallel vector layout for TAO optimization with separable state/design structure

2024-01-30 Thread Jed Brown
For a bit of assistance, you can use DMComposite and DMRedundantCreate; see src/snes/tutorials/ex21.c and ex22.c. Note that when computing redundantly, it's critical that the computation be deterministic (i.e., not using atomics or randomness without matching seeds) so the logic stays
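
A sketch of the pattern from those tutorials, with illustrative sizes (128 distributed state dofs, 4 design dofs stored redundantly and owned by rank 0):

    #include <petscdmda.h>
    #include <petscdmredundant.h>
    #include <petscdmcomposite.h>

    static PetscErrorCode CreatePack(MPI_Comm comm, DM *pack)
    {
      DM state, design;

      PetscFunctionBeginUser;
      PetscCall(DMDACreate1d(comm, DM_BOUNDARY_NONE, 128, 1, 1, NULL, &state));
      PetscCall(DMSetUp(state));
      PetscCall(DMRedundantCreate(comm, 0, 4, &design)); /* rank 0 owns the 4 design dofs */
      PetscCall(DMCompositeCreate(comm, pack));
      PetscCall(DMCompositeAddDM(*pack, state));
      PetscCall(DMCompositeAddDM(*pack, design));
      PetscCall(DMDestroy(&state));
      PetscCall(DMDestroy(&design));
      PetscFunctionReturn(PETSC_SUCCESS);
    }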

Re: [petsc-users] fortran interface to snes matrix-free jacobian

2023-12-20 Thread Jed Brown
implement. > > Best, Yi > > -Original Message----- > From: Jed Brown > Sent: Wednesday, December 20, 2023 5:40 PM > To: Yi Hu ; petsc-users@mcs.anl.gov > Subject: Re: [petsc-users] fortran interface to snes matrix-free jacobian > > Are you wanting an analytic matr

Re: [petsc-users] fortran interface to snes matrix-free jacobian

2023-12-20 Thread Jed Brown
Are you wanting an analytic matrix-free operator or one created for you based on finite differencing? If the latter, just use -snes_mf or -snes_mf_operator. https://petsc.org/release/manual/snes/#jacobian-evaluation Yi Hu writes: > Dear PETSc team, > > My solution scheme relies on a
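
For the finite-differencing route, a minimal sketch (FormFunction is a hypothetical residual callback): no Jacobian routine is set, and -snes_mf supplies the matrix-free operator at SNESSetFromOptions time.

    #include <petscsnes.h>

    extern PetscErrorCode FormFunction(SNES, Vec, Vec, void *); /* hypothetical residual */

    static PetscErrorCode SolveMatrixFree(MPI_Comm comm, Vec r, Vec x, void *user)
    {
      SNES snes;

      PetscFunctionBeginUser;
      PetscCall(SNESCreate(comm, &snes));
      PetscCall(SNESSetFunction(snes, r, FormFunction, user));
      PetscCall(SNESSetFromOptions(snes)); /* run with -snes_mf (no Jacobian routine needed) */
      PetscCall(SNESSolve(snes, NULL, x));
      PetscCall(SNESDestroy(&snes));
      PetscFunctionReturn(PETSC_SUCCESS);
    }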

Re: [petsc-users] [EXTERNAL] Re: Call to DMSetMatrixPreallocateSkip not changing allocation behavior

2023-12-18 Thread Jed Brown
; Thank you, > > Philip Fackler > Research Software Engineer, Application Engineering Group > Advanced Computing Systems Research Section > Computer Science and Mathematics Division > Oak Ridge National Laboratory > > From: Jed Brown > Sen

Re: [petsc-users] Call to DMSetMatrixPreallocateSkip not changing allocation behavior

2023-12-14 Thread Jed Brown
I had a one-character typo in the diff above. This MR to release should work now. https://gitlab.com/petsc/petsc/-/merge_requests/7120 Jed Brown writes: > 17 GB for a 1D DMDA, wow. :-) > > Could you try applying this diff to make it work for DMDA (it's currently > handl

Re: [petsc-users] Call to DMSetMatrixPreallocateSkip not changing allocation behavior

2023-12-14 Thread Jed Brown
17 GB for a 1D DMDA, wow. :-) Could you try applying this diff to make it work for DMDA (it's currently handled by DMPlex)? diff --git i/src/dm/impls/da/fdda.c w/src/dm/impls/da/fdda.c index cad4d926504..bd2a3bda635 100644 --- i/src/dm/impls/da/fdda.c +++ w/src/dm/impls/da/fdda.c @@ -675,19
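
The usage the diff is meant to enable might look like this sketch (sizes illustrative): skipping eager preallocation so matrix memory scales with the entries actually inserted.

    #include <petscdmda.h>

    static PetscErrorCode CreateUnpreallocatedMatrix(MPI_Comm comm, Mat *A)
    {
      DM da;

      PetscFunctionBeginUser;
      PetscCall(DMDACreate1d(comm, DM_BOUNDARY_NONE, 1000000, 1, 1, NULL, &da));
      PetscCall(DMSetUp(da));
      PetscCall(DMSetMatrixPreallocateSkip(da, PETSC_TRUE));
      PetscCall(DMCreateMatrix(da, A)); /* insert entries later with MatSetValues + assembly */
      PetscCall(DMDestroy(&da));
      PetscFunctionReturn(PETSC_SUCCESS);
    }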

Re: [petsc-dev] Matshell with PETSs solvers using GPU

2023-12-12 Thread Jed Brown
ult()? If you do your own communication, you don't need to use DMGlobalToLocalBegin/End. > > Thank you, > Han > >> On Nov 4, 2022, at 9:06 AM, Jed Brown wrote: >> >> Yes, this is supported. You can use VecGetArrayAndMemType() to get access to >> device mem

Re: [petsc-users] Bug report VecNorm

2023-12-10 Thread Jed Brown
Pierre Jolivet writes: >> On 10 Dec 2023, at 8:40 AM, Stephan Köhler >> wrote: >> >> Dear PETSc/Tao team, >> >> there is a bug in the vector interface: In the function >> VecNorm, see, e.g., >> https://petsc.org/release/src/vec/vec/interface/rvector.c.html#VecNorm line >> 197 the check

Re: [petsc-users] PETSc and MPI-3/RMA

2023-12-09 Thread Jed Brown
It uses nonblocking point-to-point by default since that tends to perform better and is less prone to MPI implementation bugs, but you can select `-sf_type window` to try it, or use other strategies here depending on the sort of problem you're working with. #define PETSCSFBASIC "basic"
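
Selecting the one-sided implementation programmatically, as a sketch (equivalent to running with -sf_type window):

    #include <petscsf.h>

    static PetscErrorCode CreateWindowSF(MPI_Comm comm, PetscSF *sf)
    {
      PetscFunctionBeginUser;
      PetscCall(PetscSFCreate(comm, sf));
      PetscCall(PetscSFSetType(*sf, PETSCSFWINDOW)); /* MPI-3 RMA; default is PETSCSFBASIC */
      PetscCall(PetscSFSetFromOptions(*sf));
      PetscFunctionReturn(PETSC_SUCCESS);
    }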

Re: [petsc-users] Reading VTK files in PETSc

2023-11-30 Thread Jed Brown
meshes are Cartesian, but non-uniform. > > Thanks, > Kevin > > On Thu, Nov 30, 2023 at 1:02 AM Jed Brown wrote: > >> Is it necessary that it be VTK format or can it be PETSc's binary format >> or a different mesh format? VTK (be it legacy .vtk or the XML-based .vtu

Re: [petsc-users] Reading VTK files in PETSc

2023-11-29 Thread Jed Brown
Is it necessary that it be VTK format or can it be PETSc's binary format or a different mesh format? VTK (be it legacy .vtk or the XML-based .vtu, etc.) is a bad format for parallel reading, no matter how much effort might go into an implementation. "Kevin G. Wang" writes: > Good morning

Re: [petsc-users] [Xolotl-psi-development] [EXTERNAL] Re: Unexpected performance losses switching to COO interface

2023-11-29 Thread Jed Brown
'redundant'. If it's diffusive, then algebraic multigrid would be a good place to start. > Let us know what we can do to answer this question more accurately. > > Cheers, > > Sophie > > From: Jed Brown > Sent: Tuesday, November 28,

Re: [petsc-users] [EXTERNAL] Re: Unexpected performance losses switching to COO interface

2023-11-28 Thread Jed Brown
"Fackler, Philip via petsc-users" writes: > That makes sense. Here are the arguments that I think are relevant: > > -fieldsplit_1_pc_type redundant -fieldsplit_0_pc_type sor -pc_type fieldsplit > -pc_fieldsplit_detect_coupling​ What sort of physics are in splits 0 and 1? SOR is not a good GPU

Re: [PRQ#51463] Orphan Request for parmetis

2023-11-28 Thread Jed Brown
The issue here is that upstream is effectively unmaintained so this version has patches for known bugs. (Patches have been submitted upstream with reproducers over the past decade, but upstream remains generally non-responsive unless cornered in the hallway at a conference.) These can be

Re: [petsc-users] Storing Values using a Triplet for using later

2023-11-08 Thread Jed Brown
I don't think you want to hash floating point values, but I've had a number of reasons to want spatial hashing for near-neighbor queries in PETSc and that would be a great contribution. (Spatial hashes have a length scale and compute integer bins.) Brandon Denton via petsc-users writes: >

Re: [petsc-users] Better solver and preconditioner to use multiple GPU

2023-11-08 Thread Jed Brown
What sort of problem are you solving? Algebraic multigrid like gamg or hypre are good choices for elliptic problems. Sparse triangular solves have horrific efficiency even on one GPU so you generally want to do your best to stay away from them. "Ramoni Z. Sedano Azevedo" writes: > Hey! > > I

Re: [petsc-users] Status of PETScSF failures with GPU-aware MPI on Perlmutter

2023-11-02 Thread Jed Brown
What modules do you have loaded? I don't know if it currently works with cuda-11.7. I assume you're following these instructions carefully. https://docs.nersc.gov/development/programming-models/mpi/cray-mpich/#cuda-aware-mpi In our experience, GPU-aware MPI continues to be brittle on these

Re: [petsc-users] Advices on creating DMPlex from custom input format

2023-10-30 Thread Jed Brown
It's probably easier to apply boundary conditions when you have the serial mesh. You may consider contributing the reader if it's a format that others use. "onur.notonur via petsc-users" writes: > Hi, > > I hope this message finds you all in good health and high spirits. > > I wanted to

Re: [petsc-users] Copying PETSc Objects Across MPI Communicators

2023-10-24 Thread Jed Brown
You can place it in a parallel Mat (that has rows or columns on only one rank or a subset of ranks) and then MatCreateSubMatrix with all new rows/columns on a different rank or subset of ranks. That said, you usually have a function that assembles the matrix and you can just call that on the
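
A sketch of the redistribution idea (nlocal/rstart encode the new ownership each rank wants; illustrative, not from the thread):

    #include <petscmat.h>

    static PetscErrorCode RedistributeMatrix(Mat A, PetscInt nlocal, PetscInt rstart, Mat *Anew)
    {
      MPI_Comm comm;
      IS       isrow, iscol;

      PetscFunctionBeginUser;
      PetscCall(PetscObjectGetComm((PetscObject)A, &comm));
      PetscCall(ISCreateStride(comm, nlocal, rstart, 1, &isrow)); /* new row ownership */
      PetscCall(ISCreateStride(comm, nlocal, rstart, 1, &iscol)); /* new "diagonal" columns */
      PetscCall(MatCreateSubMatrix(A, isrow, iscol, MAT_INITIAL_MATRIX, Anew));
      PetscCall(ISDestroy(&isrow));
      PetscCall(ISDestroy(&iscol));
      PetscFunctionReturn(PETSC_SUCCESS);
    }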

Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Jed Brown
Matthew Knepley writes: > On Wed, Oct 11, 2023 at 1:03 PM Jed Brown wrote: > >> I don't see an attachment, but his thesis used conservative variables and >> defined an effective length scale in a way that seemed to assume constant >> shape function gradients. I'm

Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-11 Thread Jed Brown
I don't see an attachment, but his thesis used conservative variables and defined an effective length scale in a way that seemed to assume constant shape function gradients. I'm not aware of systematic literature comparing the covariant and contravariant length measures on anisotropic meshes,

Re: [petsc-users] FEM Implementation of NS with SUPG Stabilization

2023-10-10 Thread Jed Brown
Do you want to write a new code using only PETSc or would you be up for collaborating on ceed-fluids, which is a high-performance compressible SUPG solver based on DMPlex with good GPU support? It uses the metric to compute covariant length for stabilization. We have YZβ shock capturing, though

Re: [petsc-users] Orthogonalization of a (sparse) PETSc matrix

2023-08-29 Thread Jed Brown
Suitesparse includes a sparse QR algorithm. The main issue is that (even with pivoting) the R factor has the same nonzero structure as a Cholesky factor of A^T A, which is generally much denser than a factor of A, and this degraded sparsity impacts Q as well. I wonder if someone would like to

Re: [petsc-users] 32-bit vs 64-bit GPU support

2023-08-11 Thread Jed Brown
Jacob Faibussowitsch writes: > More generally, it would be interesting to know the breakdown of installed > CUDA versions for users. Unlike compilers etc, I suspect that cluster admins > (and those running on local machines) are much more likely to be updating > their CUDA toolkits to the

Re: [petsc-users] 32-bit vs 64-bit GPU support

2023-08-11 Thread Jed Brown
Rohan Yadav writes: > With modern GPU sizes, for example A100's with 80GB of memory, a vector of > length 2^31 is not that much memory -- one could conceivably run a CG solve > with local vectors > 2^31. Yeah, each vector would be 8 GB (single precision) or 16 GB (double). You can't store a

Re: [petsc-users] Setting a custom predictor in the generalized-alpha time stepper

2023-08-04 Thread Jed Brown
t > be able to take some time to implement a more sustainable solution soon. > > Thanks again, > David > > On Fri, Aug 4, 2023 at 9:23 AM Jed Brown wrote: > >> Some other TS implementations have a concept of extrapolation as an >> initial guess. Such method-speci

Re: [petsc-users] Setting a custom predictor in the generalized-alpha time stepper

2023-08-04 Thread Jed Brown
like > it would be unnecessary if we instead used a callback in > `SNESSetComputeInitialGuess` that had access to the internals of > `TS_Alpha`. > > Thanks, David > > On Thu, Aug 3, 2023 at 11:28 PM Jed Brown wrote: > >> I think you can use TSGetSNES() and SNESSetComputeInitialGuess

Re: [petsc-users] Setting a custom predictor in the generalized-alpha time stepper

2023-08-04 Thread Jed Brown
I think you can use TSGetSNES() and SNESSetComputeInitialGuess() to modify the initial guess for SNES. Would that serve your needs? Is there anything else you can say about how you'd like to compute this initial guess? Is there a paper or something? David Kamensky writes: > Hi, > > My
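
A sketch of that wiring (MyPredictor is hypothetical; it would fill x from whatever history the context carries):

    #include <petscts.h>

    static PetscErrorCode MyPredictor(SNES snes, Vec x, void *ctx)
    {
      PetscFunctionBeginUser;
      /* hypothetical: fill x by extrapolating from history stored in ctx */
      PetscFunctionReturn(PETSC_SUCCESS);
    }

    static PetscErrorCode InstallPredictor(TS ts, void *ctx)
    {
      SNES snes;

      PetscFunctionBeginUser;
      PetscCall(TSGetSNES(ts, &snes));
      PetscCall(SNESSetComputeInitialGuess(snes, MyPredictor, ctx));
      PetscFunctionReturn(PETSC_SUCCESS);
    }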

Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-07-31 Thread Jed Brown
; it seems to be related to AL methods ... but requires that the matrix be > symmetric? > > On Fri, Jul 28, 2023 at 7:04 PM Jed Brown wrote: > >> See src/snes/tutorials/ex70.c for the code that I think was used for that >> paper. >> >> Alexander Lindsay write

Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-07-28 Thread Jed Brown
built an appropriate mesh and problem size for the problem they want to solve > and added appropriate turbulence modeling (although my general assumption > is often violated). > > > And to confirm, are you doing a nonlinearly implicit velocity-pressure > solve? > > Yes

Re: [petsc-users] [petsc-maint] Monolithic AMG with fieldsplit as smoother

2023-07-26 Thread Jed Brown
AMG is subtle here. With AMG for systems, you typically feed it elements of the near null space. In the case of (smoothed) aggregation, the coarse space will have a regular block structure with block sizes equal to the number of near-null vectors. You can use pc_fieldsplit options to select
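
For example, for elasticity the near-null vectors are the rigid-body modes; a sketch assuming a coordinate Vec is available:

    #include <petscmat.h>

    static PetscErrorCode AttachNearNullSpace(Mat A, Vec coords)
    {
      MatNullSpace nearnull;

      PetscFunctionBeginUser;
      PetscCall(MatNullSpaceCreateRigidBody(coords, &nearnull)); /* modes built from coordinates */
      PetscCall(MatSetNearNullSpace(A, nearnull));
      PetscCall(MatNullSpaceDestroy(&nearnull));
      PetscFunctionReturn(PETSC_SUCCESS);
    }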

Re: [petsc-users] Question about using PETSc to generate matrix preconditioners

2023-07-25 Thread Jed Brown
I think random matrices will produce misleading results. The chance of randomly generating a matrix that resembles an application is effectively zero. I think you'd be better off with some model problems varying parameters that control the physical regime (e.g., shifts to a Laplacian, advection

Re: [petsc-users] GAMG and Hypre preconditioner

2023-06-27 Thread Jed Brown
Zisheng Ye via petsc-users writes: > Dear PETSc Team > > We are testing the GPU support in PETSc's KSPSolve, especially for the GAMG > and Hypre preconditioners. We have encountered several issues that we would > like to ask for your suggestions. > > First, we have a couple of questions when

Re: [petsc-users] hypre-ILU vs hypre Euclid

2023-06-22 Thread Jed Brown
It looks like Victor is working on hypre-ILU so it is active. PETSc used to have PILUT support, but it was so buggy/leaky that we removed the interface. Alexander Lindsay writes: > Haha no I am not sure. There are a few other preconditioning options I will > explore before knocking on this

Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Jed Brown
Matthew Knepley writes: >> The matrix entries are multiplied by 2, that is, the number of processes >> used to execute the code. >> > > No. This was mostly intended for GPUs, where there is 1 process. If you > want to use multiple MPI processes, then each process can only introduce > some

Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Jed Brown
You should partition the entries so each entry is submitted by only one process. Note that duplicate entries (on the same or different processes) are summed, as you've seen. For example, in finite elements, it's typical to partition the elements and each process submits entries from its elements.
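
With the COO interface this partitioning might look like the following sketch, where each rank passes only its own triples:

    #include <petscmat.h>

    static PetscErrorCode FillFromCOO(Mat A, PetscCount n, PetscInt *rows, PetscInt *cols, PetscScalar *vals)
    {
      PetscFunctionBeginUser;
      PetscCall(MatSetPreallocationCOO(A, n, rows, cols)); /* each rank: only its own triples */
      PetscCall(MatSetValuesCOO(A, vals, ADD_VALUES));     /* duplicates are summed */
      PetscFunctionReturn(PETSC_SUCCESS);
    }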

Re: [petsc-users] dm_view of high-order geometry/solution

2023-06-12 Thread Jed Brown
e error > message to petsc-ma...@mcs.anl.gov > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF > with errorcode 56. > > Does cgns work for degree >= 4? > > Junming

Re: [petsc-users] dm_view of high-order geometry/solution

2023-06-12 Thread Jed Brown
And here's an MR to do what you want without any code/arg changes. https://gitlab.com/petsc/petsc/-/merge_requests/6588 Jed Brown writes: > Duan Junming writes: > >> Dear Jed, >> >> >> Thank you for the suggestion. >> >> When I run tests/e

Re: [petsc-users] dm_view of high-order geometry/solution

2023-06-12 Thread Jed Brown
Duan Junming writes: > Dear Jed, > > > Thank you for the suggestion. > > When I run tests/ex33.c with > > ./ex33 -dm_plex_simplex 0 -dm_plex_box_faces 1,1 -mesh_transform annulus > -dm_coord_space 0 -dm_coord_petscspace_degree 3 -dm_refine 1 -dm_view > cgns:test.cgns > > and load it using

Re: [petsc-users] dm_view of high-order geometry/solution

2023-06-12 Thread Jed Brown
Matthew Knepley writes: > On Mon, Jun 12, 2023 at 6:01 AM Duan Junming wrote: > >> Dear Matt, >> >> Thank you for the reply. I have a more specific question about the >> spectral element example. Do you have any suggestions that how to write >> all the nodes in each cell to .vtu? >> > It is the

Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-06-07 Thread Jed Brown
Alexander Lindsay writes: > This has been a great discussion to follow. Regarding > >> when time stepping, you have enough mass matrix that cheaper preconditioners >> are good enough > > I'm curious what some algebraic recommendations might be for high Re in > transients. What mesh aspect

Re: [petsc-dev] How to add strangers as MR reviewers?

2023-05-24 Thread Jed Brown
Probably adding them as guests (or developers) would be the way to go. My guess is it's to cut down on accidental assignments, but these policies always have side-effects. Barry Smith writes: > Is there a way to add people not in some magic inner circle as an MR > reviewer? For example, I

Re: [petsc-dev] Latest Pull breaks the 'test' target

2023-05-24 Thread Jed Brown
Matthew Knepley writes: > On Wed, May 24, 2023 at 6:53 AM Matthew Knepley wrote: > >> On Tue, May 23, 2023 at 9:00 PM Jed Brown wrote: >> >>> I use it that way all the time, but I can't reproduce. >>> >>> $ touch src/snes/interface/snes.c >>

Re: [petsc-dev] Latest Pull breaks the 'test' target

2023-05-23 Thread Jed Brown
I use it that way all the time, but I can't reproduce. $ touch src/snes/interface/snes.c $ make test gs=snes_tutorials-ex5_1 CC ompi/obj/snes/interface/snes.o Using MAKEFLAGS: -j8 -l12 --jobserver-auth=fifo:/tmp/GMfifo1004133 -- gs=snes_tutorials-ex5_1 CLINKER

Re: [petsc-users] Issues creating DMPlex from higher order mesh generated by gmsh

2023-05-15 Thread Jed Brown
Matthew Knepley writes: > On Fri, May 5, 2023 at 10:55 AM Vilmer Dahlberg via petsc-users < > petsc-users@mcs.anl.gov> wrote: > >> Hi. >> >> >> I'm trying to read a mesh of higher element order, in this example a mesh >> consisting of 10-node tetrahedral elements, from gmsh, into PETSc. But it

Re: [petsc-users] How to find the map between the high order coordinates of DMPlex and vertex numbering?

2023-05-14 Thread Jed Brown
Good to hear this works for you. I believe there is still a problem with high order tetrahedral elements (we've been coping with it for months and someone asked last week) and plan to look at it as soon as possible now that my semester finished. Zongze Yang writes: > Hi, Matt, > > The issue

Re: [petsc-users] DMPlex, is there an API to get a list of Boundary points?

2023-05-11 Thread Jed Brown
Boundary faces are often labeled already on a mesh, but you can use this to set a label for all boundary faces. https://petsc.org/main/manualpages/DMPlex/DMPlexMarkBoundaryFaces/ "Ferrand, Jesus A." writes: > Greetings. > > I terms of dm-plex terminology, I need a list points corresponding to
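
A sketch of marking and then retrieving the boundary faces as an IS:

    #include <petscdmplex.h>

    static PetscErrorCode GetBoundaryFaces(DM dm, IS *bdFaces)
    {
      DMLabel label;

      PetscFunctionBeginUser;
      PetscCall(DMCreateLabel(dm, "boundary"));
      PetscCall(DMGetLabel(dm, "boundary", &label));
      PetscCall(DMPlexMarkBoundaryFaces(dm, 1, label));  /* mark with value 1 */
      PetscCall(DMLabelGetStratumIS(label, 1, bdFaces)); /* IS of the marked points */
      PetscFunctionReturn(PETSC_SUCCESS);
    }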

Re: [petsc-users] issues with VecSetValues in petsc 3.19

2023-05-10 Thread Jed Brown
Edoardo alinovi writes: > Hello Barry, > > Welcome to the party! Thank you guys for your precious suggestions, they > are really helpful! > > It's been a while since I am messing around and I have tested many > combinations. Schur + selfp is the best preconditioner, it converges within > 5 iters

Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-05-08 Thread Jed Brown
Sebastian Blauth writes: > Hello everyone, > > I wanted to briefly follow up on my question (see my last reply). > Does anyone know / have an idea why the LSC preconditioner in PETSc does > not seem to scale well with the problem size (the outer fgmres solver I > am using nearly scale nearly

Re: [petsc-users] Scalable Solver for Incompressible Flow

2023-05-02 Thread Jed Brown
Sebastian Blauth writes: > I agree with your comment for the Stokes equations - for these, I have > already tried and used the pressure mass matrix as part of a (additive) > block preconditioner and it gave mesh independent results. > > However, for the Navier Stokes equations, is the Schur

Re: [petsc-users] PETSc testing recipes

2023-04-14 Thread Jed Brown
Look at config/examples/arch-ci-*.py for the configurations. They're driven from .gitlab-ci.yml Alexander Lindsay writes: > Hi, is there a place I can look to understand the testing recipes used in > PETSc CI, e.g. what external packages are included (if any), what C++ > dialect is used for

Re: [petsc-users] MPI linear solver reproducibility question

2023-04-02 Thread Jed Brown
> that it persists, could provide a reproduction scenario. > > > > On Sat, Apr 1, 2023 at 9:53 PM Jed Brown wrote: > >> Mark McClure writes: >> >> > Thank you, I will try BCGSL. >> > >> > And good to know that this is worth pursuing, and

Re: [petsc-users] MPI linear solver reproducibility question

2023-04-01 Thread Jed Brown
Mark McClure writes: > Thank you, I will try BCGSL. > > And good to know that this is worth pursuing, and that it is possible. Step > 1, I guess I should upgrade to the latest release on Petsc. > > How can I make sure that I am "using an MPI that follows the suggestion for > implementers about

Re: [petsc-users] MPI linear solver reproducibility question

2023-04-01 Thread Jed Brown
If you use unpreconditioned BCGS and ensure that you assemble the same matrix (depends how you do the communication for that), I think you'll get bitwise reproducible results when using an MPI that follows the suggestion for implementers about determinism. Beyond that, it'll depend somewhat on

Re: [petsc-users] Using PETSc Testing System

2023-03-28 Thread Jed Brown
Great that you got it working. We would accept a merge request that made our infrastructure less PETSc-specific so long as it doesn't push more complexity on the end user. That would likely make it easier for you to pull updates in the future. Daniele Prada writes: > Dear Matthew, dear

Re: [petsc-users] GAMG failure

2023-03-28 Thread Jed Brown
This suite has been good for my solid mechanics solvers. (It's written here as a coarse grid solver because we do matrix-free p-MG first, but you can use it directly.) https://github.com/hypre-space/hypre/issues/601#issuecomment-1069426997 Blaise Bourdin writes: > On Mar 27, 2023, at 9:11

Re: [petsc-users] GAMG failure

2023-03-27 Thread Jed Brown
Try -pc_gamg_reuse_interpolation 0. I thought this was disabled by default, but I see pc_gamg->reuse_prol = PETSC_TRUE in the code. Blaise Bourdin writes: > On Mar 24, 2023, at 3:21 PM, Mark Adams wrote: > > * Do you set: > > PetscCall(MatSetOption(Amat, MAT_SPD, PETSC_TRUE)); > >

Re: [petsc-users] GAMG failure

2023-03-24 Thread Jed Brown
You can use -pc_gamg_threshold .02 to slow the coarsening, and either use a stronger smoother or increase the number of iterations used for estimation (or increase the tolerance). I assume your system is SPD and you've set the near-null space.
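
A sketch of the matrix-side setup implied by that advice (nearnull built elsewhere, e.g. from rigid-body modes); the coarsening rate is then tuned on the command line with -pc_type gamg -pc_gamg_threshold 0.02:

    #include <petscmat.h>

    static PetscErrorCode PrepareForGAMG(Mat A, MatNullSpace nearnull)
    {
      PetscFunctionBeginUser;
      PetscCall(MatSetOption(A, MAT_SPD, PETSC_TRUE));
      PetscCall(MatSetNearNullSpace(A, nearnull));
      PetscFunctionReturn(PETSC_SUCCESS);
    }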

Re: [petsc-users] O3 versus O2

2023-03-08 Thread Jed Brown
You can test a benchmark problem with both. It probably doesn't make a lot of difference with the solver configuration you've selected (most of those operations are memory bandwidth limited). If your residual and Jacobian assembly code is written to vectorize, you may get significant benefit

Re: [petsc-dev] Putting more menu items at the top of petsc.org pages (how to?)

2023-02-21 Thread Jed Brown
Here's the theme option to control it. https://pydata-sphinx-theme.readthedocs.io/en/stable/user_guide/header-links.html#navigation-bar-dropdown-links "Zhang, Hong via petsc-dev" writes: > I think this is controlled by the theme we are using, which is > pydata-sphinx-theme. It seems that the

Re: [petsc-users] PetscViewer with 64bit

2023-02-16 Thread Jed Brown
> >> Mike, can you test that this branch works with your large problems? I >> tested that .vtu works in parallel for small problems, where works = loads >> correctly in Paraview and VisIt. >> >> https://gitlab.com/petsc/petsc/-/merge_requests/6081 >> >> Da

Re: [petsc-users] PetscViewer with 64bit

2023-02-16 Thread Jed Brown
21:27, Jed Brown wrote: > >> Dave May writes: >> >> > On Tue 14. Feb 2023 at 17:17, Jed Brown wrote: >> > >> >> Can you share a reproducer? I think I recall the format requiring >> certain >> >> things to be Int32. >>

Re: [petsc-users] PetscViewer with 64bit

2023-02-14 Thread Jed Brown
Dave May writes: > On Tue 14. Feb 2023 at 17:17, Jed Brown wrote: > >> Can you share a reproducer? I think I recall the format requiring certain >> things to be Int32. > > > By default, the byte offset used with the appended data format is UInt32. I > belie

Re: [petsc-users] PetscViewer with 64bit

2023-02-14 Thread Jed Brown
Can you share a reproducer? I think I recall the format requiring certain things to be Int32. Mike Michell writes: > Thanks for the note. > I understood that PETSc calculates the offsets for me through "boffset" > variable in plexvtu.c file. Please correct me if it is wrong. > > If plexvtu.c

Re: [petsc-users] GPUs and the float-double dilemma

2023-02-10 Thread Jed Brown
Ces VLC writes: > El El vie, 10 feb 2023 a las 21:44, Barry Smith escribió: > >> >>What is the use case you are looking for that cannot be achieved by >> just distributing a single precision application? If the user is happy when >> they happen to have GPUs to use single precision

Re: [petsc-dev] Apply for Google Summer of Code 2023?

2023-02-04 Thread Jed Brown
Satish Balay via petsc-dev writes: > BTW: ANL summer student application process is also in progress - and > it could be easier process [for Junchao] than google to get a student We should be more proactive about ANL summer student openings, even with suggested project ideas to work with

Re: [petsc-dev] Apply for Google Summer of Code 2023?

2023-02-03 Thread Jed Brown
Thanks for proposing this. Some ideas: * DMPlex+libCEED automation * Pipelined Krylov methods using Rust async * Differentiable programming using Enzyme with PETSc Karl Rupp writes: > Dear PETSc developers, > > in order to attract students to PETSc development, I'm thinking about a > PETSc

Re: [petsc-dev] xlocal = da.getLocalVec() .. da.restoreLocalVec(xlocal) skip the restore

2023-02-02 Thread Jed Brown
Barry Smith writes: >> On Feb 1, 2023, at 9:17 PM, Matthew Knepley wrote: >> >> On Wed, Feb 1, 2023 at 9:06 PM Barry Smith > > wrote: >>> >>> Hmm, When I do >>> >>>def somePythonfunction(): >>>... >>>x = da.createLocalVec() >>> >>>return

Re: [petsc-users] Question about handling matrix

2023-02-01 Thread Jed Brown
Is the small matrix dense? Then you can use MatSetValues. If the small matrix is sparse, you can assemble it with larger dimension (empty rows and columns) and use MatAXPY. 김성익 writes: > Hello, > > > I want to put small matrix to large matrix. > The schematic of operation is as below. >
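
A sketch covering both cases (rows/cols/vals describe the dense m-by-n block; B is the sparse small matrix assembled at A's global size with the extra rows and columns left empty):

    #include <petscmat.h>

    static PetscErrorCode AddSmallToLarge(Mat A, PetscInt m, const PetscInt rows[], PetscInt n, const PetscInt cols[], const PetscScalar vals[], Mat B)
    {
      PetscFunctionBeginUser;
      PetscCall(MatSetValues(A, m, rows, n, cols, vals, ADD_VALUES)); /* dense block */
      PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAXPY(A, 1.0, B, DIFFERENT_NONZERO_PATTERN)); /* sparse B, padded to A's size */
      PetscFunctionReturn(PETSC_SUCCESS);
    }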

Re: [petsc-dev] PetscOptionsGetViewer() problems with multiple usage on the same object but different options databases

2023-01-22 Thread Jed Brown
PetscOptionsGetViewer() that no one realized was there when they wrote it > (probably me :-)). > > > > > >> On Jan 22, 2023, at 1:16 PM, Jed Brown wrote: >> >> Matthew Knepley writes: >> >>> On Sun, Jan 22, 2023 at 1:06 PM Barry Smith wrote: >>>

Re: [petsc-dev] PetscOptionsGetViewer() problems with multiple usage on the same object but different options databases

2023-01-22 Thread Jed Brown
Matthew Knepley writes: > On Sun, Jan 22, 2023 at 1:06 PM Barry Smith wrote: > >> >> That makes incrementally adding new options to an existing object >> difficult, since each call to setFromOptions() screws up previous calls. >> >> We already have >> >> -ksp_monitor_cancel >>

Re: [petsc-users] locally deploy PETSc

2023-01-19 Thread Jed Brown
You're probably looking for ./configure --prefix=/opt/petsc. It's documented in ./configure --help. Tim Meehan writes: > Hi - I am trying to set up a local workstation for a few other developers who > need PETSc installed from the latest release. I figured that it would be > easiest for me

Re: [petsc-dev] New Object and Function Development

2023-01-19 Thread Jed Brown
Brandon Denton writes: > Good Morning, > > For the past few years, I have been working to integrate CAD into PETSc. > Over this time, I've worked with Dr. Knepley to enable PETSc to open and > utilize STEP, IGES, EGADS and EGADSlite files. We've also worked to expose > the geometry's parameters

Re: [petsc-users] DMPlex and CGNS

2023-01-17 Thread Jed Brown
Copying my private reply that appeared off-list. If you have one base with different element types, that's in scope for what I plan to develop soon. Congrats, you crashed cgnsview. $ cgnsview dl/HybridGrid.cgns Error in startup script: file was not found while executing "CGNSfile

Re: [petsc-users] DMPlex and CGNS

2023-01-16 Thread Jed Brown
Matthew Knepley writes: > On Mon, Jan 16, 2023 at 6:15 PM Jed Brown wrote: > >> How soon do you need this? I understand the grumbling about CGNS, but it's >> easy to build, uses HDF5 parallel IO in a friendly way, supports high order >> elements, and is generally pr

Re: [petsc-users] DMPlex and CGNS

2023-01-16 Thread Jed Brown
How soon do you need this? I understand the grumbling about CGNS, but it's easy to build, uses HDF5 parallel IO in a friendly way, supports high order elements, and is generally pretty expressive. I wrote a parallel writer (with some limitations that I'll remove) and plan to replace the current

Re: [Mpi-forum] why do we only support caching on win/comm/datatype?

2023-01-16 Thread Jed Brown via mpi-forum
Second that MPI attributes do not suck. PETSc uses communicator attributes heavily to avoid lots of confusing or wasteful behavior when users pass communicators between libraries and similar comments would apply if other MPI objects were passed between libraries in that way. It was before my

Re: [petsc-users] coordinate degrees of freedom for 2nd-order gmsh mesh

2023-01-12 Thread Jed Brown
Dave May writes: > On Thu 12. Jan 2023 at 17:58, Blaise Bourdin wrote: > >> Out of curiosity, what is the rationale for _reading_ high order gmsh >> meshes? >> > > GMSH can use a CAD engine like OpenCascade. This provides geometric > representations via things like BSplines. Such geometric

Re: [petsc-users] coordinate degrees of freedom for 2nd-order gmsh mesh

2023-01-12 Thread Jed Brown
It's confusing, but this line makes high order simplices always read as discontinuous coordinate spaces. I would love if someone would revisit that, perhaps also using DMPlexSetIsoperiodicFaceSF(), which should simplify the code and avoid the confusing cell coordinates pattern. Sadly, I don't

Re: [petsc-users] GPU implementation of serial smoothers

2023-01-10 Thread Jed Brown
Mark Lohry writes: > I definitely need multigrid. I was under the impression that GAMG was > relatively cuda-complete, is that not the case? What functionality works > fully on GPU and what doesn't, without any host transfers (aside from > what's needed for MPI)? > > If I use -ksp-pc_type gamg

Re: [petsc-users] GPU implementation of serial smoothers

2023-01-10 Thread Jed Brown
up the vector >> and copy down the result. >> >> >> On Tue, Jan 10, 2023 at 1:52 PM Barry Smith wrote: >> >>> >>> We don't have colored smoothers currently in PETSc. >>> >>> > On Jan 10, 2023, at 12:56 PM, Jed Brown wrote: >>

Re: [petsc-users] GPU implementation of serial smoothers

2023-01-10 Thread Jed Brown
Is DILU a point-block method? We have -pc_type pbjacobi (and vpbjacobi if the node size is not uniform). The are good choices for scale-resolving CFD on GPUs. Mark Lohry writes: > I'm running GAMG with CUDA, and I'm wondering how the nominally serial > smoother algorithms are implemented on
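
For reference, selecting point-block Jacobi programmatically, as a sketch:

    #include <petscksp.h>

    static PetscErrorCode UsePointBlockJacobi(KSP ksp)
    {
      PC pc;

      PetscFunctionBeginUser;
      PetscCall(KSPGetPC(ksp, &pc));
      PetscCall(PCSetType(pc, PCPBJACOBI)); /* PCVPBJACOBI when block sizes vary */
      PetscFunctionReturn(PETSC_SUCCESS);
    }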

Re: [petsc-dev] bad cpu/MPI performance problem

2023-01-08 Thread Jed Brown
nroots=-1 is like a collective setup_called=false flag, except that this SF is always in that not-set-up state for serial runs. I don't think that's a bug per se, though perhaps you'd like it to be conveyed differently. Barry Smith writes: >There is a bug in the routine

Re: [petsc-users] How to install in /usr/lib64 instead of /usr/lib?

2023-01-06 Thread Jed Brown
The make convention would be to respond to `libdir`, which is probably the simplest if we can defer that choice until install time. It probably needs to be known at build time, thus should go in configure. https://www.gnu.org/software/make/manual/html_node/Directory-Variables.html Satish Balay

Re: [petsc-users] MatCreateSeqAIJWithArrays for GPU / cusparse

2023-01-05 Thread Jed Brown
Junchao Zhang writes: >> I don't think it's remotely crazy. libCEED supports both together and it's >> very convenient when testing on a development machine that has one of each >> brand GPU and simplifies binary distribution for us and every package that >> uses us. Every day I wish PETSc could

Re: [petsc-users] MatCreateSeqAIJWithArrays for GPU / cusparse

2023-01-05 Thread Jed Brown
Mark Adams writes: > Support of HIP and CUDA hardware together would be crazy, I don't think it's remotely crazy. libCEED supports both together and it's very convenient when testing on a development machine that has one of each brand GPU and simplifies binary distribution for us and every
