Re: [petsc-dev] Putting more menu items at the top of petsc.org pages (how to?)

2023-02-21 Thread Zhang, Hong via petsc-dev
I think this is controlled by the theme we are using, which is pydata-sphinx-theme. It seems that the only way to do what you want is to modify the theme directly. https://github.com/pydata/pydata-sphinx-theme/blob/main/src/pydata_sphinx_theme/__init__.py#L283 This function displays 5 TocTree d

[petsc-dev] GPU timers broken in main

2022-12-23 Thread Zhang, Hong via petsc-dev
GPU timers are currently broken in main. Event.GpuTime is always zero, so the GPU FLOPs reported in the log is zero too. Git bisect points to c708d6e3a1c9bc4418db993825b9337456e59b5c as the first bad commit. In this commit, the global variables in plog.c have two versions (one is thread-safe a

Re: [petsc-dev] Potential memory leak in PETSc - hypre interface when using Euclid

2022-10-27 Thread Zhang, Hong via petsc-dev
CCing Ruipeng. I think he can help with this. Hong (Mr.) > On Oct 27, 2022, at 3:53 PM, Barry Smith wrote: > > > My quick examination of hypre.c shows the only relevant code in PETSc is > PetscCall(PetscOptionsEList("-pc_hypre_boomeramg_smooth_type", "Enable more complex smoothers", "N

Re: [petsc-dev] petsc4py, numpy's BLAS and PETSc's BLAS

2022-10-24 Thread Zhang, Hong via petsc-dev
The chances of running into these problems are very slim because almost nobody builds NumPy from source. I usually install it with pip. Pip-installed NumPy on Mac uses OpenBLAS, which is shipped together with the NumPy wheels. The official API to check which BLAS is used by NumPy is numpy.show_config(). Howe

Re: [petsc-dev] Enhancing the PETSc Developer Experience

2022-09-28 Thread Zhang, Hong via petsc-dev
Reminder: Please provide us with your input by answering the following questions and emailing me back by Friday, Sept. 30, if you have not already done so. Hong, Getnet, and Jacob Faibussowitsch From: Zhang, Hong Sent: Friday, September 23, 2022 3:46 PM To: For users

[petsc-dev] Enhancing the PETSc Developer Experience

2022-09-23 Thread Zhang, Hong via petsc-dev
Dear PETSc developers, We are compiling a section on "Enhancing the Developer Experience" that will be a part of the "PETSc Strategic Planning" document. Please provide us with your input by answering the following questions and emailing me back by Friday, Sept. 30. 1. What do developers like or

Re: [petsc-dev] MatProduct_AtB --with-scalar-type=complex

2022-07-15 Thread Zhang, Hong via petsc-dev
Pierre, I believe you are headed in the right direction for debugging MatProductReplaceMats(). I'll investigate it and let you know the result. Hong From: Pierre Jolivet Sent: Friday, July 15, 2022 12:01 AM To: Zhang, Hong Cc: Barry Smith ; For users of the develop

Re: [petsc-dev] MatProduct_AtB --with-scalar-type=complex

2022-07-14 Thread Zhang, Hong via petsc-dev
Pierre, Our MatProductReplaceMats() is not well tested and might be buggy. I simplified your code without calling MatProductReplaceMats() and got correct results in the cases ./ex -product_view ::ascii_matlab -convert false/true -correct false and ./ex -product_view ::ascii_matlab -con

Re: [petsc-dev] odd log behavior

2022-05-17 Thread Zhang, Hong via petsc-dev
Python users including myself would love NaN since NaN is the default missing value marker for reasons of computational speed and convenience. For example, if you load these values into pandas, no extra code is needed to handle them. Other choices such as N/A would require some extra work for te

Re: [petsc-dev] About the problem of Lagrange multiplier

2022-04-08 Thread Zhang, Hong via petsc-dev
Yahe, What problem do you want to solve, a linear/nonlinear optimisation problem with equality constraints? Hong From: petsc-dev on behalf of Barry Smith Sent: Friday, April 8, 2022 10:04 AM To: 高亚贺 Cc: petsc-dev@mcs.anl.gov Subject: Re: [petsc-dev] About the p

Re: [petsc-dev] PETSc init eats too much CUDA memory

2022-01-08 Thread Zhang, Hong via petsc-dev
ions on GPU, it consumes only 0.004GB CUDA memory. On Jan 7, 2022, at 11:54 AM, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: 1. Commenting out ierr = __initialize(dctx->device->deviceId,dci);CHKERRQ(ierr); in device/impls/cupm/cupmcontext.hpp:L199 CUDA memory:

Re: [petsc-dev] PETSc init eats too much CUDA memory

2022-01-07 Thread Zhang, Hong via petsc-dev
really a mystery. If I import torch only and do some tensor operations on GPU, it consumes only 0.004GB CUDA memory. On Jan 7, 2022, at 11:54 AM, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: 1. Commenting out ierr = __initialize(dctx->device->deviceId,dci);

Re: [petsc-dev] PETSc init eats too much CUDA memory

2022-01-07 Thread Zhang, Hong via petsc-dev
<bsm...@petsc.dev> wrote: Without log_view it does not load any cuBLAS/cuSolve immediately; with -log_view it loads all that stuff at startup. You need to go into the PetscInitialize() routine, find where it loads the cublas and cusolve, and comment out those lines, then run with -log_view On J

Re: [petsc-dev] PETSc init eats too much CUDA memory

2022-01-07 Thread Zhang, Hong via petsc-dev
e() routine find where it loads the cublas and cusolve and comment out those lines then run with -log_view On Jan 7, 2022, at 11:14 AM, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: When PETSc is initialized, it takes about 2GB CUDA memory. This is way too much for doin

Re: [petsc-dev] PETSc init eats too much CUDA memory

2022-01-07 Thread Zhang, Hong via petsc-dev
eed to go into the PetscInitialize() routine find where it loads the cublas and cusolve and comment out those lines then run with -log_view On Jan 7, 2022, at 11:14 AM, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: When PETSc is initialized, it takes about 2GB CUDA

Re: [petsc-dev] PETSc init eats too much CUDA memory

2022-01-07 Thread Zhang, Hong via petsc-dev
artup. You need to go into the PetscInitialize() routine find where it loads the cublas and cusolve and comment out those lines then run with -log_view On Jan 7, 2022, at 11:14 AM, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: When PETSc is initialized, it takes about 2GB CUD

[petsc-dev] PETSc init eats too much CUDA memory

2022-01-07 Thread Zhang, Hong via petsc-dev
When PETSc is initialized, it takes about 2GB CUDA memory. This is way too much for doing nothing. A test script is attached to reproduce the issue. If I remove the first line "import torch", PETSc consumes about 0.73GB, which is still significant. Does anyone have any idea about this behavior?

Re: [petsc-dev] DMPLEX cannot support two different edges for the same two vertices, hence DMPLEX cannot?

2021-12-01 Thread Zhang, Hong via petsc-dev
We are working on a traffic flow application, in which the same two vertices are connected by at least two edges. I have not seen any problem yet, even in the case where the two vertices are located on different ranks. Hong From: Abhyankar, Shrirang G Sent: Wednesday, Dec

Re: [petsc-dev] I have started a new position

2021-09-13 Thread Zhang, Hong via petsc-dev
https://www.simonsfoundation.org/people/barry-smith/ Barry Smith on Simon

Re: [petsc-dev] I have started a new position

2021-09-13 Thread Zhang, Hong via petsc-dev
Barry, https://en.wikipedia.org/wiki/Flatiron_Institute Flatiron Institute - Wikipedia The Flatiron Institute is an internal research division of the Simons Foundation, launched in 2016. It comprises five centers for computational science: the Cen

Re: [petsc-dev] DMNetwork static sizing

2021-04-06 Thread Zhang, Hong via petsc-dev
Shri, You designed this approach. Is it intentional, or was it out of implementation convenience at the time? Hong From: petsc-dev on behalf of Matthew Knepley Sent: Monday, April 5, 2021 5:47 AM To: PETSc Subject: [petsc-dev] DMNetwork static sizing Do we really need a c

Re: [petsc-dev] MatTransposeMatMult() bug

2021-03-18 Thread Zhang, Hong via petsc-dev
:00 PM, Patrick Sanan <patrick.sa...@gmail.com> wrote: Sorry about the current mess but that page is halfway migrated, so any updates should go here: https://docs.petsc.org/en/main/install/externalsoftware_documentation/ On 18.03.2021 at 15:22, Zhang, Hong via pet

Re: [petsc-dev] MatTransposeMatMult() bug

2021-03-18 Thread Zhang, Hong via petsc-dev
On Wed, Mar 17, 2021 at 3:27 PM Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: Pierre, Do you mean a possible bug in C=AtB MatTransposeMatMult()? Can you provide a stand-alone test without hpddm that reproduces this error?

Re: [petsc-dev] MatTransposeMatMult() bug

2021-03-17 Thread Zhang, Hong via petsc-dev
, 2021 at 3:27 PM Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: Pierre, Do you mean a possible bug in C=AtB MatTransposeMatMult()? Can you provide a stand-alone test without hpddm that reproduces this error? Hong, you should be able to just configure with --download-hpddm an

Re: [petsc-dev] MatTransposeMatMult() bug

2021-03-17 Thread Zhang, Hong via petsc-dev
Pierre, Do you mean a possible bug in C=AtB MatTransposeMatMult()? Can you provide a stand-alone test without hpddm that reproduces this error? Hong From: petsc-dev on behalf of Pierre Jolivet Sent: Wednesday, March 17, 2021 4:31 AM To: For users of the developme

Re: [petsc-dev] Argonne GPU Virtual Hackathon - Accepted

2021-03-12 Thread Zhang, Hong via petsc-dev
On Mar 12, 2021, at 5:25 PM, Barry Smith <bsm...@petsc.dev> wrote: Jed, Thanks for the insight. Maybe Hong and his Ellpack format? Or his independent set algorithm? These two features are currently functional on NVIDIA GPUs. Neither needs extensive development or refac

Re: [petsc-dev] Commit squashing in MR

2021-03-03 Thread Zhang, Hong via petsc-dev
Patrick, I need to update the PETSc manual on DMNetwork, but do not know how to proceed. I tried your suggested steps: 1) go to the docs page you want to edit on docs.petsc.org 2) select the version you want (usually "main") in the black ReadTheDocs box in the lower right 3) cli

Re: [petsc-dev] Infinite loop in A*B

2021-03-01 Thread Zhang, Hong via petsc-dev
for (i=0; i<...; i++) ... workB->cmap->n=0 (line 590 in mpimatmatmult.c) Hong From: petsc-dev <petsc-dev-boun...@mcs.anl.gov> on behalf of Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> Sent: Sunday, February 28, 2021 10:33 PM To: Pierre Jolivet <pie.

Re: [petsc-dev] Infinite loop in A*B

2021-02-28 Thread Zhang, Hong via petsc-dev
mpimatmatmult.c) Hong From: petsc-dev on behalf of Zhang, Hong via petsc-dev Sent: Sunday, February 28, 2021 10:33 PM To: Pierre Jolivet ; For users of the development version of PETSc Subject: Re: [petsc-dev] Infinite loop in A*B I can reproduce the hang with mpiexec -n 2 ./matma

Re: [petsc-dev] Infinite loop in A*B

2021-02-28 Thread Zhang, Hong via petsc-dev
The infinite loop in MatMatMultNumeric_MPIAIJ_MPIDense() for (i=0; i<...; i++) ... workB->cmap->n=0 (line 590 in mpimatmatmult.c) Hong From: petsc-dev on behalf of Zhang, Hong via petsc-dev Sent: Sunday, February 28, 2021 10:33 PM To: Pierre Jolivet ; For users

Re: [petsc-dev] Infinite loop in A*B

2021-02-28 Thread Zhang, Hong via petsc-dev
I can reproduce the hang with mpiexec -n 2 ./matmatmult It seems in an infinite loop of calling MatDensePlaceArray() from #0 MatDensePlaceArray (mat=0xda5c50, array=0xd15e60) at /home/hongsu/soft/petsc/src/mat/impls/dense/mpi/mpidense.c:2047 #1 0x7fa0d13bf4f7 in MatDenseGetSubMatrix_Seq

Re: [petsc-dev] error with flags PETSc uses for determining AVX

2021-02-14 Thread Zhang, Hong via petsc-dev
gce/projects/TSAdjoint$ icc -O3 -E -dM - < /dev/null | grep AVX hongzhang@petsc-02:/nfs/gce/projects/TSAdjoint$ > On Feb 14, 2021, at 1:25 PM, Zhang, Hong via petsc-dev wrote: > >> On Feb 14, 2021, at 12:04 PM, Barry Smith wrote: >> For

Re: [petsc-dev] error with flags PETSc uses for determining AVX

2021-02-14 Thread Zhang, Hong via petsc-dev
> On Feb 14, 2021, at 12:04 PM, Barry Smith wrote: > > > For our handcoded AVX functions this is fine, we can handle the dispatching > ourselves. Cool. _may_i_use_cpu_feature() would be very useful to determine the optimal AVX code path at runtime. Theoretically we just need to query for
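A minimal sketch of the runtime query mentioned above. _may_i_use_cpu_feature() and the _FEATURE_AVX2 flag are Intel-compiler (icc/icx) intrinsics declared in immintrin.h, so this is illustrative and not portable to gcc/clang:

    #include <immintrin.h>
    #include <stdio.h>

    int main(void)
    {
      /* Ask the Intel runtime whether the CPU (and OS) support AVX2 */
      if (_may_i_use_cpu_feature(_FEATURE_AVX2)) {
        printf("dispatch to the AVX2 code path\n");
      } else {
        printf("fall back to the scalar/SSE code path\n");
      }
      return 0;
    }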

Re: [petsc-dev] error with flags PETSc uses for determining AVX

2021-02-14 Thread Zhang, Hong via petsc-dev
On Feb 14, 2021, at 10:09 AM, Pierre Jolivet <pie...@joliv.et> wrote: On 14 Feb 2021, at 4:52 PM, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: On Feb 14, 2021, at 5:05 AM, Patrick Sanan <patrick.sa...@gmail.com> wrote: On 14.02.202

Re: [petsc-dev] error with flags PETSc uses for determining AVX

2021-02-14 Thread Zhang, Hong via petsc-dev
On Feb 14, 2021, at 5:05 AM, Patrick Sanan <patrick.sa...@gmail.com> wrote: On 14.02.2021 at 07:22, Barry Smith <bsm...@petsc.dev> wrote: On Feb 13, 2021, at 11:58 PM, Jed Brown <j...@jedbrown.org> wrote: I usually configure --with-debugging=0 COPTFLAGS='-O2 -march

Re: [petsc-dev] error with flags PETSc uses for determining AVX

2021-02-13 Thread Zhang, Hong via petsc-dev
The CPU supports avx2, but the compiler or the OS may not. You can print out the macros that the compiler defines and grep for avx2. The commands can be found at https://stackoverflow.com/questions/9349754/generate-list-of-preprocessor-macros-defined-by-the-compiler Hong On Feb 13, 2021, at 8
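As a quick illustration of the macro check suggested above (a sketch; __AVX2__ is the macro gcc, clang, and icc all define when AVX2 code generation is enabled by the compiler flags):

    #include <stdio.h>

    int main(void)
    {
      /* __AVX2__ is defined only when the compiler is allowed to emit AVX2 */
    #if defined(__AVX2__)
      printf("compiler defines __AVX2__: AVX2 code generation is enabled\n");
    #else
      printf("__AVX2__ not defined: compile with -mavx2 or -march=native\n");
    #endif
      return 0;
    }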

Re: [petsc-dev] "Search" does not work in the testing system?

2021-01-27 Thread Zhang, Hong via petsc-dev
make PETSC_DIR=/Users/kongf/projects/moose4/petsc PETSC_ARCH=arch-darwin-c-debug -f gmakefile test search='snes_tutorials-ex1_*' or make PETSC_DIR=/Users/kongf/projects/moose4/petsc PETSC_ARCH=arch-darwin-c-debug -f gmakefile test globsearch='snes_tutorials-ex1_*' Hong (Mr.) > On Jan 27, 20

Re: [petsc-dev] obscure changes in TSGetStages_Theta

2021-01-24 Thread Zhang, Hong via petsc-dev
Some TS methods such as TSRK do have an array of vectors like this to store the stage values. But not all TS methods have it. I am fine adding the scratch for TSTheta and any other method missing it. A little drawback is that it is used only by TSGetStages and the TSStep implementation does not

Re: [petsc-dev] obscure changes in TSGetStages_Theta

2021-01-23 Thread Zhang, Hong via petsc-dev
Done. Please check https://gitlab.com/petsc/petsc/-/merge_requests/3583 Sorry for any disturbance it caused. It was for the convenience of the adjoint implementation. The stages returned by TSGetStages_Theta currently do not reflect the true stages associated with these methods. The endpoint var

Re: [petsc-dev] About parallel of ILU

2021-01-15 Thread Zhang, Hong via petsc-dev
Just in case you want to try the exact algorithm you attached, it can be used in PETSc with -pc_type hypre -pc_hypre_type euclid Hong (Mr.) > On Jan 12, 2021, at 8:42 AM, Chen Gang <569615...@qq.com> wrote: > > Dear Professor, > > I'm writing this mail about the ILU algorithm in PET
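For completeness, a minimal sketch of selecting Euclid from source rather than the command line. PCHYPRESetType() is the documented PETSc call; the surrounding KSP boilerplate is illustrative, assumes a recent (PetscCall-era) PETSc built with --download-hypre, and omits setting operators:

    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      KSP ksp;
      PC  pc;

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
      PetscCall(KSPGetPC(ksp, &pc));
      /* Equivalent to -pc_type hypre -pc_hypre_type euclid */
      PetscCall(PCSetType(pc, PCHYPRE));
      PetscCall(PCHYPRESetType(pc, "euclid"));
      PetscCall(KSPDestroy(&ksp));
      PetscCall(PetscFinalize());
      return 0;
    }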

Re: [petsc-dev] problem with MatSeqAIJCUSPARSEILUAnalysisAndCopyToGPU

2020-12-22 Thread Zhang, Hong via petsc-dev
On Dec 22, 2020, at 3:38 PM, Mark Adams <mfad...@lbl.gov> wrote: I am MPI serial LU solving a smallish matrix (2D, Q3, 8K equations) on a Summit node (42 P9 cores, 6 V100 GPUs) using cuSparse and Kokkos kernels. The cuSparse performance is terrible. I solve the same TS problem in MPI

Re: [petsc-dev] Can I call PetscSectionAddDof(s, p, ndof) at a shared 'p' by more than one processor?

2020-11-19 Thread Zhang, Hong via petsc-dev
15:26, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: > > Matt or Jed, > Can I call PetscSectionAddDof(s,p,ndof) at a shared 'p' by more than one > processor? For example, > if (rank == 0) { > PetscSectionAddDof(s,p,1); > } else if

[petsc-dev] Can I call PetscSectionAddDof(s, p, ndof) at a shared 'p' by more than one processor?

2020-11-18 Thread Zhang, Hong via petsc-dev
Matt or Jed, Can I call PetscSectionAddDof(s,p,ndof) at a shared 'p' by more than one processor? For example, if (rank == 0) { PetscSectionAddDof(s,p,1); } else if (rank == 1) { PetscSectionAddDof(s,p,2); } Then, at the shared 'p', does section 's' have dof=3? I did a test, and got an error petsc
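A compilable restatement of the experiment described above (a sketch assuming a recent PETSc; the chart bounds and point number are made up for illustration):

    #include <petscsection.h>

    int main(int argc, char **argv)
    {
      PetscSection s;
      PetscMPIInt  rank;

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
      PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &s));
      PetscCall(PetscSectionSetChart(s, 0, 2)); /* points 0 and 1 on every rank */
      /* Each rank adds a different dof count at the "shared" point p = 1 */
      if (rank == 0)      PetscCall(PetscSectionAddDof(s, 1, 1));
      else if (rank == 1) PetscCall(PetscSectionAddDof(s, 1, 2));
      PetscCall(PetscSectionSetUp(s));
      /* Note: a PetscSection stores its dof counts locally; as the thread
         suggests, the counts at p are not summed across ranks */
      PetscCall(PetscSectionDestroy(&s));
      PetscCall(PetscFinalize());
      return 0;
    }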

Re: [petsc-dev] sm_70

2020-09-27 Thread Zhang, Hong via petsc-dev
On Sep 25, 2020, at 8:09 PM, Barry Smith <bsm...@petsc.dev> wrote: Configure by default should find out the available GPU and build for that sm_*; it should not require the user to set this (how the heck is the user going to know what to set?) If I remember correctly there is a uti

Re: [petsc-dev] PDIPDM questions

2020-09-14 Thread Zhang, Hong via petsc-dev
Pierre, ex1.c is a toy test inherited from the previous experimental pdipm. We simply sent centralised data to all other processes to test pdipm. It is not intended for performance. We should add more tests. The current pdipm is not fully developed yet; in particular, its linear solver may fail to handle i

Re: [petsc-dev] Statistics on the popularity of PETSc

2020-09-10 Thread Zhang, Hong via petsc-dev
even for macOS… $ brew info petsc … install: 142 (30 days), 436 (90 days), 1,554 (365 days) install-on-request: 140 (30 days), 412 (90 days), 1,450 (365 days) Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) Cell: (312) 694-3391 On Sep 10, 2020, at 16:29, Zhang, Hong via petsc-d

[petsc-dev] Statistics on the popularity of PETSc

2020-09-10 Thread Zhang, Hong via petsc-dev
Someone asked about the number of PETSc users. Do we have any relevant info? Hong

Re: [petsc-dev] TAOPDIPM

2020-08-21 Thread Zhang, Hong via petsc-dev
Pierre, We have fixed this bug in petsc-release (maint branch). Thanks for your report. Hong From: petsc-dev on behalf of Pierre Jolivet Sent: Wednesday, August 5, 2020 2:10 AM To: Abhyankar, Shrirang G Cc: PETSc Subject: Re: [petsc-dev] TAOPDIPM Sorry for the

Re: [petsc-dev] MATOP_MAT_MULT

2020-05-06 Thread Zhang, Hong via petsc-dev
Stefano, How about you work on this issue? Hong From: Stefano Zampini Sent: Wednesday, May 6, 2020 2:09 AM To: Zhang, Hong Cc: Pierre Jolivet ; Jose E. Roman ; petsc-dev ; Smith, Barry F. Subject: Re: [petsc-dev] MATOP_MAT_MULT Hong If the product is not sup

Re: [petsc-dev] MATOP_MAT_MULT

2020-05-05 Thread Zhang, Hong via petsc-dev
Stefano: Now we need to address this bug report: enable MatHasOperation(C,MATOP_MAT_MULT,&flg) for matrix products, e.g., C=A*B, which is related to your issue https://gitlab.com/petsc/petsc/-/issues/608. In petsc-3.13: 1) MATOP_MAT_MULT, ..., MATOP_MATMAT_MULT are removed from the MATOP table (t

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-25 Thread Zhang, Hong via petsc-dev
Pierre, When we do MatProductCreate: C = A*B; //C owns A and B, thus B->refct=2 MatProductCreateWithMats: B = A*C; //If I let B own A and C, then C->refct=2 Then MatDestroy(&B) and MatDestroy(&C) only reduce their refct from 2 to 1, thus a memory leak. My solution is adding { matreferenc

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-25 Thread Zhang, Hong via petsc-dev
Jose, >> I also now just tested some previously PETSC_VERSION_LT(3,13,0) running code >> with C=A*B, Dense=Nest*Dense, all previously allocated prior to a call to >> MatMatMult and scall = MAT_REUSE_MATRIX. >> Sadly, it’s now broken. It is my fault for not having a test for this in >> https://g

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-23 Thread Zhang, Hong via petsc-dev
I'll try to do it in maint. Hong From: Jose E. Roman Sent: Thursday, April 23, 2020 2:36 AM To: Pierre Jolivet Cc: Zhang, Hong ; Stefano Zampini ; petsc-dev ; Smith, Barry F. Subject: Re: [petsc-dev] MATOP_MAT_MULT I agree with Pierre. However, if the fix invo

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-22 Thread Zhang, Hong via petsc-dev
Jose, I'll check and fix them. I have to do it in master, is that ok? Hong From: Pierre Jolivet Sent: Wednesday, April 22, 2020 3:08 PM To: Zhang, Hong Cc: Jose E. Roman ; Stefano Zampini ; petsc-dev ; Smith, Barry F. Subject: Re: [petsc-dev] MATOP_MAT_MULT Hong,

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-22 Thread Zhang, Hong via petsc-dev
Jose, Pierre and Stefano, Now I understand the issue that Stefano raised. I plan to add MatProductIsSupported(Wmat,&supported,&matproductsetfromoptions); the flag 'supported' tells whether the product is supported/implemented or not, and the function pointer 'matproductsetfromoptions' gives the name of

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-22 Thread Zhang, Hong via petsc-dev
Pierre, Well, that’s just not an option. I don’t want the code to error, I want a fallback mechanism so that I can do the MatMatMult myself, column by column (or implement this as part of issue #608 in the case of dense B and C so neither José nor I have to bot

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-21 Thread Zhang, Hong via petsc-dev
Jose, We need both A and Vmat to determine if Wmat= A*Vmat is supported or not. MatHasOperation(A,MATOP_MAT_MULT,&flg); //this call is not sufficient to ensure Wmat. How about replacing if (V->vmm && flg) { ierr = BVGetMat(V,&Vmat);CHKERRQ(ierr); ierr = BVGetMat(W,&Wmat);CHKERRQ(ierr);

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-21 Thread Zhang, Hong via petsc-dev
Pierre, The old API, MatMatMult(), MatPtAP() ... are still available as wrappers to the new API: MatProductCreate() MatProductSetType(,MATPRODUCT_AB/PtAP) MatProductSetFromOptions() MatProductSymbolic() MatProductNumeric() You do not need to change your code. When you call MatMatMult() with seqsba
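A minimal sketch of the new-API sequence named above for C = A*B. Only the five MatProduct calls come from the message; the operand setup (two small diagonal AIJ matrices) is illustrative, and a recent (PetscCall-era) PETSc is assumed:

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat      A, B, C;
      PetscInt i, rstart, rend, n = 8;

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      /* Build two small AIJ diagonal matrices just to have operands */
      PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, 1, NULL, 0, NULL, &A));
      PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
      for (i = rstart; i < rend; i++) PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
      PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &B));

      /* The sequence that MatMatMult() now wraps internally */
      PetscCall(MatProductCreate(A, B, NULL, &C));
      PetscCall(MatProductSetType(C, MATPRODUCT_AB));
      PetscCall(MatProductSetFromOptions(C));
      PetscCall(MatProductSymbolic(C));
      PetscCall(MatProductNumeric(C));

      PetscCall(MatDestroy(&C));
      PetscCall(MatDestroy(&B));
      PetscCall(MatDestroy(&A));
      PetscCall(PetscFinalize());
      return 0;
    }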

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-21 Thread Zhang, Hong via petsc-dev
Pierre, MatMatMult_xxx() is removed from the MatOps table. MatMatMult() is replaced by MatProductCreate() MatProductSetType(,MATPRODUCT_AB) MatProductSetFromOptions() MatProductSymbolic() MatProductNumeric() Where/when do you need to query a single matrix for its product operation? Hong

Re: [petsc-dev] Question about Binary-IO in READ mode with POSIX APIs

2020-03-16 Thread Zhang, Hong via petsc-dev
On Mar 16, 2020, at 12:12 PM, Lisandro Dalcin <dalc...@gmail.com> wrote: On Mon, 16 Mar 2020 at 16:35, Jed Brown <j...@jedbrown.org> wrote: Lisandro Dalcin <dalc...@gmail.com> writes: > Currently, binary viewers using POSIX file descriptors with READ mode open > the fil

Re: [petsc-dev] [petsc-users] Matrix-free method in PETSc

2020-02-18 Thread Zhang, Hong via petsc-dev
DMDA and MatShell are among the least documented parts of PETSc. But they are extremely useful, at least to me. Hopefully I can get my TS+MatShell+DMDA example into master early next month. Hong On Feb 18, 2020, at 9:10 PM, Smith, Barry F. via petsc-dev <petsc-dev@mcs.anl.gov> wrote:

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-13 Thread Zhang, Hong via petsc-dev
U cudaFreeHost@@libcudart.so.10.1 Hong > On Feb 12, 2020, at 1:51 PM, Munson, Todd via petsc-dev wrote:

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-12 Thread Zhang, Hong via petsc-dev
; think some MPI compilers insert their own version. Todd. > On Feb 12, 2020, at 11:38 AM, Zhang, Hong via petsc-dev wrote: >> On Feb 12, 2020, at 11:09 AM, Matthew Knepley wrote:

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-12 Thread Zhang, Hong via petsc-dev
On Feb 12, 2020, at 11:09 AM, Matthew Knepley <knep...@gmail.com> wrote: On Wed, Feb 12, 2020 at 11:06 AM Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: Sorry for the long post. Here are replies I have got from OLCF so far. We still don’t know how to solve

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-12 Thread Zhang, Hong via petsc-dev
020, at 11:14 AM, Smith, Barry F. <bsm...@mcs.anl.gov> wrote: gprof or some similar tool? On Feb 10, 2020, at 11:18 AM, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: -cuda_initialize 0 does not make any difference. Actually this issue has nothing to do with P

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-10 Thread Zhang, Hong via petsc-dev
ng via petsc-dev <petsc-dev@mcs.anl.gov> wrote: On Feb 8, 2020, at 5:03 PM, Matthew Knepley <knep...@gmail.com> wrote: On Sat, Feb 8, 2020 at 4:34 PM Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: I did some further investigation. The overhe

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-08 Thread Zhang, Hong via petsc-dev
On Feb 8, 2020, at 5:03 PM, Matthew Knepley <knep...@gmail.com> wrote: On Sat, Feb 8, 2020 at 4:34 PM Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: I did some further investigation. The overhead persists for both the PETSc shared library and the static

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-08 Thread Zhang, Hong via petsc-dev
it seems that the first CUDA function triggered loading the PETSc shared library (if it is linked), which is slow on the summit file system. Hong On Feb 7, 2020, at 2:54 PM, Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: Linking any other shared library does not slow down the execution

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-07 Thread Zhang, Hong via petsc-dev
ered loading petsc so (if petsc so is linked), which is slow on the summit file system. Hong > On Feb 7, 2020, at 2:54 PM, Zhang, Hong via petsc-dev wrote: > Linking any other shared library does not slow down the ex

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-07 Thread Zhang, Hong via petsc-dev
Note that the overhead was triggered by the first call to a CUDA function. So it seems that the first CUDA function triggered loading petsc so (if petsc so is linked), which is slow on the summit file system. Hong On Feb 7, 2020, at 2:54 PM, Zhang, Hong via petsc-dev <petsc-dev

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-07 Thread Zhang, Hong via petsc-dev
wrote: ldd -o on the executable of both linkings of your code. My guess is that without PETSc it is linking the static version of the needed libraries and with PETSc the shared. And, in typical fashion, the shared libraries are off on some super slow file system so ta

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-07 Thread Zhang, Hong via petsc-dev
A statically linked executable works fine. The dynamic linker is probably broken. Hong On Feb 7, 2020, at 12:53 PM, Matthew Knepley <knep...@gmail.com> wrote: On Fri, Feb 7, 2020 at 1:23 PM Zhang, Hong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: Hi all, Previously I

[petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-07 Thread Zhang, Hong via petsc-dev
Hi all, Previously I have noticed that the first call to a CUDA function such as cudaMalloc and cudaFree in PETSc takes a long time (7.5 seconds) on summit. Then I prepared a simple example as attached to help OLCF reproduce the problem. It turned out that the problem was caused by PETSc. The

Re: [petsc-dev] "participants" on gitlab

2019-10-30 Thread Zhang, Hong via petsc-dev
ation problem you mention below. Unfortunately, I think that reduces incentive to review, and we're always stressed for reviewing resources. "Zhang, Hong via petsc-dev" writes: >> How is the list of participants determined when a MR is created on gitl

Re: [petsc-dev] AVX kernels, old gcc, still broken

2019-10-24 Thread Zhang, Hong via petsc-dev
Hi Lisandro, Can you please check if the following patch fixes the problem? I will create a MR. diff --git a/src/mat/impls/aij/seq/aijperm/aijperm.c b/src/mat/impls/aij/seq/aijperm/aijperm.c index 577dfc6713..568535117a 100644 --- a/src/mat/impls/aij/seq/aijperm/aijperm.c +++ b/src/mat/impls/ai

[petsc-dev] "participants" on gitlab

2019-10-21 Thread Zhang, Hong via petsc-dev
How is the list of participants determined when a MR is created on gitlab? It seems to include everybody by default. Is there any way to shorten the list? Ideally only the participants involved in the particular MR should be picked. Note that currently there is a huge gap between the "Participa

Re: [petsc-dev] People spent tim doing this

2019-10-11 Thread Zhang, Hong via petsc-dev
It is hard to understand where the speedup comes from. What is the difference between "manner 1" and "manner 2"? Btw, we don’t provide an “ELL” format in PETSc. We provide “SELL”, which should be more SIMD-friendly than the column-ELL proposed in the paper. Hong On Oct 10, 2019, at 8:16 PM, Matthew Kn

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-23 Thread Zhang, Hong via petsc-dev
Done. See https://gitlab.com/petsc/petsc/commit/85ec510f49531057ebfe1fb641fe93a36371878e Hong On Mon, Sep 23, 2019 at 11:32 AM Pierre Jolivet <pierre.joli...@enseeiht.fr> wrote: Hong, You should probably cherry pick https://gitlab.com/petsc/petsc/commit/93d7d1d6d29b0d66b5629a261178b832a9

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-23 Thread Zhang, Hong via petsc-dev
Barry: As a hack for this release could you have the Numeric portion of the multiply routines check if the symbolic data is there and, if not, just call the symbolic and attach the needed data? You might need to have a utility function that does all the symbolic part except the allocation of th

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-23 Thread Zhang, Hong via petsc-dev
Barry: We would like to avoid allocating a huge array for the matrix and then having the user place on top of it. In the new paradigm there could be options called on the resulting C of MatMatGetProduct() that would take effect before the C is fully formed to prevent the allocating and fre

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-23 Thread Zhang, Hong via petsc-dev
Yes, we should allow users to provide their own matrix array. We use MatDensePlaceArray() to plug an array into matrix C before MatMatMult(). If we cannot do this, we will have to copy from the internal array of the result C to our array. Would the following sequence work? MatMatMultSymbolic()
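A sketch of the idea: the caller owns the array backing the dense result, places it before the reuse-multiply, and resets it afterwards. MatDensePlaceArray()/MatDenseResetArray() are the real PETSc calls under discussion; the operands, sizes, and the MAT_INITIAL_MATRIX/MAT_REUSE_MATRIX path (rather than the Symbolic/Numeric split asked about) are illustrative assumptions:

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat          A, B, C;
      PetscInt     i, rstart, rend, mlocal, m = 4, k = 2;
      PetscScalar *myarray;

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      /* Placeholder operands: A is a diagonal AIJ matrix, B a dense matrix */
      PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, m, m, 1, NULL, 0, NULL, &A));
      PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
      for (i = rstart; i < rend; i++) PetscCall(MatSetValue(A, i, i, 1.0, INSERT_VALUES));
      PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, m, k, NULL, &B));
      PetscCall(MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY));

      /* First multiply: PETSc allocates C's storage */
      PetscCall(MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C));

      /* Reuse: place a caller-owned column-major array under C, recompute,
         then reset so C's original storage is restored */
      PetscCall(MatGetLocalSize(C, &mlocal, NULL));
      PetscCall(PetscMalloc1(mlocal * k, &myarray));
      PetscCall(MatDensePlaceArray(C, myarray));
      PetscCall(MatMatMult(A, B, MAT_REUSE_MATRIX, PETSC_DEFAULT, &C));
      PetscCall(MatDenseResetArray(C));
      PetscCall(PetscFree(myarray));

      PetscCall(MatDestroy(&C));
      PetscCall(MatDestroy(&B));
      PetscCall(MatDestroy(&A));
      PetscCall(PetscFinalize());
      return 0;
    }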

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Zhang, Hong via petsc-dev
I'll check it tomorrow. Hong On Sun, Sep 22, 2019 at 1:04 AM Pierre Jolivet via petsc-dev <petsc-dev@mcs.anl.gov> wrote: Jed, I’m not sure how easy it is to put more than a few lines of code on GitLab, so I’ll just send the (tiny) source here, as a follow-up of our discussion https://git

Re: [petsc-dev] moving from BitBucket to GitLab

2019-06-16 Thread Zhang, Hong via petsc-dev
If it is mainly because of CI, why don't we host petsc on GitHub and use the GitLab CI? https://about.gitlab.com/solutions/github/ GitHub has been the biggest social network for developers. Changing a utility is easy, but changing a social network isn't. Thanks, Hong (Mr.) On Jun 15, 201

Re: [petsc-dev] Is bitbucket less responsive than it use to be?

2019-05-14 Thread Zhang, Hong via petsc-dev
Vote for GitHub +1. We almost moved to GitHub early last year, but I was not sure what stopped the transition. Hong On May 14, 2019, at 10:51 AM, Fande Kong via petsc-dev <petsc-dev@mcs.anl.gov> wrote: Any difficulty to switch over to GitHub? I like GitHub better than bitbu

Re: [petsc-dev] New implementation of PtAP based on all-at-once algorithm

2019-04-12 Thread Zhang, Hong via petsc-dev
I would suggest Fande add this new implementation into petsc. What is the algorithm? I'll try to see if I can further reduce memory consumption of the current symbolic PtAP when I get time. Hong On Fri, Apr 12, 2019 at 8:27 AM Mark Adams via petsc-dev <petsc-dev@mcs.anl.gov> wrote: On

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-27 Thread Zhang, Hong via petsc-dev
Myriam, - PETSc 3.6.4 (reference) - PETSc 3.10.4 without specific options - PETSc 3.10.4 with the three scalability options you mentioned What are the 'three scalability options' here? What is "MaxMemRSS", the max memory used by a single core? How many cores do you start with? Do you have 'execu

Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-22 Thread Zhang, Hong via petsc-dev
Fande, The images are very interesting and helpful. How did you get these images? Petsc PtAP uses 753MB for PtAPSymbolic and only 116MB for PtAPNumeric, while hypre uses 215MB -- it seems hypre does not implement symbolic PtAP. When I implemented PtAP, my focus was on the numeric part because it was us

Re: [petsc-dev] How long?

2019-03-11 Thread Zhang, Hong via petsc-dev
Is the Linux kernel maintainable and extendable? Does anyone want to reimplement Linux in Julia? Hong (Mr.) > On Mar 11, 2019, at 9:28 PM, Smith, Barry F. via petsc-dev wrote: > > > PETSc source code is becoming an unmaintainable, unextendable monstrosity. > How long until Julia is mature e

Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-14 Thread Zhang, Hong via petsc-dev
Replace ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr); with ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr); Replace ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr); with ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr); Then add MatSeqAIJSetPreallocation() and MatSeqDenseSetPreallocation(). Hong On Mon, Jan 1
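Putting the whole suggestion together in one place (a sketch: the matrix sizes and nonzero estimates are made up, the MPI preallocation calls are added so the same code also runs in parallel since preallocation calls for a non-matching type are ignored, and the ierr/CHKERRQ style matches the era of the thread):

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat            A, B;
      PetscInt       n = 10, k = 3;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
      ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
      /* Type-generic MATAIJ becomes SeqAIJ on one rank, MPIAIJ on many */
      ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
      ierr = MatSeqAIJSetPreallocation(A, 5, NULL);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(A, 5, NULL, 5, NULL);CHKERRQ(ierr);

      ierr = MatCreate(PETSC_COMM_WORLD, &B);CHKERRQ(ierr);
      ierr = MatSetSizes(B, PETSC_DECIDE, PETSC_DECIDE, n, k);CHKERRQ(ierr);
      ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
      ierr = MatSeqDenseSetPreallocation(B, NULL);CHKERRQ(ierr);
      ierr = MatMPIDenseSetPreallocation(B, NULL);CHKERRQ(ierr);

      /* ... fill and assemble A and B, then call MatMatMult() or
         MatTransposeMatMult() as in the original report ... */
      ierr = MatDestroy(&B);CHKERRQ(ierr);
      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }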