[petsc-dev] PETSc Social on line in 2 hours. 5pm Central time

2020-04-02 Thread Smith, Barry F. via petsc-dev
We are having a PETSc social event at 5 pm central time today (in 2 hours). Everyone is welcome. Hope to see many of you, Barry Begin forwarded message: From: Todd Munson via BlueJeans Network mailto:inv...@bluejeans.com>> Subject: PETSc Social Date: March 31, 2020 at 5:16:32 PM CDT T

[petsc-dev] Fwd: [Xlab] El Capitan CPU announcement

2020-03-04 Thread Smith, Barry F. via petsc-dev
Begin forwarded message: From: "Thakur, Rajeev" mailto:tha...@anl.gov>> Subject: [Xlab] El Capitan CPU announcement Date: March 4, 2020 at 1:33:13 PM CST To: "x...@cels.anl.gov" mailto:x...@cels.anl.gov>> AMD https://www.anandtech.com/show/15581/el-capitan-supercomput

Re: [petsc-dev] [petsc-users] Matrix-free method in PETSc

2020-02-20 Thread Smith, Barry F. via petsc-dev
ith, Barry F. wrote: > > > In the past you needed a brain to get a Stanford email account > > >> Begin forwarded message: >> >> From: Yuyun Yang >> Subject: Re: [petsc-users] Matrix-free method in PETSc >> Date: February 18, 2020 at 8:26:11 AM CS

[petsc-dev] Fwd: [petsc-users] Matrix-free method in PETSc

2020-02-18 Thread Smith, Barry F. via petsc-dev
but didn’t know how the time stepping can be set up. Thanks, Yuyun From: Matthew Knepley mailto:knep...@gmail.com>> Date: Tuesday, February 18, 2020 at 9:23 PM To: Yuyun Yang mailto:yyan...@stanford.edu>> Cc: "Smith, Barry F." mailto:bsm...@mcs.anl.gov>>, "petsc

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-13 Thread Smith, Barry F. via petsc-dev
> On Feb 13, 2020, at 5:39 PM, Zhang, Hong wrote: > > > >> On Feb 13, 2020, at 7:39 AM, Smith, Barry F. wrote: >> >> >> How are the two being compiled and linked? The same way, one with the PETSc >> library in the path and the other without? Or

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-13 Thread Smith, Barry F. via petsc-dev
ote: > > > >> On Feb 12, 2020, at 5:11 PM, Smith, Barry F. wrote: >> >> >> ldd -o on the petsc program (static) and the non petsc program (static), >> what are the differences? > > There is no difference in the outputs. > >> >> nm -o both exe

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-12 Thread Smith, Barry F. via petsc-dev
>>> 0.00% 1.7260us 2 863ns 490ns 1.2360us cuDeviceGet >>> >>> 0.00% 377ns 1 377ns 377ns 377ns cuDeviceGetUuid >>> >>> >>> >>> I also get the expected behavior if I add an MPI_Init and MPI_Finalize to >>> the >>> code inst

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-10 Thread Smith, Barry F. via petsc-dev
spack/20180914/linux-rhel7-ppc64le/gcc-4.8.5/pgi-19.4-6acz4xyqjlpoaonjiiqjme2aknrfnzoy/linuxpower/19.4/lib/libpgf902.so >>>>> (0x20002124) >>>>> libpgftnrtl.so => >>>>> /autofs/nccs-svm1_sw/summit/.swci/0-core/opt/spack/20180914/linux-rhel7-pp

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-07 Thread Smith, Barry F. via petsc-dev
pack/20180914/linux-rhel7-ppc64le/pgi-19.4/zlib-1.2.11-2htm7ws4hgrthi5tyjnqxtjxgpfklxsc/lib/libz.so.1 >> (0x200021a1) >> libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x200021a6) >> /lib64/ld64.so.2 (0x2000) >>

Re: [petsc-dev] First call to cudaMalloc or cudaFree is very slow on summit

2020-02-07 Thread Smith, Barry F. via petsc-dev
ldd -o on the executable of both linkings of your code. My guess is that without PETSc it is linking the static version of the needed libraries and with PETSc the shared. And, in typical fashion, the shared libraries are off on some super slow file system so take a long time to be loaded

[petsc-dev] Fwd: [SIAM-CSE] Introducing hIPPYlib, a python-based inverse problems solver library

2020-02-05 Thread Smith, Barry F. via petsc-dev
Lois sent out this announcement on hIPPYlib 3.0 Begin forwarded message: From: "McInnes, Lois Curfman" mailto:curf...@anl.gov>> Subject: FW: [SIAM-CSE] Introducing hIPPYlib, a python-based inverse problems solver library Date: February 4, 2020 at 8:52:46 AM CST To:

Re: [petsc-dev] is make alldoc tested?

2020-02-04 Thread Smith, Barry F. via petsc-dev
We should still be doing the testing now > On Feb 4, 2020, at 2:46 PM, Jed Brown wrote: > > Moving toward Sphinx (as Patrick has been working on) will help reduce > the number of stages and improve identification and reporting of > diagnostics/errors. > > Satish Balay via petsc-dev writes

Re: [petsc-dev] is make alldoc tested?

2020-02-04 Thread Smith, Barry F. via petsc-dev
es in doc build targets - and errors in each > stage are ignored. So will have to fix them to stop on error [in the subset > of targets we build in CI] > > And all doc-only changes would have to be run through this CI test - before > merge.. > > Satish > > On Tue,

Re: [petsc-dev] is make alldoc tested?

2020-02-04 Thread Smith, Barry F. via petsc-dev
- but not > in gitlab-ci. > > I have the fixes at https://gitlab.com/petsc/petsc/-/merge_requests/2503 > > Satish > > On Sat, 1 Feb 2020, Smith, Barry F. via petsc-dev wrote: > >> >> Generating manual example links >> Unexpected argument -print

[petsc-dev] is make alldoc tested?

2020-01-31 Thread Smith, Barry F. via petsc-dev
Generating manual example links Unexpected argument -printmatch-link! Unexpected argument -printmatch-link! manualpages in: /Users/barrysmith/Src/petsc/src/snes/linesearch/impls/bt Error: Error reading html.def in path .:/Users/barrysmith/Src/petsc/arch-master/share or TEXTFILTER_PATH environm

Re: [petsc-dev] complex fix and -Wfloat-equal

2020-01-29 Thread Smith, Barry F. via petsc-dev
> On Jan 27, 2020, at 3:42 PM, Lisandro Dalcin wrote: > > I usually compile my code with almost most warning flags on, including > -Wfloat-equal. > My implementation of the C++ complex fix that is not included by default is > an obvious offender (see warning messages at the end). > > A simp

Re: [petsc-dev] how do see artifacts?

2020-01-20 Thread Smith, Barry F. via petsc-dev
Do they link with iCloud? ☠️ > On Jan 20, 2020, at 5:07 AM, Matthew Knepley wrote: > > Try Firefox or Chrome? > >Matt > > On Sun, Jan 19, 2020 at 11:41 PM Smith, Barry F. via petsc-dev > wrote: > > > > On Jan 19, 2020, at 10:39 PM, Jed Brown

Re: [petsc-dev] how do see artifacts?

2020-01-19 Thread Smith, Barry F. via petsc-dev
> On Jan 19, 2020, at 10:39 PM, Jed Brown wrote: > > "Smith, Barry F. via petsc-dev" writes: > >> I have no left hand side. > > Right hand side? I certainly see them. Are you using a funny browser? Apple's :-) > >> If I grab the lef

Re: [petsc-dev] how do see artifacts?

2020-01-19 Thread Smith, Barry F. via petsc-dev
ng a different > one > > https://gitlab.com/petsc/petsc/-/jobs/407945185 > > Attaching what I see [with the 'artifact' section in the right side column] > > Satish > > On Mon, 20 Jan 2020, Smith, Barry F. via petsc-dev wrote: > >> >> >>

Re: [petsc-dev] how do see artifacts?

2020-01-19 Thread Smith, Barry F. via petsc-dev
g > > Satish > > On Sun, 19 Jan 2020, Smith, Barry F. via petsc-dev wrote: > >> >> With the new Gitlab interfaces how do you see the artifacts for failed >> pipelines/tests? I can't find them anywhere. It just takes me straight to >> the output with no sidebar for artifacts? >> >>Thanks >> >> Barry >> >

[petsc-dev] how do see artifacts?

2020-01-19 Thread Smith, Barry F. via petsc-dev
With the new Gitlab interfaces how do you see the artifacts for failed pipelines/tests? I can't find them anywhere. It just takes me straight to the output with no sidebar for artifacts? Thanks Barry

Re: [petsc-dev] Fortran equivalent + separate output with output_file

2020-01-17 Thread Smith, Barry F. via petsc-dev
them in the Fortran stubs, we have missed this case but we need to know what it is so we can fix it. Barry > On Jan 17, 2020, at 3:18 PM, Pierre Jolivet > wrote: > > > >> On 13 Jan 2020, at 4:38 PM, Smith, Barry F. wrote: >> >> >> >>

Re: [petsc-dev] Fortran equivalent + separate output with output_file

2020-01-13 Thread Smith, Barry F. via petsc-dev
> On Jan 13, 2020, at 9:32 AM, Pierre Jolivet > wrote: > > Hello, > This is actually two separate questions, sorry. > 1) I’m looking for the Fortran equivalent of the following, but I couldn’t > get any help looking at the sources. > ierr = PetscOptionsBegin(PETSC_COMM_WORLD,"","","");CHKERR

Re: [petsc-dev] Weird errors with enum in 64-bit ints

2020-01-13 Thread Smith, Barry F. via petsc-dev
I looked at the code and see nothing glaring wrong. I guess it requires valgrind and the debugger on the machine where the trouble occurs. Your enum should not need to be changed. Barry > On Jan 13, 2020, at 3:52 AM, Matthew Knepley wrote: > > I have some unexplainable errors with 64-b

Re: [petsc-dev] Bash tool for handling switching between multiple PETSc branches and configures

2020-01-06 Thread Smith, Barry F.
> On Jan 6, 2020, at 11:36 PM, Jed Brown wrote: > > "Smith, Barry F." writes: > >> I think they just have the wrong algorithms in their compilers for >> modules, I don't think there is anything fundamental to the language >> etc that req

Re: [petsc-dev] PCASM with custom overlap/local matrices

2020-01-05 Thread Smith, Barry F.
0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > on my laptop and four subdomains. > > I guess I can expect nice gains for our Helmholtz and Maxwell solvers at > scale and/or with higher order discretizations! > Pierre > >> On 4 Jan 2020, at 9:17 PM, S

Re: [petsc-dev] How to replace floating point numbers in test outputs?

2020-01-04 Thread Smith, Barry F.
Yes, since differences in floating point are considered not a change, the REPLACE which only updates files with changes won't update them. I don't understand the output below, looks identical to me, why is it a diff? Scott, Perhaps when DIFF_NUMBERS=1 is given the REPLACE should replace i

Re: [petsc-dev] PCASM with custom overlap/local matrices

2020-01-04 Thread Smith, Barry F.
Can you overload the MatCreateSubMatrices() to use your function instead of the default. Using MatSetOperation()? Barry > On Jan 4, 2020, at 5:30 AM, Pierre Jolivet wrote: > > Hello, > I’d like to bypass the call to MatCreateSubMatrices during PCSetUp_PCASM > because I’m using a custo

Re: [petsc-dev] Bash tool for handling switching between multiple PETSc branches and configures

2020-01-03 Thread Smith, Barry F.
le, and hurts everyone all the time. > >Matt > > On Fri, Jan 3, 2020 at 7:03 PM wrote: > The time to rebuild Fortran modules, which is pretty much an entire lifetime. > I disable Fortran in most arches that I rebuild frequently. > > On Jan 3, 2020 16:58, "Sm

Re: [petsc-dev] Bash tool for handling switching between multiple PETSc branches and configures

2020-01-03 Thread Smith, Barry F.
> On Jan 3, 2020 15:04, "Smith, Barry F." wrote: > > > > On Jan 3, 2020, at 3:07 PM, Matthew Knepley wrote: > > > > On Fri, Jan 3, 2020 at 3:52 PM Smith, Barry F. wrote: > > > >A long time ago Oana suggested a tool that allowed switching

Re: [petsc-dev] Bash tool for handling switching between multiple PETSc branches and configures

2020-01-03 Thread Smith, Barry F.
> On Jan 3, 2020, at 3:07 PM, Matthew Knepley wrote: > > On Fri, Jan 3, 2020 at 3:52 PM Smith, Barry F. wrote: > >A long time ago Oana suggested a tool that allowed switching between PETSc > configurations after pulls etc that didn't require waiting to rec

[petsc-dev] Bash tool for handling switching between multiple PETSc branches and configures

2020-01-03 Thread Smith, Barry F.
A long time ago Oana suggested a tool that allowed switching between PETSc configurations after pulls etc that didn't require waiting to recompile code or rerun configure. Based on her ideas I finally got something that has been behaving reasonably satisfactorily for me. Note that it is only

Re: [petsc-dev] Failure in Master

2020-01-01 Thread Smith, Barry F.
What is this doing in an email? Yes, my mistake. I foolishly used the GitLab GUI to change an error in the documentation. In reality it wasn't documentation. No more GUI changes for me, the weird thing is it doesn't even offer you a MR, it just pushes to master. Barry > On Jan 1, 2020,

Re: [petsc-dev] Errors from PETSc 3.11 and 3.12 headers

2019-12-22 Thread Smith, Barry F.
Please send configure.log and /usr/include/openmpi-x86_64/petsc/petscconf.h In theory our configure checks if deprecated can be used for Enums but perhaps our test is not complete. Barry > On Dec 22, 2019, at 12:01 PM, Antonio Trande wrote: > > Hi all. > > I don't know if these are

Re: [petsc-dev] make: print path to mpi.h

2019-12-18 Thread Smith, Barry F.
https://gitlab.com/petsc/petsc/merge_requests/2409 > On Dec 16, 2019, at 3:02 PM, Lisandro Dalcin wrote: > > While rebuilding a configuration with C++ on macOS, I got this weird output: > > - > Using system modules: > error: invalid argument '-std=c+

Re: [petsc-dev] NumFOCUS affiliation?

2019-12-10 Thread Smith, Barry F.
Maybe we should make it a GitLab issue, we always finish GitLab issues promptly > On Dec 10, 2019, at 11:22 AM, Jed Brown wrote: > > We made some first steps, but we/I dropped the ball on finishing the > process. I'll pick it up over break. > > "Mills, Richard Tran" writes: > >> Fello

Re: [petsc-dev] Valgrind problems

2019-12-07 Thread Smith, Barry F.
> On Dec 7, 2019, at 6:00 PM, Matthew Knepley wrote: > > Nope, you are right. > > Thanks, > > Matt > > On Sat, Dec 7, 2019 at 6:19 PM Balay, Satish wrote: > The fix for this is in 363424266cb675e6465b4c7dcb06a6ff8acf57d2 > > Do you have this commit in your branch - and still seeing

Re: [petsc-dev] Valgrind problems

2019-12-07 Thread Smith, Barry F.
Can you point to the pipeline test output where it fails for issue that mentions it? Satish had issues posted for each valgrind problem (I still have the biharmonic I will try to fix today) and I can't find that one. Thanks > On Dec 7, 2019, at 2:45 PM, Matthew Knepley wrote: > > I a

Re: [petsc-dev] error with --download-amgx

2019-12-03 Thread Smith, Barry F.
l on this > thread] - and this flag is not passed in from petsc configure to amgx > cmake - so it must be somehow set internally in this package. > > Satish > > On Wed, 4 Dec 2019, Smith, Barry F. wrote: > >> >>> Also - its best to avoid -Werror in externalpac

Re: [petsc-dev] error with --download-amgx

2019-12-03 Thread Smith, Barry F.
&envl) >> { >> //throws... >> // >> -FatalError("Mode not found.\n", AMGX_ERR_BAD_MODE); >> +// FatalError("Mode not found.\n", AMGX_ERR_BAD_MODE); >> } >> >> AMGX_Mo

Re: [petsc-dev] error with --download-amgx

2019-12-03 Thread Smith, Barry F.
found.\n", AMGX_ERR_BAD_MODE); > +// FatalError("Mode not found.\n", AMGX_ERR_BAD_MODE); > } > > AMGX_Mode mode = static_cast(itFound->second); > @@ -1125,4 +1125,4 @@ inline bool remove_managed_matrix(AMGX_matrix_handle > envl) > } //namespa

Re: [petsc-dev] error with --download-amgx

2019-12-03 Thread Smith, Barry F.
The first error is nvcc error : 'cicc' died due to signal 9 (Kill signal) nvcc error : 'cicc' died due to signal 9 (Kill signal) later /autofs/nccs-svm1_home1/adams/petsc/arch-summit-opt64-gnu-cuda/externalpackages/git.amgx/base/src/amgx_c_common.cu(77): catastrophic error: error while

[petsc-dev] https://software.intel.com/en-us/devcloud/oneapi

2019-11-18 Thread Smith, Barry F. via petsc-dev

Re: [petsc-dev] KSP in DMPlex

2019-11-17 Thread Smith, Barry F. via petsc-dev
> On Nov 14, 2019, at 12:30 PM, Faibussowitsch, Jacob via petsc-dev > wrote: > > Hello, > > So I am trying to make a simple 5 pt stencil finite difference laplacian > operator (https://akmitra.public.iastate.edu/aero361/design_web/Laplace.pdf) > work on a dmplex mesh, but all of the example

Re: [petsc-dev] Gitlab notifications and labels

2019-11-15 Thread Smith, Barry F. via petsc-dev
Scott, Thanks for researching this and sending it out. Yes, I think all MR on DMPLEX should use a label DMPLEX; I don't think having an additional DM label is worth the effort since all the meat at the moment is DMPLEX. We should probably also have a TS label. Maybe others wil

Re: [petsc-dev] Parmetis bug

2019-11-10 Thread Smith, Barry F. via petsc-dev
Nice. Anyway to add this exact reproducibility into a PETSc example that runs daily? The truism: all codes are buggy, even those that haven't been touched in 15 years, is definitely represented here. Barry > On Nov 10, 2019, at 7:31 PM, Fande Kong via petsc-dev > wrote: > > Valgr

[petsc-dev] Place to capture all our work on GPUs (and ECP ...)

2019-11-10 Thread Smith, Barry F. via petsc-dev
Please do not respond to this email: use https://gitlab.com/petsc/petsc/issues/490 Mark Adams has been generating some great information on Summit with GAMG and now AMGx and other people such as Hannah and Junchao generating information important to our education about GPUs and, of cours

Re: [petsc-dev] Right-preconditioned GMRES

2019-11-08 Thread Smith, Barry F. via petsc-dev
https://gitlab.com/petsc/petsc/merge_requests/2290 > On Nov 7, 2019, at 4:24 AM, Pierre Jolivet wrote: > > > >> On 7 Nov 2019, at 5:32 AM, Smith, Barry F. wrote: >> >> >> Some idiot logged what they did, but not why they did it. >> >>

Re: [petsc-dev] Right-preconditioned GMRES

2019-11-06 Thread Smith, Barry F. via petsc-dev
is > is the case. > > Thanks, > Pierre > >> On 24 Oct 2019, at 5:40 PM, Smith, Barry F. wrote: >> >> >> Send the code and exact instructions to run a "good" and a "bad" ASM >> >> Barry >> >> >>> On

Re: [petsc-dev] Feed back on report on performance of vector operations on Summit requested

2019-10-31 Thread Smith, Barry F. via petsc-dev
Jed, Thanks, this is very useful. Barry > On Oct 31, 2019, at 11:47 AM, Jed Brown wrote: > > "Smith, Barry F." writes: > >>> On Oct 23, 2019, at 7:15 PM, Jed Brown wrote: >>> >>> IMO, Figures 2 and 7+ are more interesting

Re: [petsc-dev] Feed back on report on performance of vector operations on Summit requested

2019-10-29 Thread Smith, Barry F. via petsc-dev
ou give me access to the repository with data and current plotting > scripts, I can take a crack at slicing it in the way that I think would > be useful. Hannah will give you the data. Barry > > "Smith, Barry F. via petsc-dev" writes: > >> We've prepare

Re: [petsc-dev] Feed back on report on performance of vector operations on Summit requested

2019-10-29 Thread Smith, Barry F. via petsc-dev
will report a much higher latency than the latter, because > synchronizations are expensive (i.e. your latency consists of kernel launch > latency plus device synchronization latency). Approach B is slightly > over-optimistic, but I've found it to better match what one observes for a

Re: [petsc-dev] AVX kernels, old gcc, still broken

2019-10-26 Thread Smith, Barry F. via petsc-dev
> On Oct 26, 2019, at 9:09 AM, Jed Brown wrote: > > "Smith, Barry F." writes: > >> The proposed fix is #if defined(PETSC_USE_AVX512_KERNELS) && && && && && >> in https://gitlab.com/petsc/pets

Re: [petsc-dev] AVX kernels, old gcc, still broken

2019-10-25 Thread Smith, Barry F. via petsc-dev
think it should be used minimally, you hate configure and think it should be used minimally. Barry > On Oct 25, 2019, at 1:54 PM, Jed Brown wrote: > > "Smith, Barry F. via petsc-dev" writes: > >> This needs to be fixed properly with a configure test(s)

Re: [petsc-dev] AVX kernels, old gcc, still broken

2019-10-25 Thread Smith, Barry F. via petsc-dev
https://gitlab.com/petsc/petsc/issues/434 > On Oct 25, 2019, at 9:16 AM, Smith, Barry F. wrote: > > > This needs to be fixed properly with a configure test(s) and not with huge > and inconsistent checks like this > > #if defined(PETSC_HAVE_IMMINTRIN_H) &

Re: [petsc-dev] AVX kernels, old gcc, still broken

2019-10-25 Thread Smith, Barry F. via petsc-dev
This needs to be fixed properly with a configure test(s) and not with huge and inconsistent checks like this #if defined(PETSC_HAVE_IMMINTRIN_H) && defined(__AVX512F__) && defined(PETSC_USE_REAL_DOUBLE) && !defined(PETSC_USE_COMPLEX) && !defined(PETSC_USE_64BIT_INDICES) or this #elif d

Re: [petsc-dev] Right-preconditioned GMRES

2019-10-24 Thread Smith, Barry F. via petsc-dev
hat the problem is _most likely_ in my > RHS. > But I need to figure out why I only get this problem with > right-preconditioned KSPs with restrict or none. > > Thanks, > Pierre > > > > > On 13 Oct 2019, at 8:16 PM, Smith, Barry F. wrote: > > >

Re: [petsc-dev] PetscLayoutFindOwner and PetscLayoutFindOwnerIndex

2019-10-24 Thread Smith, Barry F. via petsc-dev
These routines should be fixed. > On Oct 16, 2019, at 5:19 AM, Pierre Jolivet via petsc-dev > wrote: > > Hello, > These two functions use a parameter “owner” of type PetscInt*. > Shouldn’t this be PetscMPIInt*? > This implies changes left and right, so I want to check I’m not pushing an >

Re: [petsc-dev] I think the test system is broken in master

2019-10-23 Thread Smith, Barry F. via petsc-dev
The test system really has to dump all the output including stderr to GitLab; the stack frames etc. Dropping just little tidbits without all the error output is worse than users who cut and paste one line from configure.log and expect it to help. I don't know how you decide to redirect stuff an

Re: [petsc-dev] [Suggestion] Configure QOL Improvements

2019-10-23 Thread Smith, Barry F. via petsc-dev
https://www.youtube.com/watch?v=NVopDink4uQ https://gitlab.com/petsc/petsc/merge_requests/2207 > On Oct 23, 2019, at 8:34 PM, Matthew Knepley via petsc-dev > wrote: > > On Wed, Oct 23, 2019 at 5:08 PM Faibussowitsch, Jacob > wrote: > I think Jed is referring to the fact that configure app

Re: [petsc-dev] Suggestion regarding CI issues

2019-10-23 Thread Smith, Barry F. via petsc-dev
Great idea and good text: https://gitlab.com/petsc/petsc/issues/360 https://gitlab.com/petsc/petsc/wikis/home > On Oct 23, 2019, at 4:13 AM, Hapla Vaclav via petsc-dev > wrote: > > Issues related to CI testing often affect multiple independent MRs. There > should be a single point where

Re: [petsc-dev] Wrong "failed tests" command

2019-10-21 Thread Smith, Barry F. via petsc-dev
May need more work on the tester infrastructure? > On Oct 21, 2019, at 12:30 PM, Pierre Jolivet via petsc-dev > wrote: > > Hello, > In this pipeline build log, https://gitlab.com/petsc/petsc/-/jobs/326525063, > it shows that I can rerun failed tests using the following command: > /usr/bin/m

Re: [petsc-dev] "participants" on gitlab

2019-10-21 Thread Smith, Barry F. via petsc-dev
> > On Mon, Oct 21, 2019 at 9:47 AM Smith, Barry F. via petsc-dev > wrote: > > > > On Oct 21, 2019, at 10:27 AM, Zhang, Hong via petsc-dev > > wrote: > > > > How is the list of participants determined when a MR is created on gitlab? > > It seems

Re: [petsc-dev] ksp_error_if_not_converged in multilevel solvers

2019-10-21 Thread Smith, Barry F. via petsc-dev
> On Oct 21, 2019, at 12:55 AM, Pierre Jolivet > wrote: > > > > On Oct 20, 2019, at 6:07 PM, "Smith, Barry F." wrote: > >> >> The reason the code works this way is that normally >> -ksp_error_if_not_converged is propagated into the

Re: [petsc-dev] "participants" on gitlab

2019-10-21 Thread Smith, Barry F. via petsc-dev
> On Oct 21, 2019, at 10:27 AM, Zhang, Hong via petsc-dev > wrote: > > How is the list of participants determined when a MR is created on gitlab? It > seems to include everybody by default. Is there any way to shorten the list? > Ideally only the participants involved in the particular MR sh

Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-21 Thread Smith, Barry F. via petsc-dev
3:01 PM, Jed Brown wrote: >> >> Pierre Jolivet via petsc-dev writes: >> >>>> On 21 Oct 2019, at 7:52 AM, Smith, Barry F. wrote: >>>> >>>> >>>> >>>>> On Oct 21, 2019, at 12:23 AM, Pierre Jo

Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-20 Thread Smith, Barry F. via petsc-dev
> On Oct 21, 2019, at 12:23 AM, Pierre Jolivet > wrote: > > > >> On 21 Oct 2019, at 7:11 AM, Smith, Barry F. wrote: >> >> >> >>> On Oct 20, 2019, at 11:52 PM, Pierre Jolivet >>> wrote: >>> >>> >>> &

Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-20 Thread Smith, Barry F. via petsc-dev
> On Oct 20, 2019, at 11:52 PM, Pierre Jolivet > wrote: > > > >> On 21 Oct 2019, at 6:42 AM, Smith, Barry F. wrote: >> >> Could you provide a use case where you want to access/have a block size of >> a IS that is not an ISBlock? > > In th

Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-20 Thread Smith, Barry F. via petsc-dev
Could you provide a use case where you want to access/have a block size of a IS that is not an ISBlock? > On Oct 16, 2019, at 2:50 AM, Pierre Jolivet via petsc-dev > wrote: > > Hello, > I’m trying to understand what is the rationale for naming a function > ISBlockGetIndices and another IS

Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-20 Thread Smith, Barry F. via petsc-dev
> On Oct 16, 2019, at 9:41 AM, Stefano Zampini via petsc-dev > wrote: > > I just took a look at the ISGENERAL code. ISSetBlockSize_General just sets > the block size of the layout (??) > ISGetIndices always return the data->idx memory. > So, a more profound question is: what is the model beh

Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-20 Thread Smith, Barry F. via petsc-dev
> On Oct 16, 2019, at 2:50 AM, Pierre Jolivet via petsc-dev > wrote: > > Hello, > I’m trying to understand what is the rationale for naming a function > ISBlockGetIndices and another ISLocalToGlobalMappingGetBlockIndices (BlockGet > vs. GetBlock). ISBlockGetIndices returns the indices fo

Re: [petsc-dev] ksp_error_if_not_converged in multilevel solvers

2019-10-20 Thread Smith, Barry F. via petsc-dev
The reason the code works this way is that normally -ksp_error_if_not_converged is propagated into the inner (and innerer) solves and normally it is desirable that these inner solves do not error simply because they reach the maximum number of iterations since for nested iterative methods g

Re: [petsc-dev] SuperLU + GPUs

2019-10-18 Thread Smith, Barry F. via petsc-dev
https://gitlab.com/petsc/petsc/merge_requests/2048 All discussion etc should go into the Discussion for that MR. > On Oct 18, 2019, at 12:21 PM, Mark Adams via petsc-dev > wrote: > > What is the status of supporting SuperLU_DIST with GPUs? > Thanks, > Mark

Re: [petsc-dev] Should v->valid_GPU_array be a bitmask?

2019-10-13 Thread Smith, Barry F. via petsc-dev
both > CPU and GPU. I believe we should allocate on CPU on-demand for VecCUDA. > > --Junchao Zhang > > > On Sun, Oct 13, 2019 at 12:27 PM Smith, Barry F. wrote: > > Yikes, forget about bit flags and names. > > Does this behavior make sense? EVERY CUDA vec

Re: [petsc-dev] Right-preconditioned GMRES

2019-10-13 Thread Smith, Barry F. via petsc-dev
Is this one process with one subdomain? (And hence no meaningful overlap since there is nothing to overlap?) And you expect to get the "exact" answer on one iteration? Please run the right preconditioned GMRES with -pc_asm_type [restrict and basic and none] -ksp_monitor_true_solution an

Re: [petsc-dev] Should v->valid_GPU_array be a bitmask?

2019-10-13 Thread Smith, Barry F. via petsc-dev
Yikes, forget about bit flags and names. Does this behavior make sense? EVERY CUDA vector allocates memory on both GPU and CPU ? Or do I misunderstand the code? This seems fundamentally wrong and is different than before. What about the dozens of work vectors on the GPU (for example f

Re: [petsc-dev] Better error message for missing components

2019-10-13 Thread Smith, Barry F. via petsc-dev
PetscErrorCode MatGetFactor(Mat mat, MatSolverType type,MatFactorType ftype,Mat *f) { PetscErrorCode ierr,(*conv)(Mat,MatFactorType,Mat*); PetscBool foundpackage,foundmtype; PetscFunctionBegin; PetscValidHeaderSpecific(mat,MAT_CLASSID,1); PetscValidType(mat,1); if (mat->factor

Re: [petsc-dev] SNES_DIVERGED_TR_DELTA

2019-10-12 Thread Smith, Barry F. via petsc-dev
I clarified the text in changes/312.html I don't think it can be deprecated in the usual way since it no longer exists and its previous usage was nonsense. Barry > On Oct 6, 2019, at 1:10 AM, Pierre Jolivet via petsc-dev > wrote: > > Hello, > Shouldn’t there be a deprecation warning f

Re: [petsc-dev] MR 2036 : Better implementation of multipreconditioning

2019-10-12 Thread Smith, Barry F. via petsc-dev
> On Oct 11, 2019, at 9:19 AM, Pierre Gosselet via petsc-dev > wrote: > > Dear all, > Barry kindly identified many flaws in our implementation, which gives > us many opportunities to improve our code (which is currently under MR > https://gitlab.com/petsc/petsc/merge_requests/2036). Among hi

[petsc-dev] PETSc 3.12 release

2019-09-30 Thread Smith, Barry F. via petsc-dev
We are pleased to announce the release of PETSc version 3.12 at http://www.mcs.anl.gov/petsc The major changes and updates can be found at http://www.mcs.anl.gov/petsc/documentation/changes/312.html We recommend upgrading to PETSc 3.12 soon. As always, please report problems to petsc-ma...@m

Re: [petsc-dev] MatMult on Summit

2019-09-23 Thread Smith, Barry F. via petsc-dev
5.7 4.4 > 20 267181.0 4.5 > 22 270290.4 4.6 > 24 221944.9 3.8 > 26 238302.8 4.0 > > > --Junchao Zhang > > > On Sun, Sep 22, 2019 at 6:04 PM Smith, Barry F. wrote: > > Junchao, > > For completeness co

Re: [petsc-dev] It would be really nice if you could run a single job on the pipeline with a branch

2019-09-23 Thread Smith, Barry F. via petsc-dev
> On Sep 23, 2019, at 10:43 AM, Jed Brown wrote: > > "Smith, Barry F. via petsc-dev" writes: > >>> On Sep 22, 2019, at 11:26 PM, Balay, Satish wrote: >>> >>> Even-though a fix addresses a breakage in a single build - that change >>&g

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-23 Thread Smith, Barry F. via petsc-dev
Hong, As a hack for this release could you have the Numeric portion of the multiply routines check if the symbolic data is there and if not just call the symbolic and attach the needed data? You might need to have a utility function that does all the symbolic part except the allocation

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-23 Thread Smith, Barry F. via petsc-dev
We would like avoid allocating a huge array for the matrix and then having the user place on top of it. In the new paradigm there could be options called on the resulting C of MatMatGetProduct() that would take effect before the C is fully formed to prevent the allocating and freeing f

Re: [petsc-dev] It would be really nice if you could run a single job on the pipeline with a branch

2019-09-22 Thread Smith, Barry F. via petsc-dev
ns [within this pipeline] YES! > > The status in the MR will reflect that the pipeline failed [due to all the > other canceled jobs] > > Satish > > On Mon, 23 Sep 2019, Smith, Barry F. via petsc-dev wrote: > >> >> When you fix something on a branch that broke

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Smith, Barry F. via petsc-dev
est MatMatMult_MPIAIJ_MPIDense(), we added internal data structures and > obtained an impressive improvement in memory usage. > Hong > > On Sun, Sep 22, 2019 at 4:38 PM Pierre Jolivet via petsc-dev > wrote: > > >> On 22 Sep 2019, at 8:32 PM, Smith, Barry F. wrote: >&g

[petsc-dev] PETSc release testing and schedule

2019-09-22 Thread Smith, Barry F. via petsc-dev
Petsc-developers, We are planning on a PETSc release for Sunday Sept 29 (about 5 pm CST) For this - we'll have a feature freeze (on merges) on Tuesday Sept 24th (about 5 pm CST] After this time we will be accepting only bug fix/doc fix MR. Any unfinished MR in the gitlab site will remain

[petsc-dev] It would be really nice if you could run a single job on the pipeline with a branch

2019-09-22 Thread Smith, Barry F. via petsc-dev
When you fix something on a branch that broke a particular job in gitlab-ci it would be nice to be able to run that single job on the updated branch instead of having to submit an entirely new pipeline Does this exist? Should this be requested in gitlab-ci issues? Could we make a work

Re: [petsc-dev] MatMult on Summit

2019-09-22 Thread Smith, Barry F. via petsc-dev
4.38
> 24  26544  3.2924  4.50
> 26  26656  2.7872  4.52
> 28  26704  3.6367  4.53
> 30  26683  3.7212  4.52
> 32  26718  3.8474  4.53
>
> On Sat, Sep 21, 2019 at 11:24 PM Smith, Barry F. wrote: > > Junchao could tr

Re: [petsc-dev] MatMult on Summit

2019-09-22 Thread Smith, Barry F. via petsc-dev
Here is how the bandwidth improves with more cores. It is terrible going from 1 to 2 cores per socket. > On Sep 21, 2019, at 2:03 PM, Zhang, Junchao wrote: > > I made the following changes: > 1) In MatMultAdd_SeqAIJCUSPARSE, use this code sequence at the end > ierr = WaitForGPU();CHKERRCUDA(

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Smith, Barry F. via petsc-dev
Can we expect this "feature" to be fixed for the upcoming > release and deprecated later on, or will you get rid of this for good for the > release? > > Thanks, > Pierre > > On Sep 22, 2019, at 7:11 PM, "Smith, Barry F." wrote: > >> >>

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Smith, Barry F. via petsc-dev
> It is the same as MatMatMultSymbolic_MPIAIJ_MPIDense() except does not > create C > */ > > > On Sun, 22 Sep 2019 at 20:11 Smith, Barry F. via petsc-dev > wrote: >Jose, > > Thanks for the pointer. > > Will this ch

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Smith, Barry F. via petsc-dev
Jose > > > >> On 22 Sep 2019, at 18:49, Pierre Jolivet via petsc-dev >> wrote: >> >> >>> On 22 Sep 2019, at 6:33 PM, Smith, Barry F. wrote: >>> >>> >>> Ok. So we definitely need better error checking and to

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Smith, Barry F. via petsc-dev
> On Sep 22, 2019, at 11:49 AM, Pierre Jolivet > wrote: > > >> On 22 Sep 2019, at 6:33 PM, Smith, Barry F. wrote: >> >> >> Ok. So we definitely need better error checking and to clean up the code, >> comments and docs >> >> As

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Smith, Barry F. via petsc-dev
maybe get a cleanup into a MR so it gets into the release? Thanks Barry > On Sep 22, 2019, at 11:12 AM, Pierre Jolivet > wrote: > > >> On 22 Sep 2019, at 6:03 PM, Smith, Barry F. wrote: >> >> >> >>> On Sep 22, 2019, at 10:14 AM, Pierr

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Smith, Barry F. via petsc-dev
> On Sep 22, 2019, at 10:14 AM, Pierre Jolivet via petsc-dev > wrote: > > FWIW, I’ve fixed MatMatMult and MatTransposeMatMult here > https://gitlab.com/petsc/petsc/commit/93d7d1d6d29b0d66b5629a261178b832a925de80 > (with MAT_INITIAL_MATRIX). > I believe there is something not right in your MR

Re: [petsc-dev] MatMult on Summit

2019-09-22 Thread Smith, Barry F. via petsc-dev
> On Sep 22, 2019 08:46, "Smith, Barry F." wrote: > >I'm guessing it would be very difficult to connect this particular > performance bug with a decrease in performance for an actual full application > since models don't catch this level of detail w

Re: [petsc-dev] MatMult on Summit

2019-09-22 Thread Smith, Barry F. via petsc-dev
I'm guessing it would be very difficult to connect this particular performance bug with a decrease in performance for an actual full application since models don't catch this level of detail well (and since you cannot run the application without the bug to see the better performance)? IBM

Re: [petsc-dev] MatMult on Summit

2019-09-22 Thread Smith, Barry F. via petsc-dev
Ok, thanks. Then one has to be careful in HPC when using the term so each time it is used everyone in the conversation knows which one it is referring to. > On Sep 22, 2019, at 8:33 AM, Jed Brown wrote: > > "Smith, Barry F." writes: > >>> On Sep 21, 2019

Re: [petsc-dev] MatMult on Summit

2019-09-21 Thread Smith, Barry F. via petsc-dev
> On Sep 21, 2019, at 11:43 PM, Jed Brown wrote: > > "Smith, Barry F." writes: > >> Jed, >> >> What does latency as a function of message size mean? It is in the plots > > It's just the wall-clock time to ping-pong a message of that s
