Re: [petsc-dev] building on Titan

2018-06-21 Thread Mark Adams
> > > So - suggest avoiding --download-cmake - and use 'module load cmake' > or whatever is appropriate on titan. > > That works, thanks, > Satish > >

[petsc-dev] MatCreateSubMatrix question

2018-06-21 Thread Mark Adams
I have a parallel matrix and I want to extract parts of it locally to create small local matrices. I use PETSC_COMM_SELF for the ISs, but that does not seem to be enough to tell MatCreateSubMatrix that I want a PETSC_COMM_SELF (sub)matrix. How should I do this? I am not doing any communication i

Re: [petsc-dev] HDF5 download error

2018-06-20 Thread Mark Adams
df5-1.8.18.tar.gz > > Satish > > On Wed, 20 Jun 2018, Karl Rupp wrote: > > > Hi Mark, > > > > the FTP server at MCS is down today. It should come back up later today. > > > > Best regards, > > Karli > > > > On 06/20/2018 01:17

Re: [petsc-dev] HDF5 download error

2018-06-20 Thread Mark Adams
On Wed, Jun 20, 2018 at 7:18 AM Karl Rupp wrote: > Hi Mark, > > the FTP server at MCS is down today. It should come back up later today. > > >Trying to download > > > https://support.hdfgroup.org/ftp/HDF5/current18/src/hdf5-1.8.18.tar.gz > > for HDF5 OK, and it

Re: [petsc-dev] HDF5 download error

2018-06-20 Thread Mark Adams
This looks like it is a problem with NERSC, this does not work: 04:14 cori04 maint= ~/petsc_install/petsc$ ping ftp.mcs.anl.gov PING ftp.mcs.anl.gov (140.221.6.23) 56(84) bytes of data. On Wed, Jun 20, 2018 at 7:08 AM Mark Adams wrote: > I get this error downloading HDF5 on Cori at NERSC

[petsc-dev] HDF5 download error

2018-06-20 Thread Mark Adams
I get this error downloading HDF5 on Cori at NERSC and it has worked before. This is on maint. === === Trying to download https:/

[petsc-dev] test errors

2018-06-16 Thread Mark Adams
FYI, I suspect this is fine but I get some errors in testing 'maint' on Cori (NERSC) KNL: 19:57 nid02516 maint= ~/petsc_install/petsc$ make PETSC_DIR=/global/homes/m/madams/petsc_install/petsc-cori-knl-opt64-intel-omp PETSC_ARCH="" test Running test examples to verify correct installation Using PE

Re: [petsc-dev] [petsc-users] Poor weak scaling when solving successive linear systems

2018-06-14 Thread Mark Adams
And with 7-point stensils and no large material discontinuities you probably want -pc_gamg_square_graph 10 -pc_gamg_threshold 0.0 and you could test the square graph parameter (eg, 1,2,3,4). And I would definitely test hypre. On Thu, Jun 14, 2018 at 8:54 AM Mark Adams wrote: > >> Just

Re: [petsc-dev] [petsc-users] Poor weak scaling when solving successive linear systems

2018-06-14 Thread Mark Adams
> > > Just -pc_type hypre instead of -pc_type gamg. > > And you need to have configured PETSc with hypre.

Re: [petsc-dev] [petsc-users] Poor weak scaling when solving successive linear systems

2018-06-14 Thread Mark Adams
pc_type gamg. > I also tried periodic boundary condition and ran it with -mat_view > ::load_balance. It gives fewer KSP iterations and but PETSc still reports > load imbalance at coarse levels. > > > --Junchao Zhang > > On Tue, Jun 12, 2018 at 3:17 PM, Mark Adams

Re: [petsc-dev] [petsc-users] Poor weak scaling when solving successive linear systems

2018-06-12 Thread Mark Adams
boundary condition so that the nonzeros > are perfectly balanced across processes. I will try that to see what > happens. > > --Junchao Zhang > > On Mon, Jun 11, 2018 at 8:09 AM, Mark Adams wrote: > >> >> >> On Mon, Jun 11, 2018 at 12:46 AM, Junchao Zhang >&g

Re: [petsc-dev] [petsc-users] Poor weak scaling when solving successive linear systems

2018-06-11 Thread Mark Adams
cesses fighting over the same memory >> bandwidth at the same time than in the smaller case. Ahh, here is >> something you can try, lets undersubscribe the memory bandwidth needs, run >> on say 16 processes per node with 8 nodes and 16 processes per node with 64 >> nodes a

Re: [petsc-dev] [petsc-users] Poor weak scaling when solving successive linear systems

2018-06-09 Thread Mark Adams
(source files attached) so you can profile it yourself. I appreciate >> the offer Junchao, thank you. >> > > You can adjust the system size per processor at runtime via >> "-nodes_per_proc 30" and the number of repeated calls to the function >> containing

Re: [petsc-dev] MPI_Attr_get test fails

2018-02-10 Thread Mark Adams
On Sat, Feb 10, 2018 at 12:54 PM, Jed Brown wrote: > Mark Adams writes: > > > On Fri, Feb 9, 2018 at 9:39 PM, Jeff Hammond > wrote: > > > >> https://msdn.microsoft.com/en-us/library/dn473234(v=vs.85).aspx > >> > >> This function name i

Re: [petsc-dev] MPI_Attr_get test fails

2018-02-10 Thread Mark Adams
ORLD. I guess I need to change that. Thanks, > Jeff > > On Fri, Feb 9, 2018 at 3:11 PM Mark Adams wrote: > >> I get an error in PetscCommGetNewTag. So I copied the test to my main >> and I get the same problem. So this code fails: >> >> int main( int

Re: [petsc-dev] MPI_Attr_get test fails

2018-02-09 Thread Mark Adams
On Fri, Feb 9, 2018 at 6:22 PM, Jed Brown wrote: > Mark Adams writes: > > > I get an error in PetscCommGetNewTag. So I copied the test to my main > and > > I get the same problem. So this code fails: > > > > int main( int argc, char **args ) > > {

Re: [petsc-dev] MPI_Attr_get test fails

2018-02-09 Thread Mark Adams
On Fri, Feb 9, 2018 at 6:18 PM, Smith, Barry F. wrote: > > Bad MPI. Do you get this with --download-mpich > yes, and I nuked it and rebuilt ... > > > > On Feb 9, 2018, at 5:11 PM, Mark Adams wrote: > > > > I get an error in PetscCommGetNewTag. So I copied

[petsc-dev] MPI_Attr_get test fails

2018-02-09 Thread Mark Adams
I get an error in PetscCommGetNewTag. So I copied the test to my main and I get the same problem. So this code fails: int main( int argc, char **args ) { PetscErrorCode ierr; ierr = PetscInitialize( &argc, &args, "./.petscrc", NULL );CHKERRQ(ierr); { // debug PetscCommCounter *counter

Re: [petsc-dev] configure error

2018-01-26 Thread Mark Adams
> > > i.e use : > > --with-cxx=g++-7 > Thanks, that fixed it.

Re: [petsc-dev] PETSc Quarterly Telecon

2017-12-18 Thread Mark Adams
s. I don't know if this affects dial ins. If this is a problem I can create a new session easily. Hi there, MARK ADAMS is inviting you to a scheduled Zoom meeting. Join from PC, Mac, Linux, iOS or Android: https://lbnl.zoom.us/j/767747958 <https://www.google.com/url?q=https%3A%2F%2Flbnl.z

Re: [petsc-dev] is this wrong?

2017-11-06 Thread Mark Adams
he default is false. Am I missing something? Not a big deal, I assume there was a good reason to split the default value into a new argument. > > Barry > > > > > > On Nov 6, 2017, at 8:25 AM, Mark Adams wrote: > > > > This code looks wrong. I get a valgrind wa

Re: [petsc-dev] is this wrong?

2017-11-06 Thread Mark Adams
On Mon, Nov 6, 2017 at 9:39 AM, Lisandro Dalcin wrote: > Did you mean > > *flg = currentvalue; > > ? > yes > > On 6 November 2017 at 17:25, Mark Adams wrote: > > This code looks wrong. I get a valgrind warning if the option is not > set. I &g

[petsc-dev] is this wrong?

2017-11-06 Thread Mark Adams
This code looks wrong. I get a valgrind warning if the option is not set. *I think this code should be added.* PetscErrorCode PetscOptionsBool_Private(PetscOptionItems *PetscOptionsObject,const char opt[],const char text[],const char man[],PetscBool currentvalue,PetscBool *flg,PetscBool *set) {
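The one-line fix Lisandro confirms in the reply above (*flg = currentvalue;) generalizes to a pattern worth showing in isolation. A plain-C sketch, no PETSc; `lookup_option` is a hypothetical stand-in for the real options-database search: initialize the output from the caller's default before the lookup, so an absent option never leaves *flg uninitialized (the source of the valgrind warning).

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for searching an options database string. */
static bool lookup_option(const char *db, const char *opt, bool *value)
{
  if (db && strstr(db, opt)) { *value = true; return true; }
  return false; /* option not present; *value untouched */
}

static void get_bool_option(const char *db, const char *opt,
                            bool currentvalue, bool *flg, bool *set)
{
  *flg = currentvalue;                /* the missing line: default the output first */
  *set = lookup_option(db, opt, flg); /* overwrite only if the option exists */
}
```

Without the defaulting line, a caller that checks *flg when *set is false reads indeterminate memory — valgrind flags it even though the program may appear to work.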

Re: [petsc-dev] Recover from TS failure

2017-11-02 Thread Mark Adams
> > > >This is absolute nonsense. No way should you or do you need to do this. > TS can/is suppose to handle failures in the solver cleanly. We need a > reproducible result where it does not handle it correctly and then we fix > the problem > > No problem, Jed gave me the magic incantations :)

Re: [petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
On Wed, Nov 1, 2017 at 8:30 PM, Jed Brown wrote: > Mark Adams writes: > > >> > >> > >> > >> The last relevant output you've shown me is SNES failing a bunch of > >> times as the adaptive controller attempts to shrink the step size and >

Re: [petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
> > > > The last relevant output you've shown me is SNES failing a bunch of > times as the adaptive controller attempts to shrink the step size and > retry. If you fixed that problem, you need to tell me. If not, THAT is > the problem you need to fix. > That is the problem I am trying to fix. No

Re: [petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
On Wed, Nov 1, 2017 at 7:21 PM, Jed Brown wrote: > Mark Adams writes: > > >> > >> > >> > >> You're in PetscTraceBackErrorHandler which is way too late. Back up. > >> What caused the error? > >> > >> > > SNES p

Re: [petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
> > > > You're in PetscTraceBackErrorHandler which is way too late. Back up. > What caused the error? > > SNES problem. It can be linear solver error, max its, or line search failure. See attached. I see that I want to unset TS->errorifstepfailed. I see how to do that with SNES, Will look again

Re: [petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
TSSetTimeStep(ts, dt);CHKERRQ(ierr); ierr = VecCopy(ctx.u0,u);CHKERRQ(ierr); /* recover state */ } else { SETERRQ1(PETSC_COMM_WORLD,PETSC_ERR_ARG_WRONG,"Unhandled error %s",TSConvergedReasons[reason]); } On Wed, Nov 1, 2017 at 2:45 PM, Mark Adams wrote: > > &g

Re: [petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
On Wed, Nov 1, 2017 at 2:34 PM, Jed Brown wrote: > Mark Adams writes: > > > Oh, Maybe the Jacobian has Nans or Infs even though the last time step > > survived. Maybe it was going crazy. I'll check > > If that is the case you would use TSSetFunctionDomainError(). &

Re: [petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
Oh, Maybe the Jacobian has Nans or Infs even though the last time step survived. Maybe it was going crazy. I'll check On Wed, Nov 1, 2017 at 2:13 PM, Mark Adams wrote: > Yea, I don't understand the linear solve error: > > -ts_monitor -ts_type beuler -pc_type lu -pc_factor

Re: [petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
at 2:02 PM, Jed Brown wrote: > Mark Adams writes: > > > On Wed, Nov 1, 2017 at 1:46 PM, Jed Brown wrote: > > > >> Mark Adams writes: > >> > >> > I have added some code in a TS post step method to look at the number > of > >> >

Re: [petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
On Wed, Nov 1, 2017 at 1:46 PM, Jed Brown wrote: > Mark Adams writes: > > > I have added some code in a TS post step method to look at the number of > > nonlinear iterations and cut the time step if it took too many SNES > > iterations. That helped but now I want to

[petsc-dev] Recover from TS failure

2017-11-01 Thread Mark Adams
I have added some code in a TS post step method to look at the number of nonlinear iterations and cut the time step if it took too many SNES iterations. That helped but now I want to go one step further and recover from a failed time step (see appended error message). How can/should I recover fro

Re: [petsc-dev] get number of SNES iterations

2017-10-25 Thread Mark Adams
> Matt > > On Wed, Oct 25, 2017 at 7:59 AM, Mark Adams wrote: > >> I want to modify the TS time step, in a post-step function, and would >> like to get the number of Newton iterations that were used in the time >> step. I am not seeing how to get that. I see number

[petsc-dev] get number of SNES iterations

2017-10-25 Thread Mark Adams
I want to modify the TS time step, in a post-step function, and would like to get the number of Newton iterations that were used in the time step. I am not seeing how to get that. I see number of linear solver iterations. I'm sure I am missing something ...

Re: [petsc-dev] Threadsafe for Matrix assemby in Petsc?

2017-10-17 Thread Mark Adams
There is no support for threaded matrix assembly in PETSc. Here is a recent email thread on the issue: https://mail.google.com/mail/u/0/#search/label%3Apetsc+thread+assembly/15f10d078e9ea8e7 So you pretty much have to deal with race conditions yourself. There are several failure modes with thread

Re: [petsc-dev] SuperLU failure with valgrind

2017-10-16 Thread Mark Adams
ds on Cori. Again the code runs fine though. Probably false positives. On Mon, Oct 16, 2017 at 12:31 PM, Matthew Knepley wrote: > We had a previous error with pdgssvx in SuperLU I think. Maybe searching > petsc-maint would get it? > >Matt > > On Mon, Oct 16, 2017 at 12:21 PM, Ma

Re: [petsc-dev] SuperLU failure with valgrind

2017-10-16 Thread Mark Adams
in static_schedule() routine, I don't see > any problem. > > Sherry > > > > On Mon, Oct 16, 2017 at 7:21 AM, Mark Adams wrote: > >> FYI, I get this error on one processor with SuperLU under valgrind. Could >> this just be a valgrind issue? >> >>

[petsc-dev] SuperLU failure with valgrind

2017-10-16 Thread Mark Adams
FYI, I get this error on one processor with SuperLU under valgrind. Could this just be a valgrind issue? Mark /Users/markadams/Codes/petsc/arch-macosx-gnu-g/bin/mpiexec -n 1 valgrind --dsymutil=yes --leak-check=no --gen-suppressions=no --num-callers=20 --error-limit=no ./ex48 -debug 2 -dim 2 -dm_

Re: [petsc-dev] tiny bug in DMPlexDistribute

2017-10-13 Thread Mark Adams
Thanks Vaclav, I've been seeing a valgrind warning in PlexDestroy for a long time. Mark On Fri, Oct 13, 2017 at 10:33 AM, Matthew Knepley wrote: > On Fri, Oct 13, 2017 at 10:26 AM, Vaclav Hapla > wrote: > >> Hello >> >> In DMPlexDistribute, when it is run on 1 process and sf != NULL, the >> out

Re: [petsc-dev] Plex web pages are broken

2017-10-12 Thread Mark Adams
Hmm, this one fails also, with a slightly different problem (using the DM directory): http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMPlexGetTransitiveClosure.html Maybe Google has old pointers ... On Thu, Oct 12, 2017 at 12:31 PM, Mark Adams wrote: > Google is returning (for

Re: [petsc-dev] Plex web pages are broken

2017-10-12 Thread Mark Adams
; http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/ > DMPlexGetDepthStratum.html > > Does some other doc have this incorrect URL? > > Satish > > > On Thu, 12 Oct 2017, Mark Adams wrote: > > > FYI, It looks like the Plex method we pages are dead,

[petsc-dev] Plex web pages are broken

2017-10-12 Thread Mark Adams
FYI, it looks like the Plex method web pages are dead, such as: http://www.mcs.anl.gov/petsc/petsc-current/docs/.../DM/DMPlexGetDepthStratum.html

Re: [petsc-dev] development version for MATAIJMKL type mat

2017-10-10 Thread Mark Adams
more experience with OMP + AMG and can give you a better sense of what to expect. Mark On Tue, Oct 10, 2017 at 4:06 PM, Bakytzhan Kallemov wrote: > > > On 10/10/2017 12:47 PM, Mark Adams wrote: > > > > >What are you comparing? Are you using say 32 MPI processes an

Re: [petsc-dev] development version for MATAIJMKL type mat

2017-10-10 Thread Mark Adams
putting this back on the list. On Tue, Oct 10, 2017 at 3:21 PM, Bakytzhan Kallemov wrote: > > > > Forwarded Message > Subject: Re: [petsc-dev] development version for MATAIJMKL type mat > Date: Tue, 10 Oct 2017 12:18:08 -0700 > From: Bakytzhan Kallemov > To: Barry Smith > >

Re: [petsc-dev] development version for MATAIJMKL type mat

2017-10-10 Thread Mark Adams
On Tue, Oct 10, 2017 at 2:50 PM, Barry Smith wrote: > > > On Oct 10, 2017, at 10:52 AM, Bakytzhan Kallemov > wrote: > > > > Hi, > > > > My name is Baky Kallemov. > > > > Currently, I am working on improving a scalibility of the Chombo-Petsc > interface on cori machine at nersc system. > > > > I

Re: [petsc-dev] why am I getting this message?

2017-09-27 Thread Mark Adams
:55 cori01 maint= ~/petsc_install/petsc$ git describe origin/maint v3.8 07:55 cori01 maint= ~/petsc_install/petsc$ git describe origin/maint v3.8 07:55 cori01 maint= ~/petsc_install/petsc$ > Satish > > On Wed, 27 Sep 2017, Mark Adams wrote: > > > I just pulled maint but still g

Re: [petsc-dev] why am I getting this message?

2017-09-27 Thread Mark Adams
atish > > On Wed, 27 Sep 2017, Mark Adams wrote: > > > This message seems to have gone away ... > > > > On Wed, Sep 27, 2017 at 8:14 AM, Mark Adams wrote: > > > > > I just pulled maint but still get a message that I am out of date: > > > > > >

Re: [petsc-dev] why am I getting this message?

2017-09-27 Thread Mark Adams
This message seems to have gone away ... On Wed, Sep 27, 2017 at 8:14 AM, Mark Adams wrote: > I just pulled maint but still get a message that I am out of date: > > 1072 git pull origin > 1073 h > 05:12 cori01 maint= ~/petsc_install/petsc$ ../arch-cori-knl-opt-intel.py &

[petsc-dev] why am I getting this message?

2017-09-27 Thread Mark Adams
I just pulled maint but still get a message that I am out of date: 1072 git pull origin 1073 h 05:12 cori01 maint= ~/petsc_install/petsc$ ../arch-cori-knl-opt-intel.py PETSC_DIR=$PWD +++ The version of PETSc

Re: [petsc-dev] HDF5 error from GAMG

2017-09-18 Thread Mark Adams
Also, I tested this on my Mac (this output) and on Cori at NERSC and the behavior looked identical. On Mon, Sep 18, 2017 at 8:44 PM, Mark Adams wrote: > I get this strange error when I use GAMG, in parallel. I don't see it in > serial and I don't see it with the default solver.

[petsc-dev] HDF5 error from GAMG

2017-09-18 Thread Mark Adams
I get this strange error when I use GAMG, in parallel. I don't see it in serial and I don't see it with the default solver. The problem seems to be that I use -vec_view in my code and the guts of Vec seem to be picking this up and causing havoc. In the past I have added a prefix to my -[x2_]vec_vi

Re: [petsc-dev] hypre + OMP and MKL-AIJ

2017-09-14 Thread Mark Adams
TSc to MKL much early so if any petsc user will show interest to try this > functionality we can commit PETSc wrapper early and provide engineering > build with correspond function. > > Thanks, > > Alex > > > > > > *From:* Mark Adams [mailto:mfad...@lbl.gov]

Re: [petsc-dev] hypre + OMP and MKL-AIJ

2017-09-14 Thread Mark Adams
> > *From:* Richard Tran Mills [mailto:rtmi...@anl.gov] > *Sent:* Thursday, September 14, 2017 11:45 AM > *To:* Mark Adams > *Cc:* For users of the development version of PETSc ; > Kalinkin, Alexander A ; Sokolova, Irina < > irina.sokol...@intel.com> > *Subject:* Re: [pe

Re: [petsc-dev] hypre + OMP and MKL-AIJ

2017-09-14 Thread Mark Adams
I don't mind if you wrap two Mat-Mat into the P'AP. Both of my apps are linear so this is amortized. Thanks, > > > --Richard > > On Thu, Sep 14, 2017 at 11:32 AM, Mark Adams wrote: > >> I recall Barry saying that he updated the hypre interface after the

[petsc-dev] hypre + OMP and MKL-AIJ

2017-09-14 Thread Mark Adams
I recall Barry saying that he updated the hypre interface after the last hypre release, which includes OpenMP. But, I am not finding the email. Can someone tell me the status of this? Note, I have two users that are interested in using threads with AMG. I think we would be interested in testing th

Re: [petsc-dev] snes_type test

2017-08-16 Thread Mark Adams
Adams wrote: > I see, I had a tiny time step that made the Jacobian big and the errors > small. > > On Thu, Aug 17, 2017 at 12:30 AM, Mark Adams wrote: > >> >> >> On Thu, Aug 17, 2017 at 12:22 AM, Matthew Knepley >> wrote: >> >>> On Wed, A

Re: [petsc-dev] snes_type test

2017-08-16 Thread Mark Adams
I see, I had a tiny time step that made the Jacobian big and the errors small. On Thu, Aug 17, 2017 at 12:30 AM, Mark Adams wrote: > > > On Thu, Aug 17, 2017 at 12:22 AM, Matthew Knepley > wrote: > >> On Wed, Aug 16, 2017 at 10:14 AM, Mark Adams wrote: >> &

Re: [petsc-dev] snes_type test

2017-08-16 Thread Mark Adams
On Thu, Aug 17, 2017 at 12:22 AM, Matthew Knepley wrote: > On Wed, Aug 16, 2017 at 10:14 AM, Mark Adams wrote: > >> I just want to check the interpretation of snes_type test. Is this pretty >> conclusively good? >> > > It looks right, but the entries in the Jacobi

[petsc-dev] snes_type test

2017-08-16 Thread Mark Adams
I just want to check the interpretation of snes_type test. Is this pretty conclusively good? Testing hand-coded Jacobian, if the ratio is O(1.e-8), the hand-coded Jacobian is probably correct. Run with -snes_test_display to show difference of hand-coded and finite difference Jacobian. Norm of matr

Re: [petsc-dev] configure error at NERSC

2017-08-10 Thread Mark Adams
b/master/travis/install-autotools.sh. > > Best, > > Jeff > > On Wed, Aug 9, 2017 at 9:32 PM, Mark Adams wrote: > >> removing the module loads in my .bashrc file fixed it. >> >> I asked NERSC about the auto modules, which are needed for p4est, >> >> O

Re: [petsc-dev] configure error at NERSC

2017-08-09 Thread Mark Adams
removing the module loads in my .bashrc file fixed it. I asked NERSC about the auto modules, which are needed for p4est, On Thu, Aug 10, 2017 at 10:56 AM, Mark Adams wrote: > I commented out these module loads in my .bashrc and it seems to get > further. > > This build does no

Re: [petsc-dev] configure error at NERSC

2017-08-09 Thread Mark Adams
I commented out these module loads in my .bashrc and it seems to get further. This build does not have p4est. Maybe the module load error broke configure. On Thu, Aug 10, 2017 at 10:43 AM, Mark Adams wrote: > But this is a different failure. I am/was able to get p4est to work by > rer

Re: [petsc-dev] configure error at NERSC

2017-08-09 Thread Mark Adams
0, 2017 at 10:27:58AM +0900, Mark Adams wrote: > > Yea, I saw that. I'll ask NERSC. As I recall the auto stuff was for > p4est. > > It seems to die before that. > > > > On Thu, Aug 10, 2017 at 10:21 AM, Matthew Knepley > wrote: > > > > > On Wed, Aug 9,

Re: [petsc-dev] configure error at NERSC

2017-08-09 Thread Mark Adams
Yea, I saw that. I'll ask NERSC. As I recall the auto stuff was for p4est. It seems to die before that. On Thu, Aug 10, 2017 at 10:21 AM, Matthew Knepley wrote: > On Wed, Aug 9, 2017 at 7:42 PM, Mark Adams wrote: > >> NERSC changed some stuff and my configure is crashing. Any

[petsc-dev] Fwd: Setting -info on proc 0

2017-07-28 Thread Mark Adams
We have a problem that -info seems to be jamming the IO system on jobs with 2K nodes on Cori/KNL. The code seems to hang when -info is used. I suggested having only process 0 do -info. Tuomas here tried something (see below) that looks like it should work, but apparently it did not. Does what Tu

Re: [petsc-dev] 3rd party GPU AMG solvers

2017-07-14 Thread Mark Adams
On Fri, Jul 14, 2017 at 10:22 AM, Karl Rupp wrote: > > it will nonetheless require a lot of convincing that at best they >> get moderate speed-ups, not the 580+x claimed in some of those early >> GPU papers... >> >> >> Karli, we are talking about two different things. You are talking

Re: [petsc-dev] 3rd party GPU AMG solvers

2017-07-14 Thread Mark Adams
> > > it will nonetheless require a lot of convincing that at best they get > moderate speed-ups, not the 580+x claimed in some of those early GPU > papers... > > Karli, we are talking about two different things. You are talking about performance, and I applaud you for that, but I am talking about

Re: [petsc-dev] 3rd party GPU AMG solvers

2017-07-14 Thread Mark Adams
On Fri, Jul 14, 2017 at 8:02 AM, Matthew Knepley wrote: > On Fri, Jul 14, 2017 at 6:53 AM, Mark Adams wrote: > >> Karli, this would be great if you could investigate this. >> >> A lot of this is driven by desires of DOE programs -- not your monkey not >> your circ

Re: [petsc-dev] 3rd party GPU AMG solvers

2017-07-14 Thread Mark Adams
Karli, this would be great if you could investigate this. A lot of this is driven by desires of DOE programs -- not your monkey not your circus -- but I think that we need to have a story for how to use GPUs, or whatever apps in our funding community want to do, and tell it dispassionately. We don

[petsc-dev] 3rd party GPU AMG solvers

2017-07-13 Thread Mark Adams
I hear Hypre has support for GPUs in a May release. Any word on the status of using it in PETSc? And we discussed interfacing to AMGx, which is complicated (precluded?) by not releasing source. Anything on the potential of interfacing to AMGx? I think it would be great to make this available. It

Re: [petsc-dev] configure failure

2017-07-13 Thread Mark Adams
17 2:18:51 PM EDT, Barry Smith wrote: > >> > >> p4est configure should provide an option to not run compiled code and > >> instead have needed values passed in as configure arguments. > >> > >>> On Jul 13, 2017, at 1:07 PM, Matthew Knepley > >> wr

Re: [petsc-dev] configure failure

2017-07-13 Thread Mark Adams
; > Mark, gets us /global/u2/m/madams/petsc/arch-cori-knl-opt-intel/ > externalpackages/git.p4est/config.log > > Thanks, > > Matt > > > On Thu, Jul 13, 2017 at 11:15 AM, Mark Adams wrote: > >> I always get this error on Cori at NERSC, KNL, but I ca

Re: [petsc-dev] error configuring on KNL

2017-06-30 Thread Mark Adams
Yes, I submitted jobs before I went to bed last night (in the UK) and it configured and built. So far so good. Thanks, On Fri, Jun 30, 2017 at 11:32 PM, Richard Tran Mills wrote: > > > On Fri, Jun 30, 2017 at 3:27 PM, Balay, Satish wrote: > >> On Fri, 30 Jun 2017, Richard Tran Mills wrote: >> >>

Re: [petsc-dev] error configuring on KNL

2017-06-30 Thread Mark Adams
Argh, thanks, that is the/a problem. On Fri, Jun 30, 2017 at 2:16 PM, Satish Balay wrote: > https://www.nersc.gov/users/computational-systems/cori/ > running-jobs/running-jobs-on-cori-faq/ > > Perhaps you need to specify the knl partition to srun? > > Satish > > On Fri,

Re: [petsc-dev] snes/ex56 target is gone

2017-06-28 Thread Mark Adams
thew Knepley > wrote: > > > > > On Tue, Jun 27, 2017 at 6:22 PM, Mark Adams wrote: > > > > > >> I looks like the target (runex56) for snes/ex56.c is not in the > makefile. > > >> Anyone know why that happened? > > >> > > > >

[petsc-dev] snes/ex56 target is gone

2017-06-27 Thread Mark Adams
It looks like the target (runex56) for snes/ex56.c is not in the makefile. Anyone know why that happened?

Re: [petsc-dev] parallel direct solvers for MG

2017-06-27 Thread Mark Adams
change my logic. Not a big deal. (I could also look at Telescope parameters in the process and try to align the two) > > Hong > > On Tue, Jun 27, 2017 at 9:46 AM, Mark Adams wrote: > >> >> >> On Tue, Jun 27, 2017 at 8:35 AM, Matthew Knepley >> w

Re: [petsc-dev] parallel direct solvers for MG

2017-06-27 Thread Mark Adams
On Tue, Jun 27, 2017 at 8:35 AM, Matthew Knepley wrote: > On Tue, Jun 27, 2017 at 6:36 AM, Mark Adams wrote: > >> In talking with Garth, this will not work. >> >> I/we am now thinking that we should replace the MG object with Telescope. >> Telescope seems to be d

Re: [petsc-dev] parallel direct solvers for MG

2017-06-27 Thread Mark Adams
good idea? Am I missing anything important? Mark On Tue, Jun 27, 2017 at 4:48 AM, Mark Adams wrote: > Parallel coarse grid solvers are a bit broken at large scale where you > don't want to use all processors on the coarse grid. The ideal thing might > be to create a sub communicator

[petsc-dev] parallel direct solvers for MG

2017-06-27 Thread Mark Adams
Parallel coarse grid solvers are a bit broken at large scale where you don't want to use all processors on the coarse grid. The ideal thing might be to create a sub communicator, but it's not clear how to integrate this in (eg, check if the sub communicator exists before calling the coarse grid sol

Re: [petsc-dev] explicit FEM

2017-05-17 Thread Mark Adams
> > > What ex48? > knepley/feature-plasma-example= ~/Codes/petsc/src/ts/examples/tutorials/ex48.c

Re: [petsc-dev] explicit FEM

2017-05-16 Thread Mark Adams
> > > Its missing. We will have to put in a DMPlexTSComputeRHSFunctionFEM(). > OK, I can add it to ex48 when it is at least roughed in.

[petsc-dev] explicit FEM

2017-05-16 Thread Mark Adams
I am having problems with explicit + FEM. I set the DM with something like: ierr = DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, &ctx);CHKERRQ(ierr); But DMPlexTSComputeRHSFunctionFVM calls DMPlexComputeResidual_Internal without a "locX_t" so there is no time derivative and so th

[petsc-dev] SETERRQ for void methods

2017-05-15 Thread Mark Adams
Is there a way to get a stack trace when you get a segv inside of a void method? (Plex point functions)

Re: [petsc-dev] SuperLU configure error on CG at ANL

2017-04-24 Thread Mark Adams
Merge branch 'maint' > > commit c7820e35e37acd21255cd00e399111b9de215482 > Author: Satish Balay > Date: Sun Mar 5 18:03:42 2017 -0600 > > superlu: libray is installed in PREFIX/lib64 - fix this to use > PREFIX/lib > > Reported-by: Ju Liu > b > On Mon, 24 Apr 2017, Mark Adams wrote: > > > I get this error, is there a superLU built on CG that I should use? > > > >

Re: [petsc-dev] VecScatter scaling problem on KNL

2017-03-12 Thread Mark Adams
> PetscOptionsSetValue(NULL,"-vecscatter_alltoall","true"); > VecScatterCreate... > PetscOptionsClearValue(NULL,"-vecscatter_alltoall") > >You need to possibly change it slightly for different PETSc versions or > Fortran. > > Please let us know how it goes, This worked, Thanks

Re: [petsc-dev] building on KNL

2017-03-11 Thread Mark Adams
> > Mark, do you actually see 'cray-udreg.pc' in PKG_CONFIG_PATH? > No, I see an .so. I don't see cray-udreg.pc anywhere. I submitted a report to NERSC.

Re: [petsc-dev] building on KNL

2017-03-11 Thread Mark Adams
et set for both the front end and the compute nodes. > > Barry > >> On Mar 11, 2017, at 11:55 AM, Mark Adams wrote: >> >> Well, I get the same error now with testing. I will ask NERSC. >> PKG_CONFIG_PATH does have a path to libudreg.so.0.2.3, but that does >&

Re: [petsc-dev] building on KNL

2017-03-11 Thread Mark Adams
uired by 'mpich', not found /global/homes/m/madams/petsc_install/petsc-cori-knl-opt64-intel/lib/petsc/conf/rules:399: recipe for target 'ex19.o' failed gmake[3]: *** [ex19.o] Error 1 On Sat, Mar 11, 2017 at 12:17 PM, Mark Adams wrote: > I tried running in batch=1 and that se

Re: [petsc-dev] building on KNL

2017-03-11 Thread Mark Adams
nt main() { > ; > return 0; > } > Popping language C > Error testing C compiler: Cannot compile C with cc. > Deleting "CC" > >Matt > > On Sat, Mar 11, 2017 at 8:43 AM, Mark Adams wrote: >> >> I have been

[petsc-dev] building on KNL

2017-03-11 Thread Mark Adams
I have been using Cori/KNL with Intel MPI and want to move to cray-mpi and am having an error with cc. 06:42 1 cori06 maint= ~/petsc_install/petsc$ cc --version icc (ICC) 17.0.1 20161005 Copyright (C) 1985-2016 Intel Corporation. All rights reserved. configure.log Description: Binary data

Re: [petsc-dev] VecScatter scaling problem on KNL

2017-03-09 Thread Mark Adams
> Matt is right, > > You should definitely try this before writing additional code. But you > need to put it in the code so it affects just this one scatter, not all the > scatters. So in the place where you create this "all to all" vector scatter > do the following. > > PetscOptio

Re: [petsc-dev] VecScatter scaling problem on KNL

2017-03-09 Thread Mark Adams
>Ok, in this situation VecScatter cannot detect that it is an all to all so > will generate a message from each process to each other process. Given my > past experience with Cray MPI (why do they even have their own MPI when Intel > provides one; in fact why does Cray even exist when they j

Re: [petsc-dev] VecScatter scaling problem on KNL

2017-03-08 Thread Mark Adams
> > Is the scatter created with VecScatterCreateToAll()? If so, internally > the VecScatterBegin/End will use VecScatterBegin_MPI_ToAll() which then uses > a MPI_Allgatherv() to do the communication. You can check in the debugger > for this (on 2 processes) by just putting a break point in

Re: [petsc-dev] VecScatter scaling problem on KNL

2017-03-08 Thread Mark Adams
> -Tuomas > > > > On 3/8/17 16:29, Barry Smith wrote: >> >>Mark, >> >> Are you getting this with PETSc 3.7.5 ? Is the code valgrinded? >> >> >>> On Mar 8, 2017, at 6:27 PM, Mark Adams wrote: >>> >>> On Wed,

Re: [petsc-dev] VecScatter scaling problem on KNL

2017-03-08 Thread Mark Adams
; with threaded codes that call PETSc. >> >> It is OMP threaded, but it should certainly not call PETSc inside of a >> thread loop... but this does look like something that threading could >> cause. >> >> >>> >>> --Richard >>> >

Re: [petsc-dev] VecScatter scaling problem on KNL

2017-03-08 Thread Mark Adams
call PETSc inside of a thread loop... but this does look like something that threading could cause. > > --Richard > > On Wed, Mar 8, 2017 at 1:33 PM, Mark Adams wrote: >> >> Our code is having scaling problems on KNL (Cori), when we get up to >> about 1K sockets. >&g

[petsc-dev] VecScatter scaling problem on KNL

2017-03-08 Thread Mark Adams
Our code is having scaling problems on KNL (Cori), when we get up to about 1K sockets. We have isolated the problem to a certain VecScatter. This code stores the data redundantly. Scattering into the solver is just a local copy, but scattering out requires that each process send all of its data to
