>
>
> So - suggest avoiding --download-cmake - and use 'module load cmake'
> or whatever is appropriate on titan.
>
>
That works, thanks,
> Satish
>
>
I have a parallel matrix and I want to extract parts of it locally to
create small local matrices.
I use PETSC_COMM_SELF for the ISs, but that does not seem to be enough to
tell MatCreateSubMatrix that I want a PETSC_COMM_SELF (sub)matrix.
How should I do this?
I am not doing any communication i
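One way to do this (a hedged sketch, not from the thread): MatCreateSubMatrix returns a matrix on the same communicator as its input, whereas the plural MatCreateSubMatrices, given index sets built on PETSC_COMM_SELF, returns an array of sequential matrices. The function name `ExtractLocalBlock` and the `rows`/`nloc` inputs are placeholders for the application's locally wanted global row indices; untested.

```c
#include <petscmat.h>

/* Sketch: extract a sequential local submatrix from a parallel Mat A.
   MatCreateSubMatrix stays on A's communicator; MatCreateSubMatrices
   (plural) with PETSC_COMM_SELF index sets yields MATSEQ matrices.
   Collective: every rank must call it. */
static PetscErrorCode ExtractLocalBlock(Mat A, PetscInt nloc, const PetscInt rows[])
{
  IS             is;
  Mat           *subs;            /* array of sequential matrices */
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = ISCreateGeneral(PETSC_COMM_SELF, nloc, rows, PETSC_COPY_VALUES, &is);CHKERRQ(ierr);
  ierr = MatCreateSubMatrices(A, 1, &is, &is, MAT_INITIAL_MATRIX, &subs);CHKERRQ(ierr);
  /* ... use subs[0], a matrix living on PETSC_COMM_SELF ... */
  ierr = MatDestroySubMatrices(1, &subs);CHKERRQ(ierr);
  ierr = ISDestroy(&is);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```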
df5-1.8.18.tar.gz
>
> Satish
>
> On Wed, 20 Jun 2018, Karl Rupp wrote:
>
> > Hi Mark,
> >
> > the FTP server at MCS is down today. It should come back up later today.
> >
> > Best regards,
> > Karli
> >
> > On 06/20/2018 01:17
On Wed, Jun 20, 2018 at 7:18 AM Karl Rupp wrote:
> Hi Mark,
>
> the FTP server at MCS is down today. It should come back up later today.
>
> >Trying to download
> >
> https://support.hdfgroup.org/ftp/HDF5/current18/src/hdf5-1.8.18.tar.gz
> > for HDF5
OK, and it
This looks like a problem with NERSC; this does not work:
04:14 cori04 maint= ~/petsc_install/petsc$ ping ftp.mcs.anl.gov
PING ftp.mcs.anl.gov (140.221.6.23) 56(84) bytes of data.
On Wed, Jun 20, 2018 at 7:08 AM Mark Adams wrote:
> I get this error downloading HDF5 on Cori at NERSC
I get this error downloading HDF5 on Cori at NERSC and it has worked
before. This is on maint.
===
===
Trying to download
https:/
FYI, I suspect this is fine but I get some errors in testing 'maint' on
Cori (NERSC) KNL:
19:57 nid02516 maint= ~/petsc_install/petsc$ make
PETSC_DIR=/global/homes/m/madams/petsc_install/petsc-cori-knl-opt64-intel-omp
PETSC_ARCH="" test
Running test examples to verify correct installation
Using
PE
And with 7-point stencils and no large material discontinuities you
probably want -pc_gamg_square_graph 10 -pc_gamg_threshold 0.0, and you could
test the square graph parameter (e.g., 1, 2, 3, 4).
And I would definitely test hypre.
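The suggestions above as command lines (a sketch; "./myapp" is a placeholder for the application executable):

```shell
# GAMG with the options suggested for 7-point stencils and smooth coefficients
./myapp -pc_type gamg -pc_gamg_square_graph 10 -pc_gamg_threshold 0.0

# For comparison, hypre (PETSc must have been configured with hypre,
# e.g. --download-hypre)
./myapp -pc_type hypre
```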
On Thu, Jun 14, 2018 at 8:54 AM Mark Adams wrote:
>
>> Just
>
>
> Just -pc_type hypre instead of -pc_type gamg.
>
>
And you need to have configured PETSc with hypre.
pc_type gamg.
> I also tried periodic boundary condition and ran it with -mat_view
> ::load_balance. It gives fewer KSP iterations but PETSc still reports
> load imbalance at coarse levels.
>
>
> --Junchao Zhang
>
> On Tue, Jun 12, 2018 at 3:17 PM, Mark Adams
boundary condition so that the nonzeros
> are perfectly balanced across processes. I will try that to see what
> happens.
>
> --Junchao Zhang
>
> On Mon, Jun 11, 2018 at 8:09 AM, Mark Adams wrote:
>
>>
>>
>> On Mon, Jun 11, 2018 at 12:46 AM, Junchao Zhang
>&g
cesses fighting over the same memory
>> bandwidth at the same time than in the smaller case. Ahh, here is
>> something you can try, lets undersubscribe the memory bandwidth needs, run
>> on say 16 processes per node with 8 nodes and 16 processes per node with 64
>> nodes a
(source files attached) so you can profile it yourself. I appreciate
>> the offer Junchao, thank you.
>> > > You can adjust the system size per processor at runtime via
>> "-nodes_per_proc 30" and the number of repeated calls to the function
>> containing
On Sat, Feb 10, 2018 at 12:54 PM, Jed Brown wrote:
> Mark Adams writes:
>
> > On Fri, Feb 9, 2018 at 9:39 PM, Jeff Hammond
> wrote:
> >
> >> https://msdn.microsoft.com/en-us/library/dn473234(v=vs.85).aspx
> >>
> >> This function name i
ORLD. I guess I need to
change that.
Thanks,
> Jeff
>
> On Fri, Feb 9, 2018 at 3:11 PM Mark Adams wrote:
>
>> I get an error in PetscCommGetNewTag. So I copied the test to my main
>> and I get the same problem. So this code fails:
>>
>> int main( int
On Fri, Feb 9, 2018 at 6:22 PM, Jed Brown wrote:
> Mark Adams writes:
>
> > I get an error in PetscCommGetNewTag. So I copied the test to my main and
> > I get the same problem. So this code fails:
> >
> > int main( int argc, char **args )
> > {
On Fri, Feb 9, 2018 at 6:18 PM, Smith, Barry F. wrote:
>
> Bad MPI. Do you get this with --download-mpich
>
yes, and I nuked it and rebuilt ...
>
>
> > On Feb 9, 2018, at 5:11 PM, Mark Adams wrote:
> >
> > I get an error in PetscCommGetNewTag. So I copied
I get an error in PetscCommGetNewTag. So I copied the test to my main and
I get the same problem. So this code fails:
int main( int argc, char **args )
{
  PetscErrorCode ierr;
  ierr = PetscInitialize( &argc, &args, "./.petscrc", NULL );CHKERRQ(ierr);
  { // debug
    PetscCommCounter *counter
>
>
> i.e use :
>
> --with-cxx=g++-7
>
Thanks, that fixed it.
s. I
don't know if this affects dial ins. If this is a problem I can create a
new session easily.
Hi there, MARK ADAMS is inviting you to a scheduled Zoom meeting. Join from
PC, Mac, Linux, iOS or Android: https://lbnl.zoom.us/j/767747958
The default is false.
Am I missing something? Not a big deal, I assume there was a good reason to
split the default value into a new argument.
>
> Barry
>
>
>
>
> > On Nov 6, 2017, at 8:25 AM, Mark Adams wrote:
> >
> > This code looks wrong. I get a valgrind wa
On Mon, Nov 6, 2017 at 9:39 AM, Lisandro Dalcin wrote:
> Did you mean
>
> *flg = currentvalue;
>
> ?
>
yes
>
> On 6 November 2017 at 17:25, Mark Adams wrote:
> > This code looks wrong. I get a valgrind warning if the option is not
> set. I
&g
This code looks wrong. I get a valgrind warning if the option is not set. *I
think this code should be added.*
PetscErrorCode PetscOptionsBool_Private(PetscOptionItems *PetscOptionsObject,
  const char opt[],const char text[],const char man[],
  PetscBool currentvalue,PetscBool *flg,PetscBool *set)
{
>
>
>
>This is absolute nonsense. No way should you or do you need to do this.
> TS can/is supposed to handle failures in the solver cleanly. We need a
> reproducible result where it does not handle it correctly and then we fix
> the problem
>
>
No problem, Jed gave me the magic incantations :)
On Wed, Nov 1, 2017 at 8:30 PM, Jed Brown wrote:
> Mark Adams writes:
>
> >>
> >>
> >>
> >> The last relevant output you've shown me is SNES failing a bunch of
> >> times as the adaptive controller attempts to shrink the step size and
>
>
>
>
> The last relevant output you've shown me is SNES failing a bunch of
> times as the adaptive controller attempts to shrink the step size and
> retry. If you fixed that problem, you need to tell me. If not, THAT is
> the problem you need to fix.
>
That is the problem I am trying to fix. No
On Wed, Nov 1, 2017 at 7:21 PM, Jed Brown wrote:
> Mark Adams writes:
>
> >>
> >>
> >>
> >> You're in PetscTraceBackErrorHandler which is way too late. Back up.
> >> What caused the error?
> >>
> >>
> > SNES p
>
>
>
> You're in PetscTraceBackErrorHandler which is way too late. Back up.
> What caused the error?
>
>
SNES problem. It can be linear solver error, max its, or line search
failure. See attached.
I see that I want to unset TS->errorifstepfailed. I see how to do that with
SNES; I will look again.
ierr = TSSetTimeStep(ts, dt);CHKERRQ(ierr);
ierr = VecCopy(ctx.u0,u);CHKERRQ(ierr); /* recover state */
} else {
  SETERRQ1(PETSC_COMM_WORLD,PETSC_ERR_ARG_WRONG,"Unhandled error %s",TSConvergedReasons[reason]);
}
On Wed, Nov 1, 2017 at 2:45 PM, Mark Adams wrote:
>
>
&g
On Wed, Nov 1, 2017 at 2:34 PM, Jed Brown wrote:
> Mark Adams writes:
>
> > Oh, maybe the Jacobian has NaNs or Infs even though the last time step
> > survived. Maybe it was going crazy. I'll check.
>
> If that is the case you would use TSSetFunctionDomainError().
&
Oh, maybe the Jacobian has NaNs or Infs even though the last time step
survived. Maybe it was going crazy. I'll check.
On Wed, Nov 1, 2017 at 2:13 PM, Mark Adams wrote:
> Yea, I don't understand the linear solve error:
>
> -ts_monitor -ts_type beuler -pc_type lu -pc_factor
at 2:02 PM, Jed Brown wrote:
> Mark Adams writes:
>
> > On Wed, Nov 1, 2017 at 1:46 PM, Jed Brown wrote:
> >
> >> Mark Adams writes:
> >>
> >> > I have added some code in a TS post step method to look at the number
> of
> >> >
On Wed, Nov 1, 2017 at 1:46 PM, Jed Brown wrote:
> Mark Adams writes:
>
> > I have added some code in a TS post step method to look at the number of
> > nonlinear iterations and cut the time step if it took too many SNES
> > iterations. That helped but now I want to
I have added some code in a TS post step method to look at the number of
nonlinear iterations and cut the time step if it took too many SNES
iterations. That helped but now I want to go one step further and recover
from a failed time step (see appended error message).
How can/should I recover fro
gt;Matt
>
> On Wed, Oct 25, 2017 at 7:59 AM, Mark Adams wrote:
>
>> I want to modify the TS time step, in a post-step function, and would
>> like to get the number of Newton iterations that were used in the time
>> step. I am not seeing how to get that. I see number
I want to modify the TS time step, in a post-step function, and would like
to get the number of Newton iterations that were used in the time step. I
am not seeing how to get that. I see number of linear solver iterations.
I'm sure I am missing something ...
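One way to get at this (a hedged sketch, untested): TSGetSNES gives the SNES used by the time stepper, and SNESGetIterationNumber reports the nonlinear iterations of the most recent solve, so a post-step callback can inspect the count and adjust the step. The callback name `PostStep` and the limit `MAX_SNES_ITS` are assumptions, not from the thread.

```c
#include <petscts.h>

#define MAX_SNES_ITS 8   /* assumed application-specific limit */

/* Sketch: a TS post-step callback that halves the time step when the
   step just taken needed too many nonlinear iterations. */
static PetscErrorCode PostStep(TS ts)
{
  SNES           snes;
  PetscInt       nits;
  PetscReal      dt;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = TSGetSNES(ts, &snes);CHKERRQ(ierr);
  ierr = SNESGetIterationNumber(snes, &nits);CHKERRQ(ierr);
  if (nits > MAX_SNES_ITS) {
    ierr = TSGetTimeStep(ts, &dt);CHKERRQ(ierr);
    ierr = TSSetTimeStep(ts, dt/2);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}
```

The callback would be registered once, before TSSolve, with `TSSetPostStep(ts, PostStep);`.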
There is no support for threaded matrix assembly in PETSc. Here is a recent
email thread on the issue:
https://mail.google.com/mail/u/0/#search/label%3Apetsc+thread+assembly/15f10d078e9ea8e7
So you pretty much have to deal with race conditions yourself. There are
several failure modes with thread
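Since MatSetValues is not thread-safe, the simplest way to deal with the race yourself is to serialize the insertion while keeping the element computation parallel. A minimal sketch under that assumption; `ComputeElement` is a placeholder for thread-safe application code, and serializing insertion of course limits speedup:

```c
#include <petscmat.h>
#include <omp.h>

/* Placeholder: computes one element's indices and dense contribution. */
extern void ComputeElement(PetscInt e, PetscInt idx[], PetscScalar vals[]);

/* Sketch: OpenMP-threaded element loop with the racy MatSetValues call
   guarded by a critical section. */
void AssembleThreaded(Mat A, PetscInt nelem)
{
  PetscInt e;
  #pragma omp parallel for private(e)
  for (e = 0; e < nelem; e++) {
    PetscInt    idx[4];
    PetscScalar vals[16];
    ComputeElement(e, idx, vals);   /* parallel, thread-safe work */
    #pragma omp critical            /* serialize the unsafe insertion */
    MatSetValues(A, 4, idx, 4, idx, vals, ADD_VALUES);
  }
}
```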
ds on
Cori.
Again the code runs fine though. Probably false positives.
On Mon, Oct 16, 2017 at 12:31 PM, Matthew Knepley wrote:
> We had a previous error with pdgssvx in SuperLU I think. Maybe searching
> petsc-maint would get it?
>
>Matt
>
> On Mon, Oct 16, 2017 at 12:21 PM, Ma
in static_schedule() routine, I don't see
> any problem.
>
> Sherry
>
>
>
> On Mon, Oct 16, 2017 at 7:21 AM, Mark Adams wrote:
>
>> FYI, I get this error on one processor with SuperLU under valgrind. Could
>> this just be a valgrind issue?
>>
>>
FYI, I get this error on one processor with SuperLU under valgrind. Could
this just be a valgrind issue?
Mark
/Users/markadams/Codes/petsc/arch-macosx-gnu-g/bin/mpiexec -n 1 valgrind
--dsymutil=yes --leak-check=no --gen-suppressions=no --num-callers=20
--error-limit=no ./ex48 -debug 2 -dim 2 -dm_
Thanks Vaclav,
I've been seeing a valgrind warning in PlexDestroy for a long time.
Mark
On Fri, Oct 13, 2017 at 10:33 AM, Matthew Knepley wrote:
> On Fri, Oct 13, 2017 at 10:26 AM, Vaclav Hapla
> wrote:
>
>> Hello
>>
>> In DMPlexDistribute, when it is run on 1 process and sf != NULL, the
>> out
Hmm, this one fails also, a slightly different problem (using the DM directory):
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMPlexGetTransitiveClosure.html
Maybe Google has old pointers ...
On Thu, Oct 12, 2017 at 12:31 PM, Mark Adams wrote:
> Google is returning (for
; http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/
> DMPlexGetDepthStratum.html
>
> Does some other doc have this incorrect URL?
>
> Satish
>
>
> On Thu, 12 Oct 2017, Mark Adams wrote:
>
> > FYI, it looks like the Plex method web pages are dead,
FYI, it looks like the Plex method web pages are dead, such as:
http://www.mcs.anl.gov/petsc/petsc-current/docs/.../DM/DMPlexGetDepthStratum.html
more experience with OMP + AMG and can give
you a better sense of what to expect.
Mark
On Tue, Oct 10, 2017 at 4:06 PM, Bakytzhan Kallemov
wrote:
>
>
> On 10/10/2017 12:47 PM, Mark Adams wrote:
>
>
>
>
>What are you comparing? Are you using say 32 MPI processes an
putting this back on the list.
On Tue, Oct 10, 2017 at 3:21 PM, Bakytzhan Kallemov
wrote:
>
>
>
> Forwarded Message
> Subject: Re: [petsc-dev] development version for MATAIJMKL type mat
> Date: Tue, 10 Oct 2017 12:18:08 -0700
> From: Bakytzhan Kallemov
> To: Barry Smith
>
>
On Tue, Oct 10, 2017 at 2:50 PM, Barry Smith wrote:
>
> > On Oct 10, 2017, at 10:52 AM, Bakytzhan Kallemov
> wrote:
> >
> > Hi,
> >
> > My name is Baky Kallemov.
> >
> > Currently, I am working on improving a scalibility of the Chombo-Petsc
> interface on cori machine at nersc system.
> >
> > I
07:55 cori01 maint= ~/petsc_install/petsc$ git describe origin/maint
v3.8
07:55 cori01 maint= ~/petsc_install/petsc$
> Satish
>
> On Wed, 27 Sep 2017, Mark Adams wrote:
>
> > I just pulled maint but still g
atish
>
> On Wed, 27 Sep 2017, Mark Adams wrote:
>
> > This message seems to have gone away ...
> >
> > On Wed, Sep 27, 2017 at 8:14 AM, Mark Adams wrote:
> >
> > > I just pulled maint but still get a message that I am out of date:
> > >
> > >
This message seems to have gone away ...
On Wed, Sep 27, 2017 at 8:14 AM, Mark Adams wrote:
> I just pulled maint but still get a message that I am out of date:
>
> 1072 git pull origin
> 1073 h
> 05:12 cori01 maint= ~/petsc_install/petsc$ ../arch-cori-knl-opt-intel.py
&
I just pulled maint but still get a message that I am out of date:
1072 git pull origin
1073 h
05:12 cori01 maint= ~/petsc_install/petsc$ ../arch-cori-knl-opt-intel.py
PETSC_DIR=$PWD
+++
The version of PETSc
Also, I tested this on my Mac (this output) and on Cori at NERSC and the
behavior looked identical.
On Mon, Sep 18, 2017 at 8:44 PM, Mark Adams wrote:
> I get this strange error when I use GAMG, in parallel. I don't see it in
> serial and I don't see it with the default solver.
I get this strange error when I use GAMG, in parallel. I don't see it in
serial and I don't see it with the default solver.
The problem seems to be that I use -vec_view in my code and the guts of Vec
seem to be picking this up and causing havoc. In the past I have added a
prefix to my -[x2_]vec_vi
TSc to MKL much early so if any petsc user will show interest to try this
> functionality we can commit PETSc wrapper early and provide engineering
> build with correspond function.
>
> Thanks,
>
> Alex
>
>
>
>
>
> *From:* Mark Adams [mailto:mfad...@lbl.gov]
>
> *From:* Richard Tran Mills [mailto:rtmi...@anl.gov]
> *Sent:* Thursday, September 14, 2017 11:45 AM
> *To:* Mark Adams
> *Cc:* For users of the development version of PETSc ;
> Kalinkin, Alexander A ; Sokolova, Irina <
> irina.sokol...@intel.com>
> *Subject:* Re: [pe
I don't mind if you wrap two Mat-Mat into the P'AP. Both of my apps are
linear so this is amortized.
Thanks,
>
>
> --Richard
>
> On Thu, Sep 14, 2017 at 11:32 AM, Mark Adams wrote:
>
>> I recall Barry saying that he updated the hypre interface after the
I recall Barry saying that he updated the hypre interface after the last
hypre release, which includes OpenMP. But, I am not finding the email. Can
someone tell me the status of this?
Note, I have two users that are interested in using threads with AMG. I
think we would be interested in testing th
Adams wrote:
> I see, I had a tiny time step that made the Jacobian big and the errors
> small.
>
> On Thu, Aug 17, 2017 at 12:30 AM, Mark Adams wrote:
>
>>
>>
>> On Thu, Aug 17, 2017 at 12:22 AM, Matthew Knepley
>> wrote:
>>
>>> On Wed, A
I see, I had a tiny time step that made the Jacobian big and the errors
small.
On Thu, Aug 17, 2017 at 12:30 AM, Mark Adams wrote:
>
>
> On Thu, Aug 17, 2017 at 12:22 AM, Matthew Knepley
> wrote:
>
>> On Wed, Aug 16, 2017 at 10:14 AM, Mark Adams wrote:
>>
&
On Thu, Aug 17, 2017 at 12:22 AM, Matthew Knepley wrote:
> On Wed, Aug 16, 2017 at 10:14 AM, Mark Adams wrote:
>
>> I just want to check the interpretation of snes_type test. Is this pretty
>> conclusively good?
>>
>
> It looks right, but the entires in the Jacobi
I just want to check the interpretation of snes_type test. Is this pretty
conclusively good?
Testing hand-coded Jacobian, if the ratio is
O(1.e-8), the hand-coded Jacobian is probably correct.
Run with -snes_test_display to show difference
of hand-coded and finite difference Jacobian.
Norm of matr
b/master/travis/install-autotools.sh.
>
> Best,
>
> Jeff
>
> On Wed, Aug 9, 2017 at 9:32 PM, Mark Adams wrote:
>
>> removing the module loads in my .bashrc file fixed it.
>>
>> I asked NERSC about the auto modules, which are needed for p4est,
>>
>> O
removing the module loads in my .bashrc file fixed it.
I asked NERSC about the auto modules, which are needed for p4est,
On Thu, Aug 10, 2017 at 10:56 AM, Mark Adams wrote:
> I commented out these module loads in my .bashrc and it seems to get
> further.
>
> This build does no
I commented out these module loads in my .bashrc and it seems to get
further.
This build does not have p4est. Maybe the module load error broke configure.
On Thu, Aug 10, 2017 at 10:43 AM, Mark Adams wrote:
> But this is a different failure. I am/was able to get p4est to work by
> rer
0, 2017 at 10:27:58AM +0900, Mark Adams wrote:
> > Yea, I saw that. I'll ask NERSC. As I recall the auto stuff was for
> p4est.
> > It seems to die before that.
> >
> > On Thu, Aug 10, 2017 at 10:21 AM, Matthew Knepley
> wrote:
> >
> > > On Wed, Aug 9,
Yea, I saw that. I'll ask NERSC. As I recall the auto stuff was for p4est.
It seems to die before that.
On Thu, Aug 10, 2017 at 10:21 AM, Matthew Knepley wrote:
> On Wed, Aug 9, 2017 at 7:42 PM, Mark Adams wrote:
>
>> NERSC changed some stuff and my configure is crashing. Any
We have a problem that -info seems to be jamming the IO system on jobs with
2K nodes on Cori/KNL. The code seems to hang when -info is used. I
suggested having only process 0 do -info.
Tuomas here, tried something (see below) that looks like it should work,
but apparently it did not.
Does what Tu
On Fri, Jul 14, 2017 at 10:22 AM, Karl Rupp wrote:
>
> it will nonetheless require a lot of convincing that at best they
>> get moderate speed-ups, not the 580+x claimed in some of those early
>> GPU papers...
>>
>>
>> Karli, we are talking about two different things. You are talking
>
>
> it will nonetheless require a lot of convincing that at best they get
> moderate speed-ups, not the 580+x claimed in some of those early GPU
> papers...
>
>
Karli, we are talking about two different things. You are talking about
performance, and I applaud you for that, but I am talking about
On Fri, Jul 14, 2017 at 8:02 AM, Matthew Knepley wrote:
> On Fri, Jul 14, 2017 at 6:53 AM, Mark Adams wrote:
>
>> Karli, this would be great if you could investigate this.
>>
>> A lot of this is driven by desires of DOE programs -- not your monkey not
>> your circ
Karli, this would be great if you could investigate this.
A lot of this is driven by desires of DOE programs -- not your monkey not
your circus -- but I think that we need to have a story for how to use
GPUs, or whatever apps in our funding community want to do, and tell it
dispassionately. We don
I hear Hypre has support for GPUs in a May release. Any word on the status
of using it in PETSc?
And we discussed interfacing to AMGx, which is complicated (precluded?) by
not releasing source. Anything on the potential of interfacing to AMGx? I
think it would be great to make this available. It
17 2:18:51 PM EDT, Barry Smith wrote:
> >>
> >> p4est configure should provide an option to not run compiled code and
> >> instead have needed values passed in as configure arguments.
> >>
> >>> On Jul 13, 2017, at 1:07 PM, Matthew Knepley
> >> wr
;
> Mark, gets us /global/u2/m/madams/petsc/arch-cori-knl-opt-intel/
> externalpackages/git.p4est/config.log
>
> Thanks,
>
> Matt
>
>
> On Thu, Jul 13, 2017 at 11:15 AM, Mark Adams wrote:
>
>> I always get this error on Cori at NERSC, KNL, but I ca
Yes, I submitted jobs before I went to bed last night (in the UK) and it
configured and built. So far so good.
Thanks,
On Fri, Jun 30, 2017 at 11:32 PM, Richard Tran Mills
wrote:
>
>
> On Fri, Jun 30, 2017 at 3:27 PM, Balay, Satish wrote:
>
>> On Fri, 30 Jun 2017, Richard Tran Mills wrote:
>>
>>
Argh, thanks, that is the/a problem.
On Fri, Jun 30, 2017 at 2:16 PM, Satish Balay wrote:
> https://www.nersc.gov/users/computational-systems/cori/
> running-jobs/running-jobs-on-cori-faq/
>
> Perhaps you need to specify the knl partition to srun?
>
> Satish
>
> On Fri,
thew Knepley
> wrote:
> >
> > > On Tue, Jun 27, 2017 at 6:22 PM, Mark Adams wrote:
> > >
> > >> It looks like the target (runex56) for snes/ex56.c is not in the
> makefile.
> > >> Anyone know why that happened?
> > >>
> > >
>
It looks like the target (runex56) for snes/ex56.c is not in the makefile.
Anyone know why that happened?
change my logic. Not a big deal. (I could also
look at Telescope parameters in the process and try to align the two)
>
> Hong
>
> On Tue, Jun 27, 2017 at 9:46 AM, Mark Adams wrote:
>
>>
>>
>> On Tue, Jun 27, 2017 at 8:35 AM, Matthew Knepley
>> w
On Tue, Jun 27, 2017 at 8:35 AM, Matthew Knepley wrote:
> On Tue, Jun 27, 2017 at 6:36 AM, Mark Adams wrote:
>
>> In talking with Garth, this will not work.
>>
>> I/we am now thinking that we should replace the MG object with Telescope.
>> Telescope seems to be d
good idea? Am I missing anything important?
Mark
On Tue, Jun 27, 2017 at 4:48 AM, Mark Adams wrote:
> Parallel coarse grid solvers are a bit broken at large scale where you
> don't want to use all processors on the coarse grid. The ideal thing might
> be to create a sub communicator
Parallel coarse grid solvers are a bit broken at large scale where you
don't want to use all processors on the coarse grid. The ideal thing might
be to create a sub communicator, but it's not clear how to integrate this
in (eg, check if the sub communicator exists before calling the coarse grid
sol
>
>
> What ex48?
>
knepley/feature-plasma-example=
~/Codes/petsc/src/ts/examples/tutorials/ex48.c
>
>
> Its missing. We will have to put in a DMPlexTSComputeRHSFunctionFEM().
>
OK, I can add it to ex48 when it is at least roughed in.
I am having problems with explicit + FEM. I set the DM with something like:
ierr = DMTSSetRHSFunctionLocal(dm, DMPlexTSComputeRHSFunctionFVM, &ctx);CHKERRQ(ierr);
But DMPlexTSComputeRHSFunctionFVM calls DMPlexComputeResidual_Internal
without a "locX_t" so there is no time derivative and so th
Is there a way to get a stack trace, when you get a segv, inside of a void
method? (Plex point functions)
Merge branch 'maint'
>
> commit c7820e35e37acd21255cd00e399111b9de215482
> Author: Satish Balay
> Date: Sun Mar 5 18:03:42 2017 -0600
>
> superlu: libray is installed in PREFIX/lib64 - fix this to use
> PREFIX/lib
>
> Reported-by: Ju Liu
> b
> On Mon, 24 Apr 2017, Mark Adams wrote:
>
> > I get this error, is there a superLU built on CG that I should use?
> >
>
>
> PetscOptionsSetValue(NULL,"-vecscatter_alltoall","true");
> VecScatterCreate...
> PetscOptionsClearValue(NULL,"-vecscatter_alltoall")
>
>You need to possibly change it slightly for different PETSc versions or
> Fortran.
>
> Please let us know how it goes,
This worked,
Thanks
>
> Mark, do you actually see 'cray-udreg.pc' in PKG_CONFIG_PATH?
>
No, I see an .so. I don't see cray-udreg.pc anywhere.
I submitted a report to NERSC.
et set for both the front end and the compute nodes.
>
> Barry
>
>> On Mar 11, 2017, at 11:55 AM, Mark Adams wrote:
>>
>> Well, I get the same error now with testing. I will ask NERSC.
>> PKG_CONFIG_PATH does have a path to libudreg.so.0.2.3, but that does
>&
uired by 'mpich', not found
/global/homes/m/madams/petsc_install/petsc-cori-knl-opt64-intel/lib/petsc/conf/rules:399:
recipe for target 'ex19.o' failed
gmake[3]: *** [ex19.o] Error 1
On Sat, Mar 11, 2017 at 12:17 PM, Mark Adams wrote:
> I tried running in batch=1 and that se
nt main() {
> ;
> return 0;
> }
> Popping language C
> Error testing C compiler: Cannot compile C with cc.
> Deleting "CC"
>
>Matt
>
> On Sat, Mar 11, 2017 at 8:43 AM, Mark Adams wrote:
>>
>> I have been
I have been using Cori/KNL with Intel MPI and want to move to cray-mpi
and am having an error with cc.
06:42 1 cori06 maint= ~/petsc_install/petsc$ cc --version
icc (ICC) 17.0.1 20161005
Copyright (C) 1985-2016 Intel Corporation. All rights reserved.
configure.log
Description: Binary data
> Matt is right,
>
> You should definitely try this before writing additional code. But you
> need to put it in the code so it affects just this one scatter, not all the
> scatters. So in the place where you create this "all to all" vector scatter
> do the following.
>
> PetscOptio
>Ok, in this situation VecScatter cannot detect that it is an all to all so
> will generate a message from each process to each other process. Given my
> past experience with Cray MPI (why do they even have their own MPI when Intel
> provides one; in fact why does Cray even exist when they j
>
> Is the scatter created with VecScatterCreateToAll()? If so, internally
> the VecScatterBegin/End will use VecScatterBegin_MPI_ToAll() which then uses
> a MPI_Allgatherv() to do the communication. You can check in the debugger
> for this (on 2 processes) by just putting a break point in
> -Tuomas
>
>
>
> On 3/8/17 16:29, Barry Smith wrote:
>>
>>Mark,
>>
>> Are you getting this with PETSc 3.7.5 ? Is the code valgrinded?
>>
>>
>>> On Mar 8, 2017, at 6:27 PM, Mark Adams wrote:
>>>
>>> On Wed,
; with threaded codes that call PETSc.
>>
>> It is OMP threaded, but it should certainly not call PETSc inside of a
>> thread loop... but this does look like something that threading could
>> cause.
>>
>>
>>>
>>> --Richard
>>>
>
call PETSc inside of a
thread loop... but this does look like something that threading could
cause.
>
> --Richard
>
> On Wed, Mar 8, 2017 at 1:33 PM, Mark Adams wrote:
>>
>> Our code is having scaling problems on KNL (Cori), when we get up to
>> about 1K sockets.
>&g
Our code is having scaling problems on KNL (Cori), when we get up to
about 1K sockets.
We have isolated the problem to a certain VecScatter. This code stores
the data redundantly. Scattering into the solver is just a local copy,
but scattering out requires that each process send all of its data to