Re: [petsc-dev] MPI_UB is deprecated in MPI-2.0

2019-03-21 Thread Zhang, Junchao via petsc-dev
I pushed an update to this branch, which adopts MPI_Type_create_resized.
--Junchao Zhang
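
For archive readers, a minimal sketch of the pattern under discussion, using a
hypothetical struct whose field types mirror the blockTypes array in the warning
quoted below (this is not the actual pforest code): describe the fields with
MPI_Type_create_struct and fix the extent with MPI_Type_create_resized instead
of terminating the type with MPI_UB.

    #include <mpi.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {        /* hypothetical layout, for illustration only */
      int32_t a;
      int8_t  b;
      int16_t c;
      int32_t d;
    } Point;

    static int PointDatatype(MPI_Datatype *newtype)
    {
      MPI_Datatype blockTypes[4] = {MPI_INT32_T, MPI_INT8_T, MPI_INT16_T, MPI_INT32_T};
      int          blockLens[4]  = {1, 1, 1, 1};
      MPI_Aint     disps[4]      = {offsetof(Point, a), offsetof(Point, b),
                                    offsetof(Point, c), offsetof(Point, d)};
      MPI_Datatype tmp;

      MPI_Type_create_struct(4, blockLens, disps, blockTypes, &tmp);
      /* Pin the extent to sizeof(Point) so consecutive array elements stride
         correctly; this replaces the deprecated MPI_UB end marker. */
      MPI_Type_create_resized(tmp, 0, (MPI_Aint)sizeof(Point), newtype);
      MPI_Type_free(&tmp);
      return MPI_Type_commit(newtype);
    }

    /* Toby's stopgap is equivalent for pure data movement (e.g. with PetscSF),
       at the cost of losing per-field type information:
         MPI_Type_contiguous((int)sizeof(Point), MPI_BYTE, newtype);  */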


On Tue, Mar 19, 2019 at 11:56 AM Balay, Satish via petsc-dev
<petsc-dev@mcs.anl.gov> wrote:
For now I'm merging this branch to next. If a better fix comes up later, we
can merge it then.

thanks,
Satish

On Wed, 13 Mar 2019, Isaac, Tobin G wrote:

>
> Pushed a fix that just uses MPI_Type_contiguous(MPI_BYTE, sizeof(),
> ...), which is not great but I'm only creating the type to work with
> PetscSF, so it does the job.  Satish, do you want this as a pull
> request, or can you just merge it into next
> (`tisaac/feature-remove-mpi-ub`)?
>
> Thanks,
>   Toby
>
> On Tue, Mar 12, 2019 at 10:21:42PM -0600, Jed Brown wrote:
> > MPI_Type_create_resized (if needed).
> >
> > "Balay, Satish via petsc-dev" 
> > mailto:petsc-dev@mcs.anl.gov>> writes:
> >
> > > http://ftp.mcs.anl.gov/pub/petsc/nightlylogs/archive/2019/03/01/make_master_arch-linux-pkgs-64idx_thrash.log
> > > has the following [but for some reason it's filtered out from the
> > > warning count]
> > >
> > 
> > > In file included from 
> > > /sandbox/petsc/petsc.master-3/src/dm/impls/forest/p4est/dmp4est.c:13:0:
> > > /sandbox/petsc/petsc.master-3/src/dm/impls/forest/p4est/pforest.c: In 
> > > function ‘DMPforestGetTransferSF_Point’:
> > > /sandbox/petsc/petsc.master-3/src/dm/impls/forest/p4est/pforest.c:2518:7: 
> > > warning: ‘ompi_mpi_ub’ is deprecated (declared at 
> > > /sandbox/petsc/petsc.master-3/arch-linux-pkgs-64idx/include/mpi.h:928): 
> > > MPI_UB is deprecated in MPI-2.0 [-Wdeprecated-declarations]
> > >MPI_Datatype blockTypes[5] = 
> > > {MPI_INT32_T,MPI_INT8_T,MPI_INT16_T,MPI_INT32_T,MPI_UB};
> > > <<
> > >
> > > Any idea how to fix this?
> > >
> > > Thanks,
> > > Satish
>


Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-21 Thread Mark Adams via petsc-dev
>
>
> Could you explain this more by adding some small examples?
>
>
Since you are considering implementing all-at-once (four nested loops,
right?), I'll give you my old code.

This code is hardwired for two AMG and for a geometric AMG, where the blocks
of the R (and hence P) matrices are scaled identities and I only store the
scale, so you can ignore those branches. This code also does
equivalent-real-form complex, so there are more branches to ignore.
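
For archive readers, a generic sketch of the four-nested-loop all-at-once idea
(this is not the attached code; the plain CSR arrays and the dense coarse
accumulator are illustrative assumptions, and a real parallel implementation
would use sparse accumulators and off-process rows of P):

    #include <petscsys.h>

    /* Accumulate C = P^T A P directly from CSR data, never forming A*P.
       A is nf x nf, P is nf x nc; Ai/Aj/Aa and Pi/Pj/Pa are 0-based CSR arrays;
       C is a dense nc x nc accumulator, assumed zeroed by the caller. */
    static void RAPAllAtOnce(PetscInt nf, PetscInt nc,
                             const PetscInt *Ai, const PetscInt *Aj, const PetscScalar *Aa,
                             const PetscInt *Pi, const PetscInt *Pj, const PetscScalar *Pa,
                             PetscScalar *C)
    {
      for (PetscInt i = 0; i < nf; ++i)                    /* loop 1: rows of A         */
        for (PetscInt k = Ai[i]; k < Ai[i+1]; ++k) {       /* loop 2: nonzeros A(i,j)   */
          const PetscInt j = Aj[k];
          for (PetscInt ip = Pi[i]; ip < Pi[i+1]; ++ip)    /* loop 3: P(i,I), i.e. P^T  */
            for (PetscInt jp = Pi[j]; jp < Pi[j+1]; ++jp)  /* loop 4: P(j,J)            */
              C[Pj[ip]*nc + Pj[jp]] += Pa[ip] * Aa[k] * Pa[jp];
        }
    }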


[Attachment: prom_mat_prod.C (binary data)]


Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-21 Thread Fande Kong via petsc-dev
Hi Mark,

Thanks for your email.

On Thu, Mar 21, 2019 at 6:39 AM Mark Adams via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:

> I'm probably screwing up some sort of history by jumping into dev, but
> this is a dev comment ...
>
>> (1) -matptap_via hypre: This calls the hypre package to do the PtAP through
>> an all-at-once triple product. In our experience, it is the most memory
>> efficient, but could be slow.
>>
>
> FYI,
>
> I visited LLNL in about 1997 and told them how I did RAP. Simple 4 nested
> loops. They were very interested. Clearly they did it this way after I
> talked to them. This approach came up here a while back (e.g., we should
> offer this as an option).
>
> Anecdotally, I don't see a noticeable difference in performance on my 3D
> elasticity problems between my old code (still used by the bone modeling
> people) and ex56 ...
>

You may not see a difference when the problem is small.  What I observed is
that the HYPRE PtAP is ten times slower than the PETSc scalable PtAP when we
ran a 3-billion-unknown problem on 10K processor cores.
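
For reference, a minimal sketch of how the triple product is formed and how the
algorithm is selected at runtime; the option values are the ones named in this
thread, while the program name, function name, and assembled MPIAIJ inputs are
placeholders:

    #include <petscmat.h>

    /* Form C = P^T * A * P; the PtAP implementation is chosen at runtime, e.g.
         ./app -matptap_via hypre      (hypre all-at-once product, as above)
         ./app -matptap_via scalable   (the "PETSc scalable PtAP" referred to above) */
    static PetscErrorCode FormCoarseOperator(Mat A, Mat P, Mat *C)
    {
      PetscErrorCode ierr;
      ierr = MatPtAP(A, P, MAT_INITIAL_MATRIX, PETSC_DEFAULT, C);CHKERRQ(ierr);
      return 0;
    }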


>
> My kernel is an unrolled dense matrix triple product. I doubt Hypre did
> this. It ran at about 2x+ the flop rate of the mat-vec at scale on the SP3
> in 2004.
>

Could you explain this more by adding some small examples?

I am profiling the current PETSc algorithms on some real simulations. If the
PETSc PtAP still takes more memory than desired with my fix
(https://bitbucket.org/petsc/petsc/pull-requests/1452), I am going to
implement the all-at-once triple product, dropping all intermediate data. If
you have any documents (besides the code you posted before), they would be a
great help.

Fande,


> Mark
>
>


Re: [petsc-dev] [petsc-maint] PETSc release by March 29, 2019

2019-03-21 Thread Balay, Satish via petsc-dev
On Tue, 5 Mar 2019, Balay, Satish via petsc-maint wrote:

>   perhaps starting March 18 - freeze access to next - and keep
>   recreating next & next-tmp dynamically as needed

A note: I've restricted access to 'next' so that the above workflow
can be used for the release [if needed].

Satish


Re: [petsc-dev] [petsc-maint] PETSc release by March 29, 2019

2019-03-21 Thread Balay, Satish via petsc-dev
A reminder!

Also, please check and update src/docs/website/documentation/changes/dev.html

thanks,
Satish

On Tue, 5 Mar 2019, Balay, Satish via petsc-maint wrote:

> Sure - I would add caveats such as:
> 
> - it's best to submit PRs early if they are critical [i.e., if the
> branch should be in the release], or if they are big and likely to
> break builds.
> 
> - we should somehow use both next and next-tmp in a way that avoids some
>   PRs clogging the process for others.
> 
>   perhaps starting March 18 - freeze access to next - and keep
>   recreating next & next-tmp dynamically as needed with the goal of
>   testing fewer branches together (ideally 1 branch at a time) - so
>   that we can:
> 
>* easily identify the branch corresponding to test failures and
>* easily identify branches that are ready for graduation.
> 
> - We should accept (minor?) bug-fix PRs even after March 22 [i.e.,
>   anything that would be acceptable in our maint workflow shouldn't
>   be frozen]
> 
> - And we should be able to drop troublesome PRs if they are blocking
>   the release.
> 
> Satish
> 
> On Tue, 5 Mar 2019, Karl Rupp via petsc-dev wrote:
> 
> > Dear PETSc developers,
> > 
> > let me suggest Friday, March 22, as the cut-off date for new Pull Requests
> > for the upcoming release. This allows for 7 days to iron out any remaining
> > glitches. (It only took us a few days to release after the cut-off date
> > last September, so this should be fine.)
> > 
> > Also, a clearly communicated cut-off date helps to prevent "may I also
> > squeeze this in at the very last minute" PRs, which I may not have the
> > time to deal with anyway.
> > 
> > Satish, does the above schedule work for you? Since you're creating the
> > tarballs, you've got the final word on this :-)
> > 
> > Best regards,
> > Karli
> > 
> > 
> > 
> > 
> > On 3/4/19 4:31 AM, Smith, Barry F. via petsc-dev wrote:
> > > 
> > >    Due to ECP deliverables there will be a PETSc release by March 29, 
> > > 2019.
> > > 
> > >    Please prepare materials you wish to get into the release soon and
> > > check on the progress of your current pull requests to make sure they do
> > > not block beyond the release deadline.
> > > 
> > >      Thanks
> > > 
> > >       Barry
> > > 
> > >    If someone would like to propose an intermediate deadline before the
> > > 29th for testing/etc. purposes, please feel free; I don't have the energy
> > > or initiative.
> > > 
> > > 
> > >> Begin forwarded message:
> > >>
> > >> *From:* Jed Brown via petsc-maint
> > >> *Subject:* Re: [petsc-maint] Release 3.11?
> > >> *Date:* March 3, 2019 at 10:07:26 AM CST
> > >> *To:* "Munson, Todd" <tmun...@mcs.anl.gov>
> > >> *Cc:* petsc-maint
> > >> *Reply-To:* Jed Brown <j...@jedbrown.org>
> > >>
> > >> Can you, or someone else involved at that level, please propose a
> > >> timeline on petsc-dev?
> > >>
> > >> "Munson, Todd" mailto:tmun...@mcs.anl.gov>> writes:
> > >>
> > >>> Hi Jed,
> > >>>
> > >>> Yes, we have a funding milestone due at the end of this month, so we
> > >>> should push out a release.
> > >>>
> > >>> Thanks, Todd.
> > >>>
> >  On Mar 2, 2019, at 11:36 PM, Jed Brown <j...@jedbrown.org> wrote:
> > 
> >  Is there a funding milestone to release 3.11 this month?  If so, we need
> >  to publicize a timeline and mention it on petsc-dev.  If not, we can do a
> >  feature release whenever we feel ready, but probably in the next few
> >  months.
> > > 
> > 
> 


Re: [petsc-dev] [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-21 Thread Mark Adams via petsc-dev
I'm probably screwing up some sort of history by jumping into dev, but this
is a dev comment ...

> (1) -matptap_via hypre: This calls the hypre package to do the PtAP through
> an all-at-once triple product. In our experience, it is the most memory
> efficient, but could be slow.
>

FYI,

I visited LLNL in about 1997 and told them how I did RAP. Simple 4 nested
loops. They were very interested. Clearly they did it this way after I
talked to them. This approach came up here a while back (e.g., we should
offer this as an option).

Anecdotally, I don't see a noticeable difference in performance on my 3D
elasticity problems between my old code (still used by the bone modeling
people) and ex56 ...

My kernel is an unrolled dense matrix triple product. I doubt Hypre did
this. It ran at about 2x+ the flop rate of the mat-vec at scale on the SP3
in 2004.
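
For archive readers, a sketch of what an unrolled dense block triple product
can look like (a generic illustration with an assumed fixed block size, not the
kernel from Mark's code): with constant trip counts the compiler can fully
unroll the loops, which is the usual reason such dense block kernels run well
above sparse mat-vec flop rates.

    #include <petscsys.h>

    #define BS 3  /* illustrative block size; the real kernel's size is not given here */

    /* C += R * A * P for dense BS x BS blocks.  AP is a small local temporary;
       everything is fixed-size so the loops unroll completely. */
    static void BlockRAP(PetscScalar R[BS][BS], PetscScalar A[BS][BS],
                         PetscScalar P[BS][BS], PetscScalar C[BS][BS])
    {
      PetscScalar AP[BS][BS];
      for (int i = 0; i < BS; ++i)
        for (int j = 0; j < BS; ++j) {
          PetscScalar s = 0.0;
          for (int k = 0; k < BS; ++k) s += A[i][k] * P[k][j];
          AP[i][j] = s;
        }
      for (int i = 0; i < BS; ++i)
        for (int j = 0; j < BS; ++j) {
          PetscScalar s = 0.0;
          for (int k = 0; k < BS; ++k) s += R[i][k] * AP[k][j];
          C[i][j] += s;
        }
    }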

Mark