Re: [petsc-users] local to global mapping for DMPlex

2017-12-19 Thread Matthew Knepley
On Tue, Dec 19, 2017 at 11:40 AM, Yann JOBIC  wrote:

> Hello,
>
> We want to extract the cell connectivity from a DMPlex. We have no problem
> for a sequential run.
>

Do you want it on disk? If so, you can just use DMView() with an HDF5 viewer. That
outputs the connectivity in a global numbering.
I can show you the calls I use inside if you want. I usually put

  DMViewFromOptions(dm, NULL, "-dm_view");

Then

  -dm_view hdf5:mesh.h5
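
For reference, the same thing done explicitly in code is only a few calls. A
minimal sketch (assumes PETSc was built with HDF5 support, e.g. --download-hdf5):

  #include <petscdmplex.h>
  #include <petscviewerhdf5.h>

  /* Sketch: write a DMPlex (connectivity in global numbering) to an HDF5 file. */
  PetscErrorCode WriteMeshHDF5(DM dm, const char filename[])
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = PetscViewerHDF5Open(PetscObjectComm((PetscObject) dm), filename,
                               FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
    ierr = DMView(dm, viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }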

  Thanks,

Matt


> However, for parallel runs we need to get the node numbering in the global
> ordering: once the mesh is distributed, we only have local nodes, and thus
> only a local numbering.
>
> It seems that we should use DMGetLocalToGlobalMapping (we are using
> Fortran with PETSc 3.8p3). However, we get the following runtime error:
>
> [0]PETSC ERROR: No support for this operation for this object type
> [0]PETSC ERROR: DM can not create LocalToGlobalMapping
>
> Is this the right way to do it?
>
> Many thanks,
>
> Regards,
>
> Yann
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] local to global mapping for DMPlex

2017-12-19 Thread Yann JOBIC

Hello,

We want to extract the cell connectivity from a DMPlex. We have no 
problem for a sequential run.


However, for parallel runs we need to get the node numbering in the global
ordering: once the mesh is distributed, we only have local nodes, and thus
only a local numbering.


It seems that we should use DMGetLocalToGlobalMapping (we are using
Fortran with PETSc 3.8p3). However, we get the following runtime error:


[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: DM can not create LocalToGlobalMapping

Is this the right way to do it?

Many thanks,

Regards,

Yann
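
For reference (not from the thread): the DMGetLocalToGlobalMapping error above is
typically raised when the DM has no PetscSection (data layout) attached yet, since
for a DMPlex that mapping is built from the local and global sections. For pure
connectivity, an alternative is the global numbering ISs that Plex provides. A
minimal C sketch, assuming DMPlexGetVertexNumbering() (available in PETSc 3.8) and
the -(global+1) encoding it uses for unowned (ghost) points:

  #include <petscdmplex.h>

  /* Sketch: print each local cell's vertices in global numbering. */
  PetscErrorCode PrintGlobalConnectivity(DM dm)
  {
    IS              globalVertexNumbers;
    const PetscInt *gv;
    PetscInt        cStart, cEnd, vStart, vEnd, c;
    MPI_Comm        comm;
    PetscErrorCode  ierr;

    PetscFunctionBeginUser;
    ierr = PetscObjectGetComm((PetscObject) dm, &comm);CHKERRQ(ierr);
    ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* cells    */
    ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr);  /* vertices */
    ierr = DMPlexGetVertexNumbering(dm, &globalVertexNumbers);CHKERRQ(ierr);
    ierr = ISGetIndices(globalVertexNumbers, &gv);CHKERRQ(ierr);
    for (c = cStart; c < cEnd; ++c) {
      PetscInt *closure = NULL, numPoints, p;

      ierr = DMPlexGetTransitiveClosure(dm, c, PETSC_TRUE, &numPoints, &closure);CHKERRQ(ierr);
      for (p = 0; p < 2*numPoints; p += 2) {           /* (point, orientation) pairs */
        const PetscInt point = closure[p];
        PetscInt       gvert;

        if (point < vStart || point >= vEnd) continue; /* keep only vertices */
        gvert = gv[point - vStart];
        if (gvert < 0) gvert = -(gvert + 1);           /* decode unowned (ghost) point */
        ierr = PetscSynchronizedPrintf(comm, "cell %D vertex %D\n", c, gvert);CHKERRQ(ierr);
      }
      ierr = DMPlexRestoreTransitiveClosure(dm, c, PETSC_TRUE, &numPoints, &closure);CHKERRQ(ierr);
    }
    ierr = PetscSynchronizedFlush(comm, PETSC_STDOUT);CHKERRQ(ierr);
    ierr = ISRestoreIndices(globalVertexNumbers, &gv);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

The cell index printed here is still the local one; DMPlexGetCellNumbering() gives
the analogous global cell numbers, with the same encoding for unowned cells.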



Re: [petsc-users] configure fails with batch+scalapack

2017-12-19 Thread Santiago Andres Triana
Epilogue:

I was able to complete the configuration and compilation using an
interactive session on one compute node. Indeed, there was no need for
the --with-batch option.

However, at run time, SGI MPT's mpiexec_mpt (required by the job
scheduler on this cluster) throws a cryptic error: "Cannot find executable: -f".
It does not seem to be PETSc-specific, though, as other MPI programs also fail.

In any case I would like to thank you all for the prompt help!

Santiago
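
(For what it's worth: if the launcher itself turns out to be the issue, PETSc can be
told at configure time which launcher to use for its tests, e.g.

  ./configure --with-mpiexec=mpiexec_mpt ...

so that 'make test' invokes the same launcher the scheduler expects. Whether
mpiexec_mpt can run outside a batch job on this cluster is an assumption, not
something verified in the thread.)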

On Mon, Dec 18, 2017 at 1:03 AM, Smith, Barry F.  wrote:

>
>   Configure runs fine. When it runs fine there is absolutely no reason to run
> it with --with-batch.
>
>    'make test' fails because it cannot launch parallel jobs directly with the
> mpiexec it is using.
>
>You need to determine how to submit jobs on this system and then you
> are ready to go.
>
>Barry
>
>
> > On Dec 17, 2017, at 4:55 PM, Santiago Andres Triana wrote:
> >
> > Thanks for your quick responses!
> >
> > Attached is the configure.log obtained without using the --with-batch
> > option. It configures without errors but fails at the 'make test' stage. A
> > snippet of the output with the error (which I attributed to the job
> > manager) is:
> >
> >
> >
> > >   Local host:  hpca-login
> > >   Registerable memory: 32768 MiB
> > >   Total memory:65427 MiB
> > >
> > > Your MPI job will continue, but may be behave poorly and/or hang.
> > > --------------------------------------------------------------------------
> > 3c25
> > < 0 KSP Residual norm 0.239155
> > ---
> > > 0 KSP Residual norm 0.235858
> > 6c28
> > < 0 KSP Residual norm 6.81968e-05
> > ---
> > > 0 KSP Residual norm 2.30906e-05
> > 9a32,33
> > > [hpca-login:38557] 1 more process has sent help message help-mpi-btl-openib.txt / reg mem limit low
> > > [hpca-login:38557] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
> > /home/trianas/petsc-3.8.3/src/snes/examples/tutorials
> > Possible problem with ex19_fieldsplit_fieldsplit_mumps, diffs above
> > =
> > Possible error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process
> > See http://www.mcs.anl.gov/petsc/documentation/faq.html
> > --------------------------------------------------------------------------
> > WARNING: It appears that your OpenFabrics subsystem is configured to only
> > allow registering part of your physical memory.  This can cause MPI jobs to
> > run with erratic performance, hang, and/or crash.
> >
> > This may be caused by your OpenFabrics vendor limiting the amount of
> > physical memory that can be registered.  You should investigate the
> > relevant Linux kernel module parameters that control how much physical
> > memory can be registered, and increase them to allow registering all
> > physical memory on your machine.
> >
> > See this Open MPI FAQ item for more information on these Linux kernel module
> > parameters:
> >
> > http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
> >
> >   Local host:  hpca-login
> >   Registerable memory: 32768 MiB
> >   Total memory:65427 MiB
> >
> > Your MPI job will continue, but may be behave poorly and/or hang.
> > --------------------------------------------------------------------------
> > Number of SNES iterations = 4
> > Completed test examples
> > =
> > Now to evaluate the computer systems you plan use - do:
> > make PETSC_DIR=/home/trianas/petsc-3.8.3 PETSC_ARCH=arch-linux2-c-debug streams
> >
> >
> >
> >
> > On Sun, Dec 17, 2017 at 11:32 PM, Matthew Knepley wrote:
> > On Sun, Dec 17, 2017 at 3:29 PM, Santiago Andres Triana <rep...@gmail.com> wrote:
> > Dear petsc-users,
> >
> > I'm trying to install petsc in a cluster that uses a job manager.  This
> is the configure command I use:
> >
> > ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex
> > --with-mumps=1 --download-mumps --download-parmetis
> > --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl
> > --download-metis --with-scalapack=1 --download-scalapack --with-batch
> >
> > This fails when including the option --with-batch together with --download-scalapack:
> >
> > We need configure.log
> >
> > ===============================================================================
> >              Configuring PETSc to compile on your system
> > ===============================================================================
> > TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:158)
> > *******************************************************************************
> >          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
> > -------------------------------------------------------------------------------
> > Unable to find scalapack in