Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
>
> If jsrun is not functional from configure, alternatives are
> --with-mpiexec=/bin/true or --with-batch=1
>
>
--with-mpiexec=/bin/true  seems to be working.

Thanks,
Mark


> Satish
>


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> On Wed, Sep 25, 2019 at 8:40 PM Balay, Satish  wrote:
> 
> > > Unable to run jsrun -g 1 with option "-n 1"
> > > Error: It is only possible to use js commands within a job allocation
> > > unless CSM is running
> >
> >
> > Nope  this is a different error message.
> >
> > The message suggests - you can't run 'jsrun -g 1 -n 1 binary' Can you try
> > this manually and see
> > what you get?
> >
> > jsrun -g 1 -n 1 printenv
> >
> 
> I tested this earlier today and originally when I was figuring out the/a
> minimal run command:
> 
> 22:08  /gpfs/alpine/geo127/scratch/adams$ jsrun -g 1 -n 1 printenv
> GIT_PS1_SHOWDIRTYSTATE=1
> XDG_SESSION_ID=494
> SHELL=/bin/bash
> HISTSIZE=100
> PETSC_ARCH=arch-summit-opt64-pgi-cuda
> SSH_CLIENT=160.91.202.152 48626 22
> LC_ALL=
> USER=adams

from configure.log:

>
Executing: jsrun -g 1 -n 1 printenv

Unable to run jsrun -g 1 with option "-n 1"
Error: It is only possible to use js commands within a job allocation unless 
CSM is running
09-25-2019 22:11:56:169 68747 main: Error initializing RM connection. Exiting.
<

It's the exact same command. I don't know why it would work from the shell for you 
but not from configure.

If jsrun is not functional from configure, alternatives are 
--with-mpiexec=/bin/true or --with-batch=1

Satish
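
The reason --with-mpiexec=/bin/true can stand in as a no-op launcher is that /bin/true accepts any argument list and always exits 0, so configure's "can I launch a binary?" checks succeed without actually running anything. A quick illustration (the ./conftest name is a placeholder here, not PETSc's actual test binary):

```shell
# /bin/true ignores whatever arguments configure appends and reports
# success, so launcher-style checks cannot fail:
/bin/true -n 1 ./conftest
echo $?    # prints: 0
```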


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
On Wed, Sep 25, 2019 at 8:40 PM Balay, Satish  wrote:

> > Unable to run jsrun -g 1 with option "-n 1"
> > Error: It is only possible to use js commands within a job allocation
> > unless CSM is running
>
>
> Nope  this is a different error message.
>
> The message suggests - you can't run 'jsrun -g 1 -n 1 binary' Can you try
> this manually and see
> what you get?
>
> jsrun -g 1 -n 1 printenv
>

I tested this earlier today and originally when I was figuring out the/a
minimal run command:

22:08  /gpfs/alpine/geo127/scratch/adams$ jsrun -g 1 -n 1 printenv
GIT_PS1_SHOWDIRTYSTATE=1
XDG_SESSION_ID=494
SHELL=/bin/bash
HISTSIZE=100
PETSC_ARCH=arch-summit-opt64-pgi-cuda
SSH_CLIENT=160.91.202.152 48626 22
LC_ALL=
USER=adams
 ...


>
> Satish
>
>
> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
>
> > On Wed, Sep 25, 2019 at 6:23 PM Balay, Satish  wrote:
> >
> > > > 18:16 (cb53a04...) ~/petsc-karl$
> > >
> > > So this is the commit I recommended you test against - and that's what
> > > you have got now. Please go ahead and test.
> > >
> > >
> > I sent the log for this. This is the output:
> >
> > 18:16 (cb53a04...) ~/petsc-karl$ ../arch-summit-opt64idx-pgi-cuda.py
> > PETSC_DIR=$PWD
> >
> ===
> >  Configuring PETSc to compile on your system
> >
> >
> ===
> >
> ===
> >
> > * WARNING: F77 (set to
> >
> /autofs/nccs-svm1_sw/summit/.swci/0-core/opt/spack/20180914/linux-rhel7-ppc64le/gcc-4.8.5/pgi-19.4-6acz4xyqjlpoaonjiiqjme2aknrfnzoy/linux
> >   use ./configure F77=$F77 if you really want to use that value
> **
> >
> >
> >
> ===
> >
> >
> >
> ===
> >
> > * WARNING: Using default optimization C flags -O
> >
> >You might consider manually
> setting
> > optimal optimization flags for your system with
> >
> >  COPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for
> > examples
> >
> >
> ===
> >
> >
> >
> ===
> >
> > * WARNING: You have an older version of Gnu make,
> > it will work,
> > but may not support all the
> > parallel testing options. You can install the
> >   latest
> > Gnu make with your package manager, such as brew or macports, or use
> >
> > the --download-make option to get the latest Gnu make warning
> > message *
> >
> >
> ===
> >
> >   TESTING: configureMPIEXEC from
> > config.packages.MPI(config/BuildSystem/config/packages/MPI.py:174)
> >
> >
> ***
> >  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for
> > details):
> >
> ---
> > Unable to run jsrun -g 1 with option "-n 1"
> > Error: It is only possible to use js commands within a job allocation
> > unless CSM is running
> > 09-25-2019 18:20:13:224 108023 main: Error initializing RM connection.
> > Exiting.
> >
> ***
> >
> > 18:20 1 (cb53a04...) ~/petsc-karl$
> >
> >
> > > [note: the branch is rebased - so 'git pull' won't work -(as you can
> > > see from the "(forced update)" message - and '<>' status from git
> > > prompt on balay/fix-mpiexec-shell-escape). So perhaps its easier to
> > > deal with in detached mode - which makes this obvious]
> > >
> >
> > I got this <> and "fixed" it by deleting the branch and repulling it. I
> > guess I needed to fetch also.
> >
> > Mark
> >
> >
> > >
> > > Satish
> > >
> > >
> > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > >
> > > > I will test this now but 
> > > >
> > > > 17:52 balay/fix-mpiexec-shell-escape= ~/petsc-karl$ git fetch
> > > > remote: Enumerating objects: 119, done.
> > > > remote: Counting objects: 100% (119/119), done.
> > > > remote: Compressing objects: 100% (91/91), done.
> > > > remote: Total 119 (delta 49), reused 74 (delta 28)
> > > > Receiving objects: 100% (119/119), 132.88 KiB | 0 bytes/s, done.
> > > > Resolving deltas: 100% (49/49), completed with 1 local objects.
> > > > >From https://gitlab.com/petsc/petsc
> > > >  + b5e99a5...cb53a04 balay/fix-mpiexec-shell-escape ->
> > > > origin/balay/fix-mpiexec-shell-escape  (forced update)
> > > >  + 

Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
This log is from the wrong build. It says:

Defined "VERSION_GIT" to ""v3.11.3-2242-gb5e99a5""

i.e. it's not with commit cb53a04

Satish

On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> Here is the log.
> 
> On Wed, Sep 25, 2019 at 8:34 PM Mark Adams  wrote:
> 
> >
> >
> > On Wed, Sep 25, 2019 at 6:23 PM Balay, Satish  wrote:
> >
> >> > 18:16 (cb53a04...) ~/petsc-karl$
> >>
> >> So this is the commit I recommended you test against - and that's what
> >> you have got now. Please go ahead and test.
> >>
> >>
> > I sent the log for this. This is the output:
> >
> > 18:16 (cb53a04...) ~/petsc-karl$ ../arch-summit-opt64idx-pgi-cuda.py
> > PETSC_DIR=$PWD
> >
> > ===
> >  Configuring PETSc to compile on your system
> >
> >
> > ===
> > ===
> >
> > * WARNING: F77 (set to
> > /autofs/nccs-svm1_sw/summit/.swci/0-core/opt/spack/20180914/linux-rhel7-ppc64le/gcc-4.8.5/pgi-19.4-6acz4xyqjlpoaonjiiqjme2aknrfnzoy/linux
> >   use ./configure F77=$F77 if you really want to use that value **
> >
> >
> > ===
> >
> >
> > ===
> >
> > * WARNING: Using default optimization C flags -O
> >
> >You might consider manually setting
> > optimal optimization flags for your system with
> >
> >  COPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for
> > examples
> >
> >  
> > ===
> >
> >
> > ===
> >
> > * WARNING: You have an older version of Gnu make,
> > it will work,
> > but may not support all the
> > parallel testing options. You can install the
> >   latest
> > Gnu make with your package manager, such as brew or macports, or use
> >
> > the --download-make option to get the latest Gnu make warning
> > message *
> >
> > ===
> >
> >   TESTING: configureMPIEXEC from
> > config.packages.MPI(config/BuildSystem/config/packages/MPI.py:174)
> >
> > ***
> >  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for
> > details):
> >
> > ---
> > Unable to run jsrun -g 1 with option "-n 1"
> > Error: It is only possible to use js commands within a job allocation
> > unless CSM is running
> > 09-25-2019 18:20:13:224 108023 main: Error initializing RM connection.
> > Exiting.
> >
> > ***
> >
> > 18:20 1 (cb53a04...) ~/petsc-karl$
> >
> >
> >> [note: the branch is rebased - so 'git pull' won't work -(as you can
> >> see from the "(forced update)" message - and '<>' status from git
> >> prompt on balay/fix-mpiexec-shell-escape). So perhaps its easier to
> >> deal with in detached mode - which makes this obvious]
> >>
> >
> > I got this <> and "fixed" it by deleting the branch and repulling it. I
> > guess I needed to fetch also.
> >
> > Mark
> >
> >
> >>
> >> Satish
> >>
> >>
> >> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >>
> >> > I will test this now but 
> >> >
> >> > 17:52 balay/fix-mpiexec-shell-escape= ~/petsc-karl$ git fetch
> >> > remote: Enumerating objects: 119, done.
> >> > remote: Counting objects: 100% (119/119), done.
> >> > remote: Compressing objects: 100% (91/91), done.
> >> > remote: Total 119 (delta 49), reused 74 (delta 28)
> >> > Receiving objects: 100% (119/119), 132.88 KiB | 0 bytes/s, done.
> >> > Resolving deltas: 100% (49/49), completed with 1 local objects.
> >> > >From https://gitlab.com/petsc/petsc
> >> >  + b5e99a5...cb53a04 balay/fix-mpiexec-shell-escape ->
> >> > origin/balay/fix-mpiexec-shell-escape  (forced update)
> >> >  + ffdc635...7eeb5f9 jczhang/feature-sf-on-gpu ->
> >> > origin/jczhang/feature-sf-on-gpu  (forced update)
> >> >cb9de97..f9ff08a  jolivet/fix-error-col-row ->
> >> > origin/jolivet/fix-error-col-row
> >> >40ea605..de5ad60  oanam/jacobf/cell-to-ref-mapping ->
> >> > origin/oanam/jacobf/cell-to-ref-mapping
> >> >  + ecac953...9fb579e stefanozampini/hypre-cuda-rebased ->
> >> > origin/stefanozampini/hypre-cuda-rebased  (forced update)
> >> > 18:16 balay/fix-mpiexec-shell-escape<> ~/petsc-karl$ git checkout
> >> > origin/balay/fix-mpiexec-shell-escape

Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
> Unable to run jsrun -g 1 with option "-n 1"
> Error: It is only possible to use js commands within a job allocation
> unless CSM is running


Nope  this is a different error message.

The message suggests you can't run 'jsrun -g 1 -n 1 binary'. Can you try this 
manually and see what you get?

jsrun -g 1 -n 1 printenv

Satish


On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> On Wed, Sep 25, 2019 at 6:23 PM Balay, Satish  wrote:
> 
> > > 18:16 (cb53a04...) ~/petsc-karl$
> >
> > So this is the commit I recommended you test against - and that's what
> > you have got now. Please go ahead and test.
> >
> >
> I sent the log for this. This is the output:
> 
> 18:16 (cb53a04...) ~/petsc-karl$ ../arch-summit-opt64idx-pgi-cuda.py
> PETSC_DIR=$PWD
> ===
>  Configuring PETSc to compile on your system
> 
> ===
> ===
> 
> * WARNING: F77 (set to
> /autofs/nccs-svm1_sw/summit/.swci/0-core/opt/spack/20180914/linux-rhel7-ppc64le/gcc-4.8.5/pgi-19.4-6acz4xyqjlpoaonjiiqjme2aknrfnzoy/linux
>   use ./configure F77=$F77 if you really want to use that value **
> 
> 
> ===
> 
> 
> ===
> 
> * WARNING: Using default optimization C flags -O
> 
>You might consider manually setting
> optimal optimization flags for your system with
> 
>  COPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for
> examples
> 
>  
> ===
> 
> 
> ===
> 
> * WARNING: You have an older version of Gnu make,
> it will work,
> but may not support all the
> parallel testing options. You can install the
>   latest
> Gnu make with your package manager, such as brew or macports, or use
> 
> the --download-make option to get the latest Gnu make warning
> message *
> 
> ===
> 
>   TESTING: configureMPIEXEC from
> config.packages.MPI(config/BuildSystem/config/packages/MPI.py:174)
> 
> ***
>  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for
> details):
> ---
> Unable to run jsrun -g 1 with option "-n 1"
> Error: It is only possible to use js commands within a job allocation
> unless CSM is running
> 09-25-2019 18:20:13:224 108023 main: Error initializing RM connection.
> Exiting.
> ***
> 
> 18:20 1 (cb53a04...) ~/petsc-karl$
> 
> 
> > [note: the branch is rebased - so 'git pull' won't work -(as you can
> > see from the "(forced update)" message - and '<>' status from git
> > prompt on balay/fix-mpiexec-shell-escape). So perhaps its easier to
> > deal with in detached mode - which makes this obvious]
> >
> 
> I got this <> and "fixed" it by deleting the branch and repulling it. I
> guess I needed to fetch also.
> 
> Mark
> 
> 
> >
> > Satish
> >
> >
> > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >
> > > I will test this now but 
> > >
> > > 17:52 balay/fix-mpiexec-shell-escape= ~/petsc-karl$ git fetch
> > > remote: Enumerating objects: 119, done.
> > > remote: Counting objects: 100% (119/119), done.
> > > remote: Compressing objects: 100% (91/91), done.
> > > remote: Total 119 (delta 49), reused 74 (delta 28)
> > > Receiving objects: 100% (119/119), 132.88 KiB | 0 bytes/s, done.
> > > Resolving deltas: 100% (49/49), completed with 1 local objects.
> > > >From https://gitlab.com/petsc/petsc
> > >  + b5e99a5...cb53a04 balay/fix-mpiexec-shell-escape ->
> > > origin/balay/fix-mpiexec-shell-escape  (forced update)
> > >  + ffdc635...7eeb5f9 jczhang/feature-sf-on-gpu ->
> > > origin/jczhang/feature-sf-on-gpu  (forced update)
> > >cb9de97..f9ff08a  jolivet/fix-error-col-row ->
> > > origin/jolivet/fix-error-col-row
> > >40ea605..de5ad60  oanam/jacobf/cell-to-ref-mapping ->
> > > origin/oanam/jacobf/cell-to-ref-mapping
> > >  + ecac953...9fb579e stefanozampini/hypre-cuda-rebased ->
> > > origin/stefanozampini/hypre-cuda-rebased  (forced update)
> > > 18:16 balay/fix-mpiexec-shell-escape<> ~/petsc-karl$ git checkout
> > > origin/balay/fix-mpiexec-shell-escape
> > > Note: checking out 'origin/balay/fix-mpiexec-shell-escape'.
> > >
> 

Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
On Wed, Sep 25, 2019 at 6:23 PM Balay, Satish  wrote:

> > 18:16 (cb53a04...) ~/petsc-karl$
>
> So this is the commit I recommended you test against - and that's what
> you have got now. Please go ahead and test.
>
>
I sent the log for this. This is the output:

18:16 (cb53a04...) ~/petsc-karl$ ../arch-summit-opt64idx-pgi-cuda.py
PETSC_DIR=$PWD
===
 Configuring PETSc to compile on your system

===
===

* WARNING: F77 (set to
/autofs/nccs-svm1_sw/summit/.swci/0-core/opt/spack/20180914/linux-rhel7-ppc64le/gcc-4.8.5/pgi-19.4-6acz4xyqjlpoaonjiiqjme2aknrfnzoy/linux
  use ./configure F77=$F77 if you really want to use that value **


===


===

* WARNING: Using default optimization C flags -O

   You might consider manually setting
optimal optimization flags for your system with

 COPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for
examples

 ===


===

* WARNING: You have an older version of Gnu make,
it will work,
but may not support all the
parallel testing options. You can install the
  latest
Gnu make with your package manager, such as brew or macports, or use

the --download-make option to get the latest Gnu make warning
message *

===

  TESTING: configureMPIEXEC from
config.packages.MPI(config/BuildSystem/config/packages/MPI.py:174)

***
 UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for
details):
---
Unable to run jsrun -g 1 with option "-n 1"
Error: It is only possible to use js commands within a job allocation
unless CSM is running
09-25-2019 18:20:13:224 108023 main: Error initializing RM connection.
Exiting.
***

18:20 1 (cb53a04...) ~/petsc-karl$


> [note: the branch is rebased - so 'git pull' won't work -(as you can
> see from the "(forced update)" message - and '<>' status from git
> prompt on balay/fix-mpiexec-shell-escape). So perhaps its easier to
> deal with in detached mode - which makes this obvious]
>

I got this <> and "fixed" it by deleting the branch and re-pulling it. I
guess I needed to fetch as well.

Mark


>
> Satish
>
>
> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
>
> > I will test this now but 
> >
> > 17:52 balay/fix-mpiexec-shell-escape= ~/petsc-karl$ git fetch
> > remote: Enumerating objects: 119, done.
> > remote: Counting objects: 100% (119/119), done.
> > remote: Compressing objects: 100% (91/91), done.
> > remote: Total 119 (delta 49), reused 74 (delta 28)
> > Receiving objects: 100% (119/119), 132.88 KiB | 0 bytes/s, done.
> > Resolving deltas: 100% (49/49), completed with 1 local objects.
> > >From https://gitlab.com/petsc/petsc
> >  + b5e99a5...cb53a04 balay/fix-mpiexec-shell-escape ->
> > origin/balay/fix-mpiexec-shell-escape  (forced update)
> >  + ffdc635...7eeb5f9 jczhang/feature-sf-on-gpu ->
> > origin/jczhang/feature-sf-on-gpu  (forced update)
> >cb9de97..f9ff08a  jolivet/fix-error-col-row ->
> > origin/jolivet/fix-error-col-row
> >40ea605..de5ad60  oanam/jacobf/cell-to-ref-mapping ->
> > origin/oanam/jacobf/cell-to-ref-mapping
> >  + ecac953...9fb579e stefanozampini/hypre-cuda-rebased ->
> > origin/stefanozampini/hypre-cuda-rebased  (forced update)
> > 18:16 balay/fix-mpiexec-shell-escape<> ~/petsc-karl$ git checkout
> > origin/balay/fix-mpiexec-shell-escape
> > Note: checking out 'origin/balay/fix-mpiexec-shell-escape'.
> >
> > You are in 'detached HEAD' state. You can look around, make experimental
> > changes and commit them, and you can discard any commits you make in this
> > state without impacting any branches by performing another checkout.
> >
> > If you want to create a new branch to retain commits you create, you may
> > do so (now or later) by using -b with the checkout command again.
> Example:
> >
> >   git checkout -b new_branch_name
> >
> > HEAD is now at cb53a04... mpiexec: fix shell escape of path-to-mpiexec
> only
> > when using autodected-path. Also spectrum MPI uses OMPI_MAJOR_VERSION
> etc -
> > so check if 

Re: [petsc-dev] TAP file and testing error

2019-09-25 Thread Scott Kruger via petsc-dev
Can you try rerunning after removing the trailing slash from PETSC_ARCH?

Scott


On 9/25/19 1:51 PM, Stefano Zampini wrote:
If we specify a PETSC_ARCH with a trailing slash, the current testing 
makefile fails. Can this be fixed?


*zampins@vulture*:*~/Devel/petsc*$ make -f gmakefile.test test 
globsearch="*densecuda*" PETSC_ARCH=arch-gpu-double-unifmem/


touch: cannot touch 
'./arch-gpu-double-unifmem//tests/test_arch-gpu-double-unifmem/_tap.log': No 
such file or directory


touch: cannot touch 
'./arch-gpu-double-unifmem//tests/test_arch-gpu-double-unifmem/_err.log': No 
such file or directory

--
Stefano


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756
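
The doubled slash in the touch paths above comes from concatenating '/tests/...' onto a PETSC_ARCH that already ends in '/'. A POSIX-shell sketch of the normalization the makefile could apply (the variable and path layout mirror the error message; this is not the actual gmakefile.test code):

```shell
PETSC_ARCH="arch-gpu-double-unifmem/"   # as given on the command line
PETSC_ARCH="${PETSC_ARCH%/}"            # strip one trailing slash, if present
printf '%s\n' "./$PETSC_ARCH/tests/test_${PETSC_ARCH}/_tap.log"
# prints: ./arch-gpu-double-unifmem/tests/test_arch-gpu-double-unifmem/_tap.log
```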


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
> 18:16 (cb53a04...) ~/petsc-karl$

So this is the commit I recommended you test against - and that's what
you have got now. Please go ahead and test.

[note: the branch is rebased, so 'git pull' won't work (as you can see
from the "(forced update)" message and the '<>' status in the git
prompt on balay/fix-mpiexec-shell-escape). So perhaps it's easier to
deal with it in detached mode, which makes this obvious]

Satish
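
The workflow described above can be demonstrated on a throwaway local "origin"; all repo and branch names below are illustrative ('origin.git', 'dev', 'user', and 'topic' stand in for gitlab.com/petsc/petsc and balay/fix-mpiexec-shell-escape):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare origin.git
git clone -q origin.git dev 2>/dev/null
git clone -q origin.git user 2>/dev/null

# upstream publishes commit v1 on 'topic'
cd dev
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m v1
git push -q origin HEAD:refs/heads/topic
( cd ../user && git fetch -q origin )       # consumer has seen v1

# upstream rewrites history (a rebase) and force-pushes
git -c user.email=a@b -c user.name=a commit -q --amend --allow-empty -m v2
git push -q -f origin HEAD:refs/heads/topic

# consumer side: 'git pull' would try to merge the old and new histories;
# fetching and checking out the remote-tracking ref detached sidesteps that
cd ../user
git fetch -q origin                         # the "(forced update)" fetch
git checkout -q origin/topic                # detached HEAD at the rebased tip
git log -1 --format=%s                      # prints: v2
```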


On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> I will test this now but 
> 
> 17:52 balay/fix-mpiexec-shell-escape= ~/petsc-karl$ git fetch
> remote: Enumerating objects: 119, done.
> remote: Counting objects: 100% (119/119), done.
> remote: Compressing objects: 100% (91/91), done.
> remote: Total 119 (delta 49), reused 74 (delta 28)
> Receiving objects: 100% (119/119), 132.88 KiB | 0 bytes/s, done.
> Resolving deltas: 100% (49/49), completed with 1 local objects.
> >From https://gitlab.com/petsc/petsc
>  + b5e99a5...cb53a04 balay/fix-mpiexec-shell-escape ->
> origin/balay/fix-mpiexec-shell-escape  (forced update)
>  + ffdc635...7eeb5f9 jczhang/feature-sf-on-gpu ->
> origin/jczhang/feature-sf-on-gpu  (forced update)
>cb9de97..f9ff08a  jolivet/fix-error-col-row ->
> origin/jolivet/fix-error-col-row
>40ea605..de5ad60  oanam/jacobf/cell-to-ref-mapping ->
> origin/oanam/jacobf/cell-to-ref-mapping
>  + ecac953...9fb579e stefanozampini/hypre-cuda-rebased ->
> origin/stefanozampini/hypre-cuda-rebased  (forced update)
> 18:16 balay/fix-mpiexec-shell-escape<> ~/petsc-karl$ git checkout
> origin/balay/fix-mpiexec-shell-escape
> Note: checking out 'origin/balay/fix-mpiexec-shell-escape'.
> 
> You are in 'detached HEAD' state. You can look around, make experimental
> changes and commit them, and you can discard any commits you make in this
> state without impacting any branches by performing another checkout.
> 
> If you want to create a new branch to retain commits you create, you may
> do so (now or later) by using -b with the checkout command again. Example:
> 
>   git checkout -b new_branch_name
> 
> HEAD is now at cb53a04... mpiexec: fix shell escape of path-to-mpiexec only
> when using autodected-path. Also spectrum MPI uses OMPI_MAJOR_VERSION etc -
> so check if mpiexec supports --oversubscribe - before using it.
> 18:16 (cb53a04...) ~/petsc-karl$
> 
> On Wed, Sep 25, 2019 at 5:58 PM Balay, Satish  wrote:
> 
> > Defined "VERSION_GIT" to ""v3.11.3-2242-gb5e99a5""
> >
> > This is not the latest state - It should be:
> >
> > commit cb53a042369fb946804f53931a88b58e10588da1 (HEAD ->
> > balay/fix-mpiexec-shell-escape, origin/balay/fix-mpiexec-shell-escape)
> >
> > Try:
> >
> > git fetch
> > git checkout origin/balay/fix-mpiexec-shell-escape
> >
> > Satish
> >
> > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >
> > > On Wed, Sep 25, 2019 at 4:57 PM Balay, Satish  wrote:
> > >
> > > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > > >
> > > > > I did test this and sent the log (error).
> > > >
> > > > Mark,
> > > >
> > > > I made more changes - can you retry again - and resend log.
> > > >
> > > > Satish
> > > >
> > >
> >
> >
> 



Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
I will test this now but 

17:52 balay/fix-mpiexec-shell-escape= ~/petsc-karl$ git fetch
remote: Enumerating objects: 119, done.
remote: Counting objects: 100% (119/119), done.
remote: Compressing objects: 100% (91/91), done.
remote: Total 119 (delta 49), reused 74 (delta 28)
Receiving objects: 100% (119/119), 132.88 KiB | 0 bytes/s, done.
Resolving deltas: 100% (49/49), completed with 1 local objects.
>From https://gitlab.com/petsc/petsc
 + b5e99a5...cb53a04 balay/fix-mpiexec-shell-escape ->
origin/balay/fix-mpiexec-shell-escape  (forced update)
 + ffdc635...7eeb5f9 jczhang/feature-sf-on-gpu ->
origin/jczhang/feature-sf-on-gpu  (forced update)
   cb9de97..f9ff08a  jolivet/fix-error-col-row ->
origin/jolivet/fix-error-col-row
   40ea605..de5ad60  oanam/jacobf/cell-to-ref-mapping ->
origin/oanam/jacobf/cell-to-ref-mapping
 + ecac953...9fb579e stefanozampini/hypre-cuda-rebased ->
origin/stefanozampini/hypre-cuda-rebased  (forced update)
18:16 balay/fix-mpiexec-shell-escape<> ~/petsc-karl$ git checkout
origin/balay/fix-mpiexec-shell-escape
Note: checking out 'origin/balay/fix-mpiexec-shell-escape'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at cb53a04... mpiexec: fix shell escape of path-to-mpiexec only
when using autodected-path. Also spectrum MPI uses OMPI_MAJOR_VERSION etc -
so check if mpiexec supports --oversubscribe - before using it.
18:16 (cb53a04...) ~/petsc-karl$

On Wed, Sep 25, 2019 at 5:58 PM Balay, Satish  wrote:

> Defined "VERSION_GIT" to ""v3.11.3-2242-gb5e99a5""
>
> This is not the latest state - It should be:
>
> commit cb53a042369fb946804f53931a88b58e10588da1 (HEAD ->
> balay/fix-mpiexec-shell-escape, origin/balay/fix-mpiexec-shell-escape)
>
> Try:
>
> git fetch
> git checkout origin/balay/fix-mpiexec-shell-escape
>
> Satish
>
> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
>
> > On Wed, Sep 25, 2019 at 4:57 PM Balay, Satish  wrote:
> >
> > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > >
> > > > I did test this and sent the log (error).
> > >
> > > Mark,
> > >
> > > I made more changes - can you retry again - and resend log.
> > >
> > > Satish
> > >
> >
>
>


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
Defined "VERSION_GIT" to ""v3.11.3-2242-gb5e99a5""

This is not the latest state - It should be:

commit cb53a042369fb946804f53931a88b58e10588da1 (HEAD -> 
balay/fix-mpiexec-shell-escape, origin/balay/fix-mpiexec-shell-escape)

Try:

git fetch
git checkout origin/balay/fix-mpiexec-shell-escape

Satish

On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> On Wed, Sep 25, 2019 at 4:57 PM Balay, Satish  wrote:
> 
> > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >
> > > I did test this and sent the log (error).
> >
> > Mark,
> >
> > I made more changes - can you retry again - and resend log.
> >
> > Satish
> >
> 



Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> I did test this and sent the log (error).

Mark,

I made more changes - can you retry again - and resend log.

Satish


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
> Yes, it's supported, but it's a little different than what "-n" usually
> does in mpiexec, where it means the number of processes. For 'jsrun', it
> means the number of resource sets, which is multiplied by the "tasks per
> resource set" specified by "-a" to get the MPI process count. I think if we
> can specify that "-a 1" is part of our "mpiexec", then we should be OK with
> using -n as PETSc normally does.
>

jsrun does not run with just -n on SUMMIT. I have found that it works when I
add -g 1.
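
The resource-set arithmetic quoted above (total MPI ranks = resource sets × tasks per resource set) is worth making explicit; the flag meanings follow the jsrun documentation cited in this thread, and the numbers are only illustrative:

```shell
# jsrun -n/--nrs gives resource sets; -a gives MPI tasks per resource set.
# The MPI world size is their product, unlike 'mpiexec -n' which gives it directly.
nrs=6              # jsrun -n 6
tasks_per_rs=4     # jsrun -a 4
echo $(( nrs * tasks_per_rs ))   # prints: 24
```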


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
I did test this and sent the log (error).

On Wed, Sep 25, 2019 at 2:58 PM Balay, Satish  wrote:

> I made changes and asked to retest with the latest changes.
>
> Satish
>
> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
>
> > Oh, and I tested the branch and it didn't work. file was attached.
> >
> > On Wed, Sep 25, 2019 at 2:38 PM Mark Adams  wrote:
> >
> > >
> > >
> > > On Wed, Sep 25, 2019 at 2:23 PM Balay, Satish 
> wrote:
> > >
> > >> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > >>
> > >> > On Wed, Sep 25, 2019 at 12:44 PM Balay, Satish 
> > >> wrote:
> > >> >
> > >> > > Can you retry with updated balay/fix-mpiexec-shell-escape branch?
> > >> > >
> > >> > >
> > >> > > current mpiexec interface/code in petsc is messy.
> > >> > >
> > >> > > Its primarily needed for the test suite. But then - you can't
> easily
> > >> > > run the test suite on machines like summit.
> > >> > >
> > >> > > Also - it assumes mpiexec provided supports '-n 1'. However if one
> > >> > > provides non-standard mpiexec such as --with-mpiexec="jsrun -g 1"
> -
> > >> > > what is the appropriate thing here?
> > >> > >
> > >> >
> > >> > jsrun does take -n. It just has other args. I am trying to check if
> it
> > >> > requires other args. I thought it did but let me check.
> > >>
> > >>
> > >>
> https://www.olcf.ornl.gov/for-users/system-user-guides/summitdev-quickstart-guide/
> > >>
> > >> -n  --nrs   Number of resource sets
> > >>
> > >>
> > > -n is still supported. There are two versions of everything. One letter
> > > ones and more explanatory ones.
> > >
> > > In fact they have a nice little tool to viz layouts and they give you
> the
> > > command line with this short form, eg,
> > >
> > > https://jsrunvisualizer.olcf.ornl.gov/?s1f0o01n6c4g1r14d1b21l0=
> > >
> > >
> > >
> > >> Beta2 Change (October 17):
> > >> -n was replaced by -nnodes
> > >>
> > >> So its not the same functionality as 'mpiexec -n'
> > >>
> > >
> > > I am still waiting for an interactive shell to test just -n. That
> really
> > > should run
> > >
> > >
> > >>
> > >> Either way - please try the above branch
> > >
> > >
> > >> Satish
> > >>
> > >> >
> > >> >
> > >> > >
> > >> > > And then configure needs to run some binaries for some checks -
> here
> > >> > > perhaps '-n 1' doesn't matter. [MPICH defaults to 1, OpenMPI
> defaults
> > >> > > to ncore]. So perhaps mpiexec is required for this purpose on
> summit?
> > >> > >
> > >> > > And then there is this code to escape spaces in path - for
> > >> > > windows. [but we have to make sure this is not in code-path for
> user
> > >> > > specified --with-mpiexec="jsrun -g 1"
> > >> > >
> > >> > > Satish
> > >> > >
> > >> > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > >> > >
> > >> > > > No luck,
> > >> > > >
> > >> > > > On Wed, Sep 25, 2019 at 10:01 AM Balay, Satish <
> ba...@mcs.anl.gov>
> > >> > > wrote:
> > >> > > >
> > >> > > > > Mark,
> > >> > > > >
> > >> > > > > Can you try the fix in branch balay/fix-mpiexec-shell-escape
> and
> > >> see
> > >> > > if it
> > >> > > > > works?
> > >> > > > >
> > >> > > > > Satish
> > >> > > > >
> > >> > > > > On Wed, 25 Sep 2019, Balay, Satish via petsc-dev wrote:
> > >> > > > >
> > >> > > > > > Mark,
> > >> > > > > >
> > >> > > > > > Can you send configure.log from
> mark/fix-cuda-with-gamg-pintocpu
> > >> > > branch?
> > >> > > > > >
> > >> > > > > > Satish
> > >> > > > > >
> > >> > > > > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > >> > > > > >
> > >> > > > > > > I double checked that a clean build of your (master)
> branch
> > >> has
> > >> > > this
> > >> > > > > error
> > >> > > > > > > by my branch (mark/fix-cuda-with-gamg-pintocpu), which may
> > >> include
> > >> > > > > stuff
> > >> > > > > > > from Barry that is not yet in master, works.
> > >> > > > > > >
> > >> > > > > > > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
> > >> > > > > > > petsc-dev@mcs.anl.gov> wrote:
> > >> > > > > > >
> > >> > > > > > > >
> > >> > > > > > > >
> > >> > > > > > > > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> > >> > > > > > > > > I am using karlrupp/fix-cuda-streams, merged with
> master,
> > >> and I
> > >> > > > > get this
> > >> > > > > > > > > error:
> > >> > > > > > > > >
> > >> > > > > > > > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1
> > >> > > --oversubscribe -n
> > >> > > > > 1
> > >> > > > > > > > > printenv']":
> > >> > > > > > > > > Error, invalid argument:  1
> > >> > > > > > > > >
> > >> > > > > > > > > My branch mark/fix-cuda-with-gamg-pintocpu seems to
> work
> > >> but I
> > >> > > did
> > >> > > > > edit
> > >> > > > > > > > > the jsrun command but Karl's branch still fails.
> (SUMMIT
> > >> was
> > >> > > down
> > >> > > > > today
> > >> > > > > > > > > so there could have been updates).
> > >> > > > > > > > >
> > >> > > > > > > > > Any suggestions?
> > >> > > > > > > >
> > >> > > > > > > > Looks very much like a systems issue to me.
> > >> > > > > > > >
> > >> > > > > > > > Best 
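
The original failure in this thread ("Could not execute "['jsrun -g\\ 1 ... -n 1 printenv']" ... Error, invalid argument: 1") came from backslash-escaping the spaces inside the user-supplied launcher string, an escape intended for Windows paths containing spaces. A minimal shell sketch of the difference (this only illustrates the word-splitting issue; it is not PETSc's BuildSystem code):

```shell
user_mpiexec='jsrun -g 1'                      # from --with-mpiexec="jsrun -g 1"

# the buggy path: escape every space (meant for paths with spaces),
# which turns the launcher plus its flags into one mangled token
naive=$(printf '%s\n' "$user_mpiexec" | sed 's/ /\\ /g')
printf '%s\n' "$naive"                         # prints: jsrun\ -g\ 1

# what should happen: split the string into words before exec,
# yielding the launcher, its flags, and the appended arguments
set -- $user_mpiexec -n 1 printenv             # unquoted: word splitting applies
echo "$#"                                      # prints: 6
echo "$1"                                      # prints: jsrun
```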

Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
I made changes and asked to retest with the latest changes.

Satish

On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> Oh, and I tested the branch and it didn't work. file was attached.
> 
> On Wed, Sep 25, 2019 at 2:38 PM Mark Adams  wrote:
> 
> >
> >
> > On Wed, Sep 25, 2019 at 2:23 PM Balay, Satish  wrote:
> >
> >> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >>
> >> > On Wed, Sep 25, 2019 at 12:44 PM Balay, Satish 
> >> wrote:
> >> >
> >> > > Can you retry with updated balay/fix-mpiexec-shell-escape branch?
> >> > >
> >> > >
> >> > > current mpiexec interface/code in petsc is messy.
> >> > >
> >> > > Its primarily needed for the test suite. But then - you can't easily
> >> > > run the test suite on machines like summit.
> >> > >
> >> > > Also - it assumes mpiexec provided supports '-n 1'. However if one
> >> > > provides non-standard mpiexec such as --with-mpiexec="jsrun -g 1" -
> >> > > what is the appropriate thing here?
> >> > >
> >> >
> >> > jsrun does take -n. It just has other args. I am trying to check if it
> >> > requires other args. I thought it did but let me check.
> >>
> >>
> >> https://www.olcf.ornl.gov/for-users/system-user-guides/summitdev-quickstart-guide/
> >>
> >> -n  --nrs   Number of resource sets
> >>
> >>
> > -n is still supported. There are two versions of everything. One letter
> > ones and more explanatory ones.
> >
> > In fact they have a nice little tool to viz layouts and they give you the
> > command line with this short form, eg,
> >
> > https://jsrunvisualizer.olcf.ornl.gov/?s1f0o01n6c4g1r14d1b21l0=
> >
> >
> >
> >> Beta2 Change (October 17):
> >> -n was be replaced by -nnodes
> >>
> >> So its not the same functionality as 'mpiexec -n'
> >>
> >
> > I am still waiting for an interactive shell to test just -n. That really
> > should run
> >
> >
> >>
> >> Either way - please try the above branch
> >
> >
> >> Satish
> >>
> >> >
> >> >
> >> > >
> >> > > And then configure needs to run some binaries for some checks - here
> >> > > perhaps '-n 1' doesn't matter. [MPICH defaults to 1, OpenMPI defaults
> >> > > to ncore]. So perhaps mpiexec is required for this purpose on summit?
> >> > >
> >> > > And then there is this code to escape spaces in path - for
> >> > > windows. [but we have to make sure this is not in code-path for user
> >> > > specified --with-mpiexec="jsrun -g 1"
> >> > >
> >> > > Satish
> >> > >
> >> > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >> > >
> >> > > > No luck,
> >> > > >
> >> > > > On Wed, Sep 25, 2019 at 10:01 AM Balay, Satish 
> >> > > wrote:
> >> > > >
> >> > > > > Mark,
> >> > > > >
> >> > > > > Can you try the fix in branch balay/fix-mpiexec-shell-escape and
> >> see
> >> > > if it
> >> > > > > works?
> >> > > > >
> >> > > > > Satish
> >> > > > >
> >> > > > > On Wed, 25 Sep 2019, Balay, Satish via petsc-dev wrote:
> >> > > > >
> >> > > > > > Mark,
> >> > > > > >
> >> > > > > > Can you send configure.log from mark/fix-cuda-with-gamg-pintocpu
> >> > > branch?
> >> > > > > >
> >> > > > > > Satish
> >> > > > > >
> >> > > > > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >> > > > > >
> >> > > > > > > I double checked that a clean build of your (master) branch
> >> has
> >> > > this
> >> > > > > error
> >> > > > > > > by my branch (mark/fix-cuda-with-gamg-pintocpu), which may
> >> include
> >> > > > > stuff
> >> > > > > > > from Barry that is not yet in master, works.
> >> > > > > > >
> >> > > > > > > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
> >> > > > > > > petsc-dev@mcs.anl.gov> wrote:
> >> > > > > > >
> >> > > > > > > >
> >> > > > > > > >
> >> > > > > > > > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> >> > > > > > > > > I am using karlrupp/fix-cuda-streams, merged with master,
> >> and I
> >> > > > > get this
> >> > > > > > > > > error:
> >> > > > > > > > >
> >> > > > > > > > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1
> >> > > --oversubscribe -n
> >> > > > > 1
> >> > > > > > > > > printenv']":
> >> > > > > > > > > Error, invalid argument:  1
> >> > > > > > > > >
> >> > > > > > > > > My branch mark/fix-cuda-with-gamg-pintocpu seems to work
> >> but I
> >> > > did
> >> > > > > edit
> >> > > > > > > > > the jsrun command but Karl's branch still fails. (SUMMIT
> >> was
> >> > > down
> >> > > > > today
> >> > > > > > > > > so there could have been updates).
> >> > > > > > > > >
> >> > > > > > > > > Any suggestions?
> >> > > > > > > >
> >> > > > > > > > Looks very much like a systems issue to me.
> >> > > > > > > >
> >> > > > > > > > Best regards,
> >> > > > > > > > Karli
> >> > > > > > > >
> >> > > > > > >
> >> > > > > >
> >> > > > >
> >> > > > >
> >> > > >
> >> > >
> >> > >
> >> >
> >>
> >>
> 



Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
Oh, and I tested the branch and it didn't work. The file was attached.

On Wed, Sep 25, 2019 at 2:38 PM Mark Adams  wrote:

>
>
> On Wed, Sep 25, 2019 at 2:23 PM Balay, Satish  wrote:
>
>> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
>>
>> > On Wed, Sep 25, 2019 at 12:44 PM Balay, Satish 
>> wrote:
>> >
>> > > Can you retry with updated balay/fix-mpiexec-shell-escape branch?
>> > >
>> > >
>> > > current mpiexec interface/code in petsc is messy.
>> > >
>> > > Its primarily needed for the test suite. But then - you can't easily
>> > > run the test suite on machines like summit.
>> > >
>> > > Also - it assumes mpiexec provided supports '-n 1'. However if one
>> > > provides non-standard mpiexec such as --with-mpiexec="jsrun -g 1" -
>> > > what is the appropriate thing here?
>> > >
>> >
>> > jsrun does take -n. It just has other args. I am trying to check if it
>> > requires other args. I thought it did but let me check.
>>
>>
>> https://www.olcf.ornl.gov/for-users/system-user-guides/summitdev-quickstart-guide/
>>
>> -n  --nrs   Number of resource sets
>>
>>
> -n is still supported. There are two versions of everything. One letter
> ones and more explanatory ones.
>
> In fact they have a nice little tool to viz layouts and they give you the
> command line with this short form, eg,
>
> https://jsrunvisualizer.olcf.ornl.gov/?s1f0o01n6c4g1r14d1b21l0=
>
>
>
>> Beta2 Change (October 17):
>> -n was be replaced by -nnodes
>>
>> So its not the same functionality as 'mpiexec -n'
>>
>
> I am still waiting for an interactive shell to test just -n. That really
> should run
>
>
>>
>> Either way - please try the above branch
>
>
>> Satish
>>
>> >
>> >
>> > >
>> > > And then configure needs to run some binaries for some checks - here
>> > > perhaps '-n 1' doesn't matter. [MPICH defaults to 1, OpenMPI defaults
>> > > to ncore]. So perhaps mpiexec is required for this purpose on summit?
>> > >
>> > > And then there is this code to escape spaces in path - for
>> > > windows. [but we have to make sure this is not in code-path for user
>> > > specified --with-mpiexec="jsrun -g 1"
>> > >
>> > > Satish
>> > >
>> > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
>> > >
>> > > > No luck,
>> > > >
>> > > > On Wed, Sep 25, 2019 at 10:01 AM Balay, Satish 
>> > > wrote:
>> > > >
>> > > > > Mark,
>> > > > >
>> > > > > Can you try the fix in branch balay/fix-mpiexec-shell-escape and
>> see
>> > > if it
>> > > > > works?
>> > > > >
>> > > > > Satish
>> > > > >
>> > > > > On Wed, 25 Sep 2019, Balay, Satish via petsc-dev wrote:
>> > > > >
>> > > > > > Mark,
>> > > > > >
>> > > > > > Can you send configure.log from mark/fix-cuda-with-gamg-pintocpu
>> > > branch?
>> > > > > >
>> > > > > > Satish
>> > > > > >
>> > > > > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
>> > > > > >
>> > > > > > > I double checked that a clean build of your (master) branch
>> has
>> > > this
>> > > > > error
>> > > > > > > by my branch (mark/fix-cuda-with-gamg-pintocpu), which may
>> include
>> > > > > stuff
>> > > > > > > from Barry that is not yet in master, works.
>> > > > > > >
>> > > > > > > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
>> > > > > > > petsc-dev@mcs.anl.gov> wrote:
>> > > > > > >
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
>> > > > > > > > > I am using karlrupp/fix-cuda-streams, merged with master,
>> and I
>> > > > > get this
>> > > > > > > > > error:
>> > > > > > > > >
>> > > > > > > > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1
>> > > --oversubscribe -n
>> > > > > 1
>> > > > > > > > > printenv']":
>> > > > > > > > > Error, invalid argument:  1
>> > > > > > > > >
>> > > > > > > > > My branch mark/fix-cuda-with-gamg-pintocpu seems to work
>> but I
>> > > did
>> > > > > edit
>> > > > > > > > > the jsrun command but Karl's branch still fails. (SUMMIT
>> was
>> > > down
>> > > > > today
>> > > > > > > > > so there could have been updates).
>> > > > > > > > >
>> > > > > > > > > Any suggestions?
>> > > > > > > >
>> > > > > > > > Looks very much like a systems issue to me.
>> > > > > > > >
>> > > > > > > > Best regards,
>> > > > > > > > Karli
>> > > > > > > >
>> > > > > > >
>> > > > > >
>> > > > >
>> > > > >
>> > > >
>> > >
>> > >
>> >
>>
>>


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mills, Richard Tran via petsc-dev
On 9/25/19 11:38 AM, Mark Adams via petsc-dev wrote:
[...]
> jsrun does take -n. It just has other args. I am trying to check if it
> requires other args. I thought it did but let me check.

https://www.olcf.ornl.gov/for-users/system-user-guides/summitdev-quickstart-guide/

-n  --nrs   Number of resource sets


-n is still supported. There are two versions of every option: one-letter forms
and longer, more descriptive ones.
Yes, it's supported, but it's a little different than what "-n" usually does in 
mpiexec, where it means the number of processes. For 'jsrun', it means the 
number of resource sets, which is multiplied by the "tasks per resource set" 
specified by "-a" to get the MPI process count. I think if we can specify that 
"-a 1" is part of our "mpiexec", then we should be OK with using -n as PETSc 
normally does.
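A quick sketch of that arithmetic (the jsrun layout values in the comment are illustrative, not a recommended configuration):

```shell
# jsrun counts resource sets (-n/--nrs), tasks per resource set (-a), and
# GPUs per resource set (-g); total MPI ranks = resource sets * tasks per set.
# So with -a 1, e.g. 'jsrun -n 6 -a 1 -c 4 -g 1 ./app' launches the same
# rank count as 'mpiexec -n 6 ./app'.
n=6; a=1
echo "$((n * a)) ranks"
```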

--Richard

In fact they have a nice little tool to viz layouts and they give you the 
command line with this short form, eg,

https://jsrunvisualizer.olcf.ornl.gov/?s1f0o01n6c4g1r14d1b21l0=


Beta2 Change (October 17):
-n was replaced by -nnodes

So its not the same functionality as 'mpiexec -n'

I am still waiting for an interactive shell to test just -n. That really should
run.


Either way - please try the above branch

Satish

>
>
> >
> > And then configure needs to run some binaries for some checks - here
> > perhaps '-n 1' doesn't matter. [MPICH defaults to 1, OpenMPI defaults
> > to ncore]. So perhaps mpiexec is required for this purpose on summit?
> >
> > And then there is this code to escape spaces in path - for
> > windows. [but we have to make sure this is not in code-path for user
> > specified --with-mpiexec="jsrun -g 1"
> >
> > Satish
> >
> > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >
> > > No luck,
> > >
> > > On Wed, Sep 25, 2019 at 10:01 AM Balay, Satish 
> > > mailto:ba...@mcs.anl.gov>>
> > wrote:
> > >
> > > > Mark,
> > > >
> > > > Can you try the fix in branch balay/fix-mpiexec-shell-escape and see
> > if it
> > > > works?
> > > >
> > > > Satish
> > > >
> > > > On Wed, 25 Sep 2019, Balay, Satish via petsc-dev wrote:
> > > >
> > > > > Mark,
> > > > >
> > > > > Can you send configure.log from mark/fix-cuda-with-gamg-pintocpu
> > branch?
> > > > >
> > > > > Satish
> > > > >
> > > > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > > > >
> > > > > > I double checked that a clean build of your (master) branch has
> > this
> > > > error
> > > > > > by my branch (mark/fix-cuda-with-gamg-pintocpu), which may include
> > > > stuff
> > > > > > from Barry that is not yet in master, works.
> > > > > >
> > > > > > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
> > > > > > petsc-dev@mcs.anl.gov> wrote:
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> > > > > > > > I am using karlrupp/fix-cuda-streams, merged with master, and I
> > > > get this
> > > > > > > > error:
> > > > > > > >
> > > > > > > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1
> > --oversubscribe -n
> > > > 1
> > > > > > > > printenv']":
> > > > > > > > Error, invalid argument:  1
> > > > > > > >
> > > > > > > > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I
> > did
> > > > edit
> > > > > > > > the jsrun command but Karl's branch still fails. (SUMMIT was
> > down
> > > > today
> > > > > > > > so there could have been updates).
> > > > > > > >
> > > > > > > > Any suggestions?
> > > > > > >
> > > > > > > Looks very much like a systems issue to me.
> > > > > > >
> > > > > > > Best regards,
> > > > > > > Karli
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > >
> >
> >
>




Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> On Wed, Sep 25, 2019 at 12:44 PM Balay, Satish  wrote:
> 
> > Can you retry with updated balay/fix-mpiexec-shell-escape branch?
> >
> >
> > current mpiexec interface/code in petsc is messy.
> >
> > Its primarily needed for the test suite. But then - you can't easily
> > run the test suite on machines like summit.
> >
> > Also - it assumes mpiexec provided supports '-n 1'. However if one
> > provides non-standard mpiexec such as --with-mpiexec="jsrun -g 1" -
> > what is the appropriate thing here?
> >
> 
> jsrun does take -n. It just has other args. I am trying to check if it
> requires other args. I thought it did but let me check.

https://www.olcf.ornl.gov/for-users/system-user-guides/summitdev-quickstart-guide/

-n  --nrs   Number of resource sets

Beta2 Change (October 17):
-n was replaced by -nnodes

So it's not the same functionality as 'mpiexec -n'

Either way - please try the above branch

Satish

> 
> 
> >
> > And then configure needs to run some binaries for some checks - here
> > perhaps '-n 1' doesn't matter. [MPICH defaults to 1, OpenMPI defaults
> > to ncore]. So perhaps mpiexec is required for this purpose on summit?
> >
> > And then there is this code to escape spaces in path - for
> > windows. [but we have to make sure this is not in code-path for user
> > specified --with-mpiexec="jsrun -g 1"
> >
> > Satish
> >
> > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >
> > > No luck,
> > >
> > > On Wed, Sep 25, 2019 at 10:01 AM Balay, Satish 
> > wrote:
> > >
> > > > Mark,
> > > >
> > > > Can you try the fix in branch balay/fix-mpiexec-shell-escape and see
> > if it
> > > > works?
> > > >
> > > > Satish
> > > >
> > > > On Wed, 25 Sep 2019, Balay, Satish via petsc-dev wrote:
> > > >
> > > > > Mark,
> > > > >
> > > > > Can you send configure.log from mark/fix-cuda-with-gamg-pintocpu
> > branch?
> > > > >
> > > > > Satish
> > > > >
> > > > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > > > >
> > > > > > I double checked that a clean build of your (master) branch has
> > this
> > > > error
> > > > > > by my branch (mark/fix-cuda-with-gamg-pintocpu), which may include
> > > > stuff
> > > > > > from Barry that is not yet in master, works.
> > > > > >
> > > > > > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
> > > > > > petsc-dev@mcs.anl.gov> wrote:
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> > > > > > > > I am using karlrupp/fix-cuda-streams, merged with master, and I
> > > > get this
> > > > > > > > error:
> > > > > > > >
> > > > > > > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1
> > --oversubscribe -n
> > > > 1
> > > > > > > > printenv']":
> > > > > > > > Error, invalid argument:  1
> > > > > > > >
> > > > > > > > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I
> > did
> > > > edit
> > > > > > > > the jsrun command but Karl's branch still fails. (SUMMIT was
> > down
> > > > today
> > > > > > > > so there could have been updates).
> > > > > > > >
> > > > > > > > Any suggestions?
> > > > > > >
> > > > > > > Looks very much like a systems issue to me.
> > > > > > >
> > > > > > > Best regards,
> > > > > > > Karli
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > >
> >
> >
> 



Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
On Wed, Sep 25, 2019 at 12:44 PM Balay, Satish  wrote:

> Can you retry with updated balay/fix-mpiexec-shell-escape branch?
>
>
> current mpiexec interface/code in petsc is messy.
>
> Its primarily needed for the test suite. But then - you can't easily
> run the test suite on machines like summit.
>
> Also - it assumes mpiexec provided supports '-n 1'. However if one
> provides non-standard mpiexec such as --with-mpiexec="jsrun -g 1" -
> what is the appropriate thing here?
>

jsrun does take -n. It just has other args. I am trying to check if it
requires other args. I thought it did but let me check.


>
> And then configure needs to run some binaries for some checks - here
> perhaps '-n 1' doesn't matter. [MPICH defaults to 1, OpenMPI defaults
> to ncore]. So perhaps mpiexec is required for this purpose on summit?
>
> And then there is this code to escape spaces in path - for
> windows. [but we have to make sure this is not in code-path for user
> specified --with-mpiexec="jsrun -g 1"
>
> Satish
>
> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
>
> > No luck,
> >
> > On Wed, Sep 25, 2019 at 10:01 AM Balay, Satish 
> wrote:
> >
> > > Mark,
> > >
> > > Can you try the fix in branch balay/fix-mpiexec-shell-escape and see
> if it
> > > works?
> > >
> > > Satish
> > >
> > > On Wed, 25 Sep 2019, Balay, Satish via petsc-dev wrote:
> > >
> > > > Mark,
> > > >
> > > > Can you send configure.log from mark/fix-cuda-with-gamg-pintocpu
> branch?
> > > >
> > > > Satish
> > > >
> > > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > > >
> > > > > I double checked that a clean build of your (master) branch has
> this
> > > error
> > > > > by my branch (mark/fix-cuda-with-gamg-pintocpu), which may include
> > > stuff
> > > > > from Barry that is not yet in master, works.
> > > > >
> > > > > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
> > > > > petsc-dev@mcs.anl.gov> wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> > > > > > > I am using karlrupp/fix-cuda-streams, merged with master, and I
> > > get this
> > > > > > > error:
> > > > > > >
> > > > > > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1
> --oversubscribe -n
> > > 1
> > > > > > > printenv']":
> > > > > > > Error, invalid argument:  1
> > > > > > >
> > > > > > > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I
> did
> > > edit
> > > > > > > the jsrun command but Karl's branch still fails. (SUMMIT was
> down
> > > today
> > > > > > > so there could have been updates).
> > > > > > >
> > > > > > > Any suggestions?
> > > > > >
> > > > > > Looks very much like a systems issue to me.
> > > > > >
> > > > > > Best regards,
> > > > > > Karli
> > > > > >
> > > > >
> > > >
> > >
> > >
> >
>
>


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
Can you retry with updated balay/fix-mpiexec-shell-escape branch?


The current mpiexec interface/code in PETSc is messy.

It's primarily needed for the test suite. But then - you can't easily
run the test suite on machines like Summit.

Also - it assumes the provided mpiexec supports '-n 1'. However, if one
provides a non-standard mpiexec such as --with-mpiexec="jsrun -g 1" -
what is the appropriate thing here?

And then configure needs to run some binaries for some checks - here
perhaps '-n 1' doesn't matter. [MPICH defaults to 1, OpenMPI defaults
to ncore]. So perhaps mpiexec is required for this purpose on summit?

And then there is this code to escape spaces in the path - for
Windows. [But we have to make sure this is not in the code-path for a
user-specified --with-mpiexec="jsrun -g 1".]
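The error at the top of this thread ('jsrun -g\ 1 -c\ 1 ...') looks like exactly that space-escaping applied to a multi-word --with-mpiexec value. A minimal shell sketch of the pitfall (not PETSc's actual configure code, which is Python):

```shell
user_mpiexec='jsrun -g 1'

# Escaping spaces (a Windows-path workaround) blindly applied to a multi-word
# command produces the broken invocation seen in configure.log:
escaped=$(printf '%s' "$user_mpiexec" | sed 's/ /\\ /g')
echo "$escaped -n 1 printenv"    # jsrun\ -g\ 1 -n 1 printenv

# Left unescaped, ordinary shell word-splitting yields the intended argv:
set -- $user_mpiexec -n 1 printenv
echo "$#"                        # 6 words: jsrun -g 1 -n 1 printenv
```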

Satish

On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> No luck,
> 
> On Wed, Sep 25, 2019 at 10:01 AM Balay, Satish  wrote:
> 
> > Mark,
> >
> > Can you try the fix in branch balay/fix-mpiexec-shell-escape and see if it
> > works?
> >
> > Satish
> >
> > On Wed, 25 Sep 2019, Balay, Satish via petsc-dev wrote:
> >
> > > Mark,
> > >
> > > Can you send configure.log from mark/fix-cuda-with-gamg-pintocpu branch?
> > >
> > > Satish
> > >
> > > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> > >
> > > > I double checked that a clean build of your (master) branch has this
> > error
> > > > by my branch (mark/fix-cuda-with-gamg-pintocpu), which may include
> > stuff
> > > > from Barry that is not yet in master, works.
> > > >
> > > > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
> > > > petsc-dev@mcs.anl.gov> wrote:
> > > >
> > > > >
> > > > >
> > > > > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> > > > > > I am using karlrupp/fix-cuda-streams, merged with master, and I
> > get this
> > > > > > error:
> > > > > >
> > > > > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1 --oversubscribe -n
> > 1
> > > > > > printenv']":
> > > > > > Error, invalid argument:  1
> > > > > >
> > > > > > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I did
> > edit
> > > > > > the jsrun command but Karl's branch still fails. (SUMMIT was down
> > today
> > > > > > so there could have been updates).
> > > > > >
> > > > > > Any suggestions?
> > > > >
> > > > > Looks very much like a systems issue to me.
> > > > >
> > > > > Best regards,
> > > > > Karli
> > > > >
> > > >
> > >
> >
> >
> 



Re: [petsc-dev] What happened to FC_DEPFLAGS?

2019-09-25 Thread Balay, Satish via petsc-dev
I have a fix in balay/fix_FC_DEPFLAGS

https://gitlab.com/petsc/petsc/merge_requests/2105

Can you give it a try?

Satish

On Wed, 25 Sep 2019, Lisandro Dalcin via petsc-dev wrote:

> $ make -f gmakefile print VAR=C_DEPFLAGS
> -MMD -MP
> $ make -f gmakefile print VAR=CXX_DEPFLAGS
> -MMD -MP
> $ make -f gmakefile print VAR=FC_DEPFLAGS
> 
> 
> Somehow the configure test code is not being executed:
> 
> $ grep FC_ configure.log
>   Initialized FC_LINKER_FLAGS to []
> Defined make macro "FC_LINKER_SLFLAG" to "-Wl,-rpath,"
>   Defined make macro "FC_VERSION" to "GNU Fortran (GCC) 9.2.1
> 20190827 (Red Hat 9.2.1-1)"
> Defined make macro "MPIFC_SHOW" to "gfortran -m64 -O2 -fPIC
> -Wl,-z,noexecstack -I/usr/include/mpich-x86_64
> -I/usr/lib64/gfortran/modules/mpich -L/usr/lib64/mpich/lib -lmpifort
> -Wl,-rpath -Wl,/usr/lib64/mpich/lib -Wl,--enable-new-dtags -lmpi"
> Defined make macro "FC_DEFINE_FLAG" to "-D"
>   Defined make macro "FC_FLAGS" to " -Wall -ffree-line-length-0
> -Wno-unused-dummy-argument -O0 -g3  "
>   Defined make macro "FC_SUFFIX" to "o"
>   Defined make macro "FC_LINKER" to "mpif90"
>   Defined make macro "FC_LINKER_FLAGS" to "   -Wall
> -ffree-line-length-0 -Wno-unused-dummy-argument -O0 -g3 "
>   Defined make macro "FC_MODULE_FLAG" to "-I"
>   Defined make macro "FC_MODULE_OUTPUT_FLAG" to "-J"
>   Defined make macro "PETSC_FC_INCLUDES" to
> "-I/home/devel/petsc/dev/include
> -I/home/devel/petsc/dev/arch-linux2-c-debug/include"
>   Defined make macro "PETSC_FC_INCLUDES_INSTALL" to
> "-I/home/devel/petsc/dev/include
> -I/home/devel/petsc/dev/arch-linux2-c-debug/include"
> FC_DEFINE_FLAG = -D
> FC_VERSION = GNU Fortran (GCC) 9.2.1 20190827 (Red Hat 9.2.1-1)
> MPIFC_SHOW = gfortran -m64 -O2 -fPIC -Wl,-z,noexecstack
> -I/usr/include/mpich-x86_64 -I/usr/lib64/gfortran/modules/mpich
> -L/usr/lib64/mpich/lib -lmpifort -Wl,-rpath -Wl,/usr/lib64/mpich/lib
> -Wl,--enable-new-dtags -lmpi
> FC_MODULE_OUTPUT_FLAG = -J
> FC_LINKER_FLAGS =-Wall -ffree-line-length-0 -Wno-unused-dummy-argument
> -O0 -g3
> FC_FLAGS =  -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -O0 -g3
> PETSC_FC_INCLUDES_INSTALL = -I/home/devel/petsc/dev/include
> -I/home/devel/petsc/dev/arch-linux2-c-debug/include
> PETSC_FC_INCLUDES = -I/home/devel/petsc/dev/include
> -I/home/devel/petsc/dev/arch-linux2-c-debug/include
> FC_LINKER = mpif90
> FC_MODULE_FLAG = -I
> FC_SUFFIX = o
> FC_LINKER_SLFLAG = -Wl,-rpath,
> 
> 



Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
Let me know if you still want me to test this fix.

On Wed, Sep 25, 2019 at 10:01 AM Balay, Satish  wrote:

> Mark,
>
> Can you try the fix in branch balay/fix-mpiexec-shell-escape and see if it
> works?
>
> Satish
>
> On Wed, 25 Sep 2019, Balay, Satish via petsc-dev wrote:
>
> > Mark,
> >
> > Can you send configure.log from mark/fix-cuda-with-gamg-pintocpu branch?
> >
> > Satish
> >
> > On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> >
> > > I double checked that a clean build of your (master) branch has this
> error
> > > by my branch (mark/fix-cuda-with-gamg-pintocpu), which may include
> stuff
> > > from Barry that is not yet in master, works.
> > >
> > > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
> > > petsc-dev@mcs.anl.gov> wrote:
> > >
> > > >
> > > >
> > > > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> > > > > I am using karlrupp/fix-cuda-streams, merged with master, and I
> get this
> > > > > error:
> > > > >
> > > > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1 --oversubscribe -n
> 1
> > > > > printenv']":
> > > > > Error, invalid argument:  1
> > > > >
> > > > > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I did
> edit
> > > > > the jsrun command but Karl's branch still fails. (SUMMIT was down
> today
> > > > > so there could have been updates).
> > > > >
> > > > > Any suggestions?
> > > >
> > > > Looks very much like a systems issue to me.
> > > >
> > > > Best regards,
> > > > Karli
> > > >
> > >
> >
>
>


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
On Wed, Sep 25, 2019 at 8:51 AM Karl Rupp  wrote:

>
> > I double checked that a clean build of your (master) branch has this
> > error by my branch (mark/fix-cuda-with-gamg-pintocpu), which may include
> > stuff from Barry that is not yet in master, works.
>
> so did master work recently (i.e. right before my branch got merged)?
>

This problem is from master:

10:16 1 (d1fb55d...)|BISECTING ~/petsc-karl$ git bisect bad
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[0542e31a63bf85c93992c9e34728883db83474ac] Large number of fixes,
optimizations for configure, speeds up the configure
10:18 (0542e31...)|BISECTING ~/petsc-karl$



> Best regards,
> Karli
>
>
>
> >
> > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev
> > mailto:petsc-dev@mcs.anl.gov>> wrote:
> >
> >
> >
> > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> >  > I am using karlrupp/fix-cuda-streams, merged with master, and I
> > get this
> >  > error:
> >  >
> >  > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1 --oversubscribe
> -n 1
> >  > printenv']":
> >  > Error, invalid argument:  1
> >  >
> >  > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I
> > did edit
> >  > the jsrun command but Karl's branch still fails. (SUMMIT was down
> > today
> >  > so there could have been updates).
> >  >
> >  > Any suggestions?
> >
> > Looks very much like a systems issue to me.
> >
> > Best regards,
> > Karli
> >
>


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
Mark,

Can you try the fix in branch balay/fix-mpiexec-shell-escape and see if it 
works?

Satish

On Wed, 25 Sep 2019, Balay, Satish via petsc-dev wrote:

> Mark,
> 
> Can you send configure.log from mark/fix-cuda-with-gamg-pintocpu branch?
> 
> Satish
> 
> On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:
> 
> > I double checked that a clean build of your (master) branch has this error
> > by my branch (mark/fix-cuda-with-gamg-pintocpu), which may include stuff
> > from Barry that is not yet in master, works.
> > 
> > On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
> > petsc-dev@mcs.anl.gov> wrote:
> > 
> > >
> > >
> > > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> > > > I am using karlrupp/fix-cuda-streams, merged with master, and I get this
> > > > error:
> > > >
> > > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1 --oversubscribe -n 1
> > > > printenv']":
> > > > Error, invalid argument:  1
> > > >
> > > > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I did edit
> > > > the jsrun command but Karl's branch still fails. (SUMMIT was down today
> > > > so there could have been updates).
> > > >
> > > > Any suggestions?
> > >
> > > Looks very much like a systems issue to me.
> > >
> > > Best regards,
> > > Karli
> > >
> > 
> 



Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Balay, Satish via petsc-dev
Mark,

Can you send configure.log from mark/fix-cuda-with-gamg-pintocpu branch?

Satish

On Wed, 25 Sep 2019, Mark Adams via petsc-dev wrote:

> I double checked that a clean build of your (master) branch has this error
> by my branch (mark/fix-cuda-with-gamg-pintocpu), which may include stuff
> from Barry that is not yet in master, works.
> 
> On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
> 
> >
> >
> > On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> > > I am using karlrupp/fix-cuda-streams, merged with master, and I get this
> > > error:
> > >
> > > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1 --oversubscribe -n 1
> > > printenv']":
> > > Error, invalid argument:  1
> > >
> > > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I did edit
> > > the jsrun command but Karl's branch still fails. (SUMMIT was down today
> > > so there could have been updates).
> > >
> > > Any suggestions?
> >
> > Looks very much like a systems issue to me.
> >
> > Best regards,
> > Karli
> >
> 



[petsc-dev] getting eigen estimates from GAMG to CHEBY

2019-09-25 Thread Mark Adams via petsc-dev
It's been a few years since we lost the ability to cache the eigen
estimates that smoothed aggregation computes to the Chebyshev smoothers. I'd
like to see if we can bring this back.

This is slightly complicated (IMO) by the fact that the smoother PC may not
be Jacobi, but I think it is close enough (and probably an overestimate).
Maybe provide a cheby option such as chebyshev_recompute_eig_est.
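For what it's worth, a sketch of how such an option might look on the command line. The recompute flag is the proposed, hypothetical option name; it is not an existing PETSc flag:

```shell
# Hypothetical usage sketch: reuse the eigen estimate GAMG computed during
# smoothed aggregation instead of recomputing it with the smoother's own PC.
# '-mg_levels_ksp_chebyshev_recompute_eig_est' is the PROPOSED option.
mpiexec -n 4 ./ex56 \
    -pc_type gamg \
    -mg_levels_ksp_type chebyshev \
    -mg_levels_ksp_chebyshev_recompute_eig_est false
```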

What do people think?


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Karl Rupp via petsc-dev



I double checked that a clean build of your (master) branch has this
error, but my branch (mark/fix-cuda-with-gamg-pintocpu), which may include
stuff from Barry that is not yet in master, works.


so did master work recently (i.e. right before my branch got merged)?

Best regards,
Karli





On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev 
mailto:petsc-dev@mcs.anl.gov>> wrote:




On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
 > I am using karlrupp/fix-cuda-streams, merged with master, and I
get this
 > error:
 >
 > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1 --oversubscribe -n 1
 > printenv']":
 > Error, invalid argument:  1
 >
 > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I
did edit
 > the jsrun command but Karl's branch still fails. (SUMMIT was down
today
 > so there could have been updates).
 >
 > Any suggestions?

Looks very much like a systems issue to me.

Best regards,
Karli



Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Mark Adams via petsc-dev
I double checked that a clean build of your (master) branch has this error,
but my branch (mark/fix-cuda-with-gamg-pintocpu), which may include stuff
from Barry that is not yet in master, works.

On Wed, Sep 25, 2019 at 5:26 AM Karl Rupp via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:

>
>
> On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
> > I am using karlrupp/fix-cuda-streams, merged with master, and I get this
> > error:
> >
> > Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1 --oversubscribe -n 1
> > printenv']":
> > Error, invalid argument:  1
> >
> > My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I did edit
> > the jsrun command but Karl's branch still fails. (SUMMIT was down today
> > so there could have been updates).
> >
> > Any suggestions?
>
> Looks very much like a systems issue to me.
>
> Best regards,
> Karli
>


Re: [petsc-dev] error with karlrupp/fix-cuda-streams

2019-09-25 Thread Karl Rupp via petsc-dev




On 9/25/19 11:12 AM, Mark Adams via petsc-dev wrote:
I am using karlrupp/fix-cuda-streams, merged with master, and I get this 
error:


Could not execute "['jsrun -g\\ 1 -c\\ 1 -a\\ 1 --oversubscribe -n 1 
printenv']":

Error, invalid argument:  1

My branch mark/fix-cuda-with-gamg-pintocpu seems to work but I did edit 
the jsrun command but Karl's branch still fails. (SUMMIT was down today 
so there could have been updates).


Any suggestions?


Looks very much like a systems issue to me.

Best regards,
Karli