[petsc-dev] Comparing binary output files in test harness

2021-02-18 Thread Blaise A Bourdin
Hi,

I would like to write better tests for my exodus I/O functions and compare the 
binary files written to disk instead of the text output of the examples.
For instance, would it be possible to do the following:

ex26 -i  -o output1.exo
mpirun -np 2 ex26 -i  -o output2.exo
exodiff output1.exo output2.exo

and then check the exit status of exodiff, or run exodiff between output1.exo 
(or output2.exo) and a stored binary reference?

Regards,
Blaise

-- 
A.K. & Shirley Barton Professor of Mathematics
Adjunct Professor of Mechanical Engineering
Adjunct of the Center for Computation & Technology
Louisiana State University, Lockett Hall Room 344, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276, Web 
http://www.math.lsu.edu/~bourdin
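
For context, PETSc examples embed their tests as comment blocks in the source. The workflow proposed above might hypothetically look like the sketch below; note that the `diff_cmd:` key does *not* exist in the current harness (that is precisely what is being asked for), and the mesh path is an invented placeholder:

```c
/*TEST
   testset:
     requires: exodusii datafilespath
     # Run once in serial and once on 2 ranks, each writing a binary file
     test:
       suffix: exo_serial
       args: -i ${DATAFILESPATH}/meshes/mesh.exo -o output1.exo
     test:
       suffix: exo_par
       nsize: 2
       args: -i ${DATAFILESPATH}/meshes/mesh.exo -o output2.exo
     # Hypothetical key: compare the binary outputs with exodiff instead
     # of diffing the examples' text output
     # diff_cmd: exodiff output1.exo output2.exo
TEST*/
```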



Re: [petsc-dev] Understanding Vecscatter with Kokkos Vecs

2021-02-18 Thread Junchao Zhang
On Thu, Feb 18, 2021 at 4:04 PM Fande Kong  wrote:

>
>
> On Thu, Feb 18, 2021 at 1:55 PM Junchao Zhang 
> wrote:
>
>> VecScatter (i.e., SF; the two are the same thing) setup (building the
>> various index lists and rank lists) is done on the CPU; is1 and is2 must
>> be host data.
>>
>
> Just out of curiosity, can is1 and is2 not be created on a GPU device in
> the first place? That is, is it technically impossible, or have we just
> not implemented them yet?
>
Simply because we do not have an ISCUDA class.


>
> Fande,
>
>
>> When the SF is used to communicate device data, indices are copied to the
>> device.
>>
>> --Junchao Zhang
>>
>>
>> On Thu, Feb 18, 2021 at 11:50 AM Patrick Sanan 
>> wrote:
>>
>>> I'm trying to understand how VecScatters work with GPU-native Kokkos
>>> Vecs.
>>>
>>> Specifically, I'm interested in what will happen in code like in
>>> src/vec/vec/tests/ex22.c,
>>>
>>> ierr = VecScatterCreate(x,is1,y,is2,&ctx);CHKERRQ(ierr);
>>>
>>> (from
>>> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vec/tests/ex22.c#L44
>>> )
>>>
>>> Here, x and y can be set to type KOKKOS using -vec_type kokkos at the
>>> command line. But is1 and is2 are (I think) always
>>> CPU/host data. Assuming that the scatter itself can happen on the GPU,
>>> the indices must make it to the device somehow - are they copied there when
>>> the scatter is created? Is there a way to create the scatter using indices
>>> already on the GPU (Maybe using SF more directly)?
>>>
>>>
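
To summarize the answer in code: the index sets are created as ordinary host data, and the SF copies the indices to the device internally when the vectors are device-native. A minimal sketch in the document's `ierr`/`CHKERRQ` style (assuming a PETSc build configured with Kokkos; sizes and indices are illustrative):

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x, y;
  IS             is1, is2;
  VecScatter     ctx;
  PetscInt       idx[] = {0, 1, 2, 3};
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* Device-native vectors: select with -vec_type kokkos at runtime */
  ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
  ierr = VecSetSizes(x, PETSC_DECIDE, 4);CHKERRQ(ierr);
  ierr = VecSetFromOptions(x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &y);CHKERRQ(ierr);

  /* Index sets are plain host data; there is no ISCUDA/ISKOKKOS class */
  ierr = ISCreateGeneral(PETSC_COMM_SELF, 4, idx, PETSC_COPY_VALUES, &is1);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_SELF, 4, idx, PETSC_COPY_VALUES, &is2);CHKERRQ(ierr);

  /* Setup (index/rank lists) happens on the CPU; when the scatter later
     moves device data, the indices are copied to the device internally */
  ierr = VecScatterCreate(x, is1, y, is2, &ctx);CHKERRQ(ierr);
  ierr = VecScatterBegin(ctx, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

  ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);
  ierr = ISDestroy(&is1);CHKERRQ(ierr);
  ierr = ISDestroy(&is2);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```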


Re: [petsc-dev] Understanding Vecscatter with Kokkos Vecs

2021-02-18 Thread Fande Kong
On Thu, Feb 18, 2021 at 1:55 PM Junchao Zhang 
wrote:

> VecScatter (i.e., SF; the two are the same thing) setup (building the
> various index lists and rank lists) is done on the CPU; is1 and is2 must
> be host data.
>

Just out of curiosity, can is1 and is2 not be created on a GPU device in
the first place? That is, is it technically impossible, or have we just not
implemented them yet?

Fande,


> When the SF is used to communicate device data, indices are copied to the
> device.
>
> --Junchao Zhang
>
>
> On Thu, Feb 18, 2021 at 11:50 AM Patrick Sanan 
> wrote:
>
>> I'm trying to understand how VecScatters work with GPU-native Kokkos
>> Vecs.
>>
>> Specifically, I'm interested in what will happen in code like in
>> src/vec/vec/tests/ex22.c,
>>
>> ierr = VecScatterCreate(x,is1,y,is2,&ctx);CHKERRQ(ierr);
>>
>> (from
>> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vec/tests/ex22.c#L44
>> )
>>
>> Here, x and y can be set to type KOKKOS using -vec_type kokkos at the
>> command line. But is1 and is2 are (I think) always
>> CPU/host data. Assuming that the scatter itself can happen on the GPU,
>> the indices must make it to the device somehow - are they copied there when
>> the scatter is created? Is there a way to create the scatter using indices
>> already on the GPU (Maybe using SF more directly)?
>>
>>


Re: [petsc-dev] Understanding Vecscatter with Kokkos Vecs

2021-02-18 Thread Junchao Zhang
VecScatter (i.e., SF; the two are the same thing) setup (building the
various index lists and rank lists) is done on the CPU; is1 and is2 must be
host data.
When the SF is used to communicate device data, indices are copied to the
device.

--Junchao Zhang


On Thu, Feb 18, 2021 at 11:50 AM Patrick Sanan 
wrote:

> I'm trying to understand how VecScatters work with GPU-native Kokkos Vecs.
>
> Specifically, I'm interested in what will happen in code like in
> src/vec/vec/tests/ex22.c,
>
> ierr = VecScatterCreate(x,is1,y,is2,&ctx);CHKERRQ(ierr);
>
> (from
> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vec/tests/ex22.c#L44)
>
> Here, x and y can be set to type KOKKOS using -vec_type kokkos at the
> command line. But is1 and is2 are (I think) always
> CPU/host data. Assuming that the scatter itself can happen on the GPU, the
> indices must make it to the device somehow - are they copied there when the
> scatter is created? Is there a way to create the scatter using indices
> already on the GPU (Maybe using SF more directly)?
>
>


[petsc-dev] Understanding Vecscatter with Kokkos Vecs

2021-02-18 Thread Patrick Sanan
I'm trying to understand how VecScatters work with GPU-native Kokkos Vecs. 

Specifically, I'm interested in what will happen in code like in 
src/vec/vec/tests/ex22.c, 

ierr = VecScatterCreate(x,is1,y,is2,&ctx);CHKERRQ(ierr);

(from https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vec/tests/ex22.c#L44 
)

Here, x and y can be set to type KOKKOS using -vec_type kokkos at the command 
line. But is1 and is2 are (I think) always
CPU/host data. Assuming that the scatter itself can happen on the GPU, the 
indices must make it to the device somehow - are they copied there when the 
scatter is created? Is there a way to create the scatter using indices already 
on the GPU (Maybe using SF more directly)?



[petsc-dev] RSE and Postdoc openings at CU Boulder

2021-02-18 Thread Jed Brown
My research group has openings for a Research Software Engineer and a Postdoc. 
Details and application links below; feel free to email me with questions. 


## Research Software Engineer

CU Boulder’s PSAAP Multidisciplinary Simulation Center for Micromorphic 
Multiphysics Porous and Particulate Materials Simulations Within Exascale 
Computing Workflows has an opening for a *Research Software Engineer* to 
co-lead development of robust, extensible open source software for 
extreme-scale simulation of large-deformation composite 
poro-elasto-visco-plastic media across a broad range of regimes with 
experimental validation and coordination with micromorphic multiscale models.

Successful applicants will have strong written and verbal communication skills 
and an interest in working with an interdisciplinary team to apply the following 
to real-world problems:

* collaborative software development and devops (Git, continuous integration, 
etc.);
* maintainable, high-performance programming techniques for CPUs and GPUs;
* finite element and material-point discretizations;
* computational mechanics/inelasticity;
* parallel algebraic solvers such as PETSc; and
* scalable data-intensive computing.

This position can start immediately and is remote-friendly, especially during 
the pandemic.

https://jobs.colorado.edu/jobs/JobDetail/Research-Associate/28703


## Postdoc

We also have an immediate opening for a *Postdoc* to conduct research in 
collaboration with the DOE Exascale Computing Project’s co-design Center for 
Efficient Exascale Discretization (CEED) on the development of robust and 
efficient methods for high order/compatible PDE discretization and multilevel 
solvers, including deployment in open source libraries. The project is 
especially interested in strategies to provide performance portability on 
emerging architectures and novel parallelization techniques to improve time to 
solution in the strong scaling limit. The methods will be applied in a variety 
of application areas, including sustainable energy and geophysics.

Successful applicants will have strong written and verbal communication skills 
to collaborate with a distributed inter-disciplinary team and disseminate 
results via publications and presentations, as well as an interest in research 
and development of high-quality community software infrastructure in areas 
including, but not limited to:

* element-based PDE discretization;
* high-performance computing on emerging architectures, including CPUs and GPUs;
* scalable algebraic solvers;
* applications in fluid and solid mechanics; and
* data-intensive PDE workflows.

This position can start immediately and is remote-friendly, especially during 
the pandemic.

https://jobs.colorado.edu/jobs/JobDetail/PostDoctoral-Associate/28691


The University of Colorado Boulder is committed to building a culturally 
diverse community of faculty, staff, and students dedicated to contributing to 
an inclusive campus environment. We are an Equal Opportunity employer. We offer 
a competitive salary and a comprehensive benefits package.


Re: [petsc-dev] TSSetConvergedReason(ts,TS_CONVERGED_USER);

2021-02-18 Thread Mark Adams
OK, makes sense. I didn't know monitors were called before the step.
Thanks,

On Thu, Feb 18, 2021 at 7:25 AM Matthew Knepley  wrote:

> Stefano is right. Changes in convergence should probably go in TSPostStep.
>
>   Thanks,
>
>  Matt
>
> On Thu, Feb 18, 2021 at 7:22 AM Stefano Zampini 
> wrote:
>
>> Mark
>>
>> monitors are not supposed to change the TS. You can think of monitors
>> as 'const' methods of the TS. Also, TSMonitor is called at the beginning
>> of each step; see here
>> https://gitlab.com/petsc/petsc/-/blob/master/src/ts/interface/ts.c#L4169
>>
>>
>> Il giorno gio 18 feb 2021 alle ore 15:16 Mark Adams  ha
>> scritto:
>>
>>>  TSSetConvergedReason(ts,TS_CONVERGED_USER);
>>>
>>> does not seem to stop the iteration in a user monitor function. I have
>>> verified that it works from a post step method. Is this intentional?
>>>
>>> Mark
>>>
>>
>>
>> --
>> Stefano
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>
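
The recommended pattern, then, is to set the converged reason from a post-step callback rather than a monitor. A minimal sketch in the document's `ierr`/`CHKERRQ` style; `MyStopCondition` is a hypothetical placeholder for the user's actual stopping criterion:

```c
#include <petscts.h>

/* Hypothetical user criterion; replace with the real test */
static PetscBool MyStopCondition(TS ts)
{
  (void)ts;
  return PETSC_TRUE;
}

/* Post-step callbacks run after each step, so changing the converged
   reason here takes effect, unlike in a monitor */
static PetscErrorCode MyPostStep(TS ts)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  if (MyStopCondition(ts)) {
    ierr = TSSetConvergedReason(ts, TS_CONVERGED_USER);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

/* Registration, somewhere after TSCreate():
   ierr = TSSetPostStep(ts, MyPostStep);CHKERRQ(ierr); */
```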


Re: [petsc-dev] TSSetConvergedReason(ts,TS_CONVERGED_USER);

2021-02-18 Thread Matthew Knepley
Stefano is right. Changes in convergence should probably go in TSPostStep.

  Thanks,

 Matt

On Thu, Feb 18, 2021 at 7:22 AM Stefano Zampini 
wrote:

> Mark
>
> monitors are not supposed to change the TS. You can think of monitors
> as 'const' methods of the TS. Also, TSMonitor is called at the beginning
> of each step; see here
> https://gitlab.com/petsc/petsc/-/blob/master/src/ts/interface/ts.c#L4169
>
>
> Il giorno gio 18 feb 2021 alle ore 15:16 Mark Adams  ha
> scritto:
>
>>  TSSetConvergedReason(ts,TS_CONVERGED_USER);
>>
>> does not seem to stop the iteration in a user monitor function. I have
>> verified that it works from a post step method. Is this intentional?
>>
>> Mark
>>
>
>
> --
> Stefano
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-dev] TSSetConvergedReason(ts,TS_CONVERGED_USER);

2021-02-18 Thread Stefano Zampini
Mark

Monitors are not supposed to change the TS; you can think of monitors as
'const' methods of the TS. Also, TSMonitor is called at the beginning of
each step; see here
https://gitlab.com/petsc/petsc/-/blob/master/src/ts/interface/ts.c#L4169


Il giorno gio 18 feb 2021 alle ore 15:16 Mark Adams  ha
scritto:

>  TSSetConvergedReason(ts,TS_CONVERGED_USER);
>
> does not seem to stop the iteration in a user monitor function. I have
> verified that it works from a post step method. Is this intentional?
>
> Mark
>


-- 
Stefano


[petsc-dev] TSSetConvergedReason(ts,TS_CONVERGED_USER);

2021-02-18 Thread Mark Adams
 TSSetConvergedReason(ts,TS_CONVERGED_USER);

does not seem to stop the iteration when called from a user monitor function.
I have verified that it works from a post-step method. Is this intentional?

Mark