Re: [petsc-users] issue of MatCreateDense in the CUDA codes

2024-10-02 Thread Matthew Knepley
On Wed, Oct 2, 2024 at 6:11 AM 刘浪天 via petsc-users 
wrote:

> I cannot declare everything as PetscScalar. My strategy is to compute the
> elements of the matrix on the GPU block by block and copy them back to the
> CPU, and finally to compute the eigenvalues using SLEPc on the CPU.
>

Then you have to either

a) have a temporary array where you copy the GPU results to the CPU type, or

b) if you know the types are the same size, you can cast the pointer.
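A minimal sketch of both options (names are placeholders, assuming PETSc was configured --with-scalar-type=complex so PetscScalar and cuDoubleComplex are both 16-byte double-precision complex types):

```
#include <petscsys.h>
#include <cuComplex.h>

/* Sketch only: turn n host-side cuDoubleComplex values into a PetscScalar
 * array usable by MatCreateDense(). */
static PetscErrorCode HostBufferToPetscScalar(const cuDoubleComplex *h_buf, PetscInt n, PetscScalar **out)
{
  PetscFunctionBeginUser;
  /* (a) copy into a temporary PetscScalar array */
  PetscCall(PetscMalloc1(n, out));
  for (PetscInt i = 0; i < n; i++) (*out)[i] = PetscCMPLX(h_buf[i].x, h_buf[i].y);
  /* (b) alternatively, since both layouts are two doubles, just cast:
   *     PetscScalar *alias = (PetscScalar *)h_buf;   (no copy, nothing extra to free) */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```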

  Thanks,

 Matt


>  Langtian Liu Institute for Theoretical Physics,
> Justus-Liebig-University Giessen Heinrich-Buff-Ring 16, 35392 Giessen
> Germany email: langtian@icloud.com Tel: (+49)641 99 33342
>
> On Oct 2, 2024, at 11:31 AM, Jose E. Roman  wrote:
>
>
> Does it work if you declare everything as PetscScalar instead of
> cuDoubleComplex?
>
> On 2 Oct 2024, at 11:23, 刘浪天  wrote:
>
> Hi Jose,
>
> Since my matrix is too large, I cannot create the Mat on the GPU. So I still
> want to create and compute the eigenvalues of this matrix on CPU using
> SLEPc.
>
> Best,
>  Langtian Liu Institute for Theoretical Physics,
> Justus-Liebig-University Giessen Heinrich-Buff-Ring 16, 35392 Giessen
> Germany email: langtian@icloud.com Tel: (+49)641 99 33342
>
> On Oct 2, 2024, at 11:18 AM, Jose E. Roman  wrote:
>
>
> For the CUDA case you should use MatCreateDenseCUDA() instead of
> MatCreateDense(). With this you pass a pointer with the data on the GPU
> memory. But I guess "new cuDoubleComplex[dim*dim]" is allocating on the
> CPU, you should use cudaMalloc() instead.
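A minimal sketch of that suggestion (dim is a placeholder from this thread; error checking of the CUDA call is omitted), assuming a GPU-enabled PETSc build with complex scalars:

```
PetscScalar *d_kernel = NULL; /* device pointer */
cudaMalloc((void **)&d_kernel, (size_t)dim * dim * sizeof(PetscScalar));
/* ... fill d_kernel on the GPU, e.g. with cuBLAS ... */
Mat kernel;
PetscCall(MatCreateDenseCUDA(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, dim, dim, d_kernel, &kernel));
```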
>
> Jose
>
>
> On 2 Oct 2024, at 10:56, 刘浪天 via petsc-users
> wrote:
>
> Hi all,
>
> I am using PETSc and SLEPc to solve the Faddeev equation for baryons. I
> encountered a problem with the function MatCreateDense when changing from
> CPU-only to CPU-GPU computations.
> At first, I wrote the code as a purely CPU computation in the following way,
> and it works.
> ```
> Eigen::MatrixXcd H_KER;
> Eigen::MatrixXcd G0;
> printf("\nCompute the propagator matrix.\n");
> prop_matrix_nucleon_sc_av(Mn, pp_nodes, cos1_nodes);
> printf("\nCompute the propagator matrix done.\n");
> printf("\nCompute the kernel matrix.\n");
> bse_kernel_nucleon_sc_av(Mn, pp_nodes, pp_weights, cos1_nodes,
> cos1_weights);
> printf("\nCompute the kernel matrix done.\n");
> printf("\nCompute the full kernel matrix by multiplying kernel and
> propagator matrix.\n");
> MatrixXcd kernel_temp = H_KER * G0;
> printf("\nCompute the full kernel matrix done.\n");
>
> // Solve the eigen system with SLEPc
> printf("\nSolve the eigen system in the rest frame.\n");
> // Get the size of the Eigen matrix
> int nRows = (int) kernel_temp.rows();
> int nCols = (int) kernel_temp.cols();
> // Create PETSc matrix and share the data of kernel_temp
> Mat kernel;
> PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE,
> nRows, nCols, kernel_temp.data(), &kernel));
> PetscCall(MatAssemblyBegin(kernel, MAT_FINAL_ASSEMBLY));
> PetscCall(MatAssemblyEnd(kernel, MAT_FINAL_ASSEMBLY));
> ```
> Now I compute the propagator and kernel matrices on the GPU and then
> compute the largest eigenvalues on the CPU using SLEPc, as shown below.
> ```
> cuDoubleComplex *h_propmat;
> cuDoubleComplex *h_kernelmat;
> int dim = EIGHT * NP * NZ;
> printf("\nCompute the propagator matrix.\n");
> prop_matrix_nucleon_sc_av_cuda(Mn, pp_nodes.data(), cos1_nodes.data());
> printf("\nCompute the propagator matrix done.\n");
> printf("\nCompute the kernel matrix.\n");
> kernel_matrix_nucleon_sc_av_cuda(Mn, pp_nodes.data(), pp_weights.data(),
> cos1_nodes.data(), cos1_weights.data());
> printf("\nCompute the kernel matrix done.\n");
> printf("\nCompute the full kernel matrix by multiplying kernel and
> propagator matrix.\n");
> // Map the raw arrays to Eigen matrices (column-major order)
> auto *h_kernel_temp = new cuDoubleComplex [dim*dim];
>
> matmul_cublas_cuDoubleComplex(h_kernelmat,h_propmat,h_kernel_temp,dim,dim,dim);
> printf("\nCompute the full kernel matrix done.\n");
>
> // Solve the eigen system with SLEPc
> printf("\nSolve the eigen system in the rest frame.\n");
> int nRows = dim;
> int nCols = dim;
> // Create PETSc matrix and share the data of kernel_temp
> Mat kernel;
> auto* h_kernel = (std::complex<double>*)(h_kernel_temp);
> PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE,
> nRows, nCols, h_kernel_temp, &kernel));
> PetscCall(MatAssemblyBegin(kernel, MAT_FINAL_ASSEMBLY));
> PetscCall(MatAssemblyEnd(kernel, MAT_FINAL_ASSEMBLY));
> ```
> But in this case, the compiler told me that the MatCreateDense function
> expects the data pointer to be of type "thrust::complex" instead of
> "std::complex".
> I am sure I configured and installed PETSc as CPU-only, without GPU support,
> and this code is written in a host function.
> Why does the function change its behavior? Did you also meet this problem
> when writing CUDA code, and how did you solve it?
> I tried to copy the data to a new thrust::complex matrix but thi

Re: [petsc-users] ISView() in PETSc 3.22

2024-10-01 Thread Matthew Knepley
On Tue, Oct 1, 2024 at 8:29 PM Adrian Croucher 
wrote:

> On 2/10/24 11:39 am, Adrian Croucher wrote:
>
> > So I think I will need to disable the compression in my code. Is there
> > a function call I can use to do that, to make sure it's always done
> > without users needing to pass -is_view_compress 0?
> >
> Looks like there is no specific function to do this, but I can use
> PetscOptionsSetValue().
>
> Maybe worth having something about this new compression stuff in the
> PETSc 3.22 change log 
> (https://urldefense.us/v3/__https://petsc.org/release/changes/322/__;!!G_uCfscf7eWS!YT4Ki8iXApGvjhAbuUGKbRNIKj2t-ysAWb_4Us3bGuW6-CR7e5bklc4UHe09hsB0RZeuLqSuTgHGswrT9Rlk$
>  ), in case
> it trips anyone else up?
>

Yep, I will put it in.

  Thanks,

Matt


> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YT4Ki8iXApGvjhAbuUGKbRNIKj2t-ysAWb_4Us3bGuW6-CR7e5bklc4UHe09hsB0RZeuLqSuTgHGs0Cnxtn3$
  



Re: [petsc-users] ISView() in PETSc 3.22

2024-10-01 Thread Matthew Knepley
On Mon, Sep 30, 2024 at 10:15 PM Adrian Croucher 
wrote:

> hi, I am testing my (Fortran) code on PETSc 3.22 and have got it to
> build. However I am getting some unusual new behaviour when I write an
> IS to an HDF5 file using ISView().
>
> The attached minimal example shows the issue. It creates a simple
> 10-element IS and writes it to HDF5. With previous versions of PETSc
> this would give me a 10x1 dataset containing the values 0 - 9, as expected.
>
> When I run it with PETSc 3.22 (in serial), I again get the expected
> values written on stdout, so it looks like the IS itself is correct. But
> in the HDF5 file I get a 1x3 dataset containing the values (10,1,0).
>
> Has something changed here?
>

Yes. We now compress IS data sets by default. You can turn it off using
-is_view_compress 0. I am not sure
what the best way to manage this is, but it makes a huge difference in file
size for checkpointing.
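A minimal sketch of forcing the old uncompressed layout from code, so users do not have to pass the option themselves (call it before the ISView(); the Fortran binding of PetscOptionsSetValue() should work the same way):

```
PetscCall(PetscOptionsSetValue(NULL, "-is_view_compress", "0")); /* disable IS compression in HDF5 output */
```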

  Thanks,

 Matt


> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ejndUdBS4n9snwyvw_Y8jvSAdi6ga-LyUR7oasMzcbtNb3GQ_PbPcBenOALwXgN3HaoVuMUgTtv9bCTCkhv7$
  



Re: [petsc-users] Questions DMPlex

2024-09-30 Thread Matthew Knepley
On Mon, Sep 30, 2024 at 6:50 AM Karthikeyan Chockalingam - STFC UKRI via
petsc-users  wrote:

> Hi,
>
>
>
> We have been using PETSc’s block version of the AIJ matrix format to
> implement fully coupled multiphysics problems using finite elements.
>
>
>
> Are there specific advantages to moving toward DMPlex for finite
> element-based coupled multi-physics implementation?
>

DMPlex is intended to help manage unstructured grids. It can read/write
meshes and functions over them, lay out data over the grid
in parallel, compute local-to-global maps, and modify meshes; it also provides
some tools for assembling functions and operators.
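As a rough illustration of that workflow (the file name is a placeholder), a minimal DMPlex setup might look like:

```
DM dm, dmDist;

PetscCall(DMPlexCreateFromFile(PETSC_COMM_WORLD, "mesh.exo", "mesh", PETSC_TRUE, &dm)); /* read an unstructured mesh */
PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));                                      /* partition it over the ranks */
if (dmDist) {
  PetscCall(DMDestroy(&dm));
  dm = dmDist;
}
PetscCall(DMSetFromOptions(dm));
```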


> Can you still access Hypre?
>

Yes. Hypre interacts with the solvers. DMPlex is only there to help in
assembly.


> Does the format support running on GPUs?
>

Running the solver on GPUs is unchanged. However, for assembling on GPUs,
there are at least two options:

1) Do everything yourself using the information provided by DMPlex. This is
what Mark does in dmplexland.c for
assembling the Landau operator. It is hard, but some people can do it.

2) Use a library. LibCEED is a library for assembling on the GPU, and
DMPlex provides support for interacting with it. There
are DMPlex examples in the LibCEED distribution. This is what I do for
assembling on GPUs.

  Thanks,

 Matt


> Thank you.
>
>
>
> Kind regards,
>
> Karthik.
>
>
>
>
>
>
>
> --
>
> *Karthik Chockalingam, Ph.D.*
>
> Senior Research Software Engineer
>
> High Performance Systems Engineering Group
>
> Hartree Centre | Science and Technology Facilities Council
>
> karthikeyan.chockalin...@stfc.ac.uk
>
>
>
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bEvuhZjmSUukrYMa5jGF9VlNNxVETTLlIXH3aoOO8d6G9TdvzgX4lzosRefIY55K1fOFTKHxQQ4s_dvd0Yga$
  



Re: [petsc-users] Ghost particles for DMSWARM (or similar)

2024-09-26 Thread Matthew Knepley
On Thu, Sep 26, 2024 at 7:18 PM MIGUEL MOLINOS PEREZ  wrote:

> I see, you mean:
>
> Create the ghost particles at the local cell with the same properties as
> particle 1 (duplicate the original particle) but a different value of
> DMSwarmField_rank. Then call DMSwarmMigrate(*, PETSC_FALSE) so we do the
> migration and delete the local copies of particle 1. Right?
>

Yep. I think it will work, from what I know about BASIC.

  Thanks,

 Matt


> Thanks,
> Miguel
>
> On Sep 26, 2024, at 11:09 PM, Matthew Knepley  wrote:
>
> On Thu, Sep 26, 2024 at 11:20 AM MIGUEL MOLINOS PEREZ 
> wrote:
>
>> Thank you Matt.
>>
>> Okay, let me have a careful look at the DMSwarmMigrate_Push_Basic
>> implementation to see if there is some workaround.
>>
>> The idea of adding new particles is interesting. However, in that case,
>> we need to initialize the new (ghost) particles using the fields of the
>> “real” particle, right? This can be done using something like:
>>
>> VecGhostUpdateBegin(Vec globalout, InsertMode ADD_VALUES, ScatterMode SCATTER_REVERSE);
>> VecGhostUpdateEnd(Vec globalout, InsertMode ADD_VALUES, ScatterMode SCATTER_REVERSE);
>>
>> for the particle fields (?).
>>
>
> I think we can just copy from the local particle. For example, suppose I
> decide that particle 1 should go to rank 5, 12, and 27. Then
> I first set p1.rank = 5, then I add two new particles with the same values
> as particle 1, but with rank = 12 and 27. Then when I call migrate, it will
> move these three particles to the correct processes, and delete the
> original particles and the copies from the local set.
>
>   Thanks,
>
>  Matt
>
>
>> Thanks,
>> Miguel
>>
>>
>> On Sep 26, 2024, at 3:53 PM, Matthew Knepley  wrote:
>>
>> On Thu, Sep 26, 2024 at 6:31 AM MIGUEL MOLINOS PEREZ 
>> wrote:
>>
>>> Hi Matt et al,
>>>
>>> I’ve been working on the scheme that you proposed to create ghost
>>> particles (atoms in my case), and it works! With a couple of caveats:
>>> -1º In general the overlap particles will migrate from their own rank
>>> to more than one neighbor rank; this is especially relevant for those
>>> located close to the corners

Re: [petsc-users] Ghost particles for DMSWARM (or similar)

2024-09-26 Thread Matthew Knepley
On Thu, Sep 26, 2024 at 11:20 AM MIGUEL MOLINOS PEREZ 
wrote:

> Thank you Matt.
>
> Okay, let me have a careful look at the DMSwarmMigrate_Push_Basic
> implementation to see if there is some workaround.
>
> The idea of adding new particles is interesting. However, in that case, we
> need to initialize the new (ghost) particles using the fields of the
> “real” particle, right? This can be done using something like:
>
> VecGhostUpdateBegin(Vec globalout, InsertMode ADD_VALUES, ScatterMode SCATTER_REVERSE);
> VecGhostUpdateEnd(Vec globalout, InsertMode ADD_VALUES, ScatterMode SCATTER_REVERSE);
>
> for the particle fields (?).
>

I think we can just copy from the local particle. For example, suppose I
decide that particle 1 should go to rank 5, 12, and 27. Then
I first set p1.rank = 5, then I add two new particles with the same values
as particle 1, but with rank = 12 and 27. Then when I call migrate, it will
move these three particles to the correct processes, and delete the
original particles and the copies from the local set.
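A rough sketch of that recipe (sw, p, r0, r1, r2 are placeholders; it assumes the migration rank field DMSwarmField_rank is stored as PetscInt and that the user-registered fields of particle p are copied by hand):

```
PetscInt nloc, *rank;

PetscCall(DMSwarmGetLocalSize(sw, &nloc));
PetscCall(DMSwarmAddNPoints(sw, 2));                 /* two extra copies of particle p, at slots nloc and nloc+1 */
/* ... copy the user-registered fields of particle p into slots nloc and nloc+1 ... */
PetscCall(DMSwarmGetField(sw, DMSwarmField_rank, NULL, NULL, (void **)&rank));
rank[p]        = r0;                                 /* first target rank */
rank[nloc]     = r1;                                 /* second target rank */
rank[nloc + 1] = r2;                                 /* third target rank */
PetscCall(DMSwarmRestoreField(sw, DMSwarmField_rank, NULL, NULL, (void **)&rank));
PetscCall(DMSwarmMigrate(sw, PETSC_TRUE));           /* PETSC_TRUE removes the sent particles from the local set,
                                                        matching the description above; the thread also discusses
                                                        using PETSC_FALSE */
```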

  Thanks,

 Matt


> Thanks,
> Miguel
>
>
> On Sep 26, 2024, at 3:53 PM, Matthew Knepley  wrote:
>
> On Thu, Sep 26, 2024 at 6:31 AM MIGUEL MOLINOS PEREZ 
> wrote:
>
>> Hi Matt et al,
>>
>> I’ve been working on the scheme that you proposed to create ghost
>> particles (atoms in my case), and it works! With a couple of caveats:
>> -1º In general the overlap particles will migrate from their own rank
>> to more than one neighbor rank; this is especially relevant for those
>> located close to the corners. Therefore, you'll need to call DMSwarmMigrate
>> several times (27 times for 3D cells) during the migration process.
>>
>
> That is terrible. Let's just fix DMSwarmMigrate to have a mode that sends
> the particle to all overlapping neighbors at once. It can't be that hard.
>
>
>> -2º You need to set DMSWARM_MIGRATE_BASIC. Otherwise the proposed
>> algorithm will not work at all!
>>
>
> Oh, I should have thought of that. Sorry.
>
> I can help code up that extension. Can you take a quick look at the BASIC
> code? Right now, we just use the rank attached to the particle
> to send it. We could have an array of ranks, but that seems crazy, and
> would blow up particle storage. How about just adding new particles
> with the other ranks right before migration?
>
&

Re: [petsc-users] Ghost particles for DMSWARM (or similar)

2024-09-26 Thread Matthew Knepley
On Thu, Sep 26, 2024 at 6:31 AM MIGUEL MOLINOS PEREZ  wrote:

> Hi Matt et al,
>
> I’ve been working on the scheme that you proposed to create ghost
> particles (atoms in my case), and it works! With a couple of caveats:
> -1º In general the overlap particles will migrate from their own rank
> to more than one neighbor rank; this is especially relevant for those
> located close to the corners. Therefore, you'll need to call DMSwarmMigrate
> several times (27 times for 3D cells) during the migration process.
>

That is terrible. Let's just fix DMSwarmMigrate to have a mode that sends
the particle to all overlapping neighbors at once. It can't be that hard.


> -2º You need to set DMSWARM_MIGRATE_BASIC. Otherwise the proposed
> algorithm will not work at all!
>

Oh, I should have thought of that. Sorry.

I can help code up that extension. Can you take a quick look at the BASIC
code? Right now, we just use the rank attached to the particle
to send it. We could have an array of ranks, but that seems crazy, and
would blow up particle storage. How about just adding new particles
with the other ranks right before migration?

   Thanks,

 Matt


> Hope this helps to other folks!
>
> I have a follow-up question about periodic bcc in this context; should I
> open a new thread or keep posting here?
>
> Thanks,
> Miguel
>
> On Aug 7, 2024, at 4:22 AM, MIGUEL MOLINOS PEREZ  wrote:
>
> Thanks Matt, I think I'll start by making a small program as a proof of
> concept. Then, if it works I'll implement it in my code and I'll be happy
> to share it too :-)
>
> Miguel
>
> On Aug 4, 2024, at 3:30 AM, Matthew Knepley  wrote:
>
> On Fri, Aug 2, 2024 at 7:15 PM MIGUEL MOLINOS PEREZ 
> wrote:
>
>> Thanks again Matt, that makes a lot more sense !!
>>
>> Just to check that we are on the same page. You are saying:
>>
>> 1. Create a field called "owner rank" for each particle.
>>
>> 2. Identify the phantom particles and modify the internal variable
>> defined by the DMSwarmField_rank variable.
>>
>> 3. Call DMSwarmMigrate(*,PETSC_FALSE), do the calculations using the new
>> local vector including the ghost particles.
>>
>> 4. Then, once the calculations are done, rename the DMSwarmField_rank
>> variable using the "owner rank" variable and call
>> DMSwarmMigrate(*,PETSC_FALSE) once again.
>>
>
> I don't think we need this last step. We can just remove those ghost
> particles for the next step I think.
>
>   Thanks,
>
>  Matt
>
>
>> Thank you,
>> Miguel
>>
>>
>> On Aug 2, 2024, at 5:33 PM, Matthew Knepley  wrote:
>>
>> On Fri, Aug 2, 2024 at 11:15 AM MIGUEL MOLINOS PEREZ 
>> wrote:
>>
>>> Thank you Matt for your time,
>>>
>>> What you describe seems to me the ideal approach.
>>>
>>> 1) Add a particle field 'ghost' that identifies ghost vs owned
>>> particles. I think it needs options OWNED, OVERLAP, and GHOST
>>>
>>> This means, locally, I need to allocate Nlocal + ghost particles
>>> (duplicated) for my model?
>>>
>>
>> I would do it another way. I would allocate the particles with no overlap
>> and set them up. Then I would identify the halo particles, mark them as
>> OVERLAP, call DMSwarmMigrate(), and mark the migrated particles as GHOST,
>> then unmark the OVERLAP particles. Shoot! That marking will not work since
>> we cannot tell the difference between particles we received and particles
>> we sent. Okay, instead of the `ghost` field we need an `owner rank` field.
>> So then we
>>
>> 1) Setup the non-overlapping particles
>>
>> 2) Identify the halo particles
>>
>> 3) Change the `rank`, but not the `owner rank`
>>
>> 4) Call DMSwarmMigrate()
>>
>> Now we can identify ghost particles by the `owner rank`
>>
>>
>>> If that is so, how do we do the communication between the ghost particles
>>> living in rank i and their “real” counterparts in rank j?
>>>
>>> Also, as an alternative, what about:
>>> 1) Use an IS tag which contains, for each rank, a list of the global
>>> index of the neighbors particles outside of the rank.
>>> 2) Use VecCreateGhost to create a new vector which contains extra local
>>> space for the ghost components of the vector.
>>> 3) Use VecScatterCreate, VecScatterBegin, and VecScatterEnd to do the
>>> transference of data between a vector obtained with
>>> DMSwarmCreateGlobalVectorFromField
>>> 4) Do necessary computations using 

Re: [petsc-users] Mat indices for DMPlex jacobian

2024-09-23 Thread Matthew Knepley
On Mon, Sep 23, 2024 at 5:33 AM Matteo Semplice via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Dear petsc,
>
>  I need to hand-code a Jacobian and I can't figure out how to
> translate the DMPlex points/fields to matrix indices.
>
> My DMPlex has a section with m fields per cell (which makes for n>m dof
> per cell since some fields are vector-valued). Say I want to insert an nxn
> block for the row corresponding to cell c and the column of its neighbour
> d. I guess that I should call either MatSetValues/MatSetValuesLocal or
> the blocked variants, but how do I find the row/col indices to pass in,
> starting from the c/d DMPlex points? And, while I am at it, which
> MatSetValues version (standard/local/blocked) is best?
>
> I looked for a PETSc example but failed to find one: if there is one, can
> you just point me to it?
>

1. In FEM, the main unit of indices is usually the closure, so we provide

  
https://urldefense.us/v3/__https://petsc.org/main/manualpages/DMPlex/DMPlexGetClosureIndices/__;!!G_uCfscf7eWS!fDvGoz5OloH7xryMIAIqyYsslt8U4HqegW7HeYDlwRPbkTloMU6k0kze8NlG7S2DxDuY2nUXRQcBIiuqLU1o$
 

which is what is used inside of

  
https://urldefense.us/v3/__https://petsc.org/main/manualpages/DMPlex/DMPlexMatSetClosure/__;!!G_uCfscf7eWS!fDvGoz5OloH7xryMIAIqyYsslt8U4HqegW7HeYDlwRPbkTloMU6k0kze8NlG7S2DxDuY2nUXRQcBIi82N3yY$
 

2. These indices are calculated using the global section, so you can just

  DMGetGlobalSection(dm, &gs);

and then use PetscSectionGetDof() and PetscSectionGetOffset(), knowing that
off-process values are encoded as -(off + 1), so you need to convert those.
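A minimal sketch combining both points for a cell c (dm, J, c, and elemMat are placeholders): either let DMPlexMatSetClosure() compute the indices, or recover the global rows from the global section by hand.

```
/* Option 1: insert the dense closure (element) matrix for cell c directly;
   NULL picks up the DM's default local and global sections. */
PetscCall(DMPlexMatSetClosure(dm, NULL, NULL, J, c, elemMat, ADD_VALUES));

/* Option 2: compute the global row indices of cell c by hand. */
PetscSection gs;
PetscInt     dof, off;
PetscCall(DMGetGlobalSection(dm, &gs));
PetscCall(PetscSectionGetDof(gs, c, &dof));
PetscCall(PetscSectionGetOffset(gs, c, &off));
if (off < 0) off = -(off + 1); /* off-process values are encoded as -(value + 1) */
if (dof < 0) dof = -(dof + 1);
/* the rows for cell c are off, off+1, ..., off+dof-1; pass them to MatSetValues() */
```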

Does this make sense?

  Thanks,

 Matt


> Thanks in advance.
>
> Matteo Semplice
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fDvGoz5OloH7xryMIAIqyYsslt8U4HqegW7HeYDlwRPbkTloMU6k0kze8NlG7S2DxDuY2nUXRQcBIjjfDigI$
  



Re: [petsc-users] Inquiry about reading an exodus II file from coreform Cubit

2024-09-22 Thread Matthew Knepley
On Sun, Sep 22, 2024 at 9:26 AM Blaise Bourdin  wrote:

> On Sep 21, 2024, at 9:54 AM, neil liu  wrote:
>
> Caution: External email.
>
> Thanks a lot, David. That works.
> Then I tried another example from Cubit. The script is defined as follows,
>
> reset
> brick x 1
> mesh vol 1
> block 1 volume 1
> sideset 1 surface 1
> block 2 surface 1 #sides of these elements can now be referenced in a
> sideset
> block 2 element type quad
> sideset 2 curve 1
>
> Then the mesh was exported as a exodus file (attached), and imported into
> Petsc.
> Here, the code got stuck in the function DMPlexCreateExodus().
> I think maybe this is because block 1 is 3D while block 2 is 2D.
> But this seems necessary to define an edge using sideset.
>
>
>
> That is correct.
> As far as I understand, one of the few assumptions in DMPlex is that
> “cells” have the same topological dimension.
> Here sidesets 1 and 2 have different dimensions.
> As far as I remember, exodusII does not have “Edge sets”, so your best bet
> would be to define a node set for curve 1.
>

It is, of course, possible to mark edges. The problem is that ExodusII
makes it very hard to connect the edges in your sideset
to edges in the actual mesh, which is really what you want. This, in my
opinion, is why ExodusII is a failure as a mesh format.
If you can explain how to make this connection, I have no problem
writing this code.

  Thanks,

 Matt


> Regards,
> Blaise
>
>
> Thanks,
>
> Xiaodong
>
> On Fri, Sep 20, 2024 at 3:15 PM David Andrs  wrote:
>
>> Before you export the mesh from Cubit, change the element type to
>> something like QUAD4. PETSc does not automatically remap SHELL elements to
>> QUADs.
>>
>> --
>> David
>>
>> On Fri, Sep 20, 2024 at 8:05 AM neil liu  wrote:
>>
>>> Dear Petsc developers and users,
>>>
>>> I am trying to read an exodus II file from coreform Cubit, but without
>>> success.
>>> Then I used PETSc's built-in exodus file,
>>> /share/petsc/datafiles/meshes/sevenside.exo.
>>> This file can be read by petsc successfully.
>>>
>>> And I did a test. This file, sevenside.exo, was imported into coreform
>>> Cubit and then saved as a new exodus file. This new exodus file cannot
>>> be read by PETSc successfully.
>>>
>>> [0]PETSC ERROR: Invalid argument
>>> [0]PETSC ERROR: Unrecognized element type SHELL
>>> [0]PETSC ERROR: See 
>>> https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!dee9Na-tSTsDKqZOWq2BLt5MPj0L04TPre2lr-cwZEDmoI_Cb8bezH_J6E6K7vJynTpnCxV-jbPjt4XvcUgN$
>>>  
>>> 
>>>  for
>>> trouble shooting.
>>> [0]PETSC ERROR: Petsc Release Version 3.21.1, Apr 26, 2024
>>> [0]PETSC ERROR: ./app on a arch-linux-c-opt named localhost.localdomain
>>> by xiaodongliu Fri Sep 20 09:36:16 2024
>>> [0]PETSC ERROR: Configure options -download-mumps -download-scalapack
>>> --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack
>>> --download-mpich --with-scalar-type=complex --download-triangle
>>> --with-debugging=no --download-parmetis --download-metis -download-ptscotch
>>> --download-bison --download-hdf5
>>> -download-mmg=/home/xiaodongliu/Documents/3rdPartyLib/d5c43d1bcefe598d51428f6a7fee10ec29478b79.tar.gz
>>> --download-ctetgen --download-pragmatic --download-eigen
>>> --download-netcdf=/home/xiaodongliu/Documents/3rdPartyLib/netcdf-c-4.9.2-p1.tar.gz
>>> --download-zlib --download-pnetcdf --download-exodusii
>>> [0]PETSC ERROR: #1 ExodusGetCellType_Internal() at
>>> /home/xiaodongliu/Documents/petsc-with-docs-3.21.1/petsc-3.21.1/src/dm/impls/plex/plexexodusii.c:1470
>>> [0]PETSC ERROR: #2 DMPlexCreateExodus() at
>>> /home/xiaodongliu/Documents/petsc-with-docs-3.21.1/petsc-3.21.1/src/dm/impls/plex/plexexodusii.c:1551
>>> [0]PETSC ERROR: #3 DMPlexCreateExodusFromFile() at
>>> /home/xiaodongliu/Documents/petsc-with-docs-3.21.1/petsc-3.21.1/src/dm/impls/plex/plexexodusii.c:1390
>>>
>>> Thanks a lot.
>>>
>>> Xiaodong
>>>
>> 
>
>
> —
> Canada Research Chair in Mathematical and Computational Aspects of Solid
> Mechanics (Tier 1)
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
> https://urldefense.us/v3/__https://www.math.mcmaster.ca/bourdin__;!!G_uCfscf7eWS!dee9Na-tSTsDKqZOWq2BLt5MPj0L04TPre2lr-cwZEDmoI_Cb8bezH_J6E6K7vJynTpnCxV-jbPjtwBSB1Pe$
>  
> 
> | +1 (905) 525 9140 ext. 27243
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which t

Re: [petsc-users] Error: PCHYPRESetPoissonMatrix_HYPRE

2024-09-19 Thread Matthew Knepley
On Thu, Sep 19, 2024 at 9:33 AM Karthikeyan Chockalingam - STFC UKRI via
petsc-users  wrote:

> Hello,
>
>
>
> I would like to make the following hypre call
> HYPRE_AMSSetBetaPoissonMatrix(ams, NULL);
>
>
>
> So it does look like ams_beta_is_zero has to be true
>
>
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/hypre/hypre.c?ref_type=heads*L325__;Iw!!G_uCfscf7eWS!eK8fCGlsadPLSpAOiMsKgfe5Rvn5AFmIy6c98fmu8MdOAd4PzvTw4sthhEHfPWCQCq2wOpwArn1DAETA90wv$
>  
> 
>
>
>
> So would I end up calling PCHYPRESetPoissonMatrix_HYPRE with isalpha being
> false.
>
>
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/hypre/hypre.c?ref_type=heads*L1640__;Iw!!G_uCfscf7eWS!eK8fCGlsadPLSpAOiMsKgfe5Rvn5AFmIy6c98fmu8MdOAd4PzvTw4sthhEHfPWCQCq2wOpwArn1DAAWh3H1U$
>  
> 
>
>
>
> But I get the following error from Libmesh: is it because of version
> incompatibility between PETSc and Hypre?
>
>
>
> vector_fe_ex3.C:141:3: error: 'PCHYPRESetPoissonMatrix_HYPRE' was not
> declared in this scope; did you mean 'PCHYPRESetBetaPoissonMatrix'?
>
>   141 |   PCHYPRESetPoissonMatrix_HYPRE(pc, A, false);
>
>   |   ^~~~
>

You are misunderstanding the API organization. You are intended to call
PCHYPRESetBetaPoissonMatrix(), which
calls PCHYPRESetPoissonMatrix_HYPRE(..., PETSC_FALSE);
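A minimal sketch of the intended call (pc is assumed to be the already configured hypre preconditioner from this thread):

```
/* Passing NULL tells AMS that the beta Poisson matrix is zero; internally this
   reaches PCHYPRESetPoissonMatrix_HYPRE(..., PETSC_FALSE). */
PetscCall(PCHYPRESetBetaPoissonMatrix(pc, NULL));
```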

  Thanks,

 Matt


> I look forward to your response.
>
>
>
> Kind regards,
>
> Karthik
>
>
>
> --
>
> *Karthik Chockalingam, Ph.D.*
>
> Senior Research Software Engineer
>
> High Performance Systems Engineering Group
>
> Hartree Centre | Science and Technology Facilities Council
>
> karthikeyan.chockalin...@stfc.ac.uk
>
>
>
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eK8fCGlsadPLSpAOiMsKgfe5Rvn5AFmIy6c98fmu8MdOAd4PzvTw4sthhEHfPWCQCq2wOpwArn1DAB18Lnc5$
  



Re: [petsc-users] Inquiry about metric-based adaptation using MMG in petsc

2024-09-15 Thread Matthew Knepley
On Sun, Sep 15, 2024 at 7:25 AM neil liu  wrote:

> Thanks a lot, Jed.
> I traced  the code and found the following issues,
> 1) In PetscErrorCode DMPlexMetricNormalize(DM dm, Vec metricIn,
> PetscBool restrictSizes, PetscBool restrictAnisotropy, Vec metricOut, Vec
> determinant),
> the metricIn is zero, and this seems not reasonable.
> Going back to,
> 2) DMAdaptorAdapt_Sequence_Private(DMAdaptor adaptor, Vec inx, PetscBool
> doSolve, DM *adm, Vec *ax)
> with -sol_adapt_loc_pre_view, -adapt_gradient_view and
> -adapt_hessian_view, all three of these vectors are zero.
> This is not reasonable.
>
> Going back to,
> 3) PetscErrorCode DMAdaptorAdapt(DMAdaptor adaptor, Vec x,
> DMAdaptationStrategy strategy, DM *adm, Vec *ax)
> x is zero here.
> Going back to,
> 4) PetscCall(SNESSolve(snes, NULL, u));
> The initial guess u is zero. It seems this is the reason. A little weird;
> an initial guess of zero should be reasonable.
>
> It seems maybe something is not reasonable in the logic of
> DMPlexMetricNormalize.
>

Is the solve not doing anything? We need at least one solution in order to
adapt. Did you look at SNES ex27? I adaptively refine that one.

  Thanks,

   Matt


> Thanks,
>
>
> Xiaodong
>
>
>
>
>
>
>
> On Fri, Sep 13, 2024 at 2:22 PM Jed Brown  wrote:
>
>> This error message is coming from the following:
>>
>> $ rg -B4 'Global metric normalization factor must be in'
>> src/dm/impls/plex/plexmetric.c
>> 1308-PetscCall(PetscDSSetObjective(ds, 0, detMFunc));
>> 1309-PetscCall(DMPlexComputeIntegralFEM(dmDet, determinant,
>> &integral, NULL));
>> 1310-  }
>> 1311-  realIntegral = PetscRealPart(integral);
>> 1312:  PetscCheck(realIntegral > 1.0e-30, PETSC_COMM_SELF,
>> PETSC_ERR_ARG_OUTOFRANGE, "Global metric normalization factor must be in
>> (0, inf). Is the input metric positive-definite?");
>>
>> Perhaps you can independently check what integrand is being provided.
>> It's probably zero or negative. You could apply this patch so the error
>> message will report a value.
>>
>>
>> diff --git i/src/dm/impls/plex/plexmetric.c
>> w/src/dm/impls/plex/plexmetric.c
>> index 61caeed28de..906cb394027 100644
>> --- i/src/dm/impls/plex/plexmetric.c
>> +++ w/src/dm/impls/plex/plexmetric.c
>> @@ -1309,7 +1309,7 @@ PetscErrorCode DMPlexMetricNormalize(DM dm, Vec
>> metricIn, PetscBool restrictSize
>>  PetscCall(DMPlexComputeIntegralFEM(dmDet, determinant, &integral,
>> NULL));
>>}
>>realIntegral = PetscRealPart(integral);
>> -  PetscCheck(realIntegral > 1.0e-30, PETSC_COMM_SELF,
>> PETSC_ERR_ARG_OUTOFRANGE, "Global metric normalization factor must be in
>> (0, inf). Is the input metric positive-definite?");
>> +  PetscCheck(realIntegral > 1.0e-30, PETSC_COMM_SELF,
>> PETSC_ERR_ARG_OUTOFRANGE, "Global metric normalization factor %g must be in
>> (0, inf). Is the input metric positive-definite?", (double)realIntegral);
>>factGlob = PetscPowReal(target / realIntegral, 2.0 / dim);
>>
>>/* Apply local scaling */
>>
>>
>> neil liu  writes:
>>
>> > Dear Petsc developers and users,
>> >
>> > I am  trying to explore adaptive mesh refinement (tetradedra) with
>> Petsc.
>> > It seems case 12 ($PETSC DIR/src/snes/tutorials/ex12.c, from paper,
>> >
>> https://urldefense.us/v3/__https://arxiv.org/pdf/2201.02806__;!!G_uCfscf7eWS!ZEwaHkvYc_IrjycFr8LiNi-nZcFqu0ZpsydAAomxptQ6C0xkBs3qhn5ba31Z4vipKf4mTrqDm8A5S65DDD_xFg$
>> ) is a good example.
>> > However when I tried,
>> > ./ex12 -run_type full -dm_plex_dim 3 -dm_distribute -dm_plex_box_faces
>> > 10,10,10 -bc_type dirichlet -petscspace_degree 1 -variable_coefficient
>> ball
>> > -snes_converged
>> > _reason ::ascii_info_detail -ksp_type cg -pc_type sor
>> -snes_adapt_sequence
>> > 3 -adaptor_target_num 1000 -dm_plex_metric_h_max  0.5  -dm_adaptor mmg
>> >   L_2 Error: 1.55486
>> >   Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE
>> iterations 2
>> >
>> > it shows the following error,
>> > [0]PETSC ERROR: - Error Message
>> > --
>> > [0]PETSC ERROR: Argument out of range
>> > [0]PETSC ERROR: Global metric normalization factor must be in (0, inf).
>> Is
>> > the input metric positive-definite?
>> >
>> > Do  you have any suggestions here?
>> >
>> > Thanks,
>> >
>> > Xiaodong
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!djDLOY8Rh_OQw8vhb4B7V0-zm0LToQcf2kEcTsKIYyUymnyy49THra-UsRtW-B_YH1Fm5t5I2bSpjfhAEoLt$
  



Re: [petsc-users] Hypre AMS usage

2024-09-13 Thread Matthew Knepley
On Fri, Sep 13, 2024 at 4:56 PM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalin...@stfc.ac.uk> wrote:

> I would like to call HYPRE_AMSSetBetaPoissonMatrix from Hypre
>
>
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/hypre/hypre.c?ref_type=heads*L325__;Iw!!G_uCfscf7eWS!ZJD-6o5hrlVpL8ybDZph6UhwQZfSRsXh6qrc8JSZFe5X5WVUdp4E5IL8wq6uq6DL0z1mDj78-_2eOZtqQC8-$
>  
>
>
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/hypre/hypre.c?ref_type=heads*L329__;Iw!!G_uCfscf7eWS!ZJD-6o5hrlVpL8ybDZph6UhwQZfSRsXh6qrc8JSZFe5X5WVUdp4E5IL8wq6uq6DL0z1mDj78-_2eOV6Ojr7J$
>  
>
>
>
> Looks like ams_beta_is_zero has to be true
>

I don't think that is true:
https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blame/main/src/ksp/pc/impls/hypre/hypre.c?ref_type=heads*L329__;Iw!!G_uCfscf7eWS!ZJD-6o5hrlVpL8ybDZph6UhwQZfSRsXh6qrc8JSZFe5X5WVUdp4E5IL8wq6uq6DL0z1mDj78-_2eOdRLti-4$
 

  Thanks,

 Matt


>
>
> So would I end up calling PCHYPRESetPoissonMatrix_HYPRE with isalpha being
> false?
>
>
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/hypre/hypre.c?ref_type=heads*L1640__;Iw!!G_uCfscf7eWS!ZJD-6o5hrlVpL8ybDZph6UhwQZfSRsXh6qrc8JSZFe5X5WVUdp4E5IL8wq6uq6DL0z1mDj78-_2eOfWoAnkR$
>  
>
>
>
> Kind regards,
>
> Karthik.
>
>
>
> *From: *Matthew Knepley 
> *Date: *Tuesday, 10 September 2024 at 19:22
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *petsc-users@mcs.anl.gov 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Tue, Sep 10, 2024 at 1:16 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> Hi Matt,
>
>
>
> I am not sure if I understand how to read the source code, so let’s take the
> below line
>
>
>
> PetscCall(PetscOptionsReal("-pc_hypre_ams_relax_weight", "Relaxation
> weight for AMS smoother", "None", jac->as_relax_weight,
> &jac->as_relax_weight, &flag3))
>
> (Q1) Am I doing it right by setting as in the line below? How does the
> line below match up to the above? Since the above line has more arguments
>
> petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_weight", "1.0")
>
>
>
> Yes. All options take a single string argument. The other arguments to the
> function help present it to the user (say through a GUI).
>
>
>
> (Q2) How do I go about setting the four parameters below using
> PetscOptionsSetValue?
>
>
>
> It looks like you have 5 parameters, and you would call
>
>
>
> petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_amg_alpha_options",
> "10,21,32,43,54")
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
> PetscCall(PetscOptionsIntArray("-pc_hypre_ams_amg_alpha_options", "AMG
> options for vector Poisson", "None", jac->as_amg_alpha_opts, &n, &flag2));
>
>   if (flag || flag2) {
>
> PetscCallExternal(HYPRE_AMSSetAlphaAMGOptions, jac->hsolver,
> jac->as_amg_alpha_opts[0], /* AMG coarsen type */
>
>
> jac->as_amg_alpha_opts[1],    /*
> AMG agg_levels */
>
>   jac->as_amg_alpha_opts[2],
>   /* AMG relax_type */
>
>   jac->as_amg_alpha_theta,
> jac->as_amg_alpha_opts[3],   /* AMG interp_type */
>
>
> jac->as_amg_alpha_opts[4]);   /*
> AMG Pmax */
>
>   }
>
> Kind regards,
>
> Karthik.
>
>
>
>
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 22:00
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *petsc-users@mcs.anl.gov 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 4:32 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> Great. Thank you for letting me know.
>
> I got the reference to KSP as well from libmesh and I am not creating a
> new KSP.
>
> This time around, I didn’t have to set the PC type and it seems to work.
>
>
>
> Excellent.
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>   petscErr 

Re: [petsc-users] how to visualize the matordering?

2024-09-13 Thread Matthew Knepley
On Fri, Sep 13, 2024 at 2:40 AM Klaij, Christiaan  wrote:

> Thanks Barry, that works fine. Any chance to access the reordered matrix
> behind the plot?
>

You can make it with MatPermute().
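A minimal sketch of that, writing an RCM-permuted copy to a binary file so it can be loaded and spied on from Python (A and the file name are placeholders):

```
IS          rperm, cperm;
Mat         Aperm;
PetscViewer viewer;

PetscCall(MatGetOrdering(A, MATORDERINGRCM, &rperm, &cperm));    /* compute the RCM ordering */
PetscCall(MatPermute(A, rperm, cperm, &Aperm));                  /* build the reordered matrix */
PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "A_rcm.dat", FILE_MODE_WRITE, &viewer));
PetscCall(MatView(Aperm, viewer));
PetscCall(PetscViewerDestroy(&viewer));
PetscCall(ISDestroy(&rperm));
PetscCall(ISDestroy(&cperm));
PetscCall(MatDestroy(&Aperm));
```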

  Thanks,

 Matt


> Chris
>
> 
> From: Barry Smith 
> Sent: Tuesday, September 10, 2024 3:09 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] how to visualize the matordering?
>
>
>You can save the plot in an image file that does not require X. For
> example
>
> -mat_view_ordering draw:image:filename
>
>Several image formats are available.
>
>
>Barry
>
>
> On Sep 10, 2024, at 7:36 AM, Klaij, Christiaan  wrote:
>
> I'm saving a petsc mat to file and loading it into python to make a spy
> plot of its sparsity pattern, so far so good.
>
> Now I would like to compare with a reordered pattern, say RCM. I've noticed
> the option -mat_view_ordering draw, but do not have X Windows on this
> machine. What is the recommended way to get the reordered matrix for
> inspection?
>
> Chris
> 
> dr. ir. Christiaan   Klaij
>  |  Senior Researcher|  Research & Development
> T +31 317 49 33 44 |
> c.kl...@marin.nl  |
> https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!fbUhaFTo3vSK-SuD9ZJ8eCD-wcGaA2dp_iFxbwerHHOlCS7JVWRhadkbjdVxrrInqzkq9zYDYfR4UOeiAGVa6nA$
> <
> https://urldefense.us/v3/__https://www.marin.nl/__;!!G_uCfscf7eWS!ZlpBqrxu1xOtAsOWM9zgbuOyULFV7FmZJfrJqgSnScyCr8z-tfyvuf3b2aXiy8qER3gQEm4Qu44oMudMb3ZvYBA$
> >
> <
> https://urldefense.us/v3/__https://www.facebook.com/marin.wageningen__;!!G_uCfscf7eWS!ZlpBqrxu1xOtAsOWM9zgbuOyULFV7FmZJfrJqgSnScyCr8z-tfyvuf3b2aXiy8qER3gQEm4Qu44oMudMrq6nsrQ$
> >
> <
> https://urldefense.us/v3/__https://www.linkedin.com/company/marin__;!!G_uCfscf7eWS!ZlpBqrxu1xOtAsOWM9zgbuOyULFV7FmZJfrJqgSnScyCr8z-tfyvuf3b2aXiy8qER3gQEm4Qu44oMudMuNr3Ucw$
> >
> <
> https://urldefense.us/v3/__https://www.youtube.com/marinmultimedia__;!!G_uCfscf7eWS!ZlpBqrxu1xOtAsOWM9zgbuOyULFV7FmZJfrJqgSnScyCr8z-tfyvuf3b2aXiy8qER3gQEm4Qu44oMudMadib6bE$
> >
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cqVEpFXzlZDy9mxC3QVCfvcTsImy_ZZq-sc3ewVbiGF-PD6JhuYL6Yonrh06ghc5fTXhH0dOYpci2Sb8_vJr$
  



Re: [petsc-users] Hypre AMS usage

2024-09-10 Thread Matthew Knepley
On Tue, Sep 10, 2024 at 1:16 PM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalin...@stfc.ac.uk> wrote:

> Hi Matt,
>
>
>
> I am not sure if I understand how to read the source code, so let’s take the
> below line
>
>
>
> PetscCall(PetscOptionsReal("-pc_hypre_ams_relax_weight", "Relaxation
> weight for AMS smoother", "None", jac->as_relax_weight,
> &jac->as_relax_weight, &flag3))
>
> (Q1) Am I doing it right by setting as in the line below? How does the
> line below match up to the above? Since the above line has more arguments
>
> petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_weight", "1.0")
>

Yes. All options take a single string argument. The other arguments to the
function help present it to the user (say through a GUI).


> (Q2) How do I go about setting the four parameters below using
> PetscOptionsSetValue?
>

It looks like you have 5 parameters, and you would call

petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_amg_alpha_options",
"10,21,32,43,54")

  Thanks,

 Matt


> PetscCall(PetscOptionsIntArray("-pc_hypre_ams_amg_alpha_options", "AMG
> options for vector Poisson", "None", jac->as_amg_alpha_opts, &n, &flag2));
>
>   if (flag || flag2) {
>
> PetscCallExternal(HYPRE_AMSSetAlphaAMGOptions, jac->hsolver,
> jac->as_amg_alpha_opts[0], /* AMG coarsen type */
>
>
> jac->as_amg_alpha_opts[1],/*
> AMG agg_levels */
>
>   jac->as_amg_alpha_opts[2],
>   /* AMG relax_type */
>
>   jac->as_amg_alpha_theta,
> jac->as_amg_alpha_opts[3],   /* AMG interp_type */
>
>
> jac->as_amg_alpha_opts[4]);   /*
> AMG Pmax */
>
>   }
>
> Kind regards,
>
> Karthik.
>
>
>
>
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 22:00
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *petsc-users@mcs.anl.gov 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 4:32 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> Great. Thank you for letting me know.
>
> I got the reference to KSP as well from libmesh and I am not creating a
> new KSP.
>
> This time around, I didn’t have to set the PC type and it seems to work.
>
>
>
> Excellent.
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_type", "2");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_times", "1");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_weight",
> "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_ams_omega", "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_view", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual",
> NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_converged_reason", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-options_left", NULL);
>
>   petscErr = KSPSetFromOptions(ksp);
>
>
>
>   // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 21:11
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *Stefano Zampini , petsc-users@mcs.anl.gov
> 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 4:03 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> I never explicitly called KSPGetPC(). I am embedding AMS in a libmesh
> (fem) example.  So, I only got the reference to pc from libmesh and set
>  petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
> However, you are creating a _new_ KSP, so your PC will not match it. This
> is not doing what you want.
>
>
>
>   Thanks

Re: [petsc-users] Hypre AMS usage

2024-09-09 Thread Matthew Knepley
On Mon, Sep 9, 2024 at 4:32 PM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalin...@stfc.ac.uk> wrote:

> Great. Thank you for letting me know.
>
> I got the reference to KSP as well from libmesh and I am not creating a
> new KSP.
>
> This time around, I didn’t have to set the PC type and it seems to work.
>

Excellent.

  Thanks,

 Matt


>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_type", "2");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_times", "1");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_weight",
> "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_ams_omega", "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_view", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual",
> NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_converged_reason", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-options_left", NULL);
>
>   petscErr = KSPSetFromOptions(ksp);
>
>
>
>   // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 21:11
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *Stefano Zampini , petsc-users@mcs.anl.gov
> 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 4:03 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> I never explicitly called KSPGetPC(). I am embedding AMS in a libmesh
> (fem) example.  So, I only got the reference to pc from libmesh and set
>  petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
> However, you are creating a _new_ KSP, so your PC will not match it. This
> is not doing what you want.
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 20:57
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *Stefano Zampini , petsc-users@mcs.anl.gov
> 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 3:38 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> I didn’t know how to check the pc type but adding
>
> petscErr = PCSetType(pc, "hypre");
>
> before the two functions made it work.
>
>
>
> But how does it work from the command line?
>
>
>
> For summary:
>
>
>
>   KSPCreate(mesh.comm().get(), &ksp);
>
>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_type", "2");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_times", "1");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_weigh", "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_ams_omega", "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_view", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual",
> NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_converged_reason", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-options_left", NULL);
>
>   petscErr = KSPSetFromOptions(ksp);
>
>
>
> Where is the KSPGetPC() call?
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
>   // Set pc type
>
>   petscErr = PCSetType(pc, "hypre");
>
>
>
>   // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September

Re: [petsc-users] Hypre AMS usage

2024-09-09 Thread Matthew Knepley
On Mon, Sep 9, 2024 at 4:03 PM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalin...@stfc.ac.uk> wrote:

> I never explicitly called KSPGetPC(). I am embedding AMS in a libmesh
> (fem) example.  So, I only got the reference to pc from libmesh and set  
> petscErr
> = PCHYPRESetDiscreteGradient(pc, par_G);
>

However, you are creating a _new_ KSP, so your PC will not match it. This
is not doing what you want.

  Thanks,

 Matt


>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 20:57
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *Stefano Zampini , petsc-users@mcs.anl.gov
> 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 3:38 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> I didn’t know how to check the pc type but adding
>
> petscErr = PCSetType(pc, "hypre");
>
> before the two functions made it work.
>
>
>
> But how does it work from the command line?
>
>
>
> For summary:
>
>
>
>   KSPCreate(mesh.comm().get(), &ksp);
>
>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_type", "2");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_times", "1");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_weigh", "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_ams_omega", "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_view", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual",
> NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_converged_reason", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-options_left", NULL);
>
>   petscErr = KSPSetFromOptions(ksp);
>
>
>
> Where is the KSPGetPC() call?
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
>   // Set pc type
>
>   petscErr = PCSetType(pc, "hypre");
>
>
>
>   // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 19:46
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *Stefano Zampini , petsc-users@mcs.anl.gov
> 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 2:18 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> Calling the two functions after KSPSetFromOptions() did not work either.
>
>
>
> Can you check that the PC has the correct type when you call it?
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
> Everything works from the command line.
>
> I haven’t set KSPSetOperators, not sure if that is an issue.
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 19:09
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *Stefano Zampini , petsc-users@mcs.anl.gov
> 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 1:21 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> Hi Stefano,
>
>
>
> Thank you. That was helpful. I tried the following:
>
>
>
>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>
>
>   // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_view", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual",
> NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_converged_reason", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-options_l

Re: [petsc-users] Hypre AMS usage

2024-09-09 Thread Matthew Knepley
On Mon, Sep 9, 2024 at 3:38 PM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalin...@stfc.ac.uk> wrote:

> I didn’t know how to check the pc type but adding
>
> petscErr = PCSetType(pc, "hypre");
>
> before the two functions made it work.
>
>
>
> But how does it work from the command line?
>
>
>
> For summary:
>
>
>
>   KSPCreate(mesh.comm().get(), &ksp);
>
>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_type", "2");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_times", "1");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_ams_relax_weigh", "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_ams_omega", "1.0");
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_view", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual",
> NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_converged_reason", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-options_left", NULL);
>
>   petscErr = KSPSetFromOptions(ksp);
>

Where is the KSPGetPC() call?

  Thanks,

 Matt
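
A minimal sketch of the missing piece, reusing the variable names from the snippet below: the PC has to be obtained from this KSP with KSPGetPC() once KSPSetFromOptions() has turned it into a hypre/ams PC, and only then can the hypre-specific setters be called.

  petscErr = KSPSetFromOptions(ksp);
  petscErr = KSPGetPC(ksp, &pc);   /* pc now refers to the PC inside ksp */
  petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
  petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec, par_zvec);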


>   // Set pc type
>
>   petscErr = PCSetType(pc, "hypre");
>
>
>
>   // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 19:46
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *Stefano Zampini , petsc-users@mcs.anl.gov
> 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 2:18 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> Calling the two functions after KSPSetFromOptions() did not work either.
>
>
>
> Can you check that the PC has the correct type when you call it?
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
> Everything works from the command line.
>
> I haven’t set KSPSetOperators, not sure if that is an issue.
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 19:09
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *Stefano Zampini , petsc-users@mcs.anl.gov
> 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 1:21 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> Hi Stefano,
>
>
>
> Thank you. That was helpful. I tried the following:
>
>
>
>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>
>
>   // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_view", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual",
> NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_converged_reason", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-options_left", NULL);
>
>   petscErr = KSPSetFromOptions(ksp);
>
>
>
>
>
> It errored though I have set PCHYPRESetEdgeConstantVectors.
>
>
>
> My guess is that the "pc" was not yet Hypre, since that type was not set
> until KSPSetFromOptions() was called. So you need
>
> to call those two functions after that call.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> But my program works, without PetscOptionsSetValue and by passing
> everything on the command line.
>
>
>
> *[0]PETSC ERROR: - Error Message
> --*
>
> [0]PETSC ERROR: HYPRE AMS preconditioner needs either the coordinate
> vectors via PCSetCoordinates() or the edge constant vectors via
> 

Re: [petsc-users] Hypre AMS usage

2024-09-09 Thread Matthew Knepley
On Mon, Sep 9, 2024 at 2:18 PM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalin...@stfc.ac.uk> wrote:

> Calling the two functions after KSPSetFromOptions() did not work either.
>

Can you check that the PC has the correct type when you call it?

  Thanks,

 Matt
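
A quick way to check (a sketch, assuming the same pc handle as in the code below) is to query the type just before the hypre-specific calls:

  PCType pctype;
  petscErr = PCGetType(pc, &pctype);
  petscErr = PetscPrintf(PETSC_COMM_WORLD, "PC type is %s\n", pctype); /* expect "hypre" */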


> Everything works from the command line.
>
> I haven’t set KSPSetOperators, not sure if that is an issue.
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, 9 September 2024 at 19:09
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *Stefano Zampini , petsc-users@mcs.anl.gov
> 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Mon, Sep 9, 2024 at 1:21 PM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalin...@stfc.ac.uk> wrote:
>
> Hi Stefano,
>
>
>
> Thank you. That was helpful. I tried the following:
>
>
>
>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>
>
>   // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_view", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual",
> NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_converged_reason", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-options_left", NULL);
>
>   petscErr = KSPSetFromOptions(ksp);
>
>
>
>
>
> It errored though I have set PCHYPRESetEdgeConstantVectors.
>
>
>
> My guess is that the "pc" was not yet Hypre, since that type was not set
> until KSPSetFromOptions() was called. So you need
>
> to call those two functions after that call.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> But my program works, without PetscOptionsSetValue and by passing
> everything on the command line.
>
>
>
> *[0]PETSC ERROR: - Error Message
> --*
>
> [0]PETSC ERROR: HYPRE AMS preconditioner needs either the coordinate
> vectors via PCSetCoordinates() or the edge constant vectors via
> PCHYPRESetEdgeConstantVectors() or the interpolation matrix via
> PCHYPRESetInterpolations()
>
> [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the
> program crashed before usage or a spelling mistake, etc!
>
> [0]PETSC ERROR:   Option left: name:-ksp_converged_reason (no value)
> source: code
>
> [0]PETSC ERROR:   Option left: name:-options_left (no value) source: code
>
> [0]PETSC ERROR: See 
> https://petsc.org/release/faq/
>   for trouble shooting.
>
> [0]PETSC ERROR: Petsc Release Version 3.20.3, unknown
>
> [0]PETSC ERROR: ./example-dbg on a arch-moose named HC20210312 by
> karthikeyan.chockalingam Mon Sep  9 18:10:13 2024
>
> [0]PETSC ERROR: Configure options --with-64-bit-indices
> --with-cxx-dialect=C++17 --with-debugging=no --with-fortran-bindings=0
> --with-mpi=1 --with-openmp=1 --with-shared-libraries=1 --with-sowing=0
> --download-fblaslapack=1 --download-hypre=1 --download-metis=1
> --download-mumps=1 --download-ptscotch=1 --download-parmetis=1
> --download-scalapack=1 --download-slepc=1 --download-strumpack=1
> --download-superlu_dist=1 --download-bison=1 --download-hdf5=1
> --with-hdf5-fortran-bindings=0
> --download-hdf5-configure-arguments="--with-zlib" --with-make-np=4
>
> [0]PETSC ERROR: #1 PCSetUp_HYPRE() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/pc/impls/hypre/hypre.c:295
>
> [0]PETSC ERROR: #2 PCSetUp() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/pc/interface/precon.c:1080
>
> [0]PETSC ERROR: #3 KSPSetUp() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/ksp/interface/itfunc.c:415
>
> [0]PETSC ERROR: #4 KSPSolve_Private() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/ksp/interface/itfunc.c:833
>
> [0]PETSC ERROR: #5 KSPSolve() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/ksp/interface/itfunc.c:1080
>
> libMesh terminating:
>
> HYPRE AMS preconditioner needs either the coordinate vectors via
> PCSetCoordinates() or the edge constant vectors via
> PCHYPRESetEdgeConstantVectors() o

Re: [petsc-users] Hypre AMS usage

2024-09-09 Thread Matthew Knepley
On Mon, Sep 9, 2024 at 1:21 PM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalin...@stfc.ac.uk> wrote:

> Hi Stefano,
>
>
>
> Thank you. That was helpful. I tried the following:
>
>
>
>   petscErr = PetscOptionsSetValue(NULL,"-ksp_type", "gmres");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_type", "hypre");
>
>   petscErr = PetscOptionsSetValue(NULL,"-pc_hypre_type", "ams");
>
>
>
>   // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_view", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual",
> NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-ksp_converged_reason", NULL);
>
>   petscErr = PetscOptionsSetValue(NULL, "-options_left", NULL);
>
>   petscErr = KSPSetFromOptions(ksp);
>
>
>
>
>
> It errored though I have set PCHYPRESetEdgeConstantVectors.
>

My guess is that the "pc" was not yet Hypre, since that type was not set
until KSPSetFromOptions() was called. So you need
to call those two functions after that call.

  Thanks,

Matt
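
In other words, a sketch of the reordered sequence (same variable names as the snippet below) would be:

  petscErr = PetscOptionsSetValue(NULL, "-ksp_type", "gmres");
  petscErr = PetscOptionsSetValue(NULL, "-pc_type", "hypre");
  petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_type", "ams");
  petscErr = KSPSetFromOptions(ksp);   /* the PC becomes a hypre/ams PC here */
  petscErr = KSPGetPC(ksp, &pc);
  petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
  petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec, par_zvec);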


> But my program works, without PetscOptionsSetValue and by passing
> everything on the command line.
>
>
>
> *[0]PETSC ERROR: - Error Message
> --*
>
> [0]PETSC ERROR: HYPRE AMS preconditioner needs either the coordinate
> vectors via PCSetCoordinates() or the edge constant vectors via
> PCHYPRESetEdgeConstantVectors() or the interpolation matrix via
> PCHYPRESetInterpolations()
>
> [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the
> program crashed before usage or a spelling mistake, etc!
>
> [0]PETSC ERROR:   Option left: name:-ksp_converged_reason (no value)
> source: code
>
> [0]PETSC ERROR:   Option left: name:-options_left (no value) source: code
>
> [0]PETSC ERROR: See 
> https://petsc.org/release/faq/
>   for trouble shooting.
>
> [0]PETSC ERROR: Petsc Release Version 3.20.3, unknown
>
> [0]PETSC ERROR: ./example-dbg on a arch-moose named HC20210312 by
> karthikeyan.chockalingam Mon Sep  9 18:10:13 2024
>
> [0]PETSC ERROR: Configure options --with-64-bit-indices
> --with-cxx-dialect=C++17 --with-debugging=no --with-fortran-bindings=0
> --with-mpi=1 --with-openmp=1 --with-shared-libraries=1 --with-sowing=0
> --download-fblaslapack=1 --download-hypre=1 --download-metis=1
> --download-mumps=1 --download-ptscotch=1 --download-parmetis=1
> --download-scalapack=1 --download-slepc=1 --download-strumpack=1
> --download-superlu_dist=1 --download-bison=1 --download-hdf5=1
> --with-hdf5-fortran-bindings=0
> --download-hdf5-configure-arguments="--with-zlib" --with-make-np=4
>
> [0]PETSC ERROR: #1 PCSetUp_HYPRE() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/pc/impls/hypre/hypre.c:295
>
> [0]PETSC ERROR: #2 PCSetUp() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/pc/interface/precon.c:1080
>
> [0]PETSC ERROR: #3 KSPSetUp() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/ksp/interface/itfunc.c:415
>
> [0]PETSC ERROR: #4 KSPSolve_Private() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/ksp/interface/itfunc.c:833
>
> [0]PETSC ERROR: #5 KSPSolve() at
> /Users/karthikeyan.chockalingam/moose/petsc/src/ksp/ksp/interface/itfunc.c:1080
>
> libMesh terminating:
>
> HYPRE AMS preconditioner needs either the coordinate vectors via
> PCSetCoordinates() or the edge constant vectors via
> PCHYPRESetEdgeConstantVectors() or the interpolation matrix via
> PCHYPRESetInterpolations()
>
>
>
>
>
> *From: *Stefano Zampini 
> *Date: *Monday, 9 September 2024 at 17:02
> *To: *Matthew Knepley 
> *Cc: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>, petsc-users@mcs.anl.gov <
> petsc-users@mcs.anl.gov>
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> I would say the best way is to look at the source code
>
>
>
>
> https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/hypre/hypre.c?ref_type=heads#L2061
>  
>
>
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-

Re: [petsc-users] Hypre AMS usage

2024-09-09 Thread Matthew Knepley
On Mon, Sep 9, 2024 at 10:17 AM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalin...@stfc.ac.uk> wrote:

> Hi Matt,
>
>
>
> You mentioned it doesn’t hurt to set the smoothing flags
>
>
>
>
> https://github.com/hypre-space/hypre/blob/3caa81955eb8d1b4e35d9b450e27cf6d07b50f6e/src/examples/ex15.c#L965
>  
>
> Do you know where I can look for the equivalent PETSc commands? Thank you.
>

Yes, this is described here: 
https://petsc.org/main/manualpages/PC/PCHYPRE/
 
The best way is to run with -help or look at the source.

  Thanks,

 Matt
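
For instance, the smoothing call in the hypre example appears to correspond to options of this form (a sketch only; confirm the exact option names and defaults with -help):

  /* rough PETSc-side equivalent of HYPRE_AMSSetSmoothingOptions(solver, 2, 1, 1.0, 1.0) */
  petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_ams_relax_type", "2");
  petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_ams_relax_times", "1");
  petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_ams_relax_weight", "1.0");
  petscErr = PetscOptionsSetValue(NULL, "-pc_hypre_ams_omega", "1.0");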


>
>
> Kind regards,
>
> Karthik.
>
>
>
> *From: *Matthew Knepley 
> *Date: *Friday, 6 September 2024 at 17:57
> *To: *Chockalingam, Karthikeyan (STFC,DL,HC) <
> karthikeyan.chockalin...@stfc.ac.uk>
> *Cc: *petsc-users@mcs.anl.gov 
> *Subject: *Re: [petsc-users] Hypre AMS usage
>
> On Fri, Sep 6, 2024 at 11:37 AM Karthikeyan Chockalingam - STFC UKRI via
> petsc-users  wrote:
>
> Hello,
>
>
>
> I am trying to use the Hypre AMS preconditioner for the first time.
>
>
>
> I am following the example problem from Hypre
>
>
> https://github.com/hypre-space/hypre/blob/3caa81955eb8d1b4e35d9b450e27cf6d07b50f6e/src/examples/ex15.c#L954
>  
>
>
>
> I have so far successfully set the discrete gradient operator and vertex
> co-ordinates,
>
>
>
> // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
> Do I need to set the following smoothing options?
>
>
> https://github.com/hypre-space/hypre/blob/3caa81955eb8d1b4e35d9b450e27cf6d07b50f6e/src/examples/ex15.c#L965
>  
>
>
>
> It cannot hurt. I would set them to begin with.
>
>
>
> Also, do I need to convert from MATMPIAIJ to CSR?
>
>
> https://github.com/hypre-space/hypre/blob/3caa81955eb8d1b4e35d9b450e27cf6d07b50f6e/src/examples/ex15.c#L984
>  
>
>
>
> No.
>
>
>
> What are the other PETSc calls to invoke AMS? Is there an example problem
> I can look at?
>
>
>
> I do not know. I don't think we have an example.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> Thank you.
>
>
>
> Karthik.
>
>
>
> --
>
> *Karthik Chockalingam, Ph.D.*
>
> Senior Research Software Engineer
>
> High Performance Systems Engineering Group
>
> Hartree Centre | Science and Technology Facilities Council
>
> karthikeyan.chockalin...@stfc.ac.uk
>
>
>
>  [image: signature_3970890138]
>
>
>
>
>
>
> --
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>
> https://www.cse.buffalo.edu/~knepley/

Re: [petsc-users] Hypre AMS usage

2024-09-06 Thread Matthew Knepley
On Fri, Sep 6, 2024 at 11:37 AM Karthikeyan Chockalingam - STFC UKRI via
petsc-users  wrote:

> Hello,
>
>
>
> I am trying to use the Hypre AMS preconditioner for the first time.
>
>
>
> I am following the example problem from Hypre
>
>
> https://github.com/hypre-space/hypre/blob/3caa81955eb8d1b4e35d9b450e27cf6d07b50f6e/src/examples/ex15.c#L954
>  
> 
>
>
>
> I have so far successfully set the discrete gradient operator and vertex
> co-ordinates,
>
>
>
> // Set discrete gradient
>
>   petscErr = PCHYPRESetDiscreteGradient(pc, par_G);
>
>
>
>   // Set vertex coordinates
>
>   petscErr = PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec,
> par_zvec);
>
>
>
> Do I need to set the following smoothing options?
>
>
> https://github.com/hypre-space/hypre/blob/3caa81955eb8d1b4e35d9b450e27cf6d07b50f6e/src/examples/ex15.c#L965
>  
> 
>

It cannot hurt. I would set them to begin with.


> Also, do I need to convert from MATMPIAIJ to CSR?
>
>
> https://github.com/hypre-space/hypre/blob/3caa81955eb8d1b4e35d9b450e27cf6d07b50f6e/src/examples/ex15.c#L984
>  
> 
>

No.


> What are the other PETSc calls to invoke AMS? Is there an example problem
> I can look at?
>

I do not know. I don't think we have an example.

  Thanks,

Matt
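
In lieu of a shipped example, a bare-bones AMS setup along the lines discussed in this thread might look like the sketch below (A, b, x stand for the user's edge-element system, and par_G and the edge-constant vectors are the objects already built above):

  KSP ksp;
  PC  pc;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCHYPRE));
  PetscCall(PCHYPRESetType(pc, "ams"));
  PetscCall(PCHYPRESetDiscreteGradient(pc, par_G));
  PetscCall(PCHYPRESetEdgeConstantVectors(pc, par_xvec, par_yvec, par_zvec));
  PetscCall(KSPSetFromOptions(ksp));   /* any -pc_hypre_ams_* options are applied here */
  PetscCall(KSPSolve(ksp, b, x));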


> Thank you.
>
>
>
> Karthik.
>
>
>
> --
>
> *Karthik Chockalingam, Ph.D.*
>
> Senior Research Software Engineer
>
> High Performance Systems Engineering Group
>
> Hartree Centre | Science and Technology Facilities Council
>
> karthikeyan.chockalin...@stfc.ac.uk
>
>
>
>  [image: signature_3970890138]
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
  



Re: [petsc-users] KSPSolve + MUMPS memory growth issues

2024-09-05 Thread Matthew Knepley
On Thu, Sep 5, 2024 at 2:46 PM Corbijn van Willenswaard, Lars (UT) <
l.j.corbijnvanwillenswa...@utwente.nl> wrote:

> Thank you, that makes testing so much easier. So far, I’ve been able to
> shrink the matrix (now only 64x64) and see that it still has growing memory
> usage over time. Unfortunately, I’ve no access to a linux machine right
> now, so running through valgrind like Barry suggested has to wait.
>

Just to check. Your matrix, with that simple code, producing growing
memory? If you send the matrix, I will find the problem.

  Thanks,

 Matt


> Lars
>
>
>
> *From: *Matthew Knepley 
> *Date: *Thursday, 5 September 2024 at 19:56
> *To: *"Corbijn van Willenswaard, Lars (UT)" <
> l.j.corbijnvanwillenswa...@utwente.nl>
> *Cc: *"petsc-users@mcs.anl.gov" 
> *Subject: *Re: [petsc-users] KSPSolve + MUMPS memory growth issues
>
>
>
> On Thu, Sep 5, 2024 at 1:40 PM Corbijn van Willenswaard, Lars (UT) via
> petsc-users  wrote:
>
> Dear PETSc,
>
> For the last months I’ve struggled with a solver that I wrote for a FEM
> eigenvalue problem running out of memory. I’ve traced it to KSPSolve +
> MUMPS being the issue, but I'm getting stuck on digging deeper.
>
> The reason I suspect the KSPSolve/MUMPS is that when commenting out the
> KSPSolve the memory stays constant while running the rest of the algorithm.
> Of course, the algorithm also converges to a different result in this
> setup. When changing the KSP statement to
> for(int i = 0; i < 1; i++) KSPSolve(A_, vec1_, vec2_);
> the memory grows faster than when running the algorithm. Logging shows
> that the program never the terminating i=100M. Measuring the memory growth
> using ps (started debugging before I knew of PETSc's features) I see a
> growth in the RSS on a single compute node of up to 300MB/min for this
> artificial case. Real cases grow more like 60MB/min/node, which causes a
> kill due to memory exhaustion after about 2-3 days.
>
> Locally (Mac) I've been able to reproduce this both with 6 MPI processes
> and with a single one. Instrumenting the code to show differences in
> PetscMemoryGetCurrentUsage (full code below), shows that the memory
> increases every step at the start, but also does at later iterations (small
> excerpt from the output):
> rank stepmemory (increase since prev step)
>  0   6544 current 39469056(  8192)
>  0   7086 current 39477248(  8192)
>  0   7735 current 39497728( 20480)
>  0   9029 current 39501824(  4096)
> A similar output is visible in a run with 6 ranks, where there does not
> seem to be a pattern as to which of the ranks increases at which step.
> (Note I've checked PetscMallocGetCurrentUsage, but that is constant)
>
> Switching the solver to petsc's own solver on a single rank does not show
> a memory increase after the first solve. Changing the solve to overwrite
> the vector will result in a few increases after the first solve, but these
> do not seem to repeat. So, changes like VecCopy(vec2, vec1_); KSPSolve(A_,
> vec1_, vec1_);.
>
> Does anyone have an idea on how to further dig into this problem?
>
>
>
> I think the best way is to construct the simplest code that reproduces
> your problem. For example, we could save your matrix in a binary file
>
>
>
>   -ksp_view_mat binary:mat.bin
>
>
>
> and then use a very simple code:
>
>
>
> #include <petsc.h>
>
> int main(int argc, char **argv)
> {
>   PetscViewer viewer;
>   Mat A;
>   Vec b, x;
>   KSP ksp;
>
>   PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
>   PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat.bin",
> FILE_MODE_READ, &viewer));
>   PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>   PetscCall(MatSetFromOptions(A));
>   PetscCall(MatLoad(A, viewer));
>   PetscCall(PetscViewerDestroy(&viewer));
>   PetscCall(MatCreateVecs(A, &x, &b));
>   PetscCall(VecSet(b, 1.));
>
>   PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
>   PetscCall(KSPSetOperators(ksp, A, A));
>   PetscCall(KSPSetFromOptions(ksp));
>   for (PetscInt i = 0; i < 10; ++i) PetscCall(KSPSolve(ksp, b, x));
>   PetscCall(KSPDestroy(&ksp));
>
>   PetscCall(MatDestroy(&A));
>   PetscCall(VecDestroy(&b));
>   PetscCall(VecDestroy(&x));
>   PetscCall(PetscFinalize());
>   return 0;
> }
>
>
>
> and see if you get memory increase.
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
> Kind regards,
> Lars Corbijn
>
>
> Instrumentation:
>
> PetscLogDouble lastCurrent, current;
> int rank;
> MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
> for(int i = 0; i < 1; ++i) {
> PetscMemoryGetCurrentUsage(&lastCurrent);
> KSPSolve(A_, vec1_, vec2_)

Re: [petsc-users] KSPSolve + MUMPS memory growth issues

2024-09-05 Thread Matthew Knepley
On Thu, Sep 5, 2024 at 1:40 PM Corbijn van Willenswaard, Lars (UT) via
petsc-users  wrote:

> Dear PETSc,
>
> For the last months I’ve struggled with a solver that I wrote for a FEM
> eigenvalue problem running out of memory. I’ve traced it to KSPSolve +
> MUMPS being the issue, but I'm getting stuck on digging deeper.
>
> The reason I suspect the KSPSolve/MUMPS is that when commenting out the
> KSPSolve the memory stays constant while running the rest of the algorithm.
> Of course, the algorithm also converges to a different result in this
> setup. When changing the KSP statement to
> for(int i = 0; i < 1; i++) KSPSolve(A_, vec1_, vec2_);
> the memory grows faster than when running the algorithm. Logging shows
> that the program never the terminating i=100M. Measuring the memory growth
> using ps (started debugging before I knew of PETSc's features) I see a
> growth in the RSS on a single compute node of up to 300MB/min for this
> artificial case. Real cases grow more like 60MB/min/node, which causes a
> kill due to memory exhaustion after about 2-3 days.
>
> Locally (Mac) I've been able to reproduce this both with 6 MPI processes
> and with a single one. Instrumenting the code to show differences in
> PetscMemoryGetCurrentUsage (full code below), shows that the memory
> increases every step at the start, but also does at later iterations (small
> excerpt from the output):
> rank stepmemory (increase since prev step)
>  0   6544 current 39469056(  8192)
>  0   7086 current 39477248(  8192)
>  0   7735 current 39497728( 20480)
>  0   9029 current 39501824(  4096)
> A similar output is visible in a run with 6 ranks, where there does not
> seem to be a pattern as to which of the ranks increases at which step.
> (Note I've checked PetscMallocGetCurrentUsage, but that is constant)
>
> Switching the solver to petsc's own solver on a single rank does not show
> a memory increase after the first solve. Changing the solve to overwrite
> the vector will result in a few increases after the first solve, but these
> do not seem to repeat. So, changes like VecCopy(vec2, vec1_); KSPSolve(A_,
> vec1_, vec1_);.
>
> Does anyone have an idea on how to further dig into this problem?
>

I think the best way is to construct the simplest code that reproduces your
problem. For example, we could save your matrix in a binary file

  -ksp_view_mat binary:mat.bin

and then use a very simple code:

#include <petsc.h>

int main(int argc, char **argv)
{
  PetscViewer viewer;
  Mat A;
  Vec b, x;
  KSP ksp;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat.bin",
FILE_MODE_READ, &viewer));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatLoad(A, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp));
  for (PetscInt i = 0; i < 10; ++i) PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));

  PetscCall(MatDestroy(&A));
  PetscCall(VecDestroy(&b));
  PetscCall(VecDestroy(&x));
  PetscCall(PetscFinalize());
  return 0;
}

and see if you get memory increase.

  Thanks,

Matt


> Kind regards,
> Lars Corbijn
>
>
> Instrumentation:
>
> PetscLogDouble lastCurrent, current;
> int rank;
> MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
> for(int i = 0; i < 1; ++i) {
> PetscMemoryGetCurrentUsage(&lastCurrent);
> KSPSolve(A_, vec1_, vec2_);
> PetscMemoryGetCurrentUsage(&current);
> if(current != lastCurrent) {
> std::cout << std::setw(2) << rank << " " << std::setw(6) << i
>   << " current " << std::setw(8) << (int) current <<
> std::right
>   << "(" << std::setw(6) << (int)(current - lastCurrent)
> << ")"
>   << std::endl;
> }
> lastCurrent = current;
> }
>
>
> Matrix details
> The matrix A in question is created from a complex valued matrix C_ (type
> mataij) using the following code (modulo renames). Theoretically it should
> be a Laplacian with phase-shift periodic boundary conditions
> MatHermitianTranspose(C_, MAT_INITIAL_MATRIX, &Y_);
> MatProductCreate(C_, Y_, NULL, & A_);
> MatProductSetType(A_, MATPRODUCT_AB);
> MatProductSetFromOptions(A_);
> MatProductSymbolic(A_);
> MatProductNumeric(A_);
>
> Petsc arguments: -log_view_memory -log_view :petsc.log -ksp_type preonly
> -pc_type lu -pc_factor_mat_solver_type mumps -bv_matmult vecs -memory_view
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
  


Re: [petsc-users] Fortran: PetscDSRestoreTabulation + PetscDSGetTabulation

2024-08-29 Thread Matthew Knepley
On Thu, Aug 29, 2024 at 9:57 AM Martin Diehl 
wrote:

> Dear PETSc team,
>
> I have a question regarding the use of PetscDSGetTabulation from
> Fortran.
> PetscDSGetTabulation has a slightly different function signature
> between Fortran and C. In addition, there is an (undocumented)
> PetscDSRestoreTabulation in Fortran which cleans up the arrays. Calling
> it results in a segmentation fault.
>
> I believe that PetscDSRestoreTabulation is not needed. At least our
> Fortran FEM code compiles and runs without it. However, we have
> convergence issues that we don't understand so any suspicious code is
> currently under investigation.
>

This may be due to my weak Fortran knowledge. Here is the code


https://gitlab.com/petsc/petsc/-/blob/main/src/dm/dt/interface/f90-custom/zdtdsf90.c?ref_type=heads
 

I call F90Array1dCreate() in the GetTabulation and F90Array1dDestroy() in
the RestoreTabulation(), which I thought
was right. However, I remember something about interface declarations,
which have now moved somewhere I cannot find.

Barry, is the interface declaration for this function correct?

  Thanks,

  Matt


> best regards,
> Martin
>
> --
> KU Leuven
> Department of Computer Science
> Department of Materials Engineering
> Celestijnenlaan 200a
> 3001 Leuven, Belgium
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
  



Re: [petsc-users] Strong scaling concerns for PCBDDC with Vector FEM

2024-08-20 Thread Matthew Knepley
On Tue, Aug 20, 2024 at 2:31 PM neil liu  wrote:

> Thanks a lot for this explanation, Matt. I will explore whether the matrix
> has the same size and sparsity.
>

I think it is much more likely that you just exhausted bandwidth on the
node.

  Thanks,

Matt


> On Tue, Aug 20, 2024 at 1:45 PM Matthew Knepley  wrote:
>
>> On Tue, Aug 20, 2024 at 1:36 PM neil liu  wrote:
>>
>>> Hi, Matt,
>>> I think the time listed here represents the maximum total time across
>>> different processors.
>>>
>>> Thanks a lot.
>>>  2 cpus
>>>   4 cpus   8 cpus
>>> Event  Count Time (sec)  Count
>>>Time (sec)Count Time (sec)
>>>Max RatioMaxRatio   Max
>>> RatioMax Ratio   Max RatioMax Ratio
>>> VecMDot  530 1.0 7.8320e+01 1.0 5301.0
>>>  4.3285e+01 1.1   530   1.0  3.0476e+01   1.1
>>> VecMAXPY  534 1.0 9.2954e+01 1.0 5341.0
>>> 4.8378e+01 1.1  534   1.0  3.0798e+01   1.1
>>> MatMult  8055 1.0 2.4608e+02 1.08103   1.0
>>> 1.2663e+02 1.0  8367 1.0   8.2942e+01 1.1
>>>
>>
>> For the number of calls listed.
>>
>> 1) The number of MatMults goes up, so you should normalize for that, but
>> you still have about 1.6 speedup. However, this is
>> all multiplications. Are we sure they have the same size and sparsity?
>>
>> 2) MAXPY is also 1.6
>>
>> 3) MDot probably does not see the latency of one node, so again it is not
>> speeding up as you might want.
>>
>> This looks like you are using a single node with 2, 4, and 8 procs. The
>> memory bandwidth is exhausted sometime before 8 procs
>> (maybe 6), so you cease to see speedup. You can check this by running
>> `make streams` on the node.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> On Tue, Aug 20, 2024 at 1:16 PM Matthew Knepley 
>>> wrote:
>>>
>>>> On Tue, Aug 20, 2024 at 1:10 PM neil liu  wrote:
>>>>
>>>>> Thanks a lot for your explanation, Stefano. Very helpful.
>>>>> Yes. I am using dmplex to read a tetrahedral mesh from gmsh. With
>>>>> parmetis, the scaling performance is improved a lot.
>>>>> I will read your paper about how to change the basis for Nedelec
>>>>> elements.
>>>>>
>>>>> cpu #time for 500 ksp steps  (s)   parallel efficiency
>>>>> 2   546
>>>>> 4   224   120%
>>>>> 8   170   80%
>>>>> These results are much better than the previous attempt. Then I checked the
>>>>> time spent by several Petsc built-in functions for the ksp solver.
>>>>>
>>>>> Functions  time(2 cpus) time(4 cpus)  time(8 cpus)
>>>>> VecMDot   78.3243.2830.47
>>>>> VecMAXPY   92.9548.3730.798
>>>>> MatMult  246.08   126.6382.94
>>>>>
>>>>> It seems from cpu 4 to cpu 8, the scaling is not as good as from cpu 2
>>>>> to cpu 4.
>>>>> Am I  missing something?
>>>>>
>>>>
>>>> Did you normalize by the number of calls?
>>>>
>>>>   Thanks,
>>>>
>>>>  Matt
>>>>
>>>>
>>>>> Thanks a lot,
>>>>>
>>>>> Xiaodong
>>>>>
>>>>>
>>>>> On Mon, Aug 19, 2024 at 4:15 AM Stefano Zampini <
>>>>> stefano.zamp...@gmail.com> wrote:
>>>>>
>>>>>> It seems you are using DMPLEX to handle the mesh, correct?
>>>>>> If so, you should configure using --download-parmetis to have a
>>>>>> better domain decomposition since the default one just splits the cells 
>>>>>> in
>>>>>> chunks as they are ordered.
>>>>>> This results in a large number of primal dofs on average (191, from
>>>>>> the  output of ksp_view)
>>>>>> ...
>>>>>> Primaldofs   : 176 204 191
>>>>>> ...

Re: [petsc-users] Strong scaling concerns for PCBDDC with Vector FEM

2024-08-20 Thread Matthew Knepley
On Tue, Aug 20, 2024 at 1:36 PM neil liu  wrote:

> Hi, Matt,
> I think the time listed here represents the maximum total time across
> different processors.
>
> Thanks a lot.
>  2 cpus
> 4 cpus   8 cpus
> Event  Count Time (sec)  Count
>  Time (sec)Count Time (sec)
>Max RatioMaxRatio   Max Ratio
>   Max Ratio   Max RatioMax Ratio
> VecMDot  530 1.0 7.8320e+01 1.0 5301.0
>  4.3285e+01 1.1   530   1.0  3.0476e+01   1.1
> VecMAXPY  534 1.0 9.2954e+01 1.0 5341.0
> 4.8378e+01 1.1  534   1.0  3.0798e+01   1.1
> MatMult  8055 1.0 2.4608e+02 1.08103   1.0
> 1.2663e+02 1.0  8367 1.0   8.2942e+01 1.1
>

For the number of calls listed.

1) The number of MatMults goes up, so you should normalize for that, but
you still have about 1.6 speedup. However, this is
all multiplications. Are we sure they have the same size and sparsity?

2) MAXPY is also 1.6

3) MDot probably does not see the latency of one node, so again it is not
speeding up as you might want.

This looks like you are using a single node with 2, 4, and 8 procs. The
memory bandwidth is exhausted sometime before 8 procs
(maybe 6), so you cease to see speedup. You can check this by running `make
streams` on the node.

  Thanks,

 Matt
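
Normalizing the numbers quoted above by the call counts gives, roughly:

  MatMult per call:  246.08 s / 8055 calls ~ 30.5 ms  (2 ranks)
                     126.63 s / 8103 calls ~ 15.6 ms  (4 ranks)
                      82.94 s / 8367 calls ~  9.9 ms  (8 ranks)

so the per-call speedup is about 1.95 going from 2 to 4 ranks but only about 1.6 from 4 to 8, consistent with the memory-bandwidth limit described above.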


> On Tue, Aug 20, 2024 at 1:16 PM Matthew Knepley  wrote:
>
>> On Tue, Aug 20, 2024 at 1:10 PM neil liu  wrote:
>>
>>> Thanks a lot for your explanation, Stefano. Very helpful.
>>> Yes. I am using dmplex to read a tetrahedral mesh from gmsh. With
>>> parmetis, the scaling performance is improved a lot.
>>> I will read your paper about how to change the basis for Nedelec
>>> elements.
>>>
>>> cpu #time for 500 ksp steps  (s)   parallel efficiency
>>> 2   546
>>> 4   224   120%
>>> 8   170   80%
>>> These results are much better than the previous attempt. Then I checked the
>>> time spent by several Petsc built-in functions for the ksp solver.
>>>
>>> Functions  time(2 cpus) time(4 cpus)  time(8 cpus)
>>> VecMDot   78.3243.2830.47
>>> VecMAXPY   92.9548.3730.798
>>> MatMult  246.08   126.6382.94
>>>
>>> It seems from cpu 4 to cpu 8, the scaling is not as good as from cpu 2
>>> to cpu 4.
>>> Am I  missing something?
>>>
>>
>> Did you normalize by the number of calls?
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Thanks a lot,
>>>
>>> Xiaodong
>>>
>>>
>>> On Mon, Aug 19, 2024 at 4:15 AM Stefano Zampini <
>>> stefano.zamp...@gmail.com> wrote:
>>>
>>>> It seems you are using DMPLEX to handle the mesh, correct?
>>>> If so, you should configure using --download-parmetis to have a better
>>>> domain decomposition since the default one just splits the cells in chunks
>>>> as they are ordered.
>>>> This results in a large number of primal dofs on average (191, from
>>>> the  output of ksp_view)
>>>> ...
>>>> Primaldofs   : 176 204 191
>>>> ...
>>>> that slows down the solver setup.
>>>>
>>>> Again, you should not use approximate local solvers with BDDC unless
>>>> you know what you are doing.
>>>> The theory for approximate solvers for BDDC is small and only for SPD
>>>> problems.
>>>> Looking at the output of log_view, coarse problem setup (PCBDDCCSet),
>>>> and primal functions setup (PCBDDCCorr) costs 35 + 63 seconds, 
>>>> respectively.
>>>> Also, the 500 application of the GAMG preconditioner for the Neumann
>>>> solver (PCBDDCNeuS) takes 129 seconds out of the 400 seconds of the total
>>>> solve time.
>>>>
>>>> PCBDDCTopo 1 1.0 3.1563e-01 1.0 1.11e+06 3.4 1.6e+03
>>>> 3.9e+04 3.8e+01  0  0  1  0  2   0  0  1  0  219
>>>> PCBDDCLKSP 2 1.0 2.0423e+00 1.7 9.31e+08 1.2 0.0e+00
>>>> 0.0e+00 2.0e+00  0  0  0  0  0   0  0  0  0  0  3378
>>>> PCBDDCLWor 1 1.0 3.9178e-02 13.4 0.00e+00 0.0 0.0e+00
>>>&g

Re: [petsc-users] Strong scaling concerns for PCBDDC with Vector FEM

2024-08-20 Thread Matthew Knepley
On Tue, Aug 20, 2024 at 1:10 PM neil liu  wrote:

> Thanks a lot for your explanation, Stefano. Very helpful.
> Yes. I am using dmplex to read a tetrahedral mesh from gmsh. With parmetis,
> the scaling performance is improved a lot.
> I will read your paper about how to change the basis for Nedelec elements.
>
> cpu #time for 500 ksp steps  (s)   parallel efficiency
> 2   546
> 4   224   120%
> 8   170   80%
> These results are much better than the previous attempt. Then I checked the
> time spent by several Petsc built-in functions for the ksp solver.
>
> Functions  time(2 cpus) time(4 cpus)  time(8 cpus)
> VecMDot   78.3243.2830.47
> VecMAXPY   92.9548.3730.798
> MatMult  246.08   126.6382.94
>
> It seems from cpu 4 to cpu 8, the scaling is not as good as from cpu 2 to
> cpu 4.
> Am I  missing something?
>

Did you normalize by the number of calls?

  Thanks,

 Matt


> Thanks a lot,
>
> Xiaodong
>
>
> On Mon, Aug 19, 2024 at 4:15 AM Stefano Zampini 
> wrote:
>
>> It seems you are using DMPLEX to handle the mesh, correct?
>> If so, you should configure using --download-parmetis to have a better
>> domain decomposition since the default one just splits the cells in chunks
>> as they are ordered.
>> This results in a large number of primal dofs on average (191, from the
>> output of ksp_view)
>> ...
>> Primaldofs   : 176 204 191
>> ...
>> that slows down the solver setup.
>>
>> Again, you should not use approximate local solvers with BDDC unless you
>> know what you are doing.
>> The theory for approximate solvers for BDDC is small and only for SPD
>> problems.
>> Looking at the output of log_view, coarse problem setup (PCBDDCCSet), and
>> primal functions setup (PCBDDCCorr) costs 35 + 63 seconds, respectively.
>> Also, the 500 application of the GAMG preconditioner for the Neumann
>> solver (PCBDDCNeuS) takes 129 seconds out of the 400 seconds of the total
>> solve time.
>>
>> PCBDDCTopo 1 1.0 3.1563e-01 1.0 1.11e+06 3.4 1.6e+03 3.9e+04
>> 3.8e+01  0  0  1  0  2   0  0  1  0  219
>> PCBDDCLKSP 2 1.0 2.0423e+00 1.7 9.31e+08 1.2 0.0e+00 0.0e+00
>> 2.0e+00  0  0  0  0  0   0  0  0  0  0  3378
>> PCBDDCLWor 1 1.0 3.9178e-02 13.4 0.00e+00 0.0 0.0e+00 0.0e+00
>> 1.0e+00  0  0  0  0  0   0  0  0  0  0 0
>> PCBDDCCorr 1 1.0 6.3981e+01 2.2 8.16e+10 1.6 0.0e+00 0.0e+00
>> 0.0e+00 11 11  0  0  0  11 11  0  0  0  8900
>> PCBDDCCSet 1 1.0 3.5453e+01 4564.9 1.06e+05 1.7 1.2e+03
>> 5.3e+03 5.0e+01  2  0  1  0  3   2  0  1  0  3 0
>> PCBDDCCKSP 1 1.0 6.3266e-01 1.3 0.00e+00 0.0 3.3e+02 1.1e+02
>> 2.2e+01  0  0  0  0  1   0  0  0  0  1 0
>> PCBDDCScal 1 1.0 6.8274e-03 1.3 1.11e+06 3.4 5.6e+01 3.2e+05
>> 0.0e+00  0  0  0  0  0   0  0  0  0  0   894
>> PCBDDCDirS  1000 1.0 6.0420e+00 3.5 6.64e+09 5.4 0.0e+00 0.0e+00
>> 0.0e+00  1  0  0  0  0   1  0  0  0  0  2995
>> PCBDDCNeuS   500 1.0 1.2901e+02 2.1 8.28e+10 1.2 0.0e+00 0.0e+00
>> 0.0e+00 22 12  0  0  0  22 12  0  0  0  4828
>> PCBDDCCoaS   500 1.0 5.8757e-01 1.8 1.09e+09 1.0 2.8e+04 7.4e+02
>> 5.0e+02  0  0 17  0 28   0  0 17  0 31 14901
>>
>> Finally, if I look at the residual history, I see a sharp decrease and a
>> very long plateau. This indicates a bad coarse space; as I said before,
>> there's no hope of finding a suitable coarse space without first changing
>> the basis of the Nedelec elements, which is done automatically if you
>> prescribe the discrete gradient operator (see the paper I have linked to in
>> my previous communication).
>>
>>
>>
>> Il giorno dom 18 ago 2024 alle ore 00:37 neil liu 
>> ha scritto:
>>
>>> Hi, Stefano,
>>> Please see the attached for the information with 4 and 8 CPUs for the
>>> complex matrix.
>>> I am solving Maxwell equations (Attahced) using 2nd-order Nedelec
>>> elements (two dofs each edge, and two dofs each face).
>>> The computational domain consists of different mediums, e.g., vacuum and
>>> substrate (different permitivity).
>>> The PML is used to truncate the computational domain, absorbing the
>>> outgoing wave and introducing complex numbers for the matrix.
>>>
>>> Thanks a lot for your suggestions. I will try MUMPS.
>>> For now, I just want to fiddle with Petsc's built-in features to know
>>> more about it.
>>> Yes. 5000 is larger. Smaller value. e.g., 30, converges very slowly.
>>>
>>> Thanks a lot.
>>>
>>> Have a good weekend.
>>>
>>>
>>> On Sat, Aug 17, 2024 at 9:23 AM Stefano Zampini <
>>> stefano.zamp...@gmail.com> wrote:
>>>
 Please include the output of -log_view -ksp_view -ksp_monitor to
 understand what's happening.

 Can you please share the equations you are solving so we can provide
 suggestions on the solver configura

Re: [petsc-users] Issue configuring PETSc with HYPRE in Polaris

2024-08-09 Thread Matthew Knepley
As a start, please send configure.log

  Thanks,

Matt

On Fri, Aug 9, 2024 at 1:17 PM Vanella, Marcos (Fed) via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hi All, I keep running into this issue when trying to configure PETSc
> downloading HYPRE in Polaris.
> My modules are:
>
> export *MPICH_GPU_SUPPORT_ENABLED*=1
> module use /soft/modulefiles
> module load spack-pe-base cmake
> module unload darshan
> module load cudatoolkit-standalone PrgEnv-gnu cray-libsci
>
> and my configure line is:
>
> $./configure COPTFLAGS="-O2" CXXOPTFLAGS="-O2" FOPTFLAGS="-O2"
> FCOPTFLAGS="-O2" CUDAOPTFLAGS="-O2" --with-debugging=1
> --download-suitesparse --download-hypre --with-cuda --with-cc=cc
> --with-cxx=CC --with-fc=ftn --with-cudac=nvcc --with-cuda-arch=80
>
> What I see in the configure phase is:
>
> =
>  Configuring PETSc to compile on your system
>
> =
>
> =
>Trying to download 
> https://bitbucket.org/petsc/pkg-sowing.git
>  
> 
> for SOWING
>
> =
>
> =
>   Running configure on SOWING; this may take several
> minutes
>
> =
>
> =
> Running make on SOWING; this may take several minutes
>
> =
>
> =
> Running make install on SOWING; this may take several
> minutes
>
> =
>
> =
>  Running arch-polaris-dbg/bin/bfort to generate Fortran
> stubs
>
> =
>
> =
> Trying to download 
> https://github.com/DrTimothyAldenDavis/SuiteSparse
>  
> 
> for SUITESPARSE
>
> =
>
> =
>   Configuring SUITESPARSE with CMake; this may take several
> minutes
>
> =
>
> =
>  Compiling and installing SUITESPARSE; this may take several
> minutes
>
> =
>
> =
>   Trying to download 
> https://github.com/hypre-space/hypre
>  
> 
> for HYPRE
>
> =
>
> =
>   Running configure on HYPRE; this may take several minutes
>
> =
>
> =
>  Running make on HYPRE; this ma

Re: [petsc-users] Read/Write large dense matrix

2024-08-05 Thread Matthew Knepley
On Mon, Aug 5, 2024 at 1:26 PM Sreeram R Venkat  wrote:

> I do have 64 bit indices turned on. The problem I think is that the
> PetscMPIInt is always a 32 bit int, and that's what's overflowing
>

We should be using the large count support from MPI. However, it appears we
forgot somewhere. Would it be possible to
construct a simple example that I can run and find the error? You should be
able to just create a dense matrix of zeros with the
correct size.

  Thanks,

  Matt
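
A sketch of such a reproducer (assuming only that a dense matrix of the stated size, with more than 2^31 entries, is enough to trigger the overflow):

#include <petsc.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscInt    n = 50000; /* 2.5e9 dense entries, past the 32-bit count range */
  PetscViewer viewer;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, NULL, &A));
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "dense.bin", FILE_MODE_WRITE, &viewer));
  PetscCall(MatView(A, viewer)); /* the reported MPI buffer-size error should appear here */
  PetscCall(PetscViewerDestroy(&viewer));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}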


> On Mon, Aug 5, 2024 at 12:25 PM Matthew Knepley  wrote:
>
>> On Mon, Aug 5, 2024 at 1:10 PM Sreeram R Venkat 
>> wrote:
>>
>>> I have a large dense matrix (size ranging from 5e4 to 1e5) that arises
>>> as a result of doing MatComputeOperator() on a MatShell. When the total
>>> number of nonzeros exceeds the 32 bit integer value, I get an error (MPI
>>> buffer size too big) when trying to do MatView() on this to save to binary.
>>> Is there a way I can save this matrix to load again for later use?
>>>
>>
>> I think you need to reconfigure with --with-64-bit-indices.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> The other thing I tried was to save each column as a separate dataset in
>>> an hdf5 file. Then, I tried to load this in python, combine them to an np
>>> array, and then create/save a dense matrix with petsc4py. I was able to
>>> create the dense Mat, but the MatView() once again resulted in an error
>>> (out of memory).
>>>
>>> Thanks,
>>> Sreeram
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>  
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
  


Re: [petsc-users] Read/Write large dense matrix

2024-08-05 Thread Matthew Knepley
On Mon, Aug 5, 2024 at 1:10 PM Sreeram R Venkat  wrote:

> I have a large dense matrix (size ranging from 5e4 to 1e5) that arises as
> a result of doing MatComputeOperator() on a MatShell. When the total number
> of nonzeros exceeds the 32 bit integer value, I get an error (MPI buffer
> size too big) when trying to do MatView() on this to save to binary. Is
> there a way I can save this matrix to load again for later use?
>

I think you need to reconfigure with --with-64-bit-indices.

  Thanks,

 Matt


> The other thing I tried was to save each column as a separate dataset in
> an hdf5 file. Then, I tried to load this in python, combine them to an np
> array, and then create/save a dense matrix with petsc4py. I was able to
> create the dense Mat, but the MatView() once again resulted in an error
> (out of memory).
>
> Thanks,
> Sreeram
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
  



Re: [petsc-users] Ghost particles for DMSWARM (or similar)

2024-08-04 Thread Matthew Knepley
On Fri, Aug 2, 2024 at 7:15 PM MIGUEL MOLINOS PEREZ  wrote:

> Thanks again Matt, that makes a lot more sense !!
>
> Just to check that we are on the same page. You are saying:
>
> 1. create a field define a field called "owner rank" for each particle.
>
> 2. Identify the phantom particles and modify the internal variable defined
> by the DMSwarmField_rank variable.
>
> 3. Call DMSwarmMigrate(*,PETSC_FALSE), do the calculations using the new
> local vector including the ghost particles.
>
> 4. Then, once the calculations are done, rename the DMSwarmField_rank
> variable using the "owner rank" variable and call
> DMSwarmMigrate(*,PETSC_FALSE) once again.
>

I don't think we need this last step. We can just remove those ghost
particles for the next step I think.

  Thanks,

     Matt


> Thank you,
> Miguel
>
>
> On Aug 2, 2024, at 5:33 PM, Matthew Knepley  wrote:
>
> On Fri, Aug 2, 2024 at 11:15 AM MIGUEL MOLINOS PEREZ 
> wrote:
>
>> Thank you Matt for your time,
>>
>> What you describe seems to me the ideal approach.
>>
>> 1) Add a particle field 'ghost' that identifies ghost vs owned particles.
>> I think it needs options OWNED, OVERLAP, and GHOST
>>
>> This means, locally, I need to allocate Nlocal + ghost particles
>> (duplicated) for my model?
>>
>
> I would do it another way. I would allocate the particles with no overlap
> and set them up. Then I would identify the halo particles, mark them as
> OVERLAP, call DMSwarmMigrate(), and mark the migrated particles as GHOST,
> then unmark the OVERLAP particles. Shoot! That marking will not work since
> we cannot tell the difference between particles we received and particles
> we sent. Okay, instead of the `ghost` field we need an `owner rank` field.
> So then we
>
> 1) Setup the non-overlapping particles
>
> 2) Identify the halo particles
>
> 3) Change the `rank`, but not the `owner rank`
>
> 4) Call DMSwarmMigrate()
>
> Now we can identify ghost particles by the `owner rank`
>
>
>> If that is so, how to do the communication between the ghost particles
>> living in the rank i and their “real” counterpart in the rank j.
>>
>> Also, as an alternative, what about:
>> 1) Use an IS tag which contains, for each rank, a list of the global
>> index of the neighbors particles outside of the rank.
>> 2) Use VecCreateGhost to create a new vector which contains extra local
>> space for the ghost components of the vector.
>> 3) Use VecScatterCreate, VecScatterBegin, and VecScatterEnd to do the
>> transference of data between a vector obtained with
>> DMSwarmCreateGlobalVectorFromField
>> 4) Do necessary computations using the vectors created with
>> VecCreateGhost.
>>
>
> This is essentially what Migrate() does. I was trying to reuse the code.
>
>   Thanks,
>
>  Matt
>
>
>> Thanks,
>> Miguel
>>
>> On Aug 2, 2024, at 8:58 AM, Matthew Knepley  wrote:
>>
>> On Thu, Aug 1, 2024 at 4:40 PM MIGUEL MOLINOS PEREZ 
>> wrote:
>>
>>>
>>>
>>> Dear all,
>>>
>>> I am implementing a Molecular Dynamics (MD) code using the DMSWARM 
>>> interface. In the MD simulations we evaluate on each particle (atoms) some 
>>> kind of scalar functional using data from the neighbouring atoms. My 
>>> problem lies in the parallel implementation of the model, because 
>>> sometimes, some of these neighbours lie on a different processor.
>>>
>>> This is usually solved by using ghost particles.  A similar approach (with 
>>> nodes instead) is already implemented for other PETSc mesh structures like 
>>> DMPlexConstructGhostCells. Unfortunately, I don't see this kind of 
>>> constructs for DMSWARM. Am I missing something?
>>>
>>> I think this could be done by applying a buffer region by exploiting the 
>>> background DMDA mesh that I already use to do domain decomposition. Then 
>>> using the buffer region of each cell to locate the ghost particles and 
>>> finally using VecCreateGhost. Is this feasible? Or is there an easier 
>>> approach using other PETSc functions.
>>>
>>>
>> This is feasible, but it would be good to develop a set of best
>> practices, since we have been mainly focused on the case of non-redundant
>> particles. Here is how I think I would do what you want.
>>
>> 1) Add a particle field 'ghost' that identifies ghost vs owned particles.
>> I think it nee

Re: [petsc-users] Ghost particles for DMSWARM (or similar)

2024-08-02 Thread Matthew Knepley
On Fri, Aug 2, 2024 at 11:15 AM MIGUEL MOLINOS PEREZ  wrote:

> Thank you Matt for your time,
>
> What you describe seems to me the ideal approach.
>
> 1) Add a particle field 'ghost' that identifies ghost vs owned particles.
> I think it needs options OWNED, OVERLAP, and GHOST
>
> This means, locally, I need to allocate Nlocal + ghost particles
> (duplicated) for my model?
>

I would do it another way. I would allocate the particles with no overlap
and set them up. Then I would identify the halo particles, mark them as
OVERLAP, call DMSwarmMigrate(), and mark the migrated particles as GHOST,
then unmark the OVERLAP particles. Shoot! That marking will not work since
we cannot tell the difference between particles we received and particles
we sent. Okay, instead of the `ghost` field we need an `owner rank` field.
So then we

1) Setup the non-overlapping particles

2) Identify the halo particles

3) Change the `rank`, but not the `owner rank`

4) Call DMSwarmMigrate()

Now we can identify ghost particles by the `owner rank`
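
A rough sketch of steps 1-4 with DMSwarm calls (the "owner_rank" field name and the swarm/neighbor variables are illustrative, not from the thread):

  /* 1) register an application-owned rank field next to the swarm's internal rank */
  PetscCall(DMSwarmRegisterPetscDatatypeField(swarm, "owner_rank", 1, PETSC_INT));
  PetscCall(DMSwarmFinalizeFieldRegister(swarm));

  /* 2)+3) for each halo particle p: keep owner[p] = myrank, set rank[p] = destination rank */
  PetscInt *owner, *rank;
  PetscCall(DMSwarmGetField(swarm, "owner_rank", NULL, NULL, (void **)&owner));
  PetscCall(DMSwarmGetField(swarm, DMSwarmField_rank, NULL, NULL, (void **)&rank));
  /* ... application-specific halo detection goes here ... */
  PetscCall(DMSwarmRestoreField(swarm, DMSwarmField_rank, NULL, NULL, (void **)&rank));
  PetscCall(DMSwarmRestoreField(swarm, "owner_rank", NULL, NULL, (void **)&owner));

  /* 4) migrate without deleting the sent particles; arrivals with owner_rank != myrank are the ghosts */
  PetscCall(DMSwarmMigrate(swarm, PETSC_FALSE));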


> If that is so, how to do the communication between the ghost particles living
> in the rank i and their “real” counterpart in the rank j.
>
> Also, as an alternative, what about:
> 1) Use an IS tag which contains, for each rank, a list of the global
> index of the neighbors particles outside of the rank.
> 2) Use VecCreateGhost to create a new vector which contains extra local
> space for the ghost components of the vector.
> 3) Use VecScatterCreate, VecScatterBegin, and VecScatterEnd to do the
> transference of data between a vector obtained with
> DMSwarmCreateGlobalVectorFromField
> 4) Do necessary computations using the vectors created with VecCreateGhost
> .
>

This is essentially what Migrate() does. I was trying to reuse the code.

  Thanks,

 Matt


> Thanks,
> Miguel
>
> On Aug 2, 2024, at 8:58 AM, Matthew Knepley  wrote:
>
> On Thu, Aug 1, 2024 at 4:40 PM MIGUEL MOLINOS PEREZ 
> wrote:
>
>>
>> Dear all,
>>
>> I am implementing a Molecular Dynamics (MD) code using the DMSWARM 
>> interface. In the MD simulations we evaluate on each particle (atoms) some 
>> kind of scalar functional using data from the neighbouring atoms. My problem 
>> lies in the parallel implementation of the model, because sometimes, some of 
>> these neighbours lie on a different processor.
>>
>> This is usually solved by using ghost particles.  A similar approach (with 
>> nodes instead) is already implemented for other PETSc mesh structures like 
>> DMPlexConstructGhostCells. Unfortunately, I don't see this kind of
>> construct for DMSWARM. Am I missing something?
>>
>> I think this could be done by applying a buffer region, exploiting the
>> background DMDA mesh that I already use for domain decomposition, then
>> using the buffer region of each cell to locate the ghost particles and
>> finally using VecCreateGhost. Is this feasible? Or is there an easier
>> approach using other PETSc functions?
>>
>>
> This is feasible, but it would be good to develop a set of best practices,
> since we have been mainly focused on the case of non-redundant particles.
> Here is how I think I would do what you want.
>
> 1) Add a particle field 'ghost' that identifies ghost vs owned particles.
> I think it needs options OWNED, OVERLAP, and GHOST
>
> 2) At some interval identify particles that should be sent to other
> processes as ghosts. I would call these "overlap particles". The
> determination
> seems application specific, so I would leave this determination to the
> user right now. We do two things to these particles
>
> a) Mark chosen particles as OVERLAP
>
> b) Change rank to process we are sending to
>
> 3) Call DMSwarmMigrate with PETSC_FALSE for the particle deletion flag
>
> 4) Mark OVERLAP particles as GHOST when they arrive
>
> There is one problem in the above algorithm. It does not allow sending
> particles to multiple ranks. We would have to do this
> in phases right now, or make a small adjustment to the interface allowing
> replication of particles when a set of ranks is specified.
>
>   Thanks,
>
>  Matt
>
>
>> Thank you,
>> Miguel
>>
>>
>>
>

Re: [petsc-users] Ghost particles for DMSWARM (or similar)

2024-08-02 Thread Matthew Knepley
On Thu, Aug 1, 2024 at 4:40 PM MIGUEL MOLINOS PEREZ  wrote:

>
> Dear all,
>
> I am implementing a Molecular Dynamics (MD) code using the DMSWARM interface. 
> In the MD simulations we evaluate on each particle (atoms) some kind of 
> scalar functional using data from the neighbouring atoms. My problem lies in 
> the parallel implementation of the model, because sometimes, some of these 
> neighbours lie on a different processor.
>
> This is usually solved by using ghost particles.  A similar approach (with 
> nodes instead) is already implemented for other PETSc mesh structures like 
> DMPlexConstructGhostCells. Unfortunately, I don't see this kind of construct
> for DMSWARM. Am I missing something?
>
> I think this could be done by applying a buffer region, exploiting the
> background DMDA mesh that I already use for domain decomposition, then using
> the buffer region of each cell to locate the ghost particles and finally
> using VecCreateGhost. Is this feasible? Or is there an easier approach using
> other PETSc functions?
>
>
This is feasible, but it would be good to develop a set of best practices,
since we have been mainly focused on the case of non-redundant particles.
Here is how I think I would do what you want.

1) Add a particle field 'ghost' that identifies ghost vs owned particles. I
think it needs options OWNED, OVERLAP, and GHOST

2) At some interval identify particles that should be sent to other
processes as ghosts. I would call these "overlap particles". The
determination
seems application specific, so I would leave this determination to the
user right now. We do two things to these particles

a) Mark chosen particles as OVERLAP

b) Change rank to process we are sending to

3) Call DMSwarmMigrate with PETSC_FALSE for the particle deletion flag

4) Mark OVERLAP particles as GHOST when they arrive

There is one problem in the above algorithm. It does not allow sending
particles to multiple ranks. We would have to do this
in phases right now, or make a small adjustment to the interface allowing
replication of particles when a set of ranks is specified.

  Thanks,

 Matt


> Thank you,
> Miguel
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fTP6CcczHauSge4FV5cI88RqYPhXISVNPhCpwU5IjmOea9z2VEtIlwEoPSlg5aJbEQzO0IQ8CIvAywPYjOAG$
  



Re: [petsc-users] Question regarding naming of fieldsplit splits

2024-08-02 Thread Matthew Knepley
: (fieldsplit_2_) 1 MPI process
> type: jacobi
>   type DIAGONAL
> linear system matrix = precond matrix:
> Mat Object: (fieldsplit_2_) 1 MPI process
>   type: seqaij
>   rows=243, cols=243
>   total: nonzeros=4473, allocated nonzeros=4473
>   total number of mallocs used during MatSetValues calls=0
> using I-node routines: found 85 nodes, limit used is 5
>   linear system matrix = precond matrix:
>   Mat Object: 1 MPI process
> type: seqaij
> rows=567, cols=567
> total: nonzeros=24353, allocated nonzeros=24353
> total number of mallocs used during MatSetValues calls=0
>   using I-node routines: found 173 nodes, limit used is 5
>
> --
> Dr. Sebastian Blauth
> Fraunhofer-Institut für
> Techno- und Wirtschaftsmathematik ITWM
> Abteilung Transportvorgänge
> Fraunhofer-Platz 1, 67663 Kaiserslautern
> Telefon: +49 631 31600-4968
> sebastian.bla...@itwm.fraunhofer.de
> https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!an6Idf-f7OiZlsU0N0Ftpr5mM5etD7GF_9ghya_ALFmQP_eL93oONwYYRLmLGz-0FSXHkB0bMsjj0e4-qdCV$
>  
>
> *From:* petsc-users  *On Behalf Of *Blauth,
> Sebastian
> *Sent:* Tuesday, July 2, 2024 11:47 AM
> *To:* Matthew Knepley 
> *Cc:* petsc-users@mcs.anl.gov
> *Subject:* Re: [petsc-users] Question regarding naming of fieldsplit
> splits
>
> Hi Matt,
>
> thanks for the answer and clarification. Then I’ll work around this issue
> in Python, where I set the options.
>
> Best,
> Sebastian
>
> --
> Dr. Sebastian Blauth
> Fraunhofer-Institut für
> Techno- und Wirtschaftsmathematik ITWM
> Abteilung Transportvorgänge
> Fraunhofer-Platz 1, 67663 Kaiserslautern
> Telefon: +49 631 31600-4968
> sebastian.bla...@itwm.fraunhofer.de
> https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!an6Idf-f7OiZlsU0N0Ftpr5mM5etD7GF_9ghya_ALFmQP_eL93oONwYYRLmLGz-0FSXHkB0bMsjj0e4-qdCV$
>  
>
> *From:* Matthew Knepley 
> *Sent:* Monday, July 1, 2024 4:30 PM
> *To:* Blauth, Sebastian 
> *Cc:* petsc-users@mcs.anl.gov
> *Subject:* Re: [petsc-users] Question regarding naming of fieldsplit
> splits
>
> On Mon, Jul 1, 2024 at 9:48 AM Blauth, Sebastian <
> sebastian.bla...@itwm.fraunhofer.de> wrote:
>
> Dear Matt,
>
> thanks a lot for your help. Unfortunately, for me these extra options do
> not have any effect, I still get the “u” and “p” fieldnames. Also, this
> would not help me to get rid of the “c” fieldname – on that level of the
> fieldsplit I am basically using your approach already, and still it does
> show up. The output of the -ksp_view is unchanged, so that I do not attach
> it here again. Maybe I misunderstood you?
>
>
> Oh, we make an exception for single fields, since we think you would want
> to use the name. I have to make an extra option to shut off naming.
>
>Thanks,
>
>  Matt
>
>
> Thanks for the help and best regards,
> Sebastian
>
> --
> Dr. Sebastian Blauth
> Fraunhofer-Institut für
> Techno- und Wirtschaftsmathematik ITWM
> Abteilung Transportvorgänge
> Fraunhofer-Platz 1, 67663 Kaiserslautern
> Telefon: +49 631 31600-4968
> sebastian.bla...@itwm.fraunhofer.de
> https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!an6Idf-f7OiZlsU0N0Ftpr5mM5etD7GF_9ghya_ALFmQP_eL93oONwYYRLmLGz-0FSXHkB0bMsjj0e4-qdCV$
>  
>
> *From:* Matthew Knepley 
> *Sent:* Monday, July 1, 2024 2:27 PM
> *To:* Blauth, Sebastian 
> *Cc:* petsc-users@mcs.anl.gov
> *Subject:* Re: [petsc-users] Question regarding naming of fieldsplit
> splits
>
> On Fri, Jun 28, 2024 at 4:05 AM Blauth, Sebastian <
> sebastian.bla...@itwm.fraunhofer.de> wrote:
>
> Hello everyone,
>
> I have a question regarding the naming convention using PETSc’s
> PCFieldsplit. I have been following
> https://urldefense.us/v3/__https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html__;!!G_uCfscf7eWS!an6Idf-f7OiZlsU0N0Ftpr5mM5etD7GF_9ghya_ALFmQP_eL93oONwYYRLmLGz-0FSXHkB0bMsjj0Qyn5DYX$
>   to
> create a DMShell with FEniCS in order to customize PCFieldsplit for my
> application.
> I am using the following options, which work nicely for me:
>
> -ksp_type fgmres
> -pc_type fieldsplit
> -pc_fieldsplit_0_fields 0, 1
> -pc_fieldsplit_1_fields 2
> -pc_fieldsplit_type additive
> -fieldsplit_0_ksp_type fgmres
> -fieldsplit_0_pc_type fieldsplit
> -fieldsplit_0_pc_fieldsplit_type schur
> -fieldsplit_0_pc_fieldsplit_schur_fact_type full
> -fieldsplit_0_pc_fieldsplit_schur_precondition selfp
> -fieldsplit_0_fieldsplit_u_ksp_type preonly
> -fieldsplit_0_fieldsplit_u_pc_type lu
> -fieldsplit_0_fieldsplit_p_ksp_type cg
> -fieldsplit_0_fieldsplit_p_ksp_rtol 1e-14
> 

Re: [petsc-users] How to combine different element types into a single DMPlex?

2024-08-01 Thread Matthew Knepley
On Thu, Aug 1, 2024 at 8:23 AM Eric Chamberland <
eric.chamberl...@giref.ulaval.ca> wrote:

> Hi Matthew,
>
> we have our own format that uses MPI I/O for the initial read, then we
> would like to do almost exactly what we do in ex47.c (
> https://urldefense.us/v3/__https://petsc.org/main/src/dm/impls/plex/tests/ex47.c.html__;!!G_uCfscf7eWS!aHeMEPfb0Meog5f2a3LiP86hnFxzuIQvMnwh6xTVli7pOyTG58-uCFxfN1vPwH43kT7LT5MKKPc7W06sEuZH$
>  ) excepted the
> very beginning of the program that will read (MPI I/O) from the disk.
> Then, always in parallel:
>
> 1- Populate a DMPlex with multiple element types (with a variant of
> DMPlexBuildFromCellListParallel ? do you have an example of this?)
>
> 2- Call partitioning (DMPlexDistribute)
>
> 3- Compute overlap (DMPlexDistributeOverlap)
>
> 4- Also compute the corresponding mapping between original element numbers
> and partitonned+overlap elements ( DMPlexNaturalToGlobalBegin/End)
>
> The main point here here is overlap computation.  And the big challenge is
> that we must always rely on the fact that never, ever, any node read all
> the mesh: all nodes have only a small part of it at the beginning then we
> want parallel partitioning and overlapping computation...
>
> It is now working fine for a mesh with a single type of element, but if we
> can modify ex47.c into an example with mixed element types, that will
> achieve what we would like to do!
>
We can do that. We only need to change step 1. I will put it on my TODO
list. My thinking is the same as Vaclav, namely to replace numCorners with
a PetscSection describing the cells[] array. Will that work for you?

  Thanks,

 Matt
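
As a rough illustration of that idea (my reading of the proposal, not an existing PETSc interface), the section would simply record how many vertices each local cell contributes to a now-ragged cells[] array; numLocalCells and numCellVertices[] are assumed to come from the application's own reader:

```
PetscSection cellSection;

PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &cellSection));
PetscCall(PetscSectionSetChart(cellSection, 0, numLocalCells));
for (PetscInt c = 0; c < numLocalCells; ++c)
  PetscCall(PetscSectionSetDof(cellSection, c, numCellVertices[c]));
PetscCall(PetscSectionSetUp(cellSection));

/* cells[] then stores the vertex lists of all local cells back to back: */
for (PetscInt c = 0; c < numLocalCells; ++c) {
  PetscInt off;
  PetscCall(PetscSectionGetOffset(cellSection, c, &off));
  /* vertices of cell c: cells[off] .. cells[off + numCellVertices[c] - 1] */
}
```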

> Thanks,
>
> Eric
>
>
> On 2024-07-31 22:09, Matthew Knepley wrote:
>
> On Wed, Jul 31, 2024 at 4:16 PM Eric Chamberland <
> eric.chamberl...@giref.ulaval.ca> wrote:
>
>> Hi Vaclav,
>>
>> Okay, I am coming back with this question after some time... ;)
>>
>> I am just wondering if it is now possible to call
>> DMPlexBuildFromCellListParallel or something else, to build a mesh that
>> combine different element types into a single DMPlex (in parallel of
>> course) ?
>>
> 1) Meshes with different cell types are fully functional, and some
> applications have been using them for a while now.
>
> 2) The Firedrake I/O methods support these hybrid meshes.
>
> 3) You can, for example, read in a GMsh or ExodusII file with different
> cell types.
>
> However, there is no direct interface like
> DMPlexBuildFromCellListParallel(). If you plan on creating meshes by hand,
> I can build that for you.
> No one so far has wanted that. Rather they want to read in a mesh in some
> format, or alter a base mesh by inserting other cell types.
>
> So, what is the motivating use case?
>
>   Thanks,
>
>  Matt
>
>> Thanks,
>>
>> Eric
>> On 2021-09-23 11:30, Hapla Vaclav wrote:
>>
>> Note there will soon be a generalization of
>> DMPlexBuildFromCellListParallel() around, as a side product of our current
>> collaborative efforts with Firedrake guys. It will take a PetscSection
>> instead of relying on the blocksize [which is indeed always constant for
>> the given dataset]. Stay tuned.
>>
>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/4350__;!!G_uCfscf7eWS!aHeMEPfb0Meog5f2a3LiP86hnFxzuIQvMnwh6xTVli7pOyTG58-uCFxfN1vPwH43kT7LT5MKKPc7W_UAR2Yb$
>>  
>>
>> Thanks,
>>
>> Vaclav
>>
>> On 23 Sep 2021, at 16:53, Eric Chamberland <
>> eric.chamberl...@giref.ulaval.ca> wrote:
>>
>> Hi,
>>
>> oh, that's a great news!
>>
>> In our case we have our home-made file-format, invariant to the number of
>> processes (thanks to MPI_File_set_view), that uses collective, asynchronous
>> MPI I/O native calls for unstructured hybrid meshes and fields.
>>
>> So our needs are not for reading meshes but only to fill a hybrid DMPlex
>> with DMPlexBuildFromCellListParallel (or something else to come?)... to
>> exploit petsc partitioners and parallel overlap computation...
>>
>> Thanks for the follow-up! :)
>>
>> Eric
>>
>>
>> On 2021-09-22 7:20 a.m., Matthew Knepley wrote:
>>
>> On Wed, Sep 22, 2021 at 3:04 AM Karin&NiKo  wrote:
>>
>>> Dear Matthew,
>>>
>>> This is great news!
>>> For my part, I would be mostly interested in the parallel input
>>> interface. Sorry for that...
>>> Indeed, in our application,  we already have a parallel mesh data
>>> structure that supports hybrid meshes with parallel I/O and distribution
>>> (based on the M

Re: [petsc-users] How to combine different element types into a single DMPlex?

2024-07-31 Thread Matthew Knepley
On Wed, Jul 31, 2024 at 4:16 PM Eric Chamberland <
eric.chamberl...@giref.ulaval.ca> wrote:

> Hi Vaclav,
>
> Okay, I am coming back with this question after some time... ;)
>
> I am just wondering if it is now possible to call
> DMPlexBuildFromCellListParallel or something else, to build a mesh that
> combine different element types into a single DMPlex (in parallel of
> course) ?
>
1) Meshes with different cell types are fully functional, and some
applications have been using them for a while now.

2) The Firedrake I/O methods support these hybrid meshes.

3) You can, for example, read in a GMsh or ExodusII file with different
cell types.

However, there is no direct interface like
DMPlexBuildFromCellListParallel(). If you plan on creating meshes by hand,
I can build that for you.
No one so far has wanted that. Rather they want to read in a mesh in some
format, or alter a base mesh by inserting other cell types.

So, what is the motivating use case?

  Thanks,

 Matt

> Thanks,
>
> Eric
> On 2021-09-23 11:30, Hapla Vaclav wrote:
>
> Note there will soon be a generalization of
> DMPlexBuildFromCellListParallel() around, as a side product of our current
> collaborative efforts with Firedrake guys. It will take a PetscSection
> instead of relying on the blocksize [which is indeed always constant for
> the given dataset]. Stay tuned.
>
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/4350__;!!G_uCfscf7eWS!a7Z4JG-PH0CquDikXpywg-JEECEGlEIcXI5LzffVcIr4qqITdSAQJibbguyeQOCvW6DkzTDDbP58oBuRbcJg$
>  
>
> Thanks,
>
> Vaclav
>
> On 23 Sep 2021, at 16:53, Eric Chamberland <
> eric.chamberl...@giref.ulaval.ca> wrote:
>
> Hi,
>
> oh, that's a great news!
>
> In our case we have our home-made file-format, invariant to the number of
> processes (thanks to MPI_File_set_view), that uses collective, asynchronous
> MPI I/O native calls for unstructured hybrid meshes and fields.
>
> So our needs are not for reading meshes but only to fill a hybrid DMPlex
> with DMPlexBuildFromCellListParallel (or something else to come?)... to
> exploit petsc partitioners and parallel overlap computation...
>
> Thanks for the follow-up! :)
>
> Eric
>
>
> On 2021-09-22 7:20 a.m., Matthew Knepley wrote:
>
> On Wed, Sep 22, 2021 at 3:04 AM Karin&NiKo  wrote:
>
>> Dear Matthew,
>>
>> This is great news!
>> For my part, I would be mostly interested in the parallel input
>> interface. Sorry for that...
>> Indeed, in our application,  we already have a parallel mesh data
>> structure that supports hybrid meshes with parallel I/O and distribution
>> (based on the MED format). We would like to use a DMPlex to make parallel
>> mesh adaptation.
>>  As a matter of fact, all our meshes are in the MED format. We could
>> also contribute to extend the interface of DMPlex with MED (if you consider
>> it could be usefull).
>>
>
> An MED interface does exist. I stopped using it for two reasons:
>
>   1) The code was not portable and the build was failing on different
> architectures. I had to manually fix it.
>
>   2) The boundary markers did not provide global information, so that
> parallel reading was much harder.
>
> Feel free to update my MED reader to a better design.
>
>   Thanks,
>
>  Matt
>
>
>> Best regards,
>> Nicolas
>>
>>
>> Le mar. 21 sept. 2021 à 21:56, Matthew Knepley  a
>> écrit :
>>
>>> On Tue, Sep 21, 2021 at 10:31 AM Karin&NiKo 
>>> wrote:
>>>
>>>> Dear Eric, dear Matthew,
>>>>
>>>> I share Eric's desire to be able to manipulate meshes composed of
>>>> different types of elements in a PETSc's DMPlex.
>>>> Since this discussion, is there anything new on this feature for the
>>>> DMPlex object or am I missing something?
>>>>
>>>
>>> Thanks for finding this!
>>>
>>> Okay, I did a rewrite of the Plex internals this summer. It should now
>>> be possible to interpolate a mesh with any
>>> number of cell types, partition it, redistribute it, and many other
>>> manipulations.
>>>
>>> You can read in some formats that support hybrid meshes. If you let me
>>> know how you plan to read it in, we can make it work.
>>> Right now, I don't want to make input interfaces that no one will ever
>>> use. We have a project, joint with Firedrake, to finalize
>>> parallel I/O. This will make parallel reading and writing for
>>> checkpointing possible, supporting topology, geometry, fields and
>>> layouts, for many meshes in o

Re: [petsc-users] Right DM for a particle network

2024-07-31 Thread Matthew Knepley
On Wed, Jul 31, 2024 at 7:34 PM Marco Seiz  wrote:

> Since PETSc allows for setting of non-local matrix entries I should
> probably be able to set the "missing" entries. Would something like
>
> 1) Construct matrix A for conduction term
>
> 2) Calculate RHS as rhs = source(local information) + A * global vector
>
> 3) hand off to TS
>
> work then? Basically skipping over the DM in the first place and letting
> the matrix handle the connectivity.
>

It sounds like it could work, but now I feel I do not understand your code
enough to be certain. Particle neighborhoods will determine
row sparsity, but how will that be determined in parallel? You could move
rows to different processes with MatGetSubMatrix and then
query them, but this does not seem superior to sending particles. I think I
will not understand until I see a small prototype.

  Thanks,

 Matt
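
For what it is worth, a small prototype of the RHS evaluation described in steps 1)-3) above might look like the sketch below; it assumes a global Vec of particle temperatures, a conduction matrix A that is rebuilt whenever the connectivity changes, and a hypothetical, purely local source_term() function (none of these names come from the thread):

```
typedef struct {
  Mat A; /* conduction operator; rows correspond to locally owned particles */
} AppCtx;

static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec T, Vec F, void *ctx)
{
  AppCtx            *user = (AppCtx *)ctx;
  const PetscScalar *Tarr;
  PetscScalar       *Farr;
  PetscInt           nlocal;

  PetscFunctionBeginUser;
  PetscCall(MatMult(user->A, T, F)); /* conduction term A*T; MatMult gathers off-process T_j */
  PetscCall(VecGetLocalSize(T, &nlocal));
  PetscCall(VecGetArrayRead(T, &Tarr));
  PetscCall(VecGetArray(F, &Farr));
  for (PetscInt i = 0; i < nlocal; ++i) Farr[i] += source_term(Tarr[i]); /* local source S(T_i) */
  PetscCall(VecRestoreArray(F, &Farr));
  PetscCall(VecRestoreArrayRead(T, &Tarr));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* ... then: TSSetRHSFunction(ts, NULL, RHSFunction, &user); TSSolve(ts, T); */
```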


> Best regards,
>
> Marco
>
>
> On 31.07.24 23:28, Matthew Knepley wrote:
> > On Wed, Jul 31, 2024 at 10:08 AM Mark Adams  mfad...@lbl.gov>> wrote:
> >
> > Just a thought, but perhaps he may want to use just sparse matrices,
> > AIJ. He manages the connectivity and we manage ghost values.
> >
> >
> > He is reconfiguring the neighborhood (row) each time, so you would
> essentially create a new matrix at each step with different sparsity. It
> would definitely function, but I wonder if he would have enough local
> information to construct the rows?
> >
> >Thanks,
> >
> >   Matt
> >
> >
> > On Wed, Jul 31, 2024 at 6:25 AM Matthew Knepley  <mailto:knep...@gmail.com>> wrote:
> >
> > On Tue, Jul 30, 2024 at 11:32 PM Marco Seiz  <mailto:ma...@kit.ac.jp>> wrote:
> >
> > Hello,
> >
> > maybe to clarify a bit further: I'd essentially like to
> solve heat transport between particles only, without solving the transport
> on my voxel mesh since there's a large scale difference between the voxel
> size and the particle size and heat transport should be fast enough that
> voxel resolution is unnecessary. Basically a discrete element method just
> for heat transport. The whole motion/size change part is handled separately
> on the voxel mesh.
> > Based on the connectivity, I can make a graph (attached an
> example from a 3D case, for description see [1]) and on each vertex
> (particle) of the graph I want to account for source terms and conduction
> along the edges. What I'd like to avoid is managing the exchange for
> non-locally owned vertices during the solve (e.g. for RHS evaluation)
> myself but rather have the DM do it with DMGlobalToLocal() and friends.
> Thinking a bit further, I'd probably also want to associate some data with
> the edges since that will enter the conduction term but stays constant
> during a solve (think contact area between particles).
> >
> > Looking over the DMSwarm examples, the coupling between
> particles is via the background mesh, so e.g. I can't say "loop over all
> local particles and for each particle and its neighbours do X". I could use
> the projection part to dump the the source terms from the particles to a
> coarser background mesh but for the conduction term I don't see how I could
> get a good approximation of the contact area on the background mesh without
> having a mesh at a similar resolution as I already have, kinda destroying
> the purpose of the whole thing.
> >
> >
> > The point I was trying to make in my previous message is that
> DMSwarm does not require a background mesh. The examples use one because
> that is what we use to evaluate particle grouping. However, you have an
> independent way to do this, so you do not need it.
> >
> > Second, the issue of replicated particles. DMSwarmMigrate allows
> you to replicate particles, using the input flag. Of course, you would have
> to manage removing particles you no longer want.
> >
> >   Thanks,
> >
> >  Matt
> >
> >
> > [1] Points represent particles, black lines are edges, with
> the color indicating which worker "owns" the particle, with 3 workers being
> used and only a fraction of edges/vertices being displayed to keep it
> somewhat tidy. The position of the points corresponds to the particles' x-y
> position, with the z position being ignored. Particle ownership isn't done
> via looking where the particle is on the voxel grid, but rather by dividing
> the array of particle indices into subarrays, so e.g. particles [0-n/3) go
> to the first worker and so on. Since my 

Re: [petsc-users] Right DM for a particle network

2024-07-31 Thread Matthew Knepley
On Wed, Jul 31, 2024 at 10:08 AM Mark Adams  wrote:

> Just a thought, but perhaps he may want to use just sparse matrices, AIJ.
> He manages the connectivity and we manage ghost values.
>

He is reconfiguring the neighborhood (row) each time, so you would
essentially create a new matrix at each step with different sparsity. It
would definitely function, but I wonder if he would have enough local
information to construct the rows?

   Thanks,

  Matt


> On Wed, Jul 31, 2024 at 6:25 AM Matthew Knepley  wrote:
>
>> On Tue, Jul 30, 2024 at 11:32 PM Marco Seiz  wrote:
>>
>>> Hello,
>>>
>>> maybe to clarify a bit further: I'd essentially like to solve heat
>>> transport between particles only, without solving the transport on my voxel
>>> mesh since there's a large scale difference between the voxel size and the
>>> particle size and heat transport should be fast enough that voxel
>>> resolution is unnecessary. Basically a discrete element method just for
>>> heat transport. The whole motion/size change part is handled separately on
>>> the voxel mesh.
>>> Based on the connectivity, I can make a graph (attached an example from
>>> a 3D case, for description see [1]) and on each vertex (particle) of the
>>> graph I want to account for source terms and conduction along the edges.
>>> What I'd like to avoid is managing the exchange for non-locally owned
>>> vertices during the solve (e.g. for RHS evaluation) myself but rather have
>>> the DM do it with DMGlobalToLocal() and friends. Thinking a bit further,
>>> I'd probably also want to associate some data with the edges since that
>>> will enter the conduction term but stays constant during a solve (think
>>> contact area between particles).
>>>
>>> Looking over the DMSwarm examples, the coupling between particles is via
>>> the background mesh, so e.g. I can't say "loop over all local particles and
>>> for each particle and its neighbours do X". I could use the projection part
>>> to dump the source terms from the particles to a coarser background
>>> mesh but for the conduction term I don't see how I could get a good
>>> approximation of the contact area on the background mesh without having a
>>> mesh at a similar resolution as I already have, kinda destroying the
>>> purpose of the whole thing.
>>>
>>
>> The point I was trying to make in my previous message is that DMSwarm
>> does not require a background mesh. The examples use one because that is
>> what we use to evaluate particle grouping. However, you have an independent
>> way to do this, so you do not need it.
>>
>> Second, the issue of replicated particles. DMSwarmMigrate allows you to
>> replicate particles, using the input flag. Of course, you would have to
>> manage removing particles you no longer want.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> [1] Points represent particles, black lines are edges, with the color
>>> indicating which worker "owns" the particle, with 3 workers being used and
>>> only a fraction of edges/vertices being displayed to keep it somewhat tidy.
>>> The position of the points corresponds to the particles' x-y position, with
>>> the z position being ignored. Particle ownership isn't done via looking
>>> where the particle is on the voxel grid, but rather by dividing the array
>>> of particle indices into subarrays, so e.g. particles [0-n/3) go to the
>>> first worker and so on. Since my particles can span multiple workers on the
>>> voxel grid this makes it much easier to update edge information with
>>> one-sided communication.  As you can see the "mesh" is quite irregular with
>>> no nice boundary existing for connected particles owned by different
>>> workers.
>>>
>>> Best regards,
>>> Marco
>>>
>>> On 30.07.24 22:56, Mark Adams wrote:
>>> > * they do have a voxel mesh, so perhaps they want DMPlex.
>>> >
>>> > * they want ghost particle communication, that also might want a mesh
>>> >
>>> > * DMSwarm does not have a notion of ghost particle, as far as I know,
>>> but it could use one
>>> >
>>> > On Tue, Jul 30, 2024 at 7:58 AM Matthew Knepley >> <mailto:knep...@gmail.com>> wrote:
>>> >
>>> > On Tue, Jul 30, 2024 at 12:24 AM Marco Seiz  wrote:

Re: [petsc-users] Right DM for a particle network

2024-07-31 Thread Matthew Knepley
On Tue, Jul 30, 2024 at 11:32 PM Marco Seiz  wrote:

> Hello,
>
> maybe to clarify a bit further: I'd essentially like to solve heat
> transport between particles only, without solving the transport on my voxel
> mesh since there's a large scale difference between the voxel size and the
> particle size and heat transport should be fast enough that voxel
> resolution is unnecessary. Basically a discrete element method just for
> heat transport. The whole motion/size change part is handled separately on
> the voxel mesh.
> Based on the connectivity, I can make a graph (attached an example from a
> 3D case, for description see [1]) and on each vertex (particle) of the
> graph I want to account for source terms and conduction along the edges.
> What I'd like to avoid is managing the exchange for non-locally owned
> vertices during the solve (e.g. for RHS evaluation) myself but rather have
> the DM do it with DMGlobalToLocal() and friends. Thinking a bit further,
> I'd probably also want to associate some data with the edges since that
> will enter the conduction term but stays constant during a solve (think
> contact area between particles).
>
> Looking over the DMSwarm examples, the coupling between particles is via
> the background mesh, so e.g. I can't say "loop over all local particles and
> for each particle and its neighbours do X". I could use the projection part
> to dump the source terms from the particles to a coarser background
> mesh but for the conduction term I don't see how I could get a good
> approximation of the contact area on the background mesh without having a
> mesh at a similar resolution as I already have, kinda destroying the
> purpose of the whole thing.
>

The point I was trying to make in my previous message is that DMSwarm does
not require a background mesh. The examples use one because that is what we
use to evaluate particle grouping. However, you have an independent way to
do this, so you do not need it.

Second, the issue of replicated particles. DMSwarmMigrate allows you to
replicate particles, using the input flag. Of course, you would have to
manage removing particles you no longer want.

  Thanks,

 Matt


> [1] Points represent particles, black lines are edges, with the color
> indicating which worker "owns" the particle, with 3 workers being used and
> only a fraction of edges/vertices being displayed to keep it somewhat tidy.
> The position of the points corresponds to the particles' x-y position, with
> the z position being ignored. Particle ownership isn't done via looking
> where the particle is on the voxel grid, but rather by dividing the array
> of particle indices into subarrays, so e.g. particles [0-n/3) go to the
> first worker and so on. Since my particles can span multiple workers on the
> voxel grid this makes it much easier to update edge information with
> one-sided communication.  As you can see the "mesh" is quite irregular with
> no nice boundary existing for connected particles owned by different
> workers.
>
> Best regards,
> Marco
>
> On 30.07.24 22:56, Mark Adams wrote:
> > * they do have a voxel mesh, so perhaps they want DMPlex.
> >
> > * they want ghost particle communication, that also might want a mesh
> >
> > * DMSwarm does not have a notion of ghost particle, as far as I know,
> but it could use one
> >
> > On Tue, Jul 30, 2024 at 7:58 AM Matthew Knepley  <mailto:knep...@gmail.com>> wrote:
> >
> > On Tue, Jul 30, 2024 at 12:24 AM Marco Seiz  wrote:
> >
> > Hello,
> >
> > I'd like to solve transient heat transport at a particle scale
> 

Re: [petsc-users] Right DM for a particle network

2024-07-30 Thread Matthew Knepley
On Tue, Jul 30, 2024 at 12:24 AM Marco Seiz  wrote:

>
> Hello,
>
> I'd like to solve transient heat transport at a particle scale using TS, with 
> the per-particle equation being something like
>
> dT_i / dt = (S(T_i) + sum(F(T_j, T_i), j connecting to i))
>
> with a nonlinear source term S and a conduction term F. The particles can 
> move, deform and grow/shrink/vanish on a voxel grid, but for the temperature 
> a particle-scale resolution should be sufficient. The particles' connectivity 
> will change during the simulation, but is assumed constant during a single 
> timestep. I have a data structure tracking the particles' connectivity, so I 
> can say which particles should conduct heat to each other. I exploit symmetry 
> and so only save the "forward" edges, so e.g. for touching particles 1->2->3, 
> I only store [[2], [3], []], from which the full list [[2], [1, 3], [2]] 
> could be reconstructed but which I'd like to avoid. In parallel each worker 
> would own some of the particle data, so e.g. for the 1->2->3 example and 2 
> workers, worker 0 could own [[2]] and worker 1 [[3],[]].
>
> Looking over the DM variants, either DMNetwork or some manual mesh build with 
> DMPlex seem suited for this. I'd especially like it if the adjacency 
> information is handled by the DM automagically based on the edges so I don't 
> have to deal with ghost particle communication myself. I already tried 
> something basic with DMNetwork, though for some reason the offsets I get from 
> DMNetworkGetGlobalVecOffset() are larger than the actual network. I've 
> attached what I have so far but comparing to e.g. 
> src/snes/tutorials/network/ex1.c I don't see what I'm doing wrong if I don't 
> need data at the edges. I might not be seeing the trees for the forest 
> though. The output with -dmnetwork_view looks reasonable to me. Any help in 
> fixing this approach, or if it would seem suitable pointers to using DMPlex 
> for this problem, would be appreciated.
>
To me, this sounds like you should build it with DMSwarm. Why?

1) We only have vertices and edges, so a mesh does not buy us anything.

2) You are managing the parallel particle connectivity, so DMPlex topology
is not buying us anything. Unless I am misunderstanding.

3) DMNetwork has a lot of support for vertices with different
characteristics. Your particles all have the same attributes, so this is
unnecessary.

How would you set this up?

1) Declare all particle attributes. There are many Swarm examples, but say
ex6 which simulates particles moving under a central force.

2) That example decides when to move particles using a parallel background
mesh. However, you know which particles you want to move,
 so you just change the _rank_ field to the new rank and call
DMSwarmMigrate() with migration type _basic_.

It should be straightforward to setup a tiny example moving around a few
particles to see if it does everything you want.

   Thanks,

 Matt


> Best regards,
> Marco
>
> --
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bLVHnoUGooYpdfGD8zNQrHTY2ln70W082hEc6pG7vdjA2fCvs77tcI9d7QOA0i_FjGK1of3nNOKXCEGdiWx7$
  



Re: [petsc-users] Memory usage scaling with number of processors

2024-07-24 Thread Matthew Knepley
On Wed, Jul 24, 2024 at 9:32 PM Matthew Thomas 
wrote:

> Hi Matt,
>
> I have attached the configuration file below.
>

From the log:

MPI:
  Version:3
  mpiexec: /apps/intel-tools/intel-mpi/2021.11.0/bin/mpiexec
  Implementation: mpich3
  I_MPI_NUMVERSION: 20211100300  MPICH_NUMVERSION: 3042

so you want to use that mpiexec. This is what will be used if you try

  make check

in $PETSC_DIR.

  Thanks,

 Matt


> Thanks,
> Matt
>
>
>
>
> On 25 Jul 2024, at 11:26 AM, Matthew Knepley  wrote:
>
> On Wed, Jul 24, 2024 at 8:37 PM Matthew Thomas 
> wrote:
>
> Hello Matt,
>
> Thanks for the help. I believe the problem is coming from an incorrect
> linking with MPI and PETSc.
>
> I tried running with petscmpiexec from
> $PETSC_DIR/lib/petsc/bin/petscmpiexec. This gave me the error
>
> Error build location not found! Please set PETSC_DIR and PETSC_ARCH
> correctly for this build.
>
>
> Naturally I have set these two values and echo $PETSC_DIR gives the path I
> expect, so it seems like I am running my programs with a different version
> of MPI than petsc expects which could explain the memory usage.
>
> Do you have any ideas how to fix this?
>
>
> Yes. First we determine what MPI you configured with. Send configure.log,
> which has this information.
>
>   Thanks,
>
>   Matt
>
>
> Thanks,
> Matt
>
> On 24 Jul 2024, at 8:41 PM, Matthew Knepley  wrote:
>
>  >
> On Tue, Jul 23, 2024 at 8:02 PM Matthew Thomas 
> wrote:
>
> Hello Matt,
>
> I have attached the output with mat_view for 8 and 40 processors.
>
> I am unsure what is meant by the matrix communicator and the partitioning.
> I am using the default behaviour in every case. How can I find this
> information?
>
>
> This shows that the matrix is taking the same amount of memory for 8 and
> 40 procs, so that is not your problem. Also,
> it is a very small amount of memory:
>
>   100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB
>
> and 50% overhead for indexing, so something under 4MB. I am not sure what
> is taking up the rest of the memory, but I do not
> think it is PETSc from the log you included.
>
>   Thanks,
>
>  Matt
>
>
> I have attached the log view as well if that helps.
>
> Thanks,
> Matt
>
>
>
>
> On 23 Jul 2024, at 9:24 PM, Matthew Knepley  wrote:
>
>  >
> Also, you could run with
>
>   -mat_view ::ascii_info_detail
>
> and send the output for both cases. The storage of matrix values is not
> redundant, so something else is
> going on. First, what communicator do you use for the matrix, and what
> partitioning?
>
>   Thanks,
>
>  Matt
>
> On Mon, Jul 22, 2024 at 10:27 PM Barry Smith  wrote:
>
>
>
>   Send the code.
>
> On Jul 22, 2024, at 9:18 PM, Matthew Thomas via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>
> Hello,
>
> I am using petsc and slepc to solve an eigenvalue problem for sparse 
> matrices. When I run my code with double the number of processors, the memory 
> usage also doubles.
>
> I am able to reproduce this behaviour with ex1 of slepc’s hands on exercises.
>
> The issue is occurring with petsc not with slepc as this still occurs when I 
> remove the solve step and just create and assemble the petsc matrix.
>
> With n=10, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors.
>
> This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI
>
> Is this the expected behaviour? If not, how can I bug fix this?
>
>
> Thanks,
> Matt
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z2rv8eb7Cg2HQOiE2c11H9inWPs58oj2IuqkyMjKueg_gDd5G7Ya_P8dkPggDNh_IrYKpr

Re: [petsc-users] Memory usage scaling with number of processors

2024-07-24 Thread Matthew Knepley
On Wed, Jul 24, 2024 at 8:37 PM Matthew Thomas 
wrote:

> Hello Matt,
>
> Thanks for the help. I believe the problem is coming from an incorrect
> linking with MPI and PETSc.
>
> I tried running with petscmpiexec from
> $PETSC_DIR/lib/petsc/bin/petscmpiexec. This gave me the error
>
> Error build location not found! Please set PETSC_DIR and PETSC_ARCH
> correctly for this build.
>
>
> Naturally I have set these two values and echo $PETSC_DIR gives the path I
> expect, so it seems like I am running my programs with a different version
> of MPI than petsc expects which could explain the memory usage.
>
> Do you have any ideas how to fix this?
>

Yes. First we determine what MPI you configured with. Send configure.log,
which has this information.

  Thanks,

  Matt


> Thanks,
> Matt
>
> On 24 Jul 2024, at 8:41 PM, Matthew Knepley  wrote:
>
>  >
> On Tue, Jul 23, 2024 at 8:02 PM Matthew Thomas 
> wrote:
>
>> Hello Matt,
>>
>> I have attached the output with mat_view for 8 and 40 processors.
>>
>> I am unsure what is meant by the matrix communicator and the
>> partitioning. I am using the default behaviour in every case. How can I
>> find this information?
>>
>
> This shows that the matrix is taking the same amount of memory for 8 and
> 40 procs, so that is not your problem. Also,
> it is a very small amount of memory:
>
>   100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB
>
> and 50% overhead for indexing, so something under 4MB. I am not sure what
> is taking up the rest of the memory, but I do not
> think it is PETSc from the log you included.
>
>   Thanks,
>
>  Matt
>
>
>> I have attached the log view as well if that helps.
>>
>> Thanks,
>> Matt
>>
>>
>>
>>
>> On 23 Jul 2024, at 9:24 PM, Matthew Knepley  wrote:
>>
>>  >
>> Also, you could run with
>>
>>   -mat_view ::ascii_info_detail
>>
>> and send the output for both cases. The storage of matrix values is not
>> redundant, so something else is
>> going on. First, what communicator do you use for the matrix, and what
>> partitioning?
>>
>>   Thanks,
>>
>>  Matt
>>
>> On Mon, Jul 22, 2024 at 10:27 PM Barry Smith  wrote:
>>
>>
>>
>>   Send the code.
>>
>> On Jul 22, 2024, at 9:18 PM, Matthew Thomas via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>>
>>
>> Hello,
>>
>> I am using petsc and slepc to solve an eigenvalue problem for sparse 
>> matrices. When I run my code with double the number of processors, the 
>> memory usage also doubles.
>>
>> I am able to reproduce this behaviour with ex1 of slepc’s hands on exercises.
>>
>> The issue is occurring with petsc not with slepc as this still occurs when I 
>> remove the solve step and just create and assemble the petsc matrix.
>>
>> With n=10, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors.
>>
>> This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI
>>
>> Is this the expected behaviour? If not, how can I bug fix this?
>>
>>
>> Thanks,
>> Matt
>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YsrpkGTifXON5FvrYzu4-3O6KuJWEiDvzSfIcxkjEH8QL1hE8VRmza2im3Pq0WtxMtakhzbXzzqBKNCMX0GP$
>>  
>> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YsrpkGTifXON5FvrYzu4-3O6KuJWEiDvzSfIcxkjEH8QL1hE8VRmza2im3Pq0WtxMtakhzbXzzqBKBFOwa7T$
>>  >
>>
>>
>>
>
> --
> What most experimenters take for granted before they

Re: [petsc-users] Memory usage scaling with number of processors

2024-07-24 Thread Matthew Knepley
On Tue, Jul 23, 2024 at 8:02 PM Matthew Thomas 
wrote:

> Hello Matt,
>
> I have attached the output with mat_view for 8 and 40 processors.
>
> I am unsure what is meant by the matrix communicator and the partitioning.
> I am using the default behaviour in every case. How can I find this
> information?
>

This shows that the matrix is taking the same amount of memory for 8 and 40
procs, so that is not your problem. Also,
it is a very small amount of memory:

  100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB

and 50% overhead for indexing, so something under 4MB. I am not sure what
is taking up the rest of the memory, but I do not
think it is PETSc from the log you included.

  Thanks,

 Matt
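
Spelling that estimate out (a back-of-the-envelope calculation assuming 8-byte scalars and 4-byte PetscInt indices for AIJ storage, not exact PETSc accounting):

```
PetscInt64 nrows = 100000, nzrow = 3, nnz = nrows * nzrow;
PetscInt64 bytes = nnz * 8          /* matrix values:  2.4 MB             */
                 + nnz * 4          /* column indices: 1.2 MB (~50% more) */
                 + (nrows + 1) * 4; /* row offsets:   ~0.4 MB             */
/* => roughly 4 MB for the assembled matrix, independent of the rank count */
```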


> I have attached the log view as well if that helps.
>
> Thanks,
> Matt
>
>
>
>
> On 23 Jul 2024, at 9:24 PM, Matthew Knepley  wrote:
>
>  >
> Also, you could run with
>
>   -mat_view ::ascii_info_detail
>
> and send the output for both cases. The storage of matrix values is not
> redundant, so something else is
> going on. First, what communicator do you use for the matrix, and what
> partitioning?
>
>   Thanks,
>
>  Matt
>
> On Mon, Jul 22, 2024 at 10:27 PM Barry Smith  wrote:
>
>
>
>   Send the code.
>
> On Jul 22, 2024, at 9:18 PM, Matthew Thomas via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>
> Hello,
>
> I am using petsc and slepc to solve an eigenvalue problem for sparse 
> matrices. When I run my code with double the number of processors, the memory 
> usage also doubles.
>
> I am able to reproduce this behaviour with ex1 of slepc’s hands on exercises.
>
> The issue is occurring with petsc not with slepc as this still occurs when I 
> remove the solve step and just create and assemble the petsc matrix.
>
> With n=10, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors.
>
> This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI
>
> Is this the expected behaviour? If not, how can I bug fix this?
>
>
> Thanks,
> Matt
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!b_JFRb7MxmdPHCjjuC42vps0Cvkz5tuUTRRK-Yh20xdmpvEHr2guqznV0TGVXhEiNnXVEZeCCPSlW-rDI1i4$
>  
> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!b_JFRb7MxmdPHCjjuC42vps0Cvkz5tuUTRRK-Yh20xdmpvEHr2guqznV0TGVXhEiNnXVEZeCCPSlW9-ZqDyD$
>  >
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!b_JFRb7MxmdPHCjjuC42vps0Cvkz5tuUTRRK-Yh20xdmpvEHr2guqznV0TGVXhEiNnXVEZeCCPSlW-rDI1i4$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!b_JFRb7MxmdPHCjjuC42vps0Cvkz5tuUTRRK-Yh20xdmpvEHr2guqznV0TGVXhEiNnXVEZeCCPSlW9-ZqDyD$
 >


Re: [petsc-users] Memory usage scaling with number of processors

2024-07-23 Thread Matthew Knepley
Also, you could run with

  -mat_view ::ascii_info_detail

and send the output for both cases. The storage of matrix values is not
redundant, so something else is
going on. First, what communicator do you use for the matrix, and what
partitioning?

  Thanks,

 Matt

On Mon, Jul 22, 2024 at 10:27 PM Barry Smith  wrote:

>
>   Send the code.
>
> On Jul 22, 2024, at 9:18 PM, Matthew Thomas via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>
> Hello,
>
> I am using petsc and slepc to solve an eigenvalue problem for sparse 
> matrices. When I run my code with double the number of processors, the memory 
> usage also doubles.
>
> I am able to reproduce this behaviour with ex1 of slepc’s hands on exercises.
>
> The issue is occurring with petsc not with slepc as this still occurs when I 
> remove the solve step and just create and assemble the petsc matrix.
>
> With n=10, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors.
>
> This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI
>
> Is this the expected behaviour? If not, how can I bug fix this?
>
>
> Thanks,
> Matt
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bBuFOzIbQmePGNYKEiglz1pFB-m95_3tE7Dv1DS5LMTtblIFQltGEJC3V0Vyw3OVtQGdNMEF7g-pCek2kf6P$
  



Re: [petsc-users] Warning and Error in Makefile

2024-07-18 Thread Matthew Knepley
On Thu, Jul 18, 2024 at 3:07 AM Ivan Luthfi  wrote:

> Hi friend,
>
> I get many warnings (but the warnings are OK). However, I don't get the
> result of my code when I compile it. Is there any possible mistake in my
> makefile? Can you please help me?
>
> The attached files are my error and warning output (error at makefile line
> 27), and the makefile.
>
>
Try this makefile.

  Thanks,

Matt


> Best regards,
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Ya_y__cU_okKfbiRLeVRQcZnpP-bUnDxsDbIoF8uy3PMC76NSOHWbBuUcKvcrtyXwRZOgsmVYgTguq3UHtKu$
  



makefile
Description: Binary data


Re: [petsc-users] Many warning in my code

2024-07-16 Thread Matthew Knepley
We cannot see your code. The warning says that you give a value to a
variable, but then never use it. We cannot tell if that is true without
looking at the code.

  Thanks,

 Matt
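
To make that concrete, here is a made-up fragment (not the code from this thread) that triggers -Wunused-but-set-variable, together with the two usual ways to deal with it:

```
PetscInt mg_level = 2, finest;

finest = mg_level - 1; /* 'finest' is assigned a value ...                   */
/* ... but never read afterwards, so the compiler warns. Either use it, e.g. */
PetscCall(PetscPrintf(PETSC_COMM_WORLD, "finest level %" PetscInt_FMT "\n", finest));
/* ... or simply delete the variable if it is no longer needed.              */
```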

On Tue, Jul 16, 2024 at 9:54 AM Ivan Luthfi  wrote:

> Hello guys,
> I am still trying to compile my old multigrid code, but I get so many
> warnings. One of those warnings is like this:
> MsFEM_poisson2D_DMDA.c: In function ‘int main(int, char**)’:
> MsFEM_poisson2D_DMDA.c:185:48: warning: variable ‘finest’ set but not used
> [-Wunused-but-set-variable]
>   185 | PetscInt mg_level = 2, finest;
>   |^~
> MsFEM_poisson2D_DMDA.c:56:44: warning: variable ‘pi’ set but not used
> [-Wunused-but-set-variable]
>56 | PetscScalar a,b,c,d,dt,pi;
>   |^~
> MsFEM_poisson2D_DMDA.c:57:35: warning: variable ‘Lx’ set but not used
> [-Wunused-but-set-variable]
>57 | PetscIntM,Lx,Ly,Nx,Ny,Mx,My;
>   |   ^~
> MsFEM_poisson2D_DMDA.c:57:38: warning: variable ‘Ly’ set but not used
> [-Wunused-but-set-variable]
>57 | PetscIntM,Lx,Ly,Nx,Ny,Mx,My;
>   |  ^~
> MsFEM_poisson2D_DMDA.c:58:33: warning: variable ‘hx’ set but not used
> [-Wunused-but-set-variable]
>58 | PetscScalar hx,hy,Hx,Hy;
>   | ^~
> MsFEM_poisson2D_DMDA.c:58:36: warning: variable ‘hy’ set but not used
> [-Wunused-but-set-variable]
>58 | PetscScalar hx,hy,Hx,Hy;
>   |^~
> MsFEM_poisson2D_DMDA.c:58:39: warning: variable ‘Hx’ set but not used
> [-Wunused-but-set-variable]
>58 | PetscScalar hx,hy,Hx,Hy;
>   |   ^~
> MsFEM_poisson2D_DMDA.c:58:42: warning: variable ‘Hy’ set but not used
> [-Wunused-but-set-variable]
>58 | PetscScalar hx,hy,Hx,Hy;
>   |  ^~
> MsFEM_poisson2D_DMDA.c:60:58: warning: variable ‘Nondimensionalization’
> set but not used [-Wunused-but-set-variable]
>60 | PetscInt
>  Compute_finegridsolution,Nondimensionalization;
>   |
>  ^
> MsFEM_poisson2D_DMDA.c:283:54: warning: ‘%d’ directive writing between 1
> and 11 bytes into a region of size between 0 and 99
>
>
> Can you guys help me fix or resolve these warnings in order to get the code
> to run smoothly? Please help.
>
>
> --
> Best regards,
>
> Ivan Luthfi Ihwani
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!awUD7DouA4Cjqw1gYsc2V7I8PVc9ojQXaCMFCcPwBUMEXEYkc9N6wUcfv_EYtLHhF_rlH_rNmpGvlGD9ZfYP$
  



Re: [petsc-users] [petsc-maint] Error in using KSPSOperators

2024-07-14 Thread Matthew Knepley
On Sun, Jul 14, 2024 at 4:55 AM Ivan Luthfi  wrote:

> Hi there, I have an issue in compiling my code. Here is the warning:
>
> MsFEM_poisson2D_DMDA.c:159:65: error: cannot convert ‘MatStructure’ to
> ‘Mat’ {aka ‘_p_Mat*’}
>   159 | ierr =
> KSPSetOperators(ksp_direct,up.Af,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
>
>
We removed the MatStructure flag in this call many years ago. The structure
is now inferred automatically.

  Thanks,

 Matt
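
A sketch of the updated call, assuming up.Af is meant to act as both the operator and the preconditioning matrix (that detail is not stated in the thread):

```
/* Modern form: pass the operator and the preconditioning matrix explicitly;
   the MatStructure flag is gone. */
ierr = KSPSetOperators(ksp_direct, up.Af, up.Af);CHKERRQ(ierr);
```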


> Please help me
> --
> Best regards,
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!brkfTnuKK7Tio9g62K-XaKr_oEVo9jMQ5WVmR9pS2s7WZ5hJIEsUHwPAMXX0r368HQnLIeVYuGXDXGg-pObs$
  



Re: [petsc-users] [EXTERNAL] Re: What exactly is the GlobalToNatural PetscSF of DMPlex/DM?

2024-07-13 Thread Matthew Knepley
On Sat, Jul 13, 2024 at 4:39 PM Ferrand, Jesus A. 
wrote:

> Matt:
>
> Thank you for the reply.
> The bulk of it makes a lot of sense.
> Yes! That need to keep track of the original mesh numbers (AKA "Natural")
> is what I find pressing for my research group.
> Awesome! I was separately keeping track of these numbers using a
> PetscSection that I was inputting into DMSetLocalSection() but of the
> coordinate DM, not the plex.
> It is good to know the "correct" way to do it.
>
> "What is repetitive? It should be able to be automated."
>
> Absolutely as the intrinsic process is ubiquitous between mesh formats.
> What I meant by "repetitive" is the information that is reused by
> different API calls (namely, global stratum sizes, and local point numbers
> corresponding to owned DAG points).
> I need to define a struct to bookkeep this. It's not really an issue,
> rather a minor annoyance (for me).
> I need the stratum sizes to offset DMPlex numbering cells in range
> [0,nCell) and vertices ranging in [nCell,nCell+nVert) to other mesh
> numberings where cells range from [1, nCell] and vertices range from [1,
> nVert]. In my experience, this information is needed at least three (3)
> times, during coordinate writes, during element connectivity writes, and
> during DMLabel writes for BC's and other labelled data.
>

This is a good point, and I think it supports my argument that these formats
are insane. What you point out below is that the format demands
a completely artificial division of points when writing. I don't do this
when writing HDF5. This division can be recovered in linear time, completely
locally, after a read, so I think by any metric it is crazy to put it in the
file. However, I recognize that supporting previous formats is a good
thing, so I do not complain too loudly :)

  Thanks,

 Matt


> This information I determine using a code snippet like this:
> PetscCall(PetscObjectGetComm((PetscObject)plex,&mpiComm));
>  PetscCallMPI(MPI_Comm_rank(mpiComm,&mpiRank));
>  PetscCallMPI(MPI_Comm_size(mpiComm,&mpiCommSize));
>  PetscCall(DMPlexCreatePointNumbering(plex,&GlobalNumberIS));
>  PetscCall(ISGetIndices(GlobalNumberIS,&IdxPtr));
>  PetscCall(DMPlexGetDepth(plex,&Depth));
>  PetscCall(PetscMalloc3(//
>Depth,&LocalIdxPtrPtr,//Indices in the local stratum to owned points.
>Depth,&pOwnedPtr,//Number of points in the local stratum that are owned.
>Depth,&GlobalStratumSizePtr//Global stratum size.
>  ));
>  for(PetscInt jj = 0;jj < Depth;jj++){
>PetscCall(DMPlexGetDepthStratum(plex,jj,&pStart,&pEnd));
>pOwnedPtr[jj] = 0;
>for(PetscInt ii = pStart;ii < pEnd;ii++){
>  if(IdxPtr[ii] >= 0) pOwnedPtr[jj]++;
>}
>
> PetscCallMPI(MPI_Allreduce(&pOwnedPtr[jj],&GlobalStratumSizePtr[jj],1,MPIU_INT,MPI_MAX,mpiComm));
>PetscCall(PetscMalloc1(pOwnedPtr[jj],&LocalIdxPtrPtr[jj]));
>kk = 0;
>for(PetscInt ii = pStart;ii < pEnd; ii++){
>  if(IdxPtr[ii] >= 0){
>LocalIdxPtrPtr[jj][kk] = ii;
>kk++;
>  }
>}
>  }
>  PetscCall(ISRestoreIndices(GlobalNumberIS,&IdxPtr));
>  PetscCall(ISDestroy(&GlobalNumberIS));
> --
> *From:* Matthew Knepley 
> *Sent:* Thursday, July 11, 2024 8:32 PM
> *To:* Ferrand, Jesus A. 
> *Cc:* petsc-users@mcs.anl.gov 
> *Subject:* [EXTERNAL] Re: [petsc-users] What exactly is the
> GlobalToNatural PetscSF of DMPlex/DM?
>
> On Mon, Jul 8, 2024 at 10:28 PM Ferrand, Jesus A. 
> wrote:
>
> Dear PETSc team:
>
> Greetings.
> I keep working on mesh I/O utilities using DMPlex.
> Specifically for the output stage, I need a solid grasp on the global
> numbers and ideally how to set them into the DMPlex during an input
> operation and carrying the global numbers through API calls to
> DMPlexDistribute() or DMPlexMigrate() and hopefully also through some of
> the mesh adaption APIs. I was wondering if the GlobalToNatural PetscSF
> manages these global numbers. The next most useful object is the PointSF,
> but to me, it seems to only help establish DAG point ownership, not DAG
> point global indices.
>
>
> This is a good question, and gets at a design point of Plex. I don't
> believe global numbers are the "right" way to talk about mesh points, or
> even a very useful way to do it, for several reasons. Plex is des

Re: [petsc-users] Help me for compiling my Code

2024-07-13 Thread Matthew Knepley
On Sat, Jul 13, 2024 at 5:54 AM Ivan Luthfi  wrote:

> Hi Mr. Knepley,
> I already copy and edit the makefile shared by PETSC. And here is the
> modification i made to compile my codes:
>
> app : MsFEM_poisson2D_DMDA.o UserParameter.o FormFunction.o MsFEM.o
> PCMsFEM.o
> $(LINK.C) -o $@ $^ $(LDLIBS)
>
> MsFEM_poisson2D_DMDA.o: MsFEM_poisson2D_DMDA.c
> $(LINK.C) -o $@ $^ $(LDLIBS)
>
> UserParameter.o: UserParameter.c
> $(LINK.C) -o $@ $^ $(LDLIBS)
>
> FormFunction.o: FormFunction.c
> $(LINK.C) -o $@ $^ $(LDLIBS)
>
> MsFEM.o: MsFEM.c
> $(LINK.C) -o $@ $^ $(LDLIBS)
>
> PCMsFEM.o: PCMsFEM.c
> $(LINK.c) -o $@ $^ $(LDLIBS)
>
> clean:
> rm -rf app *.o
>
> However, after I compile it with "make app" it says "*** missing separator.
> Stop." at line 24, which is that first "$(LINK.C) -o" recipe line. What is
> wrong with my makefile?
>

You used spaces instead of a tab at the beginning of that line.
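
For example, the first rule should read as below, where <TAB> stands for a
literal tab character (every recipe line under a target must begin with one):

app : MsFEM_poisson2D_DMDA.o UserParameter.o FormFunction.o MsFEM.o PCMsFEM.o
<TAB>$(LINK.C) -o $@ $^ $(LDLIBS)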

  Thanks,

Matt


>
> Pada Jum, 12 Jul 2024 pukul 18.57 Matthew Knepley 
> menulis:
>
>> On Fri, Jul 12, 2024 at 3:16 AM Ivan Luthfi 
>> wrote:
>>
>>> I am trying to compile my code, but I get this error. Can anyone help me?
>>>
>>> Here is my terminal:
>>>
>>> $make bin_MsFEM_poisson2D_DMDA
>>>
>>> mpicc -o bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA.o UserParameter.o
>>> FormFunction.o MsFEM.o PCMsFEM.o
>>> /home/ivan/petsc/opt-3.21.2/lib/libpetsc.so \
>>> /home/ivan/petsc/opt-3.21.2/lib/libsuperlu_dist.so \
>>> /home/ivan/petsc/opt-3.21.2/lib/libparmetis.so \
>>> /home/ivan/petsc/opt-3.21.2/lib/libmetis.so \
>>> /usr/lib64/atlas/liblapack.a /usr/lib64/libblas.so.3
>>> /usr/bin/ld: cannot find /usr/lib64/atlas/liblapack.a: No such file or
>>> directory
>>> /usr/bin/ld: cannot find /usr/lib64/libblas.so.3: No such file or
>>> directory
>>> collect2: error: ld returned 1 exit status
>>> make: *** [makefile:18: bin_MsFEM_poisson2D_DMDA] Error 1
>>>
>>
>> You are specifying libraries that do not exist. Do not do this. You can
>> use the PETSc Makefiles to build
>> this, as described in the manual:
>>
>>
>> https://urldefense.us/v3/__https://petsc.org/main/manual/getting_started/*sec-writing-application-codes__;Iw!!G_uCfscf7eWS!ZW-eOAeaybSu_nIZ2uX_NF-5zi5d5HL2RaW4WqSwuqjQVbSRMhNuMD1TMXk2GtRguFX9bJci4NMQdGQGJye6$
>>  
>>
>> under the section "For adding PETSc to an existing application"
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> --
>>> Best regards,
>>>
>>> Ivan Luthfi Ihwani
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZW-eOAeaybSu_nIZ2uX_NF-5zi5d5HL2RaW4WqSwuqjQVbSRMhNuMD1TMXk2GtRguFX9bJci4NMQdKQPlJPn$
>>  
>> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZW-eOAeaybSu_nIZ2uX_NF-5zi5d5HL2RaW4WqSwuqjQVbSRMhNuMD1TMXk2GtRguFX9bJci4NMQdCScrTl7$
>>  >
>>
>
>
> --
> Best regards,
>
> Ivan Luthfi Ihwani
>
> --
> Ivan Luthfi Ihwani
> Mobile: 08979341681
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZW-eOAeaybSu_nIZ2uX_NF-5zi5d5HL2RaW4WqSwuqjQVbSRMhNuMD1TMXk2GtRguFX9bJci4NMQdKQPlJPn$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZW-eOAeaybSu_nIZ2uX_NF-5zi5d5HL2RaW4WqSwuqjQVbSRMhNuMD1TMXk2GtRguFX9bJci4NMQdCScrTl7$
 >


Re: [petsc-users] Questions on TAO and gradient norm / inner products

2024-07-12 Thread Matthew Knepley
On Fri, Jul 12, 2024 at 7:25 AM Blauth, Sebastian <
sebastian.bla...@itwm.fraunhofer.de> wrote:

> Dear Barry,
>
>
>
> thanks for the clarification. Oh, that’s unfortunate, that not all TAO
> algorithms use the supplied matrix for the norm (and then probably also not
> for computing inner products in, e.g., the limited memory formulas).
>
>
>
> I fear that I don’t have sufficient time at the moment to make a MR. I
> could, however, provide some “minimal” example where the behavior is shown.
> However, that example would be using petsc4py as I am only familiar with
> that and I would use the fenics FEM package to define the matrices. Would
> this be okay?
>

Yes


> And if that’s the case, should I post the example here or at the petsc
> gitlab?
>

Either place is fine. Gitlab makes it easier to track.

  Thanks,

 Matt


> Best regards,
>
> Sebastian
>
>
>
> --
>
> Dr. Sebastian Blauth
>
> Fraunhofer-Institut für
>
> Techno- und Wirtschaftsmathematik ITWM
>
> Abteilung Transportvorgänge
>
> Fraunhofer-Platz 1, 67663 Kaiserslautern
>
> Telefon: +49 631 31600-4968
>
> sebastian.bla...@itwm.fraunhofer.de
>
> https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!aSNMLW5E3ZNlZwOAfQJMnwBM_4sDBJ8jXhjmfgZFQJJJFSw-7QaqXFgtPRpKudragLDd8ONBV5pJxKWFVczr$
>  
>
>
>
> *From:* Barry Smith 
> *Sent:* Tuesday, July 9, 2024 3:32 PM
> *To:* Blauth, Sebastian ; Munson,
> Todd ; toby Isaac 
> *Cc:* petsc-users@mcs.anl.gov
> *Subject:* Re: [petsc-users] Questions on TAO and gradient norm / inner
> products
>
>
>
>
>
>From
>
>
>
> $ git grep TaoGradientNorm
>
> bound/impls/blmvm/blmvm.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
> bound/impls/blmvm/blmvm.c:PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
> bound/impls/bnk/bnk.c:  PetscCall(*TaoGradientNorm*(tao, tao->gradient,
> NORM_2, &bnk->gnorm));
>
> bound/impls/bnk/bnk.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &bnk->gnorm));
>
> bound/impls/bnk/bnls.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &bnk->gnorm));
>
> bound/impls/bnk/bntl.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &bnk->gnorm));
>
> bound/impls/bnk/bntl.c:PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &bnk->gnorm));
>
> bound/impls/bnk/bntr.c:PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &bnk->gnorm));
>
> interface/taosolver.c:.seealso: [](ch_tao), `Tao`,
> `TaoGetGradientNorm()`, `*TaoGradientNorm*()`
>
> interface/taosolver.c:.seealso: [](ch_tao), `Tao`,
> `TaoSetGradientNorm()`, `*TaoGradientNorm*()`
>
> interface/taosolver.c:  *TaoGradientNorm* - Compute the norm using the
> `NormType`, the user has selected
>
> interface/taosolver.c:PetscErrorCode *TaoGradientNorm*(Tao tao, Vec
> gradient, NormType type, PetscReal *gnorm)
>
> unconstrained/impls/lmvm/lmvm.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
> unconstrained/impls/lmvm/lmvm.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
> unconstrained/impls/nls/nls.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
> unconstrained/impls/nls/nls.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
> unconstrained/impls/nls/nls.c:PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
> unconstrained/impls/ntr/ntr.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
> unconstrained/impls/ntr/ntr.c:PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
> unconstrained/impls/ntr/ntr.c:  PetscCall(*TaoGradientNorm*(tao,
> tao->gradient, NORM_2, &gnorm));
>
>
>
> it appears only some of the algorithm implementations use the norm you
> provide. While
>
>
>
>  git grep VecNorm
>
>
>
> indicates many places where it is not used. Likely some of the other
> algorithm implementations could be easily "fixed" to support by
>
> changing the norm computed. But I am not an expert on the algorithms and
> don't know if all algorithms can mathematically support a user provided
> norm.
>
>
>
> You are welcome to take a stab at making the change in an MR, or do you
> have a simple test problem with a mass matrix we can use to fix the
>
> "missing" implementations?
>
>
>
>   Barry
>
>
>
>
>
>
>
>
>
>
>
> On Jul 9, 2024, at 3:47 AM, Blauth, Sebastian <
> sebastian.bla...@itwm.fraunhofer.de> wrote:
>
>
>
> Hello,
>
>
>
> I have some questions regarding TAO and the use the gradient norm.
>
>
>
> First, I want to use a custom inner product for the optimization in TAO
> (for computing the gradient norm and, e.g., in the double loop of a
> quasi-Newton method). I have seen that there is the method
> TAOSetGradientNorm
> https://urldefense.us/v3/__https://petsc.org/release/manualpages/Tao/TaoSetGradientNorm/__;!!G_uCfscf7eWS!aSNMLW5E3ZNlZwOAfQJMnwBM_4sDBJ8jXhjmfgZFQJJJFSw-7QaqXFgtPRpKudragLDd8ONBV5pJxDTBKkKq$
>   which seems
> to

Re: [petsc-users] Incompatible pointer type

2024-07-12 Thread Matthew Knepley
On Fri, Jul 12, 2024 at 4:41 AM Ivan Luthfi  wrote:

> I get a warning about an incompatible pointer type when compiling my code.
> Does anyone know how to fix it?
>
> make bin_MsFEM_poisson2D_DMDA
> mpicc -Wall -c PCMsFEM.c -isystem/home/ivan/petsc/opt-3.21.2/include
> PCMsFEM.c: In function ‘PCCreate_MsFEM’:
> PCMsFEM.c:59:33: warning: assignment to ‘PetscErrorCode (*)(struct _p_PC
> *, PetscOptionItems *)’ {aka ‘int (*)(struct _p_PC *, struct
> _p_PetscOptionItems *)’} from incompatible pointer type ‘PetscErrorCode
> (*)(struct _p_PC *)’ {aka ‘int (*)(struct _p_PC *)’}
> [-Wincompatible-pointer-types]
>59 | pc->ops->setfromoptions = PCSetFromOptions_MsFEM;
>   | ^
> mpicc -o bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA.o UserParameter.o
> FormFunction.o MsFEM.o PCMsFEM.o
> /home/ivan/petsc/opt-3.21.2/lib/libpetsc.so \
> /home/ivan/petsc/opt-3.21.2/lib/libsuperlu_dist.so \
> /home/ivan/petsc/opt-3.21.2/lib/libparmetis.so \
> /home/ivan/petsc/opt-3.21.2/lib/libmetis.so \
> /usr/lib64/atlas/liblapack.a /usr/lib64/libblas.so.3
> /usr/bin/ld: cannot find /usr/lib64/atlas/liblapack.a: No such file or
> directory
> /usr/bin/ld: cannot find /usr/lib64/libblas.so.3: No such file or directory
> collect2: error: ld returned 1 exit status
> make: *** [makefile:18: bin_MsFEM_poisson2D_DMDA] Error 1
>

As the error message says, you are missing the second argument in your
function. Here is an example from PETSc itself:


https://urldefense.us/v3/__https://petsc.org/main/src/ksp/pc/impls/jacobi/jacobi.c.html*PCSetFromOptions_Jacobi__;Iw!!G_uCfscf7eWS!YNTtUlBhxN1oRmv-wQa9_XiOB4c3ylJzvdw0j3lk02txhvFCujXDCzk2N1vy3r2K9JIIBbZ-f_3xM62LxM13$
 
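A minimal sketch of the expected signature for your routine (the function
name is yours; PetscOptionItems is the PETSc type named in the warning):

  static PetscErrorCode PCSetFromOptions_MsFEM(PC pc, PetscOptionItems *PetscOptionsObject)
  {
    PetscFunctionBegin;
    /* read any -pc_msfem_... options here */
    PetscFunctionReturn(PETSC_SUCCESS);
  }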

  Thanks,

 Matt


> --
> Best regards,
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YNTtUlBhxN1oRmv-wQa9_XiOB4c3ylJzvdw0j3lk02txhvFCujXDCzk2N1vy3r2K9JIIBbZ-f_3xM93D47Rc$
  



Re: [petsc-users] Help me for compiling my Code

2024-07-12 Thread Matthew Knepley
On Fri, Jul 12, 2024 at 3:16 AM Ivan Luthfi  wrote:

> I am trying to compile my code, but I get this error. Can anyone help me?
>
> Here is my terminal:
>
> $make bin_MsFEM_poisson2D_DMDA
>
> mpicc -o bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA.o UserParameter.o
> FormFunction.o MsFEM.o PCMsFEM.o
> /home/ivan/petsc/opt-3.21.2/lib/libpetsc.so \
> /home/ivan/petsc/opt-3.21.2/lib/libsuperlu_dist.so \
> /home/ivan/petsc/opt-3.21.2/lib/libparmetis.so \
> /home/ivan/petsc/opt-3.21.2/lib/libmetis.so \
> /usr/lib64/atlas/liblapack.a /usr/lib64/libblas.so.3
> /usr/bin/ld: cannot find /usr/lib64/atlas/liblapack.a: No such file or
> directory
> /usr/bin/ld: cannot find /usr/lib64/libblas.so.3: No such file or directory
> collect2: error: ld returned 1 exit status
> make: *** [makefile:18: bin_MsFEM_poisson2D_DMDA] Error 1
>

You are specifying libraries that do not exist. Do not do this. You can use
the PETSc Makefiles to build
this, as described in the manual:


https://urldefense.us/v3/__https://petsc.org/main/manual/getting_started/*sec-writing-application-codes__;Iw!!G_uCfscf7eWS!dR3K5-koJJZxrylQZPi1wyTnMeuWa7qeAM46G7OwoWYcDHB_OHBNiTMPGfh-JeAxS5XzHp7hBitgk6cOgnFL$
 

under the section "For adding PETSc to an existing application"

  Thanks,

 Matt


> --
> Best regards,
>
> Ivan Luthfi Ihwani
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dR3K5-koJJZxrylQZPi1wyTnMeuWa7qeAM46G7OwoWYcDHB_OHBNiTMPGfh-JeAxS5XzHp7hBitgk16BIEMn$
  



Re: [petsc-users] Warning [-Wformat-overflow]

2024-07-12 Thread Matthew Knepley
On Fri, Jul 12, 2024 at 2:26 AM Ivan Luthfi  wrote:

> I have a warning:
>
> FormatFunction.c: In function 'ComputeStiffnessMatrix':
> FormatFunction.c:128:46: warning: '%d' directive writing between 1 and 11
> bytes into a region of size between 0 and 99 [-Wformat-overflow]
> (line128) sprintf(filename, "%sc%d_N%dM%d_permeability.log",
> up->problem_description, up->problem_flag, up->Nx, up->Mx)
>
> Do you know why this warning appears, and how to fix it?
>

You are writing a PetscInt into the spot for an int. You can

  1) Cast to int:

sprintf(filename, "%sc%d_N%dM%d_permeability.log", up->problem_description,
up->problem_flag, (int)up->Nx, (int)up->Mx)

  2) Use the custom format

sprintf(filename, "%sc%d_N%" PetscInt_FMT "M%" PetscInt_FMT
"_permeability.log", up->problem_description, up->problem_flag, up->Nx,
up->Mx)
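
If the warning persists after the cast, GCC is complaining about the size of
the destination buffer. A minimal sketch that uses a larger buffer and bounds
the write (assuming filename can simply be enlarged to a plain char array):

  char filename[PETSC_MAX_PATH_LEN];
  PetscCall(PetscSNPrintf(filename, sizeof(filename),
                          "%sc%d_N%" PetscInt_FMT "M%" PetscInt_FMT "_permeability.log",
                          up->problem_description, up->problem_flag, up->Nx, up->Mx));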

  Thanks,

Matt


> --
> Best regards,
> --
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YNtOwQCCJYLu6JV0ph8gXnJeFhrQXAiTdZyhV_C0bBut_pZLBhsZw-xORAasjijkRzFe_0udBt4lKebIWfMb$
  



Re: [petsc-users] What exactly is the GlobalToNatural PetscSF of DMPlex/DM?

2024-07-11 Thread Matthew Knepley
On Mon, Jul 8, 2024 at 10:28 PM Ferrand, Jesus A. 
wrote:

> Dear PETSc team:
>
> Greetings.
> I keep working on mesh I/O utilities using DMPlex.
> Specifically for the output stage, I need a solid grasp on the global
> numbers and ideally how to set them into the DMPlex during an input
> operation and carrying the global numbers through API calls to
> DMPlexDistribute() or DMPlexMigrate() and hopefully also through some of
> the mesh adaption APIs. I was wondering if the GlobalToNatural PetscSF
> manages these global numbers. The next most useful object is the PointSF,
> but to me, it seems to only help establish DAG point ownership, not DAG
> point global indices.
>

This is a good question, and gets at a design point of Plex. I don't
believe global numbers are the "right" way to talk about mesh points, or
even a very useful way to do it, for several reasons. Plex is designed to
run just fine without any global numbers. It can, of course, produce
them on command, as many people remain committed to their existence.

Thus, the first idea is that global numbers should not be stored, since
they can always be created on command very cheaply. It is much more
costly to write global numbers to disk, or pull them through memory, than
compute them.

The second idea is that we use a combination of local numbers, namely
(rank, point num) pairs, and PetscSF objects to establish sharing relations
for parallel meshes. Global numbering is a particular traversal of a mesh,
running over the locally owned parts of each mesh in local order. Thus an
SF + a local order = a global order, and the local order is provided by the
point numbering.

The third idea is that a "natural" order is just the global order in which
a mesh is first fed to Plex. When I redistribute and reorder for good
performance, I keep track of a PetscSF that can map the mesh back to the
original order in which it was provided. I see this as an unneeded expense,
but many many people want output written in the original order (mostly
because processing tools are so poor). This management is what we mean by
GlobalToNatural.
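
To make the second idea concrete, here is a minimal sketch of producing the
global point numbering on demand (assuming dm is a distributed DMPlex):

  IS              globalNums;
  const PetscInt *idx;
  PetscInt        pStart, pEnd, nOwned = 0;

  PetscCall(DMPlexGetChart(dm, &pStart, &pEnd));
  PetscCall(DMPlexCreatePointNumbering(dm, &globalNums));
  PetscCall(ISGetIndices(globalNums, &idx));
  for (PetscInt p = pStart; p < pEnd; ++p) {
    /* owned points carry their global number directly; ghost points are
       stored involuted as -(global + 1) */
    const PetscInt g = idx[p - pStart] < 0 ? -(idx[p - pStart] + 1) : idx[p - pStart];
    if (idx[p - pStart] >= 0) ++nOwned;
    (void)g; /* use g however the output format requires */
  }
  PetscCall(ISRestoreIndices(globalNums, &idx));
  PetscCall(ISDestroy(&globalNums));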


> Otherwise, I have been working with the IS obtained from
> DMPlexGetPointNumbering() and manually determining global stratum sizes,
> offsets, and numbers by looking at the signs of the involuted index list
> that comes with that IS. It's working for now (I can monolithically write
> meshes to CGNS in parallel), but it is resulting in repetitive code that I
> will need for another mesh format that I want to support.
>

What is repetitive? It should be able to be automated.

  Thanks,

Matt


> Sincerely:
>
> *J.A. Ferrand*
>
> Embry-Riddle Aeronautical University - Daytona Beach - FL
> Ph.D. Candidate, Aerospace Engineering
>
> M.Sc. Aerospace Engineering
>
> B.Sc. Aerospace Engineering
>
> B.Sc. Computational Mathematics
>
>
> *Phone:* (386)-843-1829
>
> *Email(s):* ferra...@my.erau.edu
>
> jesus.ferr...@gmail.com
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Zcybs3rxgbG35ciZiIHB3TY07Qnjd1sD0HzJVDWwr-OuDyXtVjDJ8WbIMS4LRixsMUZwLGtwsznQ8MeOalw3$
  



Re: [petsc-users] Strategies for coupled nonlinear problems

2024-07-08 Thread Matthew Knepley
On Mon, Jul 8, 2024 at 6:14 AM Miguel Angel Salazar de Troya <
miguel.sala...@corintis.com> wrote:

> Thanks Adam and Matt,
>
> Matt, can I get away with just using PCFIELDSPLIT? Or do I need the
> SNESFIELDSPLIT? Though it looks like the block Gauss-Seidel is only
> implemented in serial (
> https://urldefense.us/v3/__https://petsc.org/main/manual/ksp/*block-jacobi-and-overlapping-additive-schwarz-preconditioners__;Iw!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_GnSYekO$
>  
> )
>

You can do what you want for the linear problem, but that will probably not
help. The best thing I know of for this kind of nonlinear coupling is
now called primal-dual Newton, a name which I am not wild about. It is
discussed here 
(https://urldefense.us/v3/__https://core.ac.uk/download/pdf/211337815.pdf__;!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_DT_42uJ$
 ) and
originated in reference [33] from that thesis. My aim was to allow these
kinds of solvers with that branch.


> On a more theoretical note, I have the impression that the convergence
> failures of the Newton-Raphson method for this kind of problem is
> ultimately due to a lack of a diagonally dominant Jacobian. I have not
> found any reference so I might be wrong.
>

I would say that the dominant direction for momentum hides the direction
for improvement of the coefficient.

  Thanks,

Matt


> Best,
> Miguel
>
> On Sat, Jul 6, 2024 at 3:33 PM Matthew Knepley  wrote:
>
>> On Fri, Jul 5, 2024 at 3:29 AM Miguel Angel Salazar de Troya <
>> miguel.sala...@corintis.com> wrote:
>>
>>> Hello,
>>>
>>> I have the Navier-Stokes equation coupled with a convection-diffusion
>>> equation for the temperature. It is a two-way coupling because the
>>> viscosity depends on the temperature. One way to solve this is with some
>>> kind of fixed point iteration scheme, where I solve each equation
>>> separately in a loop until I see convergence. I am aware this is not
>>> possible directly at the SNES level. Is there something that one can do
>>> using PCFIELDSPLIT? I would like to assemble my fully coupled system and
>>> play with the solver options to get some kind of fixed-point iteration
>>> scheme. I would like to avoid having to build two separate SNES solvers,
>>> one per equation. Any reference on techniques to solve this type of coupled
>>> system is welcome.
>>>
>>
>> Hi Miguel,
>>
>> I have a branch
>>
>>
>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/tree/knepley/feature-snes-fieldsplit?ref_type=heads__;!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_ESUOIOo$
>>  
>>
>> that will allow you to do exactly what you want to do. However, there are
>> caveats. In order to have SNES do this, it needs a way to selectively
>> reassemble subproblems. I assume you are using Firedrake, so this will
>> not work. I would definitely be willing to work with those guys to get
>> this going, introducing callbacks, just as we did on the FieldSplit case.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Best,
>>> Miguel
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_DKriL_s$
>>  
>> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_Ne_UeR1$
>>  >
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_DKriL_s$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_Ne_UeR1$
 >


Re: [petsc-users] Strategies for coupled nonlinear problems

2024-07-06 Thread Matthew Knepley
On Fri, Jul 5, 2024 at 3:29 AM Miguel Angel Salazar de Troya <
miguel.sala...@corintis.com> wrote:

> Hello,
>
> I have the Navier-Stokes equation coupled with a convection-diffusion
> equation for the temperature. It is a two-way coupling because the
> viscosity depends on the temperature. One way to solve this is with some
> kind of fixed point iteration scheme, where I solve each equation
> separately in a loop until I see convergence. I am aware this is not
> possible directly at the SNES level. Is there something that one can do
> using PCFIELDSPLIT? I would like to assemble my fully coupled system and
> play with the solver options to get some kind of fixed-point iteration
> scheme. I would like to avoid having to build two separate SNES solvers,
> one per equation. Any reference on techniques to solve this type of coupled
> system is welcome.
>

Hi Miguel,

I have a branch


https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/tree/knepley/feature-snes-fieldsplit?ref_type=heads__;!!G_uCfscf7eWS!cqtcQdi0PKTs4S7KpYIdusFz-Sr1TBcqFksEpoLFWkYiP_DAZlbbQdGCNTEQxScvJW1Tm0fsMaqh1YxTAA-_$
 

that will allow you to do exactly what you want to do. However, there are
caveats. In order to have SNES do this, it needs a way to selectively
reassemble subproblems. I assume you are using Firedrake, so this will not
work. I would definitely be willing to work with those guys to get
this going, introducing callbacks, just as we did on the FieldSplit case.

  Thanks,

 Matt


> Best,
> Miguel
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cqtcQdi0PKTs4S7KpYIdusFz-Sr1TBcqFksEpoLFWkYiP_DAZlbbQdGCNTEQxScvJW1Tm0fsMaqh1RE4uZs7$
  



Re: [petsc-users] Doubt about TSMonitorSolutionVTK

2024-07-02 Thread Matthew Knepley
On Tue, Jul 2, 2024 at 3:50 PM MIGUEL MOLINOS PEREZ  wrote:

> Dear Matthew, thank you. That makes much more sense. Is the script you
> mention available for download?
>

$PETSC_DIR/lib/petsc/bin/petsc_gen_xdmf.py

  Thanks,

Matt


> Thanks,
> Miguel
>
> On Jul 1, 2024, at 5:09 AM, Matthew Knepley  wrote:
>
> On Mon, Jul 1, 2024 at 1:43 AM MIGUEL MOLINOS PEREZ 
> wrote:
>
>> Dear Matthew,
>>
>> Sorry for the late response.
>>
>> Yes, I get output when I run the example mentioned by Barry.
>>
>> The output directory should not be an issue since with the exact same
>> configuration works for hdf5 but not for vtk/vts/vtu.
>>
>> I’ve been doing some tests and now I think this issue might be related to
>> the fact that the output vector was generated using a SWARM discretization.
>> Is this possible?
>>
>
> Yes, there is no VTK viewer for Swarm. We have been moving away from VTK
> format, which is bulky and not very expressive, into our own HDF5 and CGNS.
> When we use HDF5, we have a script to generate an XDMF file, telling
> Paraview how to view it. I agree that this is annoying. Currently, we are
> moving toward PyVista, which can read our HDF5 files directly (and also
> work directly with running PETSc), although this is not done yet.
>
>   Thanks,
>
>  Matt
>
>
>> Best,
>> Miguel
>>
>> On Jun 27, 2024, at 4:59 AM, Matthew Knepley  wrote:
>>
>> Do you get output when you run an example with that option? Is it
>> possible that your current working directory is not what you expect? Maybe
>> try putting in an absolute path.
>>
>>   Thanks,
>>
>> Matt
>>
>> On Wed, Jun 26, 2024 at 5:30 PM MIGUEL MOLINOS PEREZ 
>> wrote:
>>
>>> Sorry, I did not put petsc-users@mcs.anl.gov in cc on my reply.
>>>
>>> Miguel
>>>
>>> On Jun 24, 2024, at 6:39 PM, MIGUEL MOLINOS PEREZ 
>>> wrote:
>>>
>>> Thank you Barry,
>>>
>>> This is exactly how I did it the first time.
>>>
>>> Miguel
>>>
>>> On Jun 24, 2024, at 6:37 PM, Barry Smith  wrote:
>>>
>>>
>>>See, for example, the bottom of src/ts/tutorials/ex26.c  that uses
>>> -ts_monitor_*solution_vtk* 'foo-%03d.vts'
>>>
>>>
>>> On Jun 24, 2024, at 8:47 PM, MIGUEL MOLINOS PEREZ 
>>> wrote:
>>>
>>> Dear all,
>>>
>>> I want to monitor the results at each iteration of TS using vtk format.
>>> To do so, I add the following lines to my Monitor function:
>>>
>>> char vts_File_Name[MAXC];
>>> PetscCall(PetscSNPrintf(vts_File_Name, sizeof(vts_File_Name),
>>> "./xi-MgHx-hcp-cube-x5-x5-x5-TS-BE-%i.vtu", step));
>>> PetscCall(TSMonitorSolutionVTK(ts, step, time, X, (void*)vts_File_Name
>>> ));
>>>
>>> My script compiles and executes without any sort of warning/error
>>> messages. However, no output files are produced at the end of the
>>> simulation. I’ve also tried the option “-ts_monitor_solution_vtk
>>> ”, but I got no results as well.
>>>
>>> I can’t find any similar example on the petsc website and I don’t see
>>> what I am doing wrong. Could somebody point me to the right direction?
>>>
>>> Thanks,
>>> Miguel
>>>
>>>
>>>
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c6MlkyvZFc8zu-D_Chh2KKned2NifwM1VkXSb9uEq_whB9rDSDhCEZNpbHt3eGv_MQCI6aR4dEHgyZlOgYlv$
>>  
>> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c6MlkyvZFc8zu-D_Chh2KKned2NifwM1VkXSb9uEq_whB9rDSDhCEZNpbHt3eGv_MQCI6aR4dEHgyS793ANl$
>>  >
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c6MlkyvZFc8zu-D_Chh2KKned2NifwM1VkXSb9uEq_whB9rDSDhCEZNpbHt3eGv_MQCI6aR4dEHgyZlOgYlv$
>  
> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c6MlkyvZFc8zu-D_Chh2KKned2NifwM1VkXSb9uEq_whB9rDSDhCEZNpbHt3eGv_MQCI6aR4dEHgyS793ANl$
>  >
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c6MlkyvZFc8zu-D_Chh2KKned2NifwM1VkXSb9uEq_whB9rDSDhCEZNpbHt3eGv_MQCI6aR4dEHgyZlOgYlv$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c6MlkyvZFc8zu-D_Chh2KKned2NifwM1VkXSb9uEq_whB9rDSDhCEZNpbHt3eGv_MQCI6aR4dEHgyS793ANl$
 >


Re: [petsc-users] Question regarding naming of fieldsplit splits

2024-07-01 Thread Matthew Knepley
On Mon, Jul 1, 2024 at 9:48 AM Blauth, Sebastian <
sebastian.bla...@itwm.fraunhofer.de> wrote:

> Dear Matt,
>
>
>
> thanks a lot for your help. Unfortunately, for me these extra options do
> not have any effect, I still get the “u” and “p” fieldnames. Also, this
> would not help me to get rid of the “c” fieldname – on that level of the
> fieldsplit I am basically using your approach already, and still it does
> show up. The output of the -ksp_view is unchanged, so that I do not attach
> it here again. Maybe I misunderstood you?
>

Oh, we make an exception for single fields, since we think you would want
to use the name. I have to make an extra option to shut off naming.

   Thanks,

 Matt


> Thanks for the help and best regards,
>
> Sebastian
>
>
>
> --
>
> Dr. Sebastian Blauth
>
> Fraunhofer-Institut für
>
> Techno- und Wirtschaftsmathematik ITWM
>
> Abteilung Transportvorgänge
>
> Fraunhofer-Platz 1, 67663 Kaiserslautern
>
> Telefon: +49 631 31600-4968
>
> sebastian.bla...@itwm.fraunhofer.de
>
> https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Pas_5OTyRd$
>  
>
>
>
> *From:* Matthew Knepley 
> *Sent:* Monday, July 1, 2024 2:27 PM
> *To:* Blauth, Sebastian 
> *Cc:* petsc-users@mcs.anl.gov
> *Subject:* Re: [petsc-users] Question regarding naming of fieldsplit
> splits
>
>
>
> On Fri, Jun 28, 2024 at 4:05 AM Blauth, Sebastian <
> sebastian.bla...@itwm.fraunhofer.de> wrote:
>
> Hello everyone,
>
>
>
> I have a question regarding the naming convention using PETSc’s
> PCFieldsplit. I have been following
> https://urldefense.us/v3/__https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html__;!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Paswa3c8E2$
>  
> to create a DMShell with FEniCS in order to customize PCFieldsplit for my
> application.
>
> I am using the following options, which work nicely for me:
>
>
>
> -ksp_type fgmres
>
> -pc_type fieldsplit
>
> -pc_fieldsplit_0_fields 0, 1
>
> -pc_fieldsplit_1_fields 2
>
> -pc_fieldsplit_type additive
>
> -fieldsplit_0_ksp_type fgmres
>
> -fieldsplit_0_pc_type fieldsplit
>
> -fieldsplit_0_pc_fieldsplit_type schur
>
> -fieldsplit_0_pc_fieldsplit_schur_fact_type full
>
> -fieldsplit_0_pc_fieldsplit_schur_precondition selfp
>
> -fieldsplit_0_fieldsplit_u_ksp_type preonly
>
> -fieldsplit_0_fieldsplit_u_pc_type lu
>
> -fieldsplit_0_fieldsplit_p_ksp_type cg
>
> -fieldsplit_0_fieldsplit_p_ksp_rtol 1e-14
>
> -fieldsplit_0_fieldsplit_p_ksp_atol 1e-30
>
> -fieldsplit_0_fieldsplit_p_pc_type icc
>
> -fieldsplit_0_ksp_rtol 1e-14
>
> -fieldsplit_0_ksp_atol 1e-30
>
> -fieldsplit_0_ksp_monitor_true_residual
>
> -fieldsplit_c_ksp_type preonly
>
> -fieldsplit_c_pc_type lu
>
> -ksp_view
>
>
>
> By default, we use the field names, but you can prevent this by specifying
> the fields by hand, so
>
>
>
> -fieldsplit_0_pc_fieldsplit_0_fields 0
> -fieldsplit_0_pc_fieldsplit_1_fields 1
>
>
>
> should remove the 'u' and 'p' fieldnames. It is somewhat hacky, but I
> think easier to remember than
>
> some extra option.
>
>
>
>   Thanks,
>
>
>
>  Matt
>
>
>
> Note that this is just an academic example (sorry for the low solver
> tolerances) to test the approach, consisting of a Stokes equation and some
> concentration equation (which is not even coupled to Stokes, just for
> testing).
>
> Completely analogous to
> https://urldefense.us/v3/__https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html__;!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Paswa3c8E2$
>  ,
> I translate my IS’s to a PETSc Section, which is then supplied to a DMShell
> and assigned to a KSP. I am not so familiar with the code or how / why this
> works, but it seems to do so perfectly. I name my sections with petsc4py
> using
>
>
>
> section.setFieldName(0, "u")
>
> section.setFieldName(1, "p")
>
> section.setFieldName(2, "c")
>
>
>
> However, this is also reflected in the way I can access the fieldsplit
> options from the command line. My question is: Is there any way of not
> using the FieldNames specified in python but use the index of the field as
> defined with “-pc_fieldsplit_0_fields 0, 1” and “-pc_fieldsplit_1_fields
> 2”, i.e., instead of the prefix “fieldsplit_0_fieldsplit_u” I want to write
> “fieldsplit_0_fieldsplit_0”, instead o

Re: [petsc-users] Question regarding naming of fieldsplit splits

2024-07-01 Thread Matthew Knepley
On Fri, Jun 28, 2024 at 4:05 AM Blauth, Sebastian <
sebastian.bla...@itwm.fraunhofer.de> wrote:

> Hello everyone,
>
>
>
> I have a question regarding the naming convention using PETSc’s
> PCFieldsplit. I have been following
> https://urldefense.us/v3/__https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html__;!!G_uCfscf7eWS!bGTaf64ibyuvBn-Qy-UQpxjLdOqFq44f6kBHzEDsXKc0htzQNw1MabtoK463uwb95Pupw_BcLMNwOHdcKldy$
>  
> to create a DMShell with FEniCS in order to customize PCFieldsplit for my
> application.
>
> I am using the following options, which work nicely for me:
>
>
>
> -ksp_type fgmres
>
> -pc_type fieldsplit
>
> -pc_fieldsplit_0_fields 0, 1
>
> -pc_fieldsplit_1_fields 2
>
> -pc_fieldsplit_type additive
>
> -fieldsplit_0_ksp_type fgmres
>
> -fieldsplit_0_pc_type fieldsplit
>
> -fieldsplit_0_pc_fieldsplit_type schur
>
> -fieldsplit_0_pc_fieldsplit_schur_fact_type full
>
> -fieldsplit_0_pc_fieldsplit_schur_precondition selfp
>
> -fieldsplit_0_fieldsplit_u_ksp_type preonly
>
> -fieldsplit_0_fieldsplit_u_pc_type lu
>
> -fieldsplit_0_fieldsplit_p_ksp_type cg
>
> -fieldsplit_0_fieldsplit_p_ksp_rtol 1e-14
>
> -fieldsplit_0_fieldsplit_p_ksp_atol 1e-30
>
> -fieldsplit_0_fieldsplit_p_pc_type icc
>
> -fieldsplit_0_ksp_rtol 1e-14
>
> -fieldsplit_0_ksp_atol 1e-30
>
> -fieldsplit_0_ksp_monitor_true_residual
>
> -fieldsplit_c_ksp_type preonly
>
> -fieldsplit_c_pc_type lu
>
> -ksp_view
>

By default, we use the field names, but you can prevent this by specifying
the fields by hand, so

-fieldsplit_0_pc_fieldsplit_0_fields 0
-fieldsplit_0_pc_fieldsplit_1_fields 1

should remove the 'u' and 'p' fieldnames. It is somewhat hacky, but I think
easier to remember than
some extra option.

  Thanks,

 Matt


> Note that this is just an academic example (sorry for the low solver
> tolerances) to test the approach, consisting of a Stokes equation and some
> concentration equation (which is not even coupled to Stokes, just for
> testing).
>
> Completely analogous to
> https://urldefense.us/v3/__https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html__;!!G_uCfscf7eWS!bGTaf64ibyuvBn-Qy-UQpxjLdOqFq44f6kBHzEDsXKc0htzQNw1MabtoK463uwb95Pupw_BcLMNwOHdcKldy$
>  ,
> I translate my IS’s to a PETSc Section, which is then supplied to a DMShell
> and assigned to a KSP. I am not so familiar with the code or how / why this
> works, but it seems to do so perfectly. I name my sections with petsc4py
> using
>
>
>
> section.setFieldName(0, "u")
>
> section.setFieldName(1, "p")
>
> section.setFieldName(2, "c")
>
>
>
> However, this is also reflected in the way I can access the fieldsplit
> options from the command line. My question is: Is there any way of not
> using the FieldNames specified in python but use the index of the field as
> defined with “-pc_fieldsplit_0_fields 0, 1” and “-pc_fieldsplit_1_fields
> 2”, i.e., instead of the prefix “fieldsplit_0_fieldsplit_u” I want to write
> “fieldsplit_0_fieldsplit_0”, instead of “fieldsplit_0_fieldsplit_p” I want
> to use “fieldsplit_0_fieldsplit_1”, and instead of “fieldsplit_c” I want to
> use “fieldsplit_1”. Just changing the names of the fields to
>
>
>
> section.setFieldName(0, "0")
>
> section.setFieldName(1, "1")
>
> section.setFieldName(2, "2")
>
>
>
> does obviously not work as expected, as it works for velocity and
> pressure, but not for the concentration – the prefix there is then
> “fieldsplit_2” and not “fieldsplit_1”. In the docs, I have found
> https://urldefense.us/v3/__https://petsc.org/main/manualpages/PC/PCFieldSplitSetFields/__;!!G_uCfscf7eWS!bGTaf64ibyuvBn-Qy-UQpxjLdOqFq44f6kBHzEDsXKc0htzQNw1MabtoK463uwb95Pupw_BcLMNwOD6iRa_k$
>   which seems
> to suggest that the fieldname can potentially be supplied, but I don’t see
> how to do so from the command line. Also, for the sake of completeness, I
> attach the output of the solve with “-ksp_view” below.
>
>
>
> Thanks a lot in advance and best regards,
>
> Sebastian
>
>
>
>
>
> The output of ksp_view is the following:
>
> KSP Object: 1 MPI processes
>
>   type: fgmres
>
> restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>
> happy breakdown tolerance 1e-30
>
>   maximum iterations=1, initial guess is zero
>
>   tolerances:  relative=1e-05, absolute=1e-11, divergence=1.
>
>   right preconditioning
>
>   using UNPRECONDITIONED norm type for convergence test
>
> PC Object: 1 MPI processes
>
>   type: fieldsplit
>
> FieldSplit with ADDITIVE composition: total splits = 2
>
> Solver info for each split is in the following KSP objects:
>
>   Split number 0 Defined by IS
>
>   KSP Object: (fieldsplit_0_) 1 MPI processes
>
> type: fgmres
>
>   restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>
>   happy breakdown tolerance 1e-30
>
> maximum iterations=1, initial guess is zero
>
> tolerances:  relative=1e-14, absolute=1e-30, divergence=1.
>
> right 

Re: [petsc-users] Doubt about TSMonitorSolutionVTK

2024-07-01 Thread Matthew Knepley
On Mon, Jul 1, 2024 at 1:43 AM MIGUEL MOLINOS PEREZ  wrote:

> Dear Matthew,
>
> Sorry for the late response.
>
> Yes, I get output when I run the example mentioned by Barry.
>
> The output directory should not be an issue since with the exact same
> configuration works for hdf5 but not for vtk/vts/vtu.
>
> I’ve been doing some tests and now I think this issue might be related to
> the fact that the output vector was generated using a SWARM discretization.
> Is this possible?
>

Yes, there is no VTK viewer for Swarm. We have been moving away from VTK
format, which is bulky and not very expressive, into our own HDF5 and CGNS.
When we use HDF5, we have a script to generate an XDMF file, telling
Paraview how to view it. I agree that this is annoying. Currently, we are
moving toward PyVista, which can read our HDF5 files directly (and also
work directly with running PETSc), although this is not done yet.
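
A minimal sketch of the HDF5 route inside a Monitor (assuming a PETSc build
configured with HDF5, petscviewerhdf5.h included, and X the solution Vec):

  PetscViewer viewer;
  char        h5name[PETSC_MAX_PATH_LEN];

  PetscCall(PetscSNPrintf(h5name, sizeof(h5name), "solution-%04d.h5", (int)step));
  PetscCall(PetscViewerHDF5Open(PETSC_COMM_WORLD, h5name, FILE_MODE_WRITE, &viewer));
  PetscCall(VecView(X, viewer));
  PetscCall(PetscViewerDestroy(&viewer));

Note that the XDMF script mentioned above expects the layout written by the
DM HDF5 viewers, so writing through the DM rather than only the bare Vec is
usually needed before the file can be visualized.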

  Thanks,

 Matt


> Best,
> Miguel
>
> On Jun 27, 2024, at 4:59 AM, Matthew Knepley  wrote:
>
> Do you get output when you run an example with that option? Is it possible
> that your current working directory is not what you expect? Maybe try
> putting in an absolute path.
>
>   Thanks,
>
> Matt
>
> On Wed, Jun 26, 2024 at 5:30 PM MIGUEL MOLINOS PEREZ 
> wrote:
>
>> Sorry, I did not put petsc-users@mcs.anl.gov in cc on my reply.
>>
>> Miguel
>>
>> On Jun 24, 2024, at 6:39 PM, MIGUEL MOLINOS PEREZ  wrote:
>>
>> Thank you Barry,
>>
>> This is exactly how I did it the first time.
>>
>> Miguel
>>
>> On Jun 24, 2024, at 6:37 PM, Barry Smith  wrote:
>>
>>
>>See, for example, the bottom of src/ts/tutorials/ex26.c  that uses
>> -ts_monitor_*solution_vtk* 'foo-%03d.vts'
>>
>>
>> On Jun 24, 2024, at 8:47 PM, MIGUEL MOLINOS PEREZ  wrote:
>>
>> Dear all,
>>
>> I want to monitor the results at each iteration of TS using vtk format.
>> To do so, I add the following lines to my Monitor function:
>>
>> char vts_File_Name[MAXC];
>> PetscCall(PetscSNPrintf(vts_File_Name, sizeof(vts_File_Name),
>> "./xi-MgHx-hcp-cube-x5-x5-x5-TS-BE-%i.vtu", step));
>> PetscCall(TSMonitorSolutionVTK(ts, step, time, X, (void*)vts_File_Name));
>>
>> My script compiles and executes without any sort of warning/error
>> messages. However, no output files are produced at the end of the
>> simulation. I’ve also tried the option “-ts_monitor_solution_vtk
>> ”, but I got no results as well.
>>
>> I can’t find any similar example on the petsc website and I don’t see
>> what I am doing wrong. Could somebody point me to the right direction?
>>
>> Thanks,
>> Miguel
>>
>>
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z7mz0y2UTXd_x6juHCeo7JisCZjgURW-1JAShrF2hePo3YnESyhFi9psugjCeGNce_91dMHtb2KJEe1KXx1t$
>  
> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z7mz0y2UTXd_x6juHCeo7JisCZjgURW-1JAShrF2hePo3YnESyhFi9psugjCeGNce_91dMHtb2KJEVRHZKJf$
>  >
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z7mz0y2UTXd_x6juHCeo7JisCZjgURW-1JAShrF2hePo3YnESyhFi9psugjCeGNce_91dMHtb2KJEe1KXx1t$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z7mz0y2UTXd_x6juHCeo7JisCZjgURW-1JAShrF2hePo3YnESyhFi9psugjCeGNce_91dMHtb2KJEVRHZKJf$
 >


Re: [petsc-users] Doubt about TSMonitorSolutionVTK

2024-06-27 Thread Matthew Knepley
Do you get output when you run an example with that option? Is it possible
that your current working directory is not what you expect? Maybe try
putting in an absolute path.

  Thanks,

Matt

On Wed, Jun 26, 2024 at 5:30 PM MIGUEL MOLINOS PEREZ  wrote:

> Sorry, I did not put petsc-users@mcs.anl.gov in cc on my reply.
>
> Miguel
>
> On Jun 24, 2024, at 6:39 PM, MIGUEL MOLINOS PEREZ  wrote:
>
> Thank you Barry,
>
> This is exactly how I did it the first time.
>
> Miguel
>
> On Jun 24, 2024, at 6:37 PM, Barry Smith  wrote:
>
>
>See, for example, the bottom of src/ts/tutorials/ex26.c  that uses
> -ts_monitor_*solution_vtk* 'foo-%03d.vts'
>
>
> On Jun 24, 2024, at 8:47 PM, MIGUEL MOLINOS PEREZ  wrote:
>
> Dear all,
>
> I want to monitor the results at each iteration of TS using vtk format. To
> do so, I add the following lines to my Monitor function:
>
> char vts_File_Name[MAXC];
> PetscCall(PetscSNPrintf(vts_File_Name, sizeof(vts_File_Name),
> "./xi-MgHx-hcp-cube-x5-x5-x5-TS-BE-%i.vtu", step));
> PetscCall(TSMonitorSolutionVTK(ts, step, time, X, (void*)vts_File_Name));
>
> My script compiles and executes without any sort of warning/error
> messages. However, no output files are produced at the end of the
> simulation. I’ve also tried the option “-ts_monitor_solution_vtk
> ”, but I got no results as well.
>
> I can’t find any similar example on the petsc website and I don’t see what
> I am doing wrong. Could somebody point me to the right direction?
>
> Thanks,
> Miguel
>
>
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cRDEhdo8wMYy8YI7qVH44Ui6kVCB25tWDo4FafPe5dkLag3M8deW0vrvVYE7_UDXg-mBs7lTNZGsNNie5ANx$
  



Re: [petsc-users] Trying to develop my own Krylov solver

2024-06-25 Thread Matthew Knepley
On Tue, Jun 25, 2024 at 12:05 PM Julien BRUCHON via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hi,
>
> Based on 'cg.c', I'm trying to develop my own Krylov solver (a projected
> conjugate gradient). I want to integrate this into my C++ code, where I
> already have an interface for PETSC which works well. However, I have the
> following questions :
>
> - Where am I supposed to put my 'cg_projected.c' and 'pcgimpl.h' files?
> Should they go in a directory petsc/src/ksp/ksp/impls/pcg/? If so, how do I
> compile that? Is it simply by adding this directory to the Makefile in
> petsc/src/ksp/ksp/impls/?
>

Yes.
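
For the separate link error in your own CMake build, the linker message itself
points at the fix: objects that end up inside a shared library (libcoeur.so
here) must be compiled as position-independent code, e.g. by adding -fPIC to
the compile flags for cg_projected.c, or by enabling CMake's
POSITION_INDEPENDENT_CODE property on the solvers target.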

  Thanks,

Matt


> - I have also tried the basic approach of putting these two files in
> directories of my own C++ code and compiling. However, I get this error
> at link time:
> [100%] Linking CXX shared library libcoeur.so
> /usr/bin/ld: src/solvers/libsolvers.a(cg_projected.c.o): warning:
> relocation against `petscstack' in read-only section `.text'
> /usr/bin/ld: src/solvers/libsolvers.a(cg_projected.c.o): relocation
> R_X86_64_PC32 against symbol `petscstack' can not be used when making a
> shared object; recompile with -fPIC
> /usr/bin/ld: final link failed: bad value
> collect2: error: ld returned 1 exit status
> make[2]: *** [CMakeFiles/coeur.dir/build.make:121: libcoeur.so] Error 1
> make[1]: *** [CMakeFiles/Makefile2:286: CMakeFiles/coeur.dir/all] Error 2
> make: *** [Makefile:91: all] Error 2
>
> Could you please tell me what is the right way to proceed?
>
> Thank you,
>
> Julien
> --
> Julien Bruchon
> Professeur IMT - Responsable du département MPE
> LGF - UMR CNRS 5307 - 
> https://urldefense.us/v3/__https://www.mines-stetienne.fr/lgf/__;!!G_uCfscf7eWS!ZSMOgmxB-aRx34PmTC3s7ZkDC-zT09xxpmLjhj_vx8oVkTvDSORUOeoTe8ZdEFCHVCUxSrs3eOz34zZTK5ep$
>  
> 
> Mines Saint-Étienne, une école de l'Institut Mines-Télécom
> Librairie Éléments Finis Coeur
> 
> 0477420072
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZSMOgmxB-aRx34PmTC3s7ZkDC-zT09xxpmLjhj_vx8oVkTvDSORUOeoTe8ZdEFCHVCUxSrs3eOz341kHHQmb$
  



Re: [petsc-users] DMPlexBuildFromCellList node ordering for tetrahedral elements

2024-06-25 Thread Matthew Knepley
On Tue, Jun 25, 2024 at 4:50 AM onur.notonur via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hi,
>
> I'm trying to implement a Tetgen mesh importer for my Petsc/DMPlex-based
> solver. I am encountering some issues and suspect they might be due to my
> import process. The Tetgen mesh definitions can be found here for
> reference: 
> https://urldefense.us/v3/__https://wias-berlin.de/software/tetgen/fformats.html__;!!G_uCfscf7eWS!fZrK2yb2VL8fOhEwFSop4QgqPuB1M2_C1OMkfebpI6V32422apR69VnseJBVL8CTE5Gn4r6jRSz7K8CgmdZe$
>  
> 
>
> I am building DMPlex using the DMPlexBuildFromCellList function and using
> the exact ordering of nodes I get from the Tetgen mesh files (.ele file).
> The resulting mesh looks good when I export it to VTK, but I encounter
> issues when solving particular PDEs. (I can solve them while using other
> importers I write) I suspect there may be orientation errors or something
> similar.
>
> So, my question is, Is the ordering of nodes in elements important for
> tetrahedral elements while using DMPlexBuildFromCellList? If so, how should
> I arrange them?
>

Yes, TetGen inverts tetrahedra compared to Plex, since I use all outward
facing normals, whereas those in TetGen are not consistently ordered.
However, why not just use DMPlexGenerate() with TetGen?
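
If you do keep your own importer, a minimal sketch of flipping the orientation
before DMPlexBuildFromCellList (assuming cells is the flat connectivity array
with four vertices per tetrahedron; DMPlexInvertCell() performs the equivalent
per-cell reordering):

  for (PetscInt c = 0; c < numCells; ++c) {
    /* swapping any two vertices inverts the tetrahedron's orientation */
    const PetscInt tmp = cells[c * 4 + 0];
    cells[c * 4 + 0]   = cells[c * 4 + 1];
    cells[c * 4 + 1]   = tmp;
  }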

  Thanks,

Matt


> Thanks,
> Onur
>
> Sent with Proton Mail secure email.
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fZrK2yb2VL8fOhEwFSop4QgqPuB1M2_C1OMkfebpI6V32422apR69VnseJBVL8CTE5Gn4r6jRSz7K3D5kEEE$
  



Re: [petsc-users] Error type "Petsc has generated inconsistent data"

2024-06-24 Thread Matthew Knepley
On Mon, Jun 24, 2024 at 2:50 PM MIGUEL MOLINOS PEREZ  wrote:

> Dear all,
>
> I am trying to assemble a matrix A with coefficients which I need to
> assemble the RHS (F) and its Jacobian (J) in a TS type of problem.
>
> Determining each coefficient of A involves the resolution of a small
> non-linear problem (1 dof) using the serial version of SNES. By the way, the
> matrix A is of the type  “MATMPIAIJ”.
>
> The weird part is, if I pass the matrix A to the TS routine inside of a
> user-context structure, without even accessing the values inside of A, I
> get the following error message:
>
> > [1]PETSC ERROR: Petsc has generated inconsistent data
> > [1]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> > on different processors
>
> But if I comment out the line which calls the SNES routine used to
> evaluate the coefficients inside of A, I don’t get the error message.
>
> Some additional context:
> - The SNES routine is called once at a time inside of each rank.
> - I use PetscCall(SNESCreate(PETSC_COMM_SELF, &snes));
> - The vectors inside of the SNES function are defined as follows:
> VecCreateSeq(PETSC_COMM_SELF, 1, &Y)
> - All the input fields for SNES are also sequential.
>
> Any feedback is greatly appreciated!
>

There is something inconsistent among the processes. First, I would try
running with a constant A. Then execute your nonlinear solve, but return
the constant A. If that passes, then likely you are returning inconsistent
results across processes with your solve.
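
As a concrete illustration of that test (purely a sketch; ComputeCoefficient
and its arguments are placeholders, not your code), the idea is to still call
the per-coefficient SNES so the same work is done on every rank, but discard
its result:

```
static PetscErrorCode ComputeCoefficient(SNES snes_local, Vec y, PetscScalar *aij)
{
  PetscFunctionBeginUser;
  PetscCall(SNESSolve(snes_local, NULL, y)); /* still run the local nonlinear solve */
  *aij = 1.0;                                /* ...but return a constant coefficient */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

If the error goes away with the constant value but returns when you use the
computed one, the computed coefficients (and whatever is assembled from them)
differ across processes.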

  Thanks,

 Matt


> Thanks,
> Miguel
>
>
>
>
> [test-Mass-Transport-Master-Equation-PETSc-Backward-Euler][MgHx-hcp-x5x5x5-cell]
> t=0.e+00  dt=1.e-07  it=( 0,  0)
>   0 KSP Residual norm 4.776631889125e-07
>   1 KSP Residual norm 6.807505564283e-17
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Petsc has generated inconsistent data
> [5]PETSC ERROR: - Error Message
> --
> [5]PETSC ERROR: Petsc has generated inconsistent data
> [7]PETSC ERROR: - Error Message
> --
> [7]PETSC ERROR: Petsc has generated inconsistent data
> [0]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [1]PETSC ERROR: - Error Message
> --
> [1]PETSC ERROR: Petsc has generated inconsistent data
> [1]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [2]PETSC ERROR: - Error Message
> --
> [2]PETSC ERROR: Petsc has generated inconsistent data
> [2]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [3]PETSC ERROR: - Error Message
> --
> [3]PETSC ERROR: Petsc has generated inconsistent data
> [3]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [4]PETSC ERROR: - Error Message
> --
> [4]PETSC ERROR: Petsc has generated inconsistent data
> [4]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [5]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [6]PETSC ERROR: - Error Message
> --
> [6]PETSC ERROR: Petsc has generated inconsistent data
> [6]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [7]PETSC ERROR: MPI_Allreduce() called in different locations (code lines)
> on different processors
> [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the
> program crashed before usage or a spelling mistake, etc!
> [1]PETSC ERROR: WARNING! There are unused option(s) set! Could be the
> program crashed before usage or a spelling mistake, etc!
> [2]PETSC ERROR: WARNING! There are unused option(s) set! Could be the
> program crashed before usage or a spelling mistake, etc!
> [2]PETSC ERROR:   Option left: name:-sns_monitor (no value) source: code
> [2]PETSC ERROR: [3]PETSC ERROR: WARNING! There are unused option(s) set!
> Could be the program crash

Re: [petsc-users] [petsc-maint] Assistance Needed with PETSc KSPSolve Performance Issue

2024-06-24 Thread Matthew Knepley
On Mon, Jun 24, 2024 at 11:21 AM Yongzhong Li 
wrote:

>
> Thank you Pierre for your information. Do we have a conclusion for my
> original question about the parallelization efficiency for different stages
> of KSP Solve? Do we need to do more testing to figure out the issues?
>

We have an extended discussion of this here:
https://urldefense.us/v3/__https://petsc.org/release/faq/*what-kind-of-parallel-computers-or-clusters-are-needed-to-use-petsc-or-why-do-i-get-little-speedup__;Iw!!G_uCfscf7eWS!aQJpmm5W6l6FUiumnIPmkouzwzNUfx-Dyq04i1O2KS_InQGk6qjI7wUir0Hx6QEUQE2AMiJDsez3x4zRO7V_$
 

The kinds of operations you are talking about (SpMV, VecDot, VecAXPY, etc)
are memory bandwidth limited. If there is no more bandwidth to be
marshalled on your board, then adding more processes does nothing at all.
This is why people were asking about how many "nodes" you are running on,
because that is the unit of memory bandwidth, not "cores" which make little
difference.

  Thanks,

 Matt


> Thank you,
>
> Yongzhong
>
>
>
> *From: *Pierre Jolivet 
> *Date: *Sunday, June 23, 2024 at 12:41 AM
> *To: *Yongzhong Li 
> *Cc: *petsc-users@mcs.anl.gov 
> *Subject: *Re: [petsc-users] [petsc-maint] Assistance Needed with PETSc
> KSPSolve Performance Issue
>
>
>
>
>
> On 23 Jun 2024, at 4:07 AM, Yongzhong Li 
> wrote:
>
>
>
>
> Yeah, I ran my program again using -mat_view::ascii_info and set
> MKL_VERBOSE to 1, then I noticed that the outputs suggest the matrix
> is of type seqaijmkl (I’ve attached a few below)
>
> --> Setting up matrix-vector products...
>
>
>
> Mat Object: 1 MPI process
>
>   type: seqaijmkl
>
>   rows=16490, cols=35937
>
>   total: nonzeros=128496, allocated nonzeros=128496
>
>   total number of mallocs used during MatSetValues calls=0
>
> not using I-node routines
>
> Mat Object: 1 MPI process
>
>   type: seqaijmkl
>
>   rows=16490, cols=35937
>
>   total: nonzeros=128496, allocated nonzeros=128496
>
>   total number of mallocs used during MatSetValues calls=0
>
> not using I-node routines
>
>
>
> --> Solving the system...
>
>
>
> Excitation 1 of 1...
>
>
>
> 
>
> Iterative solve completed in 7435 ms.
>
> CONVERGED: rtol.
>
> Iterations: 72
>
> Final relative residual norm: 9.22287e-07
>
> 
>
> [CPU TIME] System solution: 2.2716e+02 s.
>
> [WALL TIME] System solution: 7.44387218e+00 s.
>
> However, it seems to me that there were still no MKL outputs even though I
> set MKL_VERBOSE to 1, although I think there should be many SpMV operations
> during KSPSolve(). Do you see possible reasons?
>
>
>
> SPMV are not reported with MKL_VERBOSE (last I checked), only dense BLAS
> is.
>
>
>
> Thanks,
>
> Pierre
>
>
>
> Thanks,
>
> Yongzhong
>
>
>
>
>
> *From: *Matthew Knepley 
> *Date: *Saturday, June 22, 2024 at 5:56 PM
> *To: *Yongzhong Li 
> *Cc: *Junchao Zhang , Pierre Jolivet <
> pie...@joliv.et>, petsc-users@mcs.anl.gov 
> *Subject: *Re: [petsc-users] [petsc-maint] Assistance Needed with PETSc
> KSPSolve Performance Issue
>
>
> On Sat, Jun 22, 2024 at 5:03 PM Yongzhong Li <
> yongzhong...@mail.utoronto.ca> wrote:
>
>
> MKL_VERBOSE=1 ./ex1
>
>
> matrix nonzeros = 100, allocated nonzeros = 100
>
> MKL_VERBOSE Intel(R) MKL 2019.

Re: [petsc-users] Restart Krylov-Schur "Manually"

2024-06-24 Thread Matthew Knepley
On Mon, Jun 24, 2024 at 11:38 AM Samar Khatiwala <
samar.khatiw...@earth.ox.ac.uk> wrote:

> Hi Matt,
>
> This would be for SNES and KSP. In many of my applications it would be too
> expensive to regenerate the Krylov space, which would also be problematic
> for Newton as I often do matrix-free calculations.
>
> I know how complex the underlying data structures are for these objects
> and entirely understand how difficult it would be to provide a general
> checkpointing facility. Still, I do dream that one day I’ll be able to do
> Save(snes,...) and Load(snes,…) ...
>

Let's talk specifically about SNES. I think this works now. It would be
good to find out why you think it does not. You can do
SNESView() and it will serialize the solver, and VecView() to serialize the
current solution. Then you SNESLoad() and VecLoad()
and call SNESSolve() with that solution as the initial guess.
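
A minimal sketch of that workflow (the file name and variable names are
placeholders):

```
PetscViewer viewer;

/* checkpoint */
PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "snes-checkpoint.bin", FILE_MODE_WRITE, &viewer));
PetscCall(SNESView(snes, viewer)); /* serialize the solver */
PetscCall(VecView(x, viewer));     /* serialize the current solution */
PetscCall(PetscViewerDestroy(&viewer));

/* restart, possibly in a later run */
PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "snes-checkpoint.bin", FILE_MODE_READ, &viewer));
PetscCall(SNESLoad(snes, viewer));
PetscCall(VecLoad(x, viewer));
PetscCall(PetscViewerDestroy(&viewer));
PetscCall(SNESSolve(snes, NULL, x)); /* restored x is the initial guess */
```

If this does not restore something you need (for example matrix-free operator
settings), that would be useful for us to know.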

  Thanks,

Matt


> Thanks,
>
> Samar
>
> On Jun 24, 2024, at 12:15 PM, Matthew Knepley  wrote:
>
> On Mon, Jun 24, 2024 at 4:24 AM Samar Khatiwala <
> samar.khatiw...@earth.ox.ac.uk> wrote:
>
>>
>> Hi,
>>
>> Sorry to hijack this thread but I just want to add that this is a more
>> general problem that I constantly face with PETSc. Not being able to
>> checkpoint the complete state of a solver instance and restart a
>> computation (at least not easily) has long been the biggest missing feature
>> in PETSc for me.
>>
>
> Which type of solver do you want to do this for? Some solvers, like
> Newton, just need the current iterate, which we do. You could imagine
> saving Krylov spaces, but it is very often cheaper to regenerate them than
> to save and load them from disk (which tends to be under-provisioned).
>
>   Thanks,
>
> Matt
>
>
>> Thanks,
>>
>> Samar
>>
>> On Jun 24, 2024, at 9:14 AM, Jose E. Roman via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>>
>>
>> Unfortunately there is no support for this.
>>
>> If you requested several eigenvalues and the solver has converged some of 
>> them already, then it would be possible to stop the run, save the 
>> eigenvectors and rerun with the eigenvectors passed via 
>> EPSSetDeflationSpace().
>>
>> Jose
>>
>>
>> > On 24 Jun 2024, at 0:21, Marildo Kola  wrote:
>> >
>> > Hello,
>> > I am using SLEPc to calculate eigenvalues for fluid dynamics stability 
>> > analysis (specifically studying bifurcations). We employ a 
>> > MatShellOperation, which involves propagating Navier-Stokes to construct 
>> > the Krylov space, and this particularly slows down our algorithm. The 
>> > problem I am facing is that, after days of simulations, the simulation may 
>> > die due to a time limit on the cluster, but the eigensolver (I am using 
>> > the default Krylov-Schur) has not converged yet, leading to the loss of 
>> > all the information computed up to that point. I wanted to inquire if it 
>> > is possible to implement, with the available features, a restarting 
>> > strategy, which can allow me, once the simulation stops (or after a given 
>> > number of restart iterations of the solver), to save all the information 
>> > necessary to restart the EPSSolver from the point it had stopped.
>> > Thank you in advance,
>> > Best regards, Marildo Kola
>>
>>
>>
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eqOVZGVAqNSjCctZz15A80QkgJt28WLpriJPEkHdcCiN1vrJ4RfXPebAjRgUJQsG16l6LF3_JF75uEgZQtwr$
>  
> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eqOVZGVAqNSjCctZz15A80QkgJt28WLpriJPEkHdcCiN1vrJ4RfXPebAjRgUJQsG16l6LF3_JF75uFfyfnKY$
>  >
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eqOVZGVAqNSjCctZz15A80QkgJt28WLpriJPEkHdcCiN1vrJ4RfXPebAjRgUJQsG16l6LF3_JF75uEgZQtwr$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eqOVZGVAqNSjCctZz15A80QkgJt28WLpriJPEkHdcCiN1vrJ4RfXPebAjRgUJQsG16l6LF3_JF75uFfyfnKY$
 >


Re: [petsc-users] Modelica + PETSc?

2024-06-24 Thread Matthew Knepley
On Mon, Jun 24, 2024 at 10:29 AM Zou, Ling  wrote:

> This is the website I normally refer to
>
> https://urldefense.us/v3/__https://openmodelica.org/doc/OpenModelicaUsersGuide/latest/solving.html__;!!G_uCfscf7eWS!atB8LuQrlGQnbi8lXYaGJKrUHYTfhXYVS8-QcBlSPWc_cjEPT8rDhboXFr08Hx6cSDhSTlwO2WEXOpoY6C5F$
>  
>
>
>
> Looks like DASSL is the default solver.
>
>
That is what I would have guessed. DASSL is a good solver, but quite dated.
I think PETSc can solve those problems, and more scalably. We would be
happy to give advice on conforming to their interface.
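
To make that concrete, here is a hedged sketch (a toy of my own, not an
existing OpenModelica interface) of how an implicit DAE residual F(t, u, u') = 0,
which is what Modelica tools generate, maps onto TS. It integrates the scalar
equation u' + u = 0 with BDF, the same family of methods DASSL uses:

```
#include <petscts.h>

static PetscErrorCode IFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
{
  const PetscScalar *u, *udot;
  PetscScalar       *f;

  PetscFunctionBeginUser;
  PetscCall(VecGetArrayRead(U, &u));
  PetscCall(VecGetArrayRead(Udot, &udot));
  PetscCall(VecGetArray(F, &f));
  f[0] = udot[0] + u[0]; /* DAE residual F(t, u, u') */
  PetscCall(VecRestoreArray(F, &f));
  PetscCall(VecRestoreArrayRead(Udot, &udot));
  PetscCall(VecRestoreArrayRead(U, &u));
  PetscFunctionReturn(PETSC_SUCCESS);
}

static PetscErrorCode IJacobian(TS ts, PetscReal t, Vec U, Vec Udot, PetscReal a, Mat J, Mat P, void *ctx)
{
  PetscFunctionBeginUser;
  PetscCall(MatSetValue(P, 0, 0, a + 1.0, INSERT_VALUES)); /* a dF/du' + dF/du */
  PetscCall(MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY));
  PetscFunctionReturn(PETSC_SUCCESS);
}

int main(int argc, char **argv)
{
  TS  ts;
  Vec u;
  Mat J;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(VecCreateSeq(PETSC_COMM_SELF, 1, &u));
  PetscCall(VecSet(u, 1.0));
  PetscCall(MatCreateSeqAIJ(PETSC_COMM_SELF, 1, 1, 1, NULL, &J));
  PetscCall(TSCreate(PETSC_COMM_SELF, &ts));
  PetscCall(TSSetIFunction(ts, NULL, IFunction, NULL));
  PetscCall(TSSetIJacobian(ts, J, J, IJacobian, NULL));
  PetscCall(TSSetType(ts, TSBDF)); /* BDF methods, comparable to DASSL */
  PetscCall(TSSetTimeStep(ts, 0.01));
  PetscCall(TSSetMaxTime(ts, 1.0));
  PetscCall(TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP));
  PetscCall(TSSetFromOptions(ts));
  PetscCall(TSSolve(ts, u));
  PetscCall(TSDestroy(&ts));
  PetscCall(MatDestroy(&J));
  PetscCall(VecDestroy(&u));
  PetscCall(PetscFinalize());
  return 0;
}
```

The hard part of a real coupling would be generating IFunction/IJacobian from
the flattened Modelica model, which is exactly where we could advise.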

  Thanks,

Matt


> PS: I was playing with Modelica with some toy problem I have, which solves
> fine but could not hold on with the steady-state solution for some reason.
> Maybe I did it wrong, or maybe I am not familiar with the solver. That was
> the reason of the Modelica+PETSc question since I am quite familiar with
> PETSc. Also, the combination seems to be a powerful pair.
>
>
>
> -Ling
>
>
>
> *From: *Matthew Knepley 
> *Date: *Monday, June 24, 2024 at 6:12 AM
> *To: *Zou, Ling 
> *Cc: *petsc-users@mcs.anl.gov 
> *Subject: *Re: [petsc-users] Modelica + PETSc?
>
> On Sun, Jun 23, 2024 at 5:04 PM Zou, Ling via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Hi all, I am just curious … any effort trying to include PETSc as
> Modelica’s solution option?
>
> (Modelica forum or email list seem to be quite dead so asking here.)
>
>
>
> I had not heard of it before. I looked at the 3.6 specification, but it
> did not say how the generated DAEs are solved, or
>
> how to interface packages. Do they have documentation on that?
>
>
>
>   Thanks,
>
>
>
> Matt
>
>
>
>
>
> -Ling
>
>
>
>
> --
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>
> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!atB8LuQrlGQnbi8lXYaGJKrUHYTfhXYVS8-QcBlSPWc_cjEPT8rDhboXFr08Hx6cSDhSTlwO2WEXOjQrgtOM$
>  
> <https://urldefense.us/v3/__http:/www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!e8sA5VFEfZrWzBeykVmuJSprkdGGGySwcvejvOBPvzrBd0qITlSt4aai30icUjLpUqdLPz-LkDNH$>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!atB8LuQrlGQnbi8lXYaGJKrUHYTfhXYVS8-QcBlSPWc_cjEPT8rDhboXFr08Hx6cSDhSTlwO2WEXOjQrgtOM$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!atB8LuQrlGQnbi8lXYaGJKrUHYTfhXYVS8-QcBlSPWc_cjEPT8rDhboXFr08Hx6cSDhSTlwO2WEXOjlJYUgi$
 >


Re: [petsc-users] Restart Krylov-Schur "Manually"

2024-06-24 Thread Matthew Knepley
On Mon, Jun 24, 2024 at 4:24 AM Samar Khatiwala <
samar.khatiw...@earth.ox.ac.uk> wrote:

> Hi,
>
> Sorry to hijack this thread but I just want to add that this is a more
> general problem that I constantly face with PETSc. Not being able to
> checkpoint the complete state of a solver instance and restart a
> computation (at least not easily) has long been the biggest missing feature
> in PETSc for me.
>

Which type of solver do you want to do this for? Some solvers, like Newton,
just need the current iterate, which we do. You could imagine saving Krylov
spaces, but it is very often cheaper to regenerate them than to save and
load them from disk (which tends to be under-provisioned).

  Thanks,

Matt


> Thanks,
>
> Samar
>
> On Jun 24, 2024, at 9:14 AM, Jose E. Roman via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>
> Unfortunately there is no support for this.
>
> If you requested several eigenvalues and the solver has converged some of 
> them already, then it would be possible to stop the run, save the 
> eigenvectors and rerun with the eigenvectors passed via 
> EPSSetDeflationSpace().
>
> Jose
>
>
> > On 24 Jun 2024, at 0:21, Marildo Kola  wrote:
> >
> > Hello,
> > I am using SLEPc to calculate eigenvalues for fluid dynamics stability 
> > analysis (specifically studying bifurcations). We employ a 
> > MatShellOperation, which involves propagating Navier-Stokes to construct 
> > the Krylov space, and this particularly slows down our algorithm. The 
> > problem I am facing is that, after days of simulations, the simulation may 
> > die due to a time limit on the cluster, but the eigensolver (I am using the 
> > default Krylov-Schur) has not converged yet, leading to the loss of all the 
> > information computed up to that point. I wanted to inquire if it is 
> > possible to implement, with the available features, a restarting strategy, 
> > which can allow me, once the simulation stops (or after a given number of 
> > restart iterations of the solver), to save all the information necessary to 
> > restart the EPSSolver from the point it had stopped.
> > Thank you in advance,
> > Best regards, Marildo Kola
>
>
>
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ch8NH3Wyy13drlSnX_Ftydd3HzRlIz3IQda46x_WnHdpCZvqNPlj-Fhk8Ap5uLxb85QjWMip0Rn0PZdq7XGu$
  



Re: [petsc-users] Modelica + PETSc?

2024-06-24 Thread Matthew Knepley
On Sun, Jun 23, 2024 at 5:04 PM Zou, Ling via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hi all, I am just curious … any effort trying to include PETSc as
> Modelica’s solution option?
>
> (Modelica forum or email list seem to be quite dead so asking here.)
>

I had not heard of it before. I looked at the 3.6 specification, but it did
not say how the generated DAEs are solved, or
how to interface packages. Do they have documentation on that?

  Thanks,

Matt


>
>
> -Ling
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Zm0NjFBejQX24YkLIjkKSr1FkyhGSd5YDzKPEYLhPVdjIB_EifkXVLP3RicixnSb0xjR5KtYyBcRN6v-fWU4$
  



Re: [petsc-users] [petsc-maint] Assistance Needed with PETSc KSPSolve Performance Issue

2024-06-22 Thread Matthew Knepley
0  4913
>
>
>
> Finally there are a huge number of
>
>
>
> MatMultAdd        258048 1.0 1.4178e+03 1.0 6.10e+13 1.0 0.0e+00 0.0e+00
> 0.0e+00  7 29  0  0  0   7 29  0  0  0 43025
>
>
>
> Are you making calls to all these routines? Are you doing this inside your
> MatMult() or before you call KSPSolve?
>
>
>
> The reason I wanted you to make a simpler run without the initial guess
> code is that your events are far more complicated than would be produced by
> GMRES alone so it is not possible to understand the behavior you are seeing
> without fully understanding all the events happening in the code.
>
>
>
>   Barry
>
>
>
>
>
> On Jun 14, 2024, at 1:19 AM, Yongzhong Li 
> wrote:
>
>
>
> Thanks, I have attached the results without using any KSPGuess. At low
> frequency, the iteration steps are quite close to the one with KSPGuess,
> specifically
>
>   KSPGuess Object: 1 MPI process
>
> type: fischer
>
> Model 1, size 200
>
> However, I found that at higher frequency the # of iteration steps is
> significantly higher than the one with KSPGuess. I have attached both of the
> results for your reference.
>
> Moreover, could I ask why the one without the KSPGuess options can be used
> for a baseline comparison? What are we comparing here? How does it relate
> to the performance issue/bottleneck I found? “*I have noticed that the
> time taken by **KSPSolve** is **almost two times **greater than the CPU
> time for matrix-vector product multiplied by the number of iteration*”
>
> Thank you!
> Yongzhong
>
>
>
> *From: *Barry Smith 
> *Date: *Thursday, June 13, 2024 at 2:14 PM
> *To: *Yongzhong Li 
> *Cc: *petsc-users@mcs.anl.gov ,
> petsc-ma...@mcs.anl.gov , Piero Triverio <
> piero.trive...@utoronto.ca>
> *Subject: *Re: [petsc-maint] Assistance Needed with PETSc KSPSolve
> Performance Issue
>
>
>
>   Can you please run the same thing without the  KSPGuess option(s) for a
> baseline comparison?
>
>
>
>Thanks
>
>
>
>Barry
>
>
>
> On Jun 13, 2024, at 1:27 PM, Yongzhong Li 
> wrote:
>
>
>
>
> Hi Matt,
>
> I have rerun the program with the keys you provided. The system output
> when performing ksp solve and the final petsc log output were stored in a
> .txt file attached for your reference.
>
> Thanks!
> Yongzhong
>
>
>
> *From: *Matthew Knepley 
> *Date: *Wednesday, June 12, 2024 at 6:46 PM
> *To: *Yongzhong Li 
> *Cc: *petsc-users@mcs.anl.gov ,
> petsc-ma...@mcs.anl.gov , Piero Triverio <
> piero.trive...@utoronto.ca>
> *Subject: *Re: [petsc-maint] Assistance Needed with PETSc KSPSolve
> Performance Issue
>
>
> On Wed, Jun 12, 2024 at 6:36 PM Yongzhong Li <
> yongzhong...@mail.utoronto.ca> wrote:
>
>
> Dear PETSc’s developers,
>
> I hope this email finds you well.
>
> I am currently working on a project using PETSc and have encountered a
> performance issue with the KSPSolve function. Specifically, *I have
> noticed that the time taken by **KSPSolve** is **almost two times **greater
> than the CPU time for matrix-vector product multiplied by the number of
> iteration steps*. I use C++ chrono to record CPU time.
>
> For context, I am using a shell system matrix A. Despite my efforts to
> parallelize the matrix-vector product (Ax), the overall solve time
> remains higher than the matrix vector product per iteration indicates
> when multiple threads were used. Here are a few details of my setup:
>
>- *Matrix Type*: Shell system matrix
>- *Preconditioner*: Shell PC
>- *Parallel Environment*: Using Intel MKL as PETSc’s BLAS/LAPACK
>library, multithreading is enabled
>
> I have considered several potential reasons, such as preconditioner setup,
> additional solver operations, and the inherent overhead of using a shell
> system matrix. *However, since KSPSolve is a high-level API, I 

Re: [petsc-users] [petsc-maint] Assistance Needed with PETSc KSPSolve Performance Issue

2024-06-14 Thread Matthew Knepley
PETSc itself only takes 47% of the runtime. I am not sure what is happening
for the other half. For the PETSc half, it is all in the solve:

KSPSolve  20 1.0 5.3323e+03 1.0 1.01e+14 1.0 0.0e+00 0.0e+00
0.0e+00 47 100  0  0  0  47 100  0  0  0 18943

About 2/3 of that is matrix operations (I don't know where you are using LU)

MatMult            19960 1.0 2.1336e+03 1.0 8.78e+13 1.0 0.0e+00 0.0e+00
0.0e+00 19 87  0  0  0  19 87  0  0  0 41163
MatMultAdd        152320 1.0 8.4854e+02 1.0 3.60e+13 1.0 0.0e+00 0.0e+00
0.0e+00  7 35  0  0  0   7 35  0  0  0 42442
MatSolve            6600 1.0 9.0724e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
0.0e+00  8  0  0  0  0   8  0  0  0  0 0

and 1/3 is vector operations for orthogonalization in GMRES:

KSPGMRESOrthog  3290 1.0 1.2390e+03 1.0 8.77e+12 1.0 0.0e+00 0.0e+00
0.0e+00 11  9  0  0  0  11  9  0  0  0  7082
VecMAXPY   13220 1.0 1.7894e+03 1.0 9.02e+12 1.0 0.0e+00 0.0e+00
0.0e+00 16  9  0  0  0  16  9  0  0  0  5040

The flop rates do not look crazy, but I do not know what kind of hardware
you are running on.

  Thanks,

 Matt

On Fri, Jun 14, 2024 at 1:20 AM Yongzhong Li 
wrote:

>
> Thanks, I have attached the results without using any KSPGuess. At low
> frequency, the iteration steps are quite close to the one with KSPGuess,
> specifically
>
>   KSPGuess Object: 1 MPI process
>
> type: fischer
>
> Model 1, size 200
>
> However, I found that at higher frequency the # of iteration steps is
> significantly higher than the one with KSPGuess. I have attached both of the
> results for your reference.
>
> Moreover, could I ask why the one without the KSPGuess options can be used
> for a baseline comparison? What are we comparing here? How does it relate
> to the performance issue/bottleneck I found? “*I have noticed that the
> time taken by **KSPSolve is **almost two times greater than the CPU time
> for matrix-vector product multiplied by the number of iteration*”
>
> Thank you!
> Yongzhong
>
>
>
> *From: *Barry Smith 
> *Date: *Thursday, June 13, 2024 at 2:14 PM
> *To: *Yongzhong Li 
> *Cc: *petsc-users@mcs.anl.gov ,
> petsc-ma...@mcs.anl.gov , Piero Triverio <
> piero.trive...@utoronto.ca>
> *Subject: *Re: [petsc-maint] Assistance Needed with PETSc KSPSolve
> Performance Issue
>
>
>
>   Can you please run the same thing without the  KSPGuess option(s) for a
> baseline comparison?
>
>
>
>Thanks
>
>
>
>Barry
>
>
>
> On Jun 13, 2024, at 1:27 PM, Yongzhong Li 
> wrote:
>
>
>
>
> Hi Matt,
>
> I have rerun the program with the keys you provided. The system output
> when performing ksp solve and the final petsc log output were stored in a
> .txt file attached for your reference.
>
> Thanks!
> Yongzhong
>
>
>
> *From: *Matthew Knepley 
> *Date: *Wednesday, June 12, 2024 at 6:46 PM
> *To: *Yongzhong Li 
> *Cc: *petsc-users@mcs.anl.gov ,
> petsc-ma...@mcs.anl.gov , Piero Triverio <
> piero.trive...@utoronto.ca>
> *Subject: *Re: [petsc-maint] Assistance Needed with PETSc KSPSolve
> Performance Issue
>
>
> On Wed, Jun 12, 2024 at 6:36 PM Yongzhong Li <
> yongzhong...@mail.utoronto.ca> wrote:
>
>
> Dear PETSc’s developers,
>
> I hope this email finds you well.
>
> I am currently working on a project using PETSc and have encountered a
> performance issue with the KSPSolve function. Specifically, *I have
> noticed that the time taken by **KSPSolve is **almost two times greater
> than the CPU time for matrix-vector product multiplied by the number of

Re: [petsc-users] [petsc-maint] Assistance Needed with PETSc KSPSolve Performance Issue

2024-06-12 Thread Matthew Knepley
On Wed, Jun 12, 2024 at 6:36 PM Yongzhong Li 
wrote:

>
> Dear PETSc’s developers,
>
> I hope this email finds you well.
>
> I am currently working on a project using PETSc and have encountered a
> performance issue with the KSPSolve function. Specifically, *I have
> noticed that the time taken by **KSPSolve** is **almost two times **greater
> than the CPU time for matrix-vector product multiplied by the number of
> iteration steps*. I use C++ chrono to record CPU time.
>
> For context, I am using a shell system matrix A. Despite my efforts to
> parallelize the matrix-vector product (Ax), the overall solve time
> remains higher than the matrix vector product per iteration indicates
> when multiple threads were used. Here are a few details of my setup:
>
>- *Matrix Type*: Shell system matrix
>- *Preconditioner*: Shell PC
>- *Parallel Environment*: Using Intel MKL as PETSc’s BLAS/LAPACK
>library, multithreading is enabled
>
> I have considered several potential reasons, such as preconditioner setup,
> additional solver operations, and the inherent overhead of using a shell
> system matrix. *However, since KSPSolve is a high-level API, I have been
> unable to pinpoint the exact cause of the increased solve time.*
>
> Have you observed the same issue? Could you please provide some experience
> on how to diagnose and address this performance discrepancy? Any insights
> or recommendations you could offer would be greatly appreciated.
>

For any performance question like this, we need to see the output of your
code run with

  -ksp_view -ksp_monitor_true_residual -ksp_converged_reason -log_view
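
If parts of your code run outside KSPSolve (setup, the internals of your shell
MatMult, I/O), it also helps to put the solve in its own logging stage so the
-log_view table separates it cleanly. A small sketch (variable names assumed):

```
PetscLogStage solve_stage;

PetscCall(PetscLogStageRegister("KSP solve", &solve_stage));
PetscCall(PetscLogStagePush(solve_stage));
PetscCall(KSPSolve(ksp, b, x));
PetscCall(PetscLogStagePop());
```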

  Thanks,

 Matt


> Thank you for your time and assistance.
>
> Best regards,
>
> Yongzhong
>
> ---
>
> *Yongzhong Li*
>
> PhD student | Electromagnetics Group
>
> Department of Electrical & Computer Engineering
>
> University of Toronto
>
> https://urldefense.us/v3/__http://www.modelics.org__;!!G_uCfscf7eWS!eMuXWvayLIhrQweHZY95IfQMST6PiUiLEskCz9WUy0pb9bazMdyoLiAyZh_l80blSuxXwO5yN7vzdEzWkCL8$
>  
> 
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eMuXWvayLIhrQweHZY95IfQMST6PiUiLEskCz9WUy0pb9bazMdyoLiAyZh_l80blSuxXwO5yN7vzdEsBefqt$
  



Re: [petsc-users] 2^32 integer problems

2024-06-02 Thread Matthew Knepley
On Sun, Jun 2, 2024 at 10:27 AM Matthew Knepley  wrote:

> On Sat, Jun 1, 2024 at 11:39 PM Carpenter, Mark H. (LARC-D302) via
> petsc-users  wrote:
>
>>
>> Mark Carpenter,  NASA Langley.
>>
>>
>>
>> I am a novice PETSC user of about 10 years. I’ve built a DG-FEM code
>> with petsc as one of the solver paths (I have my own as well).
>> Furthermore, I use petsc for MPI communication.
>>
>>
>>
>> I’m running the DG-FEM code on our NAS supercomputer.  Everything works
>> when my integer sizes are small.  When I exceed the 2^32 limit of integer
>> arithmetic the code fails in very strange ways.
>>
>> The users that originally set up the petsc infrastructure in the code are
>> no longer at NASA and I’m “dead in the water”.
>>
>
One additional point. I have looked at the error message. When you make
PETSc calls, each call should be wrapped in PetscCall(). Here is a Fortran
example:


https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/ksp/tutorials/ex22f.F90?ref_type=heads__;!!G_uCfscf7eWS!eOkbaTOpui-YHhrX_HYLmYerXOaaGtlJn04-tdLvQzfRqa6gaCs2x-YtPn7xNTWzRRgD-wze7GkX5hkXqc8i$
 

This checks the return value after each call and ends early if there is an
error. It would make your
error output much more readable.

  Thanks,

 Matt


>
>>
>> I think I’ve promoted all the integers that  are problematic in my code
>> (F95).  On PETSC side:  I’ve tried
>>
>>1. Reinstall petsc with –with-64-bit-integers  (no luck)
>>
>>
> That option does not exist, so this will not work.
>
>
>>
>>1.
>>2. Reinstall petsc with –with-64-bit-integers and
>>–with-64-bit-indices  (code will not compile with these options.
>>Additional variables on F90 side require promotion and then the errors
>>cascade through code  when making PETSC calls.
>>
>>
> We should fix this. I feel confident we can get the code to compile.
>
>
>>
>>1.
>>2. It’s possible that I’ve missed offending integers, but the petsc
>>error messages are so cryptic that I can’t even tell where it is failing.
>>
>>
>>
>> Further complicating matters:
>>
>> The problem by definition needs to be HUGE.  Problem sizes requiring 1000
>> cores (10^6 elements at P5) are needed to experience the errors, which
>> involves waiting in queues for ½ day at least.
>>
>>
>>
>> Attached are the
>>
>>1. Install script used to install PETSC on our machine
>>2. The Makefile used on the fortran side
>>3. A data dump from an offending simulation (which is huge and I
>>can’t see any useful information.)
>>
>>
>>
>> How do I attack this problem.
>>
>> (I’ve never gotten debugging working properly).
>>
>
> Let's get the install for 64-bit indices to work. So we
>
> 1) Configure PETSc adding --with-64bit-indices to the configure line. Does
> this work? If not, send configure.log
>
> 2) Compile PETSc. Does this work? If not, send make.log
>
> 3) Compile your code. Does this work? If not, send all output.
>
> 4) Do one of the 1/2 day runs and let us know what happens. An alternative
> is to run a small number
> of processes on a large memory workstation. We do this to test at the
> lab.
>
>   Thanks,
>
>  Matt
>
>
>> Mark
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eOkbaTOpui-YHhrX_HYLmYerXOaaGtlJn04-tdLvQzfRqa6gaCs2x-YtPn7xNTWzRRgD-wze7GkX5gB4gnrA$
>  
> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eOkbaTOpui-YHhrX_HYLmYerXOaaGtlJn04-tdLvQzfRqa6gaCs2x-YtPn7xNTWzRRgD-wze7GkX5r8KJKDw$
>  >
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eOkbaTOpui-YHhrX_HYLmYerXOaaGtlJn04-tdLvQzfRqa6gaCs2x-YtPn7xNTWzRRgD-wze7GkX5gB4gnrA$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eOkbaTOpui-YHhrX_HYLmYerXOaaGtlJn04-tdLvQzfRqa6gaCs2x-YtPn7xNTWzRRgD-wze7GkX5r8KJKDw$
 >


Re: [petsc-users] 2^32 integer problems

2024-06-02 Thread Matthew Knepley
On Sat, Jun 1, 2024 at 11:39 PM Carpenter, Mark H. (LARC-D302) via
petsc-users  wrote:

>
> Mark Carpenter,  NASA Langley.
>
>
>
> I am a novice PETSC user of about 10 years. I’ve built a DG-FEM code
> with petsc as one of the solver paths (I have my own as well).
> Furthermore, I use petsc for MPI communication.
>
>
>
> I’m running the DG-FEM code on our NAS supercomputer.  Everything works
> when my integer sizes are small.  When I exceed the 2^32 limit of integer
> arithmetic the code fails in very strange ways.
>
> The users that originally set up the petsc infrastructure in the code are
> no longer at NASA and I’m “dead in the water”.
>
>
>
> I think I’ve promoted all the integers that  are problematic in my code
> (F95).  On PETSC side:  I’ve tried
>
>1. Reinstall petsc with –with-64-bit-integers  (no luck)
>
>
That option does not exist, so this will not work.


>
>1.
>2. Reinstall petsc with –with-64-bit-integers and
>–with-64-bit-indices  (code will not compile with these options.
>Additional variables on F90 side require promotion and then the errors
>cascade through code  when making PETSC calls.
>
>
We should fix this. I feel confident we can get the code to compile.


>
>1.
>2. It’s possible that I’ve missed offending integers, but the petsc
>error messages are so cryptic that I can’t even tell where it is failing.
>
>
>
> Further complicating matters:
>
> The problem by definition needs to be HUGE.  Problem sizes requiring 1000
> cores (10^6 elements at P5) are needed to experience the errors, which
> involves waiting in queues for ½ day at least.
>
>
>
> Attached are the
>
>1. Install script used to install PETSC on our machine
>2. The Makefile used on the fortran side
>3. A data dump from an offending simulation (which is huge and I can’t
>see any useful information.)
>
>
>
> How do I attack this problem.
>
> (I’ve never gotten debugging working properly).
>

Let's get the install for 64-bit indices to work. So we

1) Configure PETSc adding --with-64bit-indices to the configure line. Does
this work? If not, send configure.log

2) Compile PETSc. Does this work? If not, send make.log

3) Compile your code. Does this work? If not, send all output.

4) Do one of the 1/2 day runs and let us know what happens. An alternative
is to run a small number
of processes on a large memory workstation. We do this to test at the
lab.

  Thanks,

 Matt


> Mark
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!auF9rrBGDlsDNKTGGczofe7W5jFe6xzdNRcYh93Hu_48IDvf_AkLauQ1sfAdN5qS_ENmKo_z_6HeyVJBTACI$
  



Re: [petsc-users] petsc4py and PCMGGetSmoother

2024-05-28 Thread Matthew Knepley
On Tue, May 28, 2024 at 11:04 AM Jose E. Roman via petsc-users <
petsc-users@mcs.anl.gov> wrote:

>
> It should be: smoother = pc.getMGSmoother(0)
> The general rule is to drop the class name and move "Get" or "Set" to the 
> front (in small letters). But sometimes this does not hold. It is better that 
> you check the source code, for instance do "git grep PCMGGetSmoother -- 
> src/binding" to locate where the C function is called from within petsc4py.
>
> And we now also have documentation:

  
https://urldefense.us/v3/__https://petsc.org/release/petsc4py/reference/petsc4py.PETSc.PC.html__;!!G_uCfscf7eWS!c3p1NNyTOWrr2Svr5gbJxnbWYQ_UjM4v52aqXrRsVOsL0rBd5lggGW0QXDAzJEo_DuEtmLvXLsrXk1mFzxO5$
 

  Thanks,

 Matt

> Jose
>
> > On 28 May 2024, at 16:38, Klaij, Christiaan  wrote:
> >
> > I'm attempting some rapid prototyping with petsc4py. The idea is basically 
> > to read-in a matrix and rhs, setup GAMG as ksppreonly, get the smoother and 
> > overrule it with a python function of my own, similar to the demo where the 
> > Jacobi method is user-defined in python. So far I have something like this:
> >
> > pc = PETSc.PC().create()
> > pc.setOperators(A)
> > pc.setType(PETSc.PC.Type.GAMG)
> > pc.PCMGGetSmoother(l=0,ksp=smoother)
> >
> > which triggers the following error:
> >
> > AttributeError: 'petsc4py.PETSc.PC' object has no attribute 
> > 'PCMGGetSmoother'
> >
> > 1) Am I doing something wrong, or is this function just not available 
> > through python?
> >
> > 2) How can I tell up front if a function is available, only if it is listed 
> > in libpetsc4py.pyx?
> >
> > 3) Given the function description in C from the manual pages, how can I 
> > deduce the python syntax?
> > (perhaps it's supposed to be ksp = pc.PCMGGetSmoother(l=0), or something 
> > else entirely)
> >
> > Thanks for your help,
> > dr. ir. Christiaan  Klaij  |  Senior Researcher  |  Research & Development
> > T +31 317 49 33 44  |   c.kl...@marin.nl | 
> > https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!ZTfC2k6ASlYOqzjxlaly2X8L-9NS8fzzLfjyqZtXWmY8PjiE5RBTDhE92LAZnY0I2cIn-iUK5vuRYHqeVTaOpoyb$
>
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c3p1NNyTOWrr2Svr5gbJxnbWYQ_UjM4v52aqXrRsVOsL0rBd5lggGW0QXDAzJEo_DuEtmLvXLsrXk9nFHPiP$
  



Re: [petsc-users] Diagnosing Convergence Issue in Fieldsplit Problem

2024-05-23 Thread Matthew Knepley
I put in stuff to propagate the nullspace if you use DM.

   Matt
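
For reference, the manual route discussed below is to provide the splits
yourself instead of relying on -pc_fieldsplit_detect_saddle_point, and to hang
the constant null space on the second (pressure) IS. A sketch, with placeholder
IS names; please verify the "nullspace" composition against the
PCFieldSplitSetIS documentation for your PETSc version:

```
MatNullSpace nsp;

PetscCall(PCFieldSplitSetIS(pc, "u", is_u));
PetscCall(PCFieldSplitSetIS(pc, "p", is_p));
PetscCall(MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp));
/* PCFIELDSPLIT can pick up a null space composed on the IS and attach it to the Schur complement */
PetscCall(PetscObjectCompose((PetscObject)is_p, "nullspace", (PetscObject)nsp));
PetscCall(MatNullSpaceDestroy(&nsp)); /* the compose holds its own reference */
```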

On Thu, May 23, 2024 at 11:04 PM Barry Smith  wrote:

>
>
> On May 23, 2024, at 3:48 PM, Stefano Zampini 
> wrote:
>
> the null space of the Schur complement is the restriction of the original
> null space. I guess if fieldsplit is Schur type then we could in principle
> extract the sub vectors and renormalize them
>
>
>Is this true if A is singular?   Or are you assuming the Schur
> complement form is only used if A is nonsingular? Would the user need to
> somehow indicate A is nonsingular?
>
>
>
>
> On Thu, May 23, 2024, 22:13 Jed Brown  wrote:
>
>>
>> Barry Smith  writes:
>>
>> >Unfortunately it cannot automatically because 
>> > -pc_fieldsplit_detect_saddle_point just grabs part of the matrix (having 
>> > no concept of "what part" so doesn't know to grab the null space 
>> > information.
>> >
>> >It would be possible for PCFIELDSPLIT to access the null space of the 
>> > larger matrix directly as vectors and check if they are all zero in the 00 
>> > block, then it would know that the null space only applied to the second 
>> > block and could use it for the Schur complement.
>> >
>> >Matt, Jed, Stefano, Pierre does this make sense?
>>
>> I think that would work (also need to check that the has_cnst flag is 
>> false), though if you've gone to the effort of filling in that Vec, you 
>> might as well provide the IS.
>>
>> I also wonder if the RHS is consistent.
>>
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cl36t5n0xoPzg3CNPvl8rF34swIpHS7UzkeJ7NGQpRF4ZxJeeQqaIuD7nTY5hCChJbFlaXCbk0pIovfR2pFG$
  



Re: [petsc-users] Modify matrix nonzero structure

2024-05-20 Thread Matthew Knepley
On Sun, May 19, 2024 at 11:25 PM Barry Smith  wrote:

>
>   Certainly missing Jacobian entries can dramatically change the Newton
> direction and hence the convergence. Even if the optimal (in time) setup
> skips some Jacobian entries it is always good to have runs with all the
> entries to see the "best possible" convergence.
>

Let me expand on this. If you are missing Jacobian entries, one option is
to use the -snes_mf_operator mode for the solve. In this mode, you provide
an approximate Jacobian that is used to generate the preconditioner, but
the action of the Jacobian is calculated by finite differences using the
residual function. Thus, the Jaccbian "matrix" should be consistent with
the residual, but the preconditioner is approximate.
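
In code this just means your Jacobian routine fills only the preconditioning
matrix; a sketch (the names r, P, FormResidual, FormApproxJacobian, and user
are placeholders):

```
PetscCall(SNESSetFunction(snes, r, FormResidual, &user));
PetscCall(SNESSetJacobian(snes, P, P, FormApproxJacobian, &user));
/* same effect as the -snes_mf_operator command-line option */
PetscCall(SNESSetUseMatrixFree(snes, PETSC_TRUE, PETSC_FALSE));
```

With this, the Krylov method applies the true Jacobian action by differencing
FormResidual, so the missing couplings still enter the Newton direction, while
P (with the missing entries) is only used to build the preconditioner.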

  Thanks,

  Matt


>   Barry
>
>
> On May 19, 2024, at 10:44 PM, Adrian Croucher 
> wrote:
>
> Great, it sounds like this might be easier than I expected. Thanks very
> much.
>
> Did you have any thoughts on my diagnosis of the problem (the poor
> nonlinear solver convergence being caused by missing Jacobian elements
> representing interaction between the sources)?
>
> - Adrian
> On 20/05/24 12:41 pm, Matthew Knepley wrote:
>
> On Sun, May 19, 2024 at 8:25 PM Barry Smith  wrote:
>
>>
>>You can call MatSetOption(mat,MAT_NEW_NONZERO_LOCATION_ERR) then
>> insert the new values. If it is just a handful of new insertions the extra
>> time should be small.
>>
>> Making a copy of the matrix won't give you a new matrix that is any
>> faster to insert into so best to just use the same matrix.
>>
>
> Let me add to Barry's answer. The preallocation infrastructure is now not
> strictly necessary. It is possible to just add all your nonzeros in and
> assembly,  and the performance will be pretty good (uses hashing etc). So
> if just adding a few nonzeros does not work, we can go this route.
>
>   Thanks,
>
>  Matt
>
>
>>   Barry
>>
>>
>> On May 19, 2024, at 7:44 PM, Adrian Croucher 
>> wrote:
>>
>>
>> hi,
>>
>> I have a Jacobian matrix created using DMCreateMatrix(). What would be
>> the best way to add extra nonzero entries into it?
>>
>> I'm guessing that DMCreateMatrix() allocates the storage so the nonzero
>> structure can't really be easily modified. Would it be a case of
>> creating a new matrix, copying the nonzero entries from the original one
>> and then adding the extra ones, before calling MatSetUp() or similar? If
>> so, how exactly would you copy the nonzero structure from the original
>> matrix?
>>
>> Background: the flow problem I'm solving (on a DMPlex with finite volume
>> method) has complex source terms that depend on the solution (e.g.
>> pressure), and can also depend on other source terms. A simple example
>> is when fluid is extracted from one location, with a pressure-dependent
>> flow rate, and some of it is then reinjected in another location. This
>> can result in poor nonlinear solver convergence. I think the reason is
>> that there are effectively missing Jacobian entries in the row for the
>> reinjection cell, which should have an additional dependence on the
>> solution in the cell where fluid is extracted.
>>
>> - Adrian
>>
>>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eoJE8Eq9V_WNo4aDVL3rN9-rq3xKlASAxI-BbyITAUnKZ2Se208TWSxPRfxptNTBq0ZDwJ7rPdCyNyy2-i4i$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eoJE8Eq9V_WNo4aDVL3rN9-rq3xKlASAxI-BbyITAUnKZ2Se208TWSxPRfxptNTBq0ZDwJ7rPdCyN0s5AfvC$
 >


Re: [petsc-users] Modify matrix nonzero structure

2024-05-19 Thread Matthew Knepley
On Sun, May 19, 2024 at 8:25 PM Barry Smith  wrote:

>
>You can call MatSetOption(mat,MAT_NEW_NONZERO_LOCATION_ERR) then insert
> the new values. If it is just a handful of new insertions the extra time
> should be small.
>
> Making a copy of the matrix won't give you a new matrix that is any
> faster to insert into so best to just use the same matrix.
>

Let me add to Barry's answer. The preallocation infrastructure is now not
strictly necessary. It is possible to just add all your nonzeros in and
assembly,  and the performance will be pretty good (uses hashing etc). So
if just adding a few nonzeros does not work, we can go this route.
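
A small self-contained sketch of that route (the sizes and the tridiagonal fill
are only for illustration); note also that if you keep the DMCreateMatrix()
matrix, MatSetOption() with MAT_NEW_NONZERO_LOCATION_ERR set to PETSC_FALSE
lets you insert the few extra locations without triggering an error:

```
Mat      J;
PetscInt N = 100, rstart, rend;

PetscCall(MatCreate(PETSC_COMM_WORLD, &J));
PetscCall(MatSetSizes(J, PETSC_DECIDE, PETSC_DECIDE, N, N));
PetscCall(MatSetFromOptions(J));
PetscCall(MatSetUp(J)); /* no preallocation given: hash-based insertion is used */
PetscCall(MatGetOwnershipRange(J, &rstart, &rend));
for (PetscInt i = rstart; i < rend; ++i) {
  PetscCall(MatSetValue(J, i, i, 2.0, INSERT_VALUES));
  if (i > 0) PetscCall(MatSetValue(J, i, i - 1, -1.0, INSERT_VALUES));
  if (i < N - 1) PetscCall(MatSetValue(J, i, i + 1, -1.0, INSERT_VALUES));
}
PetscCall(MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY));
```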

  Thanks,

 Matt


>   Barry
>
>
> On May 19, 2024, at 7:44 PM, Adrian Croucher 
> wrote:
>
>
> hi,
>
> I have a Jacobian matrix created using DMCreateMatrix(). What would be
> the best way to add extra nonzero entries into it?
>
> I'm guessing that DMCreateMatrix() allocates the storage so the nonzero
> structure can't really be easily modified. Would it be a case of
> creating a new matrix, copying the nonzero entries from the original one
> and then adding the extra ones, before calling MatSetUp() or similar? If
> so, how exactly would you copy the nonzero structure from the original
> matrix?
>
> Background: the flow problem I'm solving (on a DMPlex with finite volume
> method) has complex source terms that depend on the solution (e.g.
> pressure), and can also depend on other source terms. A simple example
> is when fluid is extracted from one location, with a pressure-dependent
> flow rate, and some of it is then reinjected in another location. This
> can result in poor nonlinear solver convergence. I think the reason is
> that there are effectively missing Jacobian entries in the row for the
> reinjection cell, which should have an additional dependence on the
> solution in the cell where fluid is extracted.
>
> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZAicuPSGpbQ0tcP9ZNf7TPN7b-DE4XuRcqUtAUEsr_yk9p2EAWn5wSAIdkxE7SYHsDeBftI_qWDRw8aUfXLK$
  



Re: [petsc-users] Help with Integrating PETSc into Fortran Groundwater Flow Simulation Code

2024-05-18 Thread Matthew Knepley
On Sat, May 18, 2024 at 7:29 PM Shatanawi, Sawsan Muhammad <
sawsan.shatan...@wsu.edu> wrote:

> Hello everyone,
>
> Thank you all for your feedback, it helped me a lot.
>
> I read the PETSc document and examples related to the Jacobian and
> modified my code.
> Now I am getting errors related to the memory access.  I tried to debug
> but couldn't find out how to fix it.
>

I think the right way to find the bug is to start with either

a) Valgrind 
(https://urldefense.us/v3/__http://www.valgrind.org__;!!G_uCfscf7eWS!an0a5TBb_2F0Yl9de3r2Rr5Lde7DTjTZJaHL4BLVhBks6EHjOrKhtOsm9NFyMydeBwa6UGgfAXF46n-sW_2y$
 )

or

b) Address Sanitizer, which is a feature of the clang compiler
(-fsanitize=address)

You can also run with -malloc_debug in PETSc, which can detect some errors.

  Thanks,

  Matt


> sshatanawi/SawSim: Sawsan's Simulation (SawSim) is a groundwater dynamics
> scheme to be integrated into Noah MP land surface model (github.com)
> <https://urldefense.us/v3/__https://github.com/sshatanawi/SawSim__;!!G_uCfscf7eWS!an0a5TBb_2F0Yl9de3r2Rr5Lde7DTjTZJaHL4BLVhBks6EHjOrKhtOsm9NFyMydeBwa6UGgfAXF46t8nZWfw$
>  >   is the link to the code in the
> GitHub, I would appreciate it if you could have a look at it and guide me
> how to fix it.
> I believe the problem is in the memory of LHS and res_vector because they
> are new vectors I created.
>
> Thank you in advance for your help, I really appreciate it
>
> Bests,
> Sawsan
> --
> *From:* Matthew Knepley 
> *Sent:* Saturday, May 11, 2024 1:56 AM
> *To:* Shatanawi, Sawsan Muhammad 
> *Cc:* Barry Smith ; petsc-users@mcs.anl.gov <
> petsc-users@mcs.anl.gov>
> *Subject:* Re: [petsc-users] Help with Integrating PETSc into Fortran
> Groundwater Flow Simulation Code
>
>
> On Fri, May 10, 2024 at 6:30 PM Shatanawi, Sawsan Muhammad via petsc-users
>  wrote:
>
> Good afternoon,
>
> I have tried SNESComputeJacobianDefaultColor(), but the arguments needed
> are confusing me.
>
> Would you please have a look at my code and the error messages I am
> getting?
> I didn't understand what the nonzero values of the sparse Jacobian would
> be.
>
>
> 1) You are not checking any of the return values from PETSc calls. Look at
> the PETSc Fortran examples.
>  You  should wrap all PETSc calls in PetscCall() or PetscCallA().
>
> 2) You are not intended to directly
> call SNESComputeJacobianDefaultColor(). PETSc will do this automatically if
> you do not set a Jacobian function.
>
> 3) As Barry points out, coloring will not work unless we understand the
> nonzero structure of your Jacobian. This can happen by either:
>
>   a) Using a DM: This is the easiest. Find the type that matches your
> grid, or
>
>   b) Preallocating your Jacobian: Here you give the nonzero structure of
> your Jacobian, but not the values.
>Currently you do not do this. Instead you just give the size, not
> the nonzero structure (columns for
>each row).
>
>   Thanks,
>
>  Matt
>
>
> Thank you for your patience and help
> Bests,
> Sawsan
>
>
> --
> *From:* Barry Smith 
> *Sent:* Thursday, May 9, 2024 12:05 PM
> *To:* Shatanawi, Sawsan Muhammad 
> *Cc:* petsc-users@mcs.anl.gov 
> *Subject:* Re: [petsc-users] Help with Integrating PETSc into Fortran
> Groundwater Flow Simulation Code
>
>
> *[EXTERNAL EMAIL]*
>
>
> On May 9, 2024, at 2:52 PM, Shatanawi, Sawsan Muhammad via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Hello everyone,
>
> Thank you for your responses and feedback,
>
> I checked PFLOTRAN and found that it is a model to simulate groundwater
> flow, contaminant transport, and other subsurface processes.
> my goal is not to simulate the groundwater flow, my goal is to develop a
> code from scratch to simulate the groundwater flow with specific
> conditions, and then integrate this code with land surface models.
> Later, the simulation of this code will be on a large scale.
>
> I want PETSc to calculate the Jacobian because the system is large and has
> complex nonlinear behavior, and I don’t risk calculating the derivative by
> m

Re: [petsc-users] GMSH entities

2024-05-14 Thread Matthew Knepley
On Tue, May 14, 2024 at 9:07 AM Matthew Knepley  wrote:

> On Mon, May 13, 2024 at 10:04 PM Adrian Croucher <
> a.crouc...@auckland.ac.nz> wrote:
>
>> On 14/05/24 1:44 pm, Matthew Knepley wrote:
>>
>> I wish GMsh was clearer about what is optional:
>> https://urldefense.us/v3/__https://gmsh.info/doc/texinfo/gmsh.html*MSH-file-format__;Iw!!G_uCfscf7eWS!fewn3DodeIVBmUE15mfIIEWSAHMhFjmoORD8nz_G4QnsaxhItWNFYbe4Lm0noV3Fuzj8ep504ww-M4Z0oroq$
>>  
>> They do talk about it, but not exhaustively. GMsh always writes an
>> $Entities block from what I can tell.
>> I can make it optional, it just might take until after the PETSc Meeting.
>>
>> Looks like $Entities are optional:
>>
>>
>> https://urldefense.us/v3/__https://gitlab.onelab.info/gmsh/gmsh/-/commit/b5feba2af57181ffa946d3f0c494b014603c6efa__;!!G_uCfscf7eWS!fewn3DodeIVBmUE15mfIIEWSAHMhFjmoORD8nz_G4QnsaxhItWNFYbe4Lm0noV3Fuzj8ep504ww-M57ksfou$
>>  
>>
>> I can also load a GMSH 4.1 file without $Entities into GMSH itself and it
>> doesn't complain, suggesting that they are indeed optional.
>>
>> Yes, but they are not careful to specify when a file can be inconsistent.
> For instance, omitting the $Entities, but then specifying entity numbers in
> the $Nodes block. I think they also thought this was inconsistent, but then
> got user complaints. The minimal example they show does exactly this.
>
>> If the $Entities aren't strictly needed for anything in DMPlex (which I'm
>> guessing they aren't, as the GMSH file format 2.2 doesn't even have them)
>> then it would be useful not to require them.
>>
> I put in some code for this:
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7546__;!!G_uCfscf7eWS!fewn3DodeIVBmUE15mfIIEWSAHMhFjmoORD8nz_G4QnsaxhItWNFYbe4Lm0noV3Fuzj8ep504ww-M-tMQE24$
>  
>
> It just ignores entity numbers when there is no section.
>

This merged, so now it should be fixed for you.

  Thanks,

 Matt


>   Thanks,
>
>  Matt
>
>> - Adrian
>>
>> --
>> Dr Adrian Croucher
>> Senior Research Fellow
>> Department of Engineering Science
>> Waipapa Taumata Rau / University of Auckland, New Zealand
>> email: a.crouc...@auckland.ac.nz
>> tel: +64 (0)9 923 4611
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fewn3DodeIVBmUE15mfIIEWSAHMhFjmoORD8nz_G4QnsaxhItWNFYbe4Lm0noV3Fuzj8ep504ww-M7SPCKFm$
>  
> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fewn3DodeIVBmUE15mfIIEWSAHMhFjmoORD8nz_G4QnsaxhItWNFYbe4Lm0noV3Fuzj8ep504ww-MzikdTxP$
>  >
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fewn3DodeIVBmUE15mfIIEWSAHMhFjmoORD8nz_G4QnsaxhItWNFYbe4Lm0noV3Fuzj8ep504ww-M7SPCKFm$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fewn3DodeIVBmUE15mfIIEWSAHMhFjmoORD8nz_G4QnsaxhItWNFYbe4Lm0noV3Fuzj8ep504ww-MzikdTxP$
 >


Re: [petsc-users] GMSH entities

2024-05-14 Thread Matthew Knepley
On Mon, May 13, 2024 at 10:04 PM Adrian Croucher 
wrote:

> On 14/05/24 1:44 pm, Matthew Knepley wrote:
>
> I wish GMsh was clearer about what is optional:
> https://urldefense.us/v3/__https://gmsh.info/doc/texinfo/gmsh.html*MSH-file-format__;Iw!!G_uCfscf7eWS!YJN5how37EbmfjwDvfPAsVSCQdWejJn8symxZ83hj94omk6Mh9imh2qOrFqZbsZRM_3W3G5YIn5lZK2KzQlj$
>  
> They do talk about it, but not exhaustively. GMsh always writes an
> $Entities block from what I can tell.
> I can make it optional, it just might take until after the PETSc Meeting.
>
> Looks like $Entities are optional:
>
>
> https://urldefense.us/v3/__https://gitlab.onelab.info/gmsh/gmsh/-/commit/b5feba2af57181ffa946d3f0c494b014603c6efa__;!!G_uCfscf7eWS!YJN5how37EbmfjwDvfPAsVSCQdWejJn8symxZ83hj94omk6Mh9imh2qOrFqZbsZRM_3W3G5YIn5lZMbTrLlQ$
>  
>
> I can also load a GMSH 4.1 file without $Entities into GMSH itself and it
> doesn't complain, suggesting that they are indeed optional.
>
> Yes, but they are not careful to specify when a file can be inconsistent.
For instance, omitting the $Entities, but then specifying entity numbers in
the $Nodes block. I think they also thought this was inconsistent, but then
got user complaints. The minimal example they show does exactly this.

> If the $Entities aren't strictly needed for anything in DMPlex (which I'm
> guessing they aren't, as the GMSH file format 2.2 doesn't even have them)
> then it would be useful not to require them.
>
I put in some code for this:
https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7546__;!!G_uCfscf7eWS!YJN5how37EbmfjwDvfPAsVSCQdWejJn8symxZ83hj94omk6Mh9imh2qOrFqZbsZRM_3W3G5YIn5lZHVFZ3e3$
 

It just ignores entity numbers when there is no section.

  Thanks,

 Matt

> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YJN5how37EbmfjwDvfPAsVSCQdWejJn8symxZ83hj94omk6Mh9imh2qOrFqZbsZRM_3W3G5YIn5lZFEALcjs$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YJN5how37EbmfjwDvfPAsVSCQdWejJn8symxZ83hj94omk6Mh9imh2qOrFqZbsZRM_3W3G5YIn5lZFz1kNMU$
 >


Re: [petsc-users] DMPlex periodic face coordinates

2024-05-14 Thread Matthew Knepley
On Tue, May 14, 2024 at 12:14 AM Matteo Semplice <
matteo.sempl...@uninsubria.it> wrote:

>
> Dear petsc-users,
>
> I am playing with DMPlexGetCellCoordinates and observing that it
> returns correct periodic coordinates for cells, but not for faces.
>
> More precisely, adding
>
> PetscCall(DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd));
> for (f = fStart; f < fEnd; ++f) {
>   const PetscScalar *array;
>   PetscScalar   *x = NULL;
>   PetscInt   ndof;
>   PetscBool  isDG;
>
>   PetscCall(DMPlexGetCellCoordinates(dm, f, &isDG, &ndof, &array, &x));
>   PetscCheck(ndof % cdim == 0, PETSC_COMM_SELF, PETSC_ERR_ARG_INCOMP,
> "ndof not divisible by cdim");
>   PetscCall(PetscPrintf(PETSC_COMM_SELF, "Face #%" PetscInt_FMT "
> coordinates\n", f - fStart));
>   for (PetscInt i = 0; i < ndof; i += cdim)
> PetscCall(PetscScalarView(cdim, &x[i], PETSC_VIEWER_STDOUT_SELF));
>   PetscCall(DMPlexRestoreCellCoordinates(dm, f, &isDG, &ndof, &array,
> &x));
> }
>
> to src/dm/impls/plex/tutorials/ex8.c, I get
>
> $ ./ex8 -dm_plex_dim 2 -petscspace_degree 1 -dm_plex_simplex 0
> -dm_plex_box_faces 3,2 -dm_plex_box_bd periodic,none -dm_view -view_coord
> DM Object: box 1 MPI process
>   type: plex
> box in 2 dimensions:
>   Number of 0-cells per rank: 9
>   Number of 1-cells per rank: 15
>   Number of 2-cells per rank: 6
> Periodic mesh (PERIODIC, NONE) coordinates localized
>
> [...]
>
> Element #2 coordinates
>  0:   6.6667e-01   0.e+00
>  0:   1.e+00   0.e+00  <<<- correct
>  0:   1.e+00   5.e-01
>  0:   6.6667e-01   5.e-01
> [...]
>
> Face #0 coordinates
>  0:   0.e+00   0.e+00
>  0:   3.e-01   0.e+00
> Face #1 coordinates
>  0:   3.e-01   0.e+00
>  0:   6.6667e-01   0.e+00
> Face #2 coordinates
>  0:   6.6667e-01   0.e+00
>  0:   0.e+00   0.e+00  <<< should be (0.66,0.00) and
> (1.00,0.00)
>
> Is there a way to recover correct periodic coordinates in the case of
> periodic DMPLex?
>
> The way that periodic coordinates work is that it stores a DG coordinate
field by cell. Faces default back to the vertices. You could think about
also putting DG coordinates on faces, but no one had asked, and it is
potentially expensive.

If you really need them to keep going, face coordinates can be extracted
from cell coordinates. Otherwise, I can do it after the PETSc Meeting. Or,
we are happy to take contributions adding this.
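
In case it helps in the meantime, here is a rough, untested sketch in C of that
extraction for a single face f. It assumes the localized (DG) cell coordinates
are ordered the same way the vertices appear in the cell closure; dm and f are
placeholders for your mesh and face point.

```
/* Rough, untested sketch: recover periodic coordinates for face f from one
   supporting cell, whose localized (DG) coordinates DMPlexGetCellCoordinates()
   already returns correctly. */
const PetscInt    *supp;
PetscInt          *cclos = NULL, *fclos = NULL, ncl, nfl, vStart, vEnd, cdim, ndof, i, j;
const PetscScalar *array;
PetscScalar       *cellX = NULL;
PetscBool          isDG;

PetscCall(DMGetCoordinateDim(dm, &cdim));
PetscCall(DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd));
PetscCall(DMPlexGetSupport(dm, f, &supp)); /* use the first supporting cell */
PetscCall(DMPlexGetCellCoordinates(dm, supp[0], &isDG, &ndof, &array, &cellX));
PetscCall(DMPlexGetTransitiveClosure(dm, supp[0], PETSC_TRUE, &ncl, &cclos));
PetscCall(DMPlexGetTransitiveClosure(dm, f, PETSC_TRUE, &nfl, &fclos));
for (i = 0; i < 2 * nfl; i += 2) { /* closures store (point, orientation) pairs */
  PetscInt fv = fclos[i], cv = 0;
  if (fv < vStart || fv >= vEnd) continue;
  for (j = 0; j < 2 * ncl; j += 2) {
    PetscInt v = cclos[j];
    if (v < vStart || v >= vEnd) continue;
    if (v == fv) PetscCall(PetscScalarView(cdim, &cellX[cv * cdim], PETSC_VIEWER_STDOUT_SELF));
    ++cv;
  }
}
PetscCall(DMPlexRestoreTransitiveClosure(dm, f, PETSC_TRUE, &nfl, &fclos));
PetscCall(DMPlexRestoreTransitiveClosure(dm, supp[0], PETSC_TRUE, &ncl, &cclos));
PetscCall(DMPlexRestoreCellCoordinates(dm, supp[0], &isDG, &ndof, &array, &cellX));
```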

  Thanks,

 Matt

> Thanks in advance
>
> Matteo
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z9-i-9O8yBcNimBXW2alc9XHsn9MWWFwaobajlfkDcoghfrtZlVX4CRMW0BdrXaX2hOabPupZN8g9FrtoNf2$
  



Re: [petsc-users] GMSH entities

2024-05-13 Thread Matthew Knepley
On Mon, May 13, 2024 at 9:33 PM Adrian Croucher 
wrote:

>
> hi,
>
> We often create meshes in GMSH format using the meshio library. This
> works OK if we stick to GMSH file format 2.2.
>
> If we use GMSH file format 4.1, DMPlex can't read them because it
> expects the "Entities" section to be present:
>
> [0]PETSC ERROR: Unexpected data in file
> [0]PETSC ERROR: File is not a valid Gmsh file, expecting $Entities, not
> $Nodes
> [0]PETSC ERROR: See 
> https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ZfBS1KM5EBZ7ZJIu6lBKFcclVMmXteXsW8m9HBEZ5tIf0u_3duEFt9eXKF7FcorQSAQqD5SbJrYh4C8rX676S_IMI_sp6naX$
>  for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision:
> v3.21.1-124-g2d06e2faec8  GIT Date: 2024-05-08 19:31:33 +
> [0]PETSC ERROR: waiwera on a main-debug named EN438880 by acro018 Tue
> May 14 11:25:54 2024
> [0]PETSC ERROR: Configure options --with-x --download-hdf5
> --download-zlib --download-netcdf --download-pnetcdf --download-exodusii
> --download-triangle --download-ptscotch --download-chaco --download-hypre
> [0]PETSC ERROR: #1 GmshExpect() at
> /home/acro018/software/PETSc/code/src/dm/impls/plex/plexgmsh.c:270
> [0]PETSC ERROR: #2 DMPlexCreateGmsh() at
> /home/acro018/software/PETSc/code/src/dm/impls/plex/plexgmsh.c:1608
> [0]PETSC ERROR: #3 DMPlexCreateGmshFromFile() at
> /home/acro018/software/PETSc/code/src/dm/impls/plex/plexgmsh.c:1469
> [0]PETSC ERROR: #4 DMPlexCreateFromFile() at
> /home/acro018/software/PETSc/code/src/dm/impls/plex/plexcreate.c:5804
>
> By default meshio doesn't seem to write the Entities section. From what
> I can gather, it is optional.
>
> Am I right that this section is not optional in DMPlex?
>
>
I wish GMsh was clearer about what is optional:
https://urldefense.us/v3/__https://gmsh.info/doc/texinfo/gmsh.html*MSH-file-format__;Iw!!G_uCfscf7eWS!fS2tI8aC3rUnGOv-LbLPj1PRyijAPB-EO54CwUBP6pNFD-VKeGE98pOHepNMSTP_krHBKTZNk92Bx01NrC-T$
 
They do talk about it, but not exhaustively. GMsh always writes an
$Entities block from what I can tell.
I can make it optional, it just might take until after the PETSc Meeting.

  Thanks,

Matt

- Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fS2tI8aC3rUnGOv-LbLPj1PRyijAPB-EO54CwUBP6pNFD-VKeGE98pOHepNMSTP_krHBKTZNk92Bx1gj7y2J$
  



Re: [petsc-users] [SLEPc] Best method to compute all eigenvalues of a MatShell

2024-05-13 Thread Matthew Knepley
On Mon, May 13, 2024 at 1:40 PM Sreeram R Venkat 
wrote:

> I have a MatShell object that computes matrix-vector products of a dense
> symmetric matrix of size NxN. The MatShell does not actually form the dense
> matrix, so it is never in memory/storage. For my application, N ranges from
> 1e4 to 1e5.
>
> I want to compute the full spectrum of this matrix. For an example with N
> ~1e4, I was able to use SLEPc's Krylov-Schur solver to get the spectrum in
> about 3 hours running on 6 A100 GPUs. There, I had set the MPD to 2000.
> Before moving on to larger matrices, I wanted to check whether this is the
> best way to go about it. I saw on other posts that for computing full
> spectra of dense matrices, it is generally better to go with
> LAPACK/SCALAPACK. Is the same true for MatShells of dense matrices? The
> matrix-vector products with the shell themselves are really cheap, so I can
> form the dense matrix with MatComputeOperator() and store it to later
> compute with another solver if needed. If SLEPc is not a bad option, what
> is a good way to select MPD/NCV?
>

You can select LAPACK through SLEPc. I would definitely try it since it is
easy. For the complete spectrum, it is likely to be faster (modulo how fast
KS converges).
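
As a rough sketch (in C; Ashell and the symmetric problem type are assumptions
about your setup), converting the shell to an explicit dense matrix and handing
it to the LAPACK solver could look like:

```
/* Hedged sketch: Ashell is your existing MatShell; names are illustrative. */
Mat A;   /* explicit dense copy of the shell operator */
EPS eps;

PetscCall(MatComputeOperator(Ashell, MATDENSE, &A)); /* N matvecs with the shell */
PetscCall(EPSCreate(PETSC_COMM_WORLD, &eps));
PetscCall(EPSSetOperators(eps, A, NULL));
PetscCall(EPSSetProblemType(eps, EPS_HEP));          /* symmetric problem */
PetscCall(EPSSetType(eps, EPSLAPACK));               /* or -eps_type lapack */
PetscCall(EPSSetFromOptions(eps));
PetscCall(EPSSolve(eps));
```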

  Thanks,

Matt


> I do need the full spectrum here since I am trying to analyze how the
> spectral decay changes for different problem configurations.
>
> Thanks for your help,
> Sreeram
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cLlaAsyk3G1fWIZGlTet7-bB1VMxzmbrmnjxP0IPtxxAA-mYn4vGvI9tzI2x7GXRMPqOjNBQZZrrDoH5JihG$
  



Re: [petsc-users] HDF5 time step count

2024-05-13 Thread Matthew Knepley
On Sun, May 12, 2024 at 10:42 PM Adrian Croucher 
wrote:

> hi Matt,
> On 11/05/24 4:12 am, Matthew Knepley wrote:
>
> Thanks. I tried it out and the error below was raised, looks like it's
>> when it tries to destroy the viewer. It still runs ok when
>> DMGetOutputSequenceLength() isn't called.
>>
> Sorry, it looks like I did not close the group. I pushed an update. If
> that is alright, I will merge it.
>
> Thanks, it does appear to work now. That's going to be very useful!
>
> Excellent! I will get it merged in.

  Thanks,

Matt

> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fa_sX8leJlX2EqUMrX8Mgt-V_CgjZo1q5KtjgUiqXJKLzHcUH1Ko3Xgwt49Ji5tcIxwutLtuTM1N_Y15u93u$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fa_sX8leJlX2EqUMrX8Mgt-V_CgjZo1q5KtjgUiqXJKLzHcUH1Ko3Xgwt49Ji5tcIxwutLtuTM1N_RFBcCyv$
 >


Re: [petsc-users] Help with Integrating PETSc into Fortran Groundwater Flow Simulation Code

2024-05-11 Thread Matthew Knepley
On Fri, May 10, 2024 at 6:30 PM Shatanawi, Sawsan Muhammad via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Good afternoon,
>
> I have tried SNESComputeJacobianDefaultColor(), but the arguments needed
> are confusing me.
>
> Would you please have a look at my code and the error messages I am
> getting?
> I didn't understand what the nonzero values of the sparse Jacobian would
> be.
>

1) You are not checking any of the return values from PETSc calls. Look at
the PETSc Fortran examples.
 You  should wrap all PETSc calls in PetscCall() or PetscCallA().

2) You are not intended to directly call SNESComputeJacobianDefaultColor().
PETSc will do this automatically if you do not set a Jacobian function.

3) As Barry points out, coloring will not work unless we understand the
nonzero structure of your Jacobian. This can happen by either:

  a) Using a DM: This is the easiest. Find the type that matches your grid,
or

  b) Preallocating your Jacobian: Here you give the nonzero structure of
your Jacobian, but not the values.
   Currently you do not do this. Instead you just give the size, not
the nonzero structure (columns for
   each row).
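
For concreteness, a minimal sketch of option (b), in C and assuming a
hypothetical 1D problem where row i couples only to columns i-1, i, i+1 (the
real column lists must come from your discretization):

```
/* Build only the nonzero PATTERN of the Jacobian by inserting explicit zeros,
   then let PETSc color it and finite-difference the values. */
Mat         J;
PetscInt    n = 100, rstart, rend, i;
PetscScalar zero[3] = {0.0, 0.0, 0.0};

PetscCall(MatCreate(PETSC_COMM_WORLD, &J));
PetscCall(MatSetSizes(J, PETSC_DECIDE, PETSC_DECIDE, n, n));
PetscCall(MatSetFromOptions(J));                      /* AIJ by default */
PetscCall(MatSeqAIJSetPreallocation(J, 3, NULL));     /* at most 3 nonzeros/row */
PetscCall(MatMPIAIJSetPreallocation(J, 3, NULL, 1, NULL));
PetscCall(MatGetOwnershipRange(J, &rstart, &rend));
for (i = rstart; i < rend; ++i) {
  PetscInt cols[3], nc = 0;
  if (i > 0) cols[nc++] = i - 1;
  cols[nc++] = i;
  if (i < n - 1) cols[nc++] = i + 1;
  PetscCall(MatSetValues(J, 1, &i, nc, cols, zero, INSERT_VALUES));
}
PetscCall(MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY));
/* Pass J to SNESSetJacobian() with no Jacobian callback and run with
   -snes_fd_color, so the coloring uses this sparsity pattern. */
```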

  Thanks,

 Matt


> Thank you for your patience and help
> Bests,
> Sawsan
>
>
> --
> *From:* Barry Smith 
> *Sent:* Thursday, May 9, 2024 12:05 PM
> *To:* Shatanawi, Sawsan Muhammad 
> *Cc:* petsc-users@mcs.anl.gov 
> *Subject:* Re: [petsc-users] Help with Integrating PETSc into Fortran
> Groundwater Flow Simulation Code
>
>
> *[EXTERNAL EMAIL]*
>
>
> On May 9, 2024, at 2:52 PM, Shatanawi, Sawsan Muhammad via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Hello everyone,
>
> Thank you for your responses and feedback,
>
> I checked PFLOTRAN and found that it is a model to simulate groundwater
> flow, contaminant transport, and other subsurface processes.
> my goal is not to simulate the groundwater flow, my goal is to develop a
> code from scratch to simulate the groundwater flow with specific
> conditions, and then integrate this code with land surface models.
> Later, the simulation of this code will be on a large scale.
>
> I want PETSc to calculate the Jacobian because the system is large and has
> complex nonlinear behavior, and I don’t risk calculating the derivative by
> myself.
> My A-Matrix has parts of source terms that depend on the flow fields, and
> independent parts will be in the RHS vector.
>
>
> With coloring SNESComputeJacobianDefaultColor() PETSc can compute
> Jacobians pretty efficiently. You do need to provide the residual function,
> and you need to provide the nonzero pattern of the sparse Jacobian; that is
> what residual components f_i are coupled to what input variables in the
> array x_i. This information comes from your PDE and discretization and
> appears implicitly in your residual function.
>
>   Barry
>
>
> I hope I have answered your questions, and I apologize that I wasn’t clear
> from the beginning, I was trying to keep my descriptions brief.
>
> Bests,
> Sawsan
> --
> *From:* Matthew Knepley 
> *Sent:* Tuesday, May 7, 2024 5:17 PM
> *To:* Shatanawi, Sawsan Muhammad 
> *Cc:* petsc-users@mcs.anl.gov 
> *Subject:* Re: [petsc-users] Help with Integrating PETSc into Fortran
> Groundwater Flow Simulation Code
>
> *[EXTERNAL EMAIL]*
> On Tue, May 7, 2024 at 2:23 PM Shatanawi, Sawsan Muhammad via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>
> Hello everyone,
>
>
> I hope this email finds you well.
>
>
>  My Name is Sawsan Shatanawi, and I was developing a Fortran code for
> simulating groundwater flow in a 3D system with nonlinear behavior.  I
> solved the nonlinear system using the PCG solver and Picard iteration, but
> I did not get good results although I checked my matrix and RHS and
> everything, I decided to change my solver to Newton Rapson method.
> I checked PETSc documents but I have a few questions:
> 1) My groundwater system is time-dependent, so should I use TS only
> instead of SNES?
>
>
> You could use TS, but it is not neces

Re: [petsc-users] HDF5 time step count

2024-05-10 Thread Matthew Knepley
On Fri, May 10, 2024 at 1:01 AM Adrian Croucher 
wrote:

> hi Matt,
> On 10/05/24 12:15 am, Matthew Knepley wrote:
>
> I just tried to test it, but there doesn't seem to be a Fortran interface
>> for DMGetOutputSequenceLength().
>>
> Pushed.
>
> Thanks. I tried it out and the error below was raised, looks like it's
> when it tries to destroy the viewer. It still runs ok when
> DMGetOutputSequenceLength() isn't called.
>
> Sorry, it looks like I did not close the group. I pushed an update. If
that is alright, I will merge it.

  Thanks

 Matt

> - Adrian
>
> [0]PETSC ERROR: Error in external library
> [0]PETSC ERROR: Error in HDF5 call H5Fclose() Status -1
> [0]PETSC ERROR: See 
> https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cr7LJSXkch_z_JwHbQJLw6lOHNHIwlYHSs2EKuOgdO4WPY3yBsl64nAdyRiQX9blQC8KVSJZObA0YA9zG9uD$
>   for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.21.1-124-g2d06e2faec8
> GIT Date: 2024-05-08 19:31:33 +
> [0]PETSC ERROR: ./initial_test on a main-debug named EN438880 by acro018
> Fri May 10 16:55:34 2024
> [0]PETSC ERROR: Configure options --with-x --download-hdf5 --download-zlib
> --download-netcdf --download-pnetcdf --download-exodusii
> --download-triangle --download-ptscotch --download-chaco --download-hypre
> [0]PETSC ERROR: #1 PetscViewerFileClose_HDF5() at
> /home/acro018/software/PETSc/code/src/sys/classes/viewer/impls/hdf5/hdf5v.c:107
> [0]PETSC ERROR: #2 PetscViewerDestroy_HDF5() at
> /home/acro018/software/PETSc/code/src/sys/classes/viewer/impls/hdf5/hdf5v.c:126
> [0]PETSC ERROR: #3 PetscViewerDestroy() at
> /home/acro018/software/PETSc/code/src/sys/classes/viewer/interface/view.c:101
> [0]PETSC ERROR: #4 ../src/initial.F90:497
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cr7LJSXkch_z_JwHbQJLw6lOHNHIwlYHSs2EKuOgdO4WPY3yBsl64nAdyRiQX9blQC8KVSJZObA0YLHcR0C8$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cr7LJSXkch_z_JwHbQJLw6lOHNHIwlYHSs2EKuOgdO4WPY3yBsl64nAdyRiQX9blQC8KVSJZObA0YDfia_zg$
 >


Re: [petsc-users] HDF5 time step count

2024-05-09 Thread Matthew Knepley
On Wed, May 8, 2024 at 10:46 PM Adrian Croucher 
wrote:

> hi Matt,
> On 9/05/24 3:00 am, Matthew Knepley wrote:
>
> Sorry about the delay. I had lost track of this. Can you look at branch
>
>   knepley/feature-hdf5-seq-len
>
> I have not made a test yet, but if this works for you, I will make a test
> and merge it in.
>
> Thanks for looking at that.
>
> I just tried to test it, but there doesn't seem to be a Fortran interface
> for DMGetOutputSequenceLength().
>
Pushed.

> Another odd thing I found was that DMPlexConstructGhostCells() seemed to
> complain if I passed in PETSC_NULL_INTEGER for the parameter numGhostCells
> (which works ok in v3.21.1) - the compiler complained about a rank-1/scalar
> mismatch. Can you no longer pass in a null integer for that? It works if I
> declare a dummy integer and pass it in.
>
Oh, Barry has been redoing the Fortran stubs to try and get more automated.
He might have eliminated the hand copy by mistake (since we cannot yet
automate checking for NULL). I will ask him. For now, can we put a dummy in?

  Thanks,

 Matt

> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YEiiqBTCh7zSAbsfhvbrdIgnFEYpbB4rUltLdnm3IhphEaC4qf6ny1HMQl-j374n_GwyG96OGNk1F-AzQUVb$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YEiiqBTCh7zSAbsfhvbrdIgnFEYpbB4rUltLdnm3IhphEaC4qf6ny1HMQl-j374n_GwyG96OGNk1F8dmZwZL$
 >


Re: [petsc-users] HDF5 time step count

2024-05-08 Thread Matthew Knepley
On Wed, May 1, 2024 at 11:03 PM Adrian Croucher 
wrote:

> hi Matt & all,
>
> I just had a query from one of my users which prompted me to see if any
> progress had been made on the issue below - using PETSc to get the number
> of time steps in an HDF5 file.
>
> I can't see anything new in PETSc on this - I did try using
> PetscViewerHDF5ReadSizes() to see if that would do it, but it seems it
> doesn't. If I use that on the "time" dataset it just returns 1.
>
> Sorry about the delay. I had lost track of this. Can you look at branch

  knepley/feature-hdf5-seq-len

I have not made a test yet, but if this works for you, I will make a test
and merge it in.

  Thanks!

 Matt

> Regards, Adrian
> On 11/10/21 2:08 pm, Adrian Croucher wrote:
>
> On 10/11/21 11:59 AM, Matthew Knepley wrote:
>
> On Sun, Oct 10, 2021 at 6:51 PM Adrian Croucher 
> wrote:
>
>> hi
>>
>> Is there any way to query the PETSc HDF5 viewer to find the number of
>> time steps in the file?
>>
>> A common use case I have is that an HDF5 file from a previous simulation
>> is used to get initial conditions for a subsequent run. The most common
>> thing you want to do is restart from the last set of results in the
>> previous output. To do that you need to know how many time steps there
>> are, so you can set the output index to be the last one.
>>
>> I thought maybe I could just query the size of the "time" dataset, but I
>> can't even see any obvious way to do that using the viewer functions.
>>
>
> There is nothing in there that does it right now. Do you know how to do it
> in HDF5?
> If so, I can put it in. Otherwise, I will have to learn more HDF5 :)
>
>
> I haven't actually tried this myself but it looks like what you do is:
>
> 1) get the dataspace for the dataset (in our case the "time" dataset):
>
> hid_t dspace = H5Dget_space(dset);
>
> 2) Get the dimensions of the dataspace:
>
> const int ndims = 1;
>
> hsize_t dims[ndims];
> H5Sget_simple_extent_dims(dspace, dims, NULL);
>
> The first element of dims should be the number of time steps. Here I've
> assumed the number of dimensions of the time dataset is 1. In general you
> can instead query the rank of the dataspace using
> H5Sget_simple_extent_ndims() to get the rank ndims.
>
> Regards, Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eWw8ZyC03JCY81KS4nuvF6OwPoyRvldIwdZsw_6BlM1OLpJ4cucgOSJ6wetJu-YCCDUSYq0ZI_4rgu343kFn$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eWw8ZyC03JCY81KS4nuvF6OwPoyRvldIwdZsw_6BlM1OLpJ4cucgOSJ6wetJu-YCCDUSYq0ZI_4rgriZqJYl$
 >


Re: [petsc-users] Help with Integrating PETSc into Fortran Groundwater Flow Simulation Code

2024-05-07 Thread Matthew Knepley
On Tue, May 7, 2024 at 2:23 PM Shatanawi, Sawsan Muhammad via petsc-users <
petsc-users@mcs.anl.gov> wrote:

>
> Hello everyone,
>
>
>
> I hope this email finds you well.
>
>
>
>  My Name is Sawsan Shatanawi, and I was developing a Fortran code for
> simulating groundwater flow in a 3D system with nonlinear behavior.  I
> solved the nonlinear system using the PCG solver and Picard iteration, but
> I did not get good results although I checked my matrix and RHS and
> everything, I decided to change my solver to Newton Rapson method.
> I checked PETSc documents but I have a few questions:
> 1) My groundwater system is time-dependent, so should I use TS only
> instead of SNES?
>

You could use TS, but it is not necessary. You could use SNES and your own
timestepping. The advantage of TS is that you can try many different
timesteppers without recoding (just like you can
try many different linear and nonlinear solvers).
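
For example, a skeleton in C (RHSFunction, user, dt, and u are placeholders for
your callback, context, step size, and solution vector) could be as small as:

```
/* Hedged sketch: switch integrators at run time with -ts_type
   (beuler, bdf, arkimex, ...) without changing this code. */
TS ts;
PetscCall(TSCreate(PETSC_COMM_WORLD, &ts));
PetscCall(TSSetRHSFunction(ts, NULL, RHSFunction, &user));
PetscCall(TSSetTimeStep(ts, dt)); /* your own deltaT is fine here */
PetscCall(TSSetFromOptions(ts));  /* -ts_type, -ts_dt, -ts_max_time, ... */
PetscCall(TSSolve(ts, u));
PetscCall(TSDestroy(&ts));
```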


> 2) My system has its deltaT, would using deltaT as dt affect my solver, or
> is it better to use TS-PETSc dt? Also, would using PETSc dt affect the
> simulation of the groundwater system
>

It sounds like your dt comes from your timestepper. If you use TS, you
would use the dt from that.


> 3) I want my Jacobian matrix to be calculated by PETSc automatically
>

PETSc can calculate a full Jacobian for smaller problems, or a
finite-difference Jacobian for any problem (but this impacts the solver).
It should be straightforward to code up the analytic Jacobian. Is there a
reason it would be a problem?
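
For reference, a hedged sketch of the finite-difference route (snes and J are
assumed to exist already):

```
/* Dense finite-difference Jacobian: fine for small problems only. */
PetscCall(SNESSetJacobian(snes, J, J, SNESComputeJacobianDefault, NULL));
/* For large sparse problems, instead give J the correct nonzero pattern and
   run with -snes_fd_color so PETSc builds the Jacobian by colored differencing. */
```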


> 4) Do I need to define and calculate the residual vector?
>

Yes.


> My A-Matrix contains coefficients and external sources and my RHS vector
> includes the boundary conditions
>

It is strange that your matrix would contain source terms. Do they depend
on the flow fields?

Barry is right, you should consider PFlotran, and at least know why it
would not work for your problem if you don't use it.

  Thanks,

 Matt


>
> Please find the attached file contains a draft of my code
>
> Thank you in advance for your time and help.
>
>
> Best regards,
>
>  Sawsan
>
> ----------
> *From:* Shatanawi, Sawsan Muhammad 
> *Sent:* Tuesday, January 16, 2024 10:43 AM
> *To:* Junchao Zhang 
> *Cc:* Barry Smith ; Matthew Knepley ;
> Mark Adams ; petsc-users@mcs.anl.gov <
> petsc-users@mcs.anl.gov>
> *Subject:* Re: [petsc-users] Help with Integrating PETSc into Fortran
> Groundwater Flow Simulation Code
>
> Hello all,
>
> Thank you for your valuable help. I will do your recommendations and hope
> it will run without any issues.
>
> Bests,
> Sawsan
>
> --
> *From:* Junchao Zhang 
> *Sent:* Friday, January 12, 2024 8:46 AM
> *To:* Shatanawi, Sawsan Muhammad 
> *Cc:* Barry Smith ; Matthew Knepley ;
> Mark Adams ; petsc-users@mcs.anl.gov <
> petsc-users@mcs.anl.gov>
> *Subject:* Re: [petsc-users] Help with Integrating PETSc into Fortran
> Groundwater Flow Simulation Code
>
>
> *[EXTERNAL EMAIL]*
> Hi, Sawsan,
>First in test_main.F90, you need to call VecGetArrayF90(temp_solution,
> H_vector, ierr) and  VecRestoreArrayF90 (temp_solution, H_vector, ierr)  as
> Barry mentioned.
>Secondly, in the loop of test_main.F90, it calls GW_solver(). Within
> it, it calls PetscInitialize()/PetscFinalize(). But without MPI being
> initialized, PetscInitialize()/PetscFinalize()* can only be called once. *
> do timestep =2 , NTSP
>call GW_boundary_conditions(timestep-1)
> !print *,HNEW(1,1,1)
>call GW_elevation()
>! print *, GWTOP(2,2,2)
>call GW_conductance()
>! print *, CC(2,2,2)
>call GW_recharge()
>! print *, B_Rech(5,4)
>call GW_pumping(timestep-1)
>! print *, B_pump(2,2,2)
>call GW_SW(timestep-1)
> print *,B_RIVER (2,2,2)
>call GW_solver(timestep-1,N)
>call GW_deallocate_loop()
> end do
>
> A solution is to delete PetscInitialize()/PetscFinalize() in
> GW_solver_try.F90 and move it to test_main.F90,  outside the do loop.
>
> diff --git a/test_main.F90 b/test_main.F90
> index b5997c55..107bd3ee 100644
> --- a/test_ma

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Matthew Knepley
On Tue, May 7, 2024 at 8:28 AM Mark Adams  wrote:

> "A^T A being similar to a finite difference Poisson matrix if the rows
> were permuted randomly."
> Normal eqs are a good option in general for rectangular systems and we
> have Poisson solvers.
>

I missed that. Why not first permute back to the Poisson matrix? Then it
would be trivial.

  Thanks,

Matt


> I'm not sure what you mean by "permuted randomly." A random permutation
> of the matrix can kill performance but not math.
>
> Mark
>
>
> On Tue, May 7, 2024 at 8:03 AM Matthew Knepley  wrote:
>
>> On Tue, May 7, 2024 at 5:12 AM Pierre Jolivet  wrote:
>>
>>>
>>>
>>> On 7 May 2024, at 9:10 AM, Marco Seiz  wrote:
>>>
>>> Thanks for the quick response!
>>>
>>> On 07.05.24 14:24, Pierre Jolivet wrote:
>>>
>>>
>>>
>>> On 7 May 2024, at 7:04 AM, Marco Seiz  wrote:
>>>
>>> Hello,
>>>
>>> something a bit different from my last question, since that didn't
>>> progress so well:
>>> I have a related model which generally produces a rectangular matrix A,
>>> so I am using LSQR to solve the system.
>>> The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
>>> to a finite difference Poisson matrix if the rows were permuted randomly.
>>> The problem is singular in that the solution is only specified up to a
>>> constant from the matrix, with my target solution being a weighted zero
>>> average one, which I can handle by adding a nullspace to my matrix.
>>> However, I'd also like to pin (potentially many) DOFs in the future so I
>>> also tried pinning a single value, and afterwards subtracting the
>>> average from the KSP solution.
>>> This leads to the KSP *sometimes* diverging when I use a preconditioner;
>>> the target size of the matrix will be something like ([1,20] N) x N,
>>> with N ~ [2, 1e6] so for the higher end I will require a preconditioner
>>> for reasonable execution time.
>>>
>>> For a smaller example system, I set up my application to dump the input
>>> to the KSP when it breaks down and I've attached a simple python script
>>> + data using petsc4py to demonstrate the divergence for those specific
>>> systems.
>>> With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
>>> particular system shows breakdown, but if I remove the pinned DOF and
>>> add the nullspace (pass -usens) it converges. I did try different PCs
>>> but they tend to break down at different steps, e.g. `python3 lsdiv.py
>>> -usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
>>> for PCQR when I use MatCreateNormal for creating the PC mat, but
>>> interestingly it doesn't break down when I explicitly form A^T A (don't
>>> pass -usenormal).
>>>
>>>
>>> What version are you using? All those commands are returning
>>>  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
>>> So I cannot reproduce any breakdown, but there have been recent changes
>>> to KSPLSQR.
>>>
>>> For those tests I've been using PETSc 3.20.5 (last githash was
>>> 4b82c11ab5d ).
>>> I pulled the latest version from gitlab ( 6b3135e3cbe ) and compiled it,
>>> but I had to drop --download-suitesparse=1 from my earlier config due to
>>> errors.
>>> Should I write a separate mail about this?
>>>
>>> The LU example still behaves the same for me (`python3 lsdiv.py -pc_type
>>> lu -ksp_converged_reason` gives DIVERGED_BREAKDOWN, `python3 lsdiv.py
>>> -usens -p

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Matthew Knepley
On Tue, May 7, 2024 at 5:12 AM Pierre Jolivet  wrote:

>
>
> On 7 May 2024, at 9:10 AM, Marco Seiz  wrote:
>
> Thanks for the quick response!
>
> On 07.05.24 14:24, Pierre Jolivet wrote:
>
>
>
> On 7 May 2024, at 7:04 AM, Marco Seiz  wrote:
>
> Hello,
>
> something a bit different from my last question, since that didn't
> progress so well:
> I have a related model which generally produces a rectangular matrix A,
> so I am using LSQR to solve the system.
> The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
> to a finite difference Poisson matrix if the rows were permuted randomly.
> The problem is singular in that the solution is only specified up to a
> constant from the matrix, with my target solution being a weighted zero
> average one, which I can handle by adding a nullspace to my matrix.
> However, I'd also like to pin (potentially many) DOFs in the future so I
> also tried pinning a single value, and afterwards subtracting the
> average from the KSP solution.
> This leads to the KSP *sometimes* diverging when I use a preconditioner;
> the target size of the matrix will be something like ([1,20] N) x N,
> with N ~ [2, 1e6] so for the higher end I will require a preconditioner
> for reasonable execution time.
>
> For a smaller example system, I set up my application to dump the input
> to the KSP when it breaks down and I've attached a simple python script
> + data using petsc4py to demonstrate the divergence for those specific
> systems.
> With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
> particular system shows breakdown, but if I remove the pinned DOF and
> add the nullspace (pass -usens) it converges. I did try different PCs
> but they tend to break down at different steps, e.g. `python3 lsdiv.py
> -usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
> for PCQR when I use MatCreateNormal for creating the PC mat, but
> interestingly it doesn't break down when I explicitly form A^T A (don't
> pass -usenormal).
>
>
> What version are you using? All those commands are returning
>  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
> So I cannot reproduce any breakdown, but there have been recent changes to
> KSPLSQR.
>
> For those tests I've been using PETSc 3.20.5 (last githash was
> 4b82c11ab5d ).
> I pulled the latest version from gitlab ( 6b3135e3cbe ) and compiled it,
> but I had to drop --download-suitesparse=1 from my earlier config due to
> errors.
> Should I write a separate mail about this?
>
> The LU example still behaves the same for me (`python3 lsdiv.py -pc_type
> lu -ksp_converged_reason` gives DIVERGED_BREAKDOWN, `python3 lsdiv.py
> -usens -pc_type lu -ksp_converged_reason` gives CONVERGED_RTOL_NORMAL)
> but the QR example fails since I had to remove suitesparse.
> petsc4py.__version__ reports 3.21.1 and if I rebuild my application,
> then `ldd app` gives me `libpetsc.so
> .3.21
> =>
> /opt/petsc/linux-c-opt/lib/libpetsc.so
> .3.21`
> so it should be using the
> newly built one.
> The application then still eventually yields a DIVERGED_BREAKDOWN.
> I don't have a ~/.petscrc and PETSC_OPTIONS is unset, so if we are on
> the same version and there's still a discrepancy it is quite weird.
>
>
> Quite weird indeed…
> $ python3 lsdiv.py -pc_type lu -ksp_converged_reason
>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
> $ python3 lsdiv.py -usens -pc_type lu -ksp_converged_reason
>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
> $ python3 lsdiv.py -pc_type qr -ksp_converged_reason
>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
> $ python3 lsdiv.py -usens -pc_type qr -ksp_converged_reason
>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
>
> For the moment I can work by adding the nullspace but eventually the
> need for pinning DOFs will resurface, so I'd like to ask where the
> breakdown is coming from. What causes the breakdowns? Is that a generic
> problem occurring when adding (dof_i = val) rows to least-squares
> systems which prevents these preconditioners from being robust? If so,
> what preconditioners could be robust?
> I did a minimal sweep of the available PCs by going over the possible
> 

Re: [petsc-users] PETSc options

2024-05-06 Thread Matthew Knepley
On Mon, May 6, 2024 at 1:14 PM Mark Adams  wrote:

> But that will hardwire disabling -options_left. right?
>

True. Your suggestion would only filter out those two in particular.

  Thanks,

 Matt


> On Mon, May 6, 2024 at 11:30 AM Matthew Knepley  wrote:
>
>> On Mon, May 6, 2024 at 11:15 AM Pierre Jolivet  wrote:
>>
>>> On 6 May 2024, at 3:14 PM, Matthew Knepley  wrote:
>>>
>>> This Message Is From an External Sender
>>> This message came from outside your organization.
>>> On Mon, May 6, 2024 at 1:04 AM Adrian Croucher <
>>> a.crouc...@auckland.ac.nz> wrote:
>>>
>>>> This Message Is From an External Sender
>>>> This message came from outside your organization.
>>>>
>>>>
>>>> hi,
>>>>
>>>> My code has some optional command line arguments -v and -h for output of
>>>> version number and usage help. These are processed using Fortran's
>>>> get_command_argument().
>>>>
>>>> Since updating PETSc to version 3.21, I get some extra warnings after
>>>> the output:
>>>>
>>>> acro018@EN438880:~$ waiwera -v
>>>> 1.5.0b1
>>>> WARNING! There are options you set that were not used!
>>>> WARNING! could be spelling mistake, etc!
>>>> There is one unused database option. It is:
>>>> Option left: name:-v (no value) source: command line
>>>>
>>>> That didn't used to happen. What should I do to make them go away?
>>>>
>>>>
>>> Hi Adrian,
>>>
>>> Barry and Mark's suggestions will make this go away. However, it should
>>> not happen
>>> in the first place.
>>>
>>>
>>> It should happen if Adrian was previously not using 3.19.X or below.
>>> See 
>>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/6601__;!!G_uCfscf7eWS!aNM60JUO9Vbf5saWU1g8lA5UztQH35oMhYJcuYQfRU3hKT9s_58lqzRoWbtXuf0G1azqCD_9R9cWfJxmWrGE$
>>>  
>>> <https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/6601__;!!G_uCfscf7eWS!bVUciSLxU1Cw3mv04eO2KtlHR8v6ZR7CRE_CpMzmxmCn4x58fUdEJ4L-dknPq_yMbgnc9OkN_O8aOSu_aPK3$>
>>>
>>
>> I forgot. Not my favorite change.
>>
>> Okay. You can shut this off using
>>
>>   PetscCall(PetscOptionsSetValue(NULL, "-options_left", "0"))
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Thanks,
>>> Pierre
>>>
>>> We should try to figure this out.
>>>
>>> This warning is usually activated by the -options_left argument. Could
>>> that be in the
>>> PETSC_OPTIONS env variable, or in ~/.petscrc?
>>>
>>>   Thanks,
>>>
>>>  Matt
>>>
>>>
>>>> Regards, Adrian
>>>>
>>>> --
>>>> Dr Adrian Croucher
>>>> Senior Research Fellow
>>>> Department of Engineering Science
>>>> Waipapa Taumata Rau / University of Auckland, New Zealand
>>>> email: a.crouc...@auckland.ac.nz
>>>> tel: +64 (0)9 923 4611
>>>>
>>>>
>>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!aNM60JUO9Vbf5saWU1g8lA5UztQH35oMhYJcuYQfRU3hKT9s_58lqzRoWbtXuf0G1azqCD_9R9cWfFIGaM_Q$
>>>  
>>> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fo7-rCLfudbdd6yHxfeF1pyKANLJjDLd2VvUtM7xc31s9AoJjSSu-vjiyaVVAhNAqV6c4s-9aJPkL3lFP1DF$>
>>>
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!aNM60JUO9Vbf5saWU1g8lA5UztQH35oMhYJcuYQfRU3hKT9s_58lqzRoWbtXuf0G1azqCD_9R9cWfFIGaM_Q$
>>  
>> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bVUciSLxU1Cw3mv04eO2KtlHR8v6ZR7CRE_CpMzmxmCn4x58fUdEJ4L-dknPq_yMbgnc9OkN_O8aOXbl1p93$>
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!aNM60JUO9Vbf5saWU1g8lA5UztQH35oMhYJcuYQfRU3hKT9s_58lqzRoWbtXuf0G1azqCD_9R9cWfFIGaM_Q$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!aNM60JUO9Vbf5saWU1g8lA5UztQH35oMhYJcuYQfRU3hKT9s_58lqzRoWbtXuf0G1azqCD_9R9cWfCOgoX4N$
 >


Re: [petsc-users] PETSc options

2024-05-06 Thread Matthew Knepley
On Mon, May 6, 2024 at 11:15 AM Pierre Jolivet  wrote:

> On 6 May 2024, at 3:14 PM, Matthew Knepley  wrote:
>
> On Mon, May 6, 2024 at 1:04 AM Adrian Croucher 
> wrote:
>
>>
>>
>> hi,
>>
>> My code has some optional command line arguments -v and -h for output of
>> version number and usage help. These are processed using Fortran's
>> get_command_argument().
>>
>> Since updating PETSc to version 3.21, I get some extra warnings after
>> the output:
>>
>> acro018@EN438880:~$ waiwera -v
>> 1.5.0b1
>> WARNING! There are options you set that were not used!
>> WARNING! could be spelling mistake, etc!
>> There is one unused database option. It is:
>> Option left: name:-v (no value) source: command line
>>
>> That didn't used to happen. What should I do to make them go away?
>>
>>
> Hi Adrian,
>
> Barry and Mark's suggestions will make this go away. However, it should
> not happen
> in the first place.
>
>
> It should happen if Adrian was previously not using 3.19.X or below.
> See 
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/6601__;!!G_uCfscf7eWS!bVUciSLxU1Cw3mv04eO2KtlHR8v6ZR7CRE_CpMzmxmCn4x58fUdEJ4L-dknPq_yMbgnc9OkN_O8aOSu_aPK3$
>  
>

I forgot. Not my favorite change.

Okay. You can shut this off using

  PetscCall(PetscOptionsSetValue(NULL, "-options_left", "0"))

  Thanks,

 Matt


> Thanks,
> Pierre
>
> We should try to figure this out.
>
> This warning is usually activated by the -options_left argument. Could
> that be in the
> PETSC_OPTIONS env variable, or in ~/.petscrc?
>
>   Thanks,
>
>  Matt
>
>
>> Regards, Adrian
>>
>> --
>> Dr Adrian Croucher
>> Senior Research Fellow
>> Department of Engineering Science
>> Waipapa Taumata Rau / University of Auckland, New Zealand
>> email: a.crouc...@auckland.ac.nz
>> tel: +64 (0)9 923 4611
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bVUciSLxU1Cw3mv04eO2KtlHR8v6ZR7CRE_CpMzmxmCn4x58fUdEJ4L-dknPq_yMbgnc9OkN_O8aOQZGyQDX$
>  
> <https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fo7-rCLfudbdd6yHxfeF1pyKANLJjDLd2VvUtM7xc31s9AoJjSSu-vjiyaVVAhNAqV6c4s-9aJPkL3lFP1DF$>
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bVUciSLxU1Cw3mv04eO2KtlHR8v6ZR7CRE_CpMzmxmCn4x58fUdEJ4L-dknPq_yMbgnc9OkN_O8aOQZGyQDX$
  
<https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bVUciSLxU1Cw3mv04eO2KtlHR8v6ZR7CRE_CpMzmxmCn4x58fUdEJ4L-dknPq_yMbgnc9OkN_O8aOXbl1p93$
 >


Re: [petsc-users] PETSc options

2024-05-06 Thread Matthew Knepley
On Mon, May 6, 2024 at 1:04 AM Adrian Croucher 
wrote:

>
> hi,
>
> My code has some optional command line arguments -v and -h for output of
> version number and usage help. These are processed using Fortran's
> get_command_argument().
>
> Since updating PETSc to version 3.21, I get some extra warnings after
> the output:
>
> acro018@EN438880:~$ waiwera -v
> 1.5.0b1
> WARNING! There are options you set that were not used!
> WARNING! could be spelling mistake, etc!
> There is one unused database option. It is:
> Option left: name:-v (no value) source: command line
>
> That didn't used to happen. What should I do to make them go away?
>
>
Hi Adrian,

Barry and Mark's suggestions will make this go away. However, it should not
happen
in the first place. We should try to figure this out.

This warning is usually activated by the -options_left argument. Could that
be in the
PETSC_OPTIONS env variable, or in ~/.petscrc?

  Thanks,

 Matt


> Regards, Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> Waipapa Taumata Rau / University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fo7-rCLfudbdd6yHxfeF1pyKANLJjDLd2VvUtM7xc31s9AoJjSSu-vjiyaVVAhNAqV6c4s-9aJPkL_AI6ctk$
  



Re: [petsc-users] [EXTERNAL] Re: Is there anything like a "DMPlexSetCones()" ?

2024-05-03 Thread Matthew Knepley
On Wed, May 1, 2024 at 6:30 PM Ferrand, Jesus A. 
wrote:

> "You know the number of cells if you can read connectivity. Do you mean
> that the format does not tell you
> the number of vertices?"
>
> Sort of...
> So, the format does provide the number of vertices, however, due to the
> way I read the data in parallel I don't know immediately how many local
> vertices there will be.
> Which prevents me from knowing the chart a priori.
> I figured out the number of vertices using PetscHashSetI to determine the
> number of unique entries in the connectivity list.
>

Right, but that means that the connectivity you read in is not in local
numbering. You will have to renumber it, so making a copy is probably
necessary anyway. This is the same sort of processing I do for the parallel
read.

  Thanks,

 Matt


> --
> *From:* Matthew Knepley 
> *Sent:* Wednesday, May 1, 2024 8:52 PM
> *To:* Ferrand, Jesus A. 
> *Cc:* petsc-users@mcs.anl.gov 
> *Subject:* Re: [EXTERNAL] Re: [petsc-users] Is there anything like a
> "DMPlexSetCones()" ?
>
> On Wed, May 1, 2024 at 4:23 PM Ferrand, Jesus A. 
> wrote:
>
> Matt:
>
> My bad again, I need to clarify something that I just realized doesn't
> make sense.
> I said: "The nature of the I/O makes it so that I need to read the
> connectivity before I get a semblance of buffer sizes."
> Scratch that, obviously, if I can read the connectivity, I know the buffer
> sizes.
>
> What I should have said is that I cannot know the "chart" that one sets in
> DMPlexSetChart() a priori like in most cases.
> I need to determine the chart from the connectivity lists.
>
>
> You know the number of cells if you can read connectivity. Do you mean
> that the format does not tell you
> the number of vertices?
>
>   Thanks,
>
> Matt
>
>
> --
> *From:* Ferrand, Jesus A. 
> *Sent:* Wednesday, May 1, 2024 8:17 PM
> *To:* Matthew Knepley 
> *Cc:* petsc-users@mcs.anl.gov 
> *Subject:* Re: [EXTERNAL] Re: [petsc-users] Is there anything like a
> "DMPlexSetCones()" ?
>
> Matt:
>
> "I do not understand the "flag check". What is that?"
>
> My bad, I should have referred to the "dm->setupcalled".
> I believe this PetscBool is checked by the other DM (not just DMPlex) APIs.
> The subsequent checks for dm->setupcalled == PETSC_TRUE is what I meant to
> say.
> Here's a copy of DMSetUp().
>
> PetscErrorCode DMSetUp(DM dm)
> {
>   PetscFunctionBegin;
>   if (dm->setupcalled) PetscFunctionReturn(PETSC_SUCCESS);
>   PetscTryTypeMethod(dm, setup);
>   dm->setupcalled = PETSC_TRUE;
>   PetscFunctionReturn(PETSC_SUCCESS);
> }

Re: [petsc-users] Question about petsc4py createWithArray function

2024-05-02 Thread Matthew Knepley
On Thu, May 2, 2024 at 12:53 PM Samar Khatiwala <
samar.khatiw...@earth.ox.ac.uk> wrote:

> Hello,
>
> I have a couple of questions about createWithArray in petsc4py:
>
> 1) What is the correct usage for creating a standard MPI Vec with it? 
> Something like this seems to work but is it right?:
>
> On each rank do:
> a = np.zeros(localSize)
> v = PETSc.Vec().createWithArray(a, comm=PETSc.COMM_WORLD)
>
> Is that all it takes?
>
>
That looks right to me.

> 2) Who ‘owns’ the underlying memory for a Vec created with the 
> createWithArray method, i.e., who is responsible for managing it and doing 
> garbage collection? In my problem, the numpy array is created in a Cython 
> module where memory is allocated, and a pointer to it is associated with a 
> numpy ndarray via PyArray_SimpleNewFromData and PyArray_SetBaseObject. I have 
> a deallocator method of my own that is called when the numpy array is 
> deleted/goes out of scope/whenever python does garbage collection. All of 
> that works fine. But if I use this array to create a Vec with createWithArray 
> what happens when the Vec is, e.g., destroyed? Will my deallocator be called?
>
No. The PETSc struct will be deallocated, but the storage will not be
touched.
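
For reference, the behavior mirrors the C-level VecCreateMPIWithArray(), which
I believe is what createWithArray uses underneath: the Vec only borrows the
buffer, and freeing it after VecDestroy() remains the caller's job. A minimal
C sketch (sizes made up for the example):

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec          v;
  PetscScalar *a;
  PetscInt     nlocal = 10;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscMalloc1(nlocal, &a));      /* caller-owned storage            */
  PetscCall(PetscArrayzero(a, nlocal));
  PetscCall(VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, nlocal, PETSC_DETERMINE, a, &v));
  /* ... use v ... */
  PetscCall(VecDestroy(&v));                /* frees the Vec, not the array    */
  PetscCall(PetscFree(a));                  /* the caller releases the buffer  */
  PetscCall(PetscFinalize());
  return 0;
}
```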

  Thanks,

 Matt

> Or does petsc4py know that it doesn’t own the memory and won’t attempt to 
> free it? I can’t quite figure out from the petsc4py code what is going on. 
> Any help would be appreciated.
>
> Thanks very much.
>
> Samar
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
  



Re: [petsc-users] [EXTERNAL] Re: Is there anything like a "DMPlexSetCones()" ?

2024-05-01 Thread Matthew Knepley
On Wed, May 1, 2024 at 4:23 PM Ferrand, Jesus A. 
wrote:

> Matt:
>
> My bad again, I need to clarify something that I just realized doesn't
> make sense.
> I said: "The nature of the I/O makes it so that I need to read the
> connectivity before I get a semblance of buffer sizes."
> Scratch that, obviously, if I can read the connectivity, I know the buffer
> sizes.
>
> What I should have said is that I cannot know the "chart" that one sets in
> DMPlexSetChart() a priori like in most cases.
> I need to determine the chart from the connectivity lists.
>

You know the number of cells if you can read connectivity. Do you mean that
the format does not tell you
the number of vertices?

  Thanks,

Matt


> --
> *From:* Ferrand, Jesus A. 
> *Sent:* Wednesday, May 1, 2024 8:17 PM
> *To:* Matthew Knepley 
> *Cc:* petsc-users@mcs.anl.gov 
> *Subject:* Re: [EXTERNAL] Re: [petsc-users] Is there anything like a
> "DMPlexSetCones()" ?
>
> Matt:
>
> "I do not understand the "flag check". What is that?"
>
> My bad, I should have referred to the "dm->setupcalled".
> I believe this PetscBool is checked by the other DM (not just DMPlex) APIs.
> The subsequent checks for dm->setupcalled == PETSC_TRUE are what I meant to
> say.
> Here's a copy of DMSetUp().
>
> PetscErrorCode DMSetUp(DM dm)
> {
>   PetscFunctionBegin;
>   if (dm->setupcalled) PetscFunctionReturn(PETSC_SUCCESS);
>   PetscTryTypeMethod(dm, setup);
>   dm->setupcalled = PETSC_TRUE;
>   PetscFunctionReturn(PETSC_SUCCESS);
> }
>
> "We could make a DMPlexSetCones(), but as you point out, the workflow for
> DMSetUp() would have to change. Where does the memory come from for your
> connectivity?"
> It's a memory that I myself allocate based on a file's contents.
> The nature of the I/O makes it so that I need to read the connectivity
> before I get a semblance of buffer sizes.
> Otherwise, I would stick to the tried and tested way.
>
> Also, when replying to the PETSc developers, users must reply to
> petsc-users@mcs.anl.gov and not just to the individual email accounts of
> the developers, right?
>
>
>
>
> --
> *From:* Matthew Knepley 
> *Sent:* Wednesday, May 1, 2024 8:07 PM
> *To:* Ferrand, Jesus A. 
> *Cc:* petsc-users@mcs.anl.gov 
> *Subject:* [EXTERNAL] Re: [petsc-users] Is there anything like a
> "DMPlexSetCones()" ?
>
> On Wed

Re: [petsc-users] Is there anything like a "DMPlexSetCones()" ?

2024-05-01 Thread Matthew Knepley
On Wed, May 1, 2024 at 3:34 PM Ferrand, Jesus A. 
wrote:

> Dear PETSc team:
>
> For a project that I'm working on, I need to manually build a DMPlex.
> From studying the source code of the various APIs in which the plex is
> built from some supported file format, I get that the workflow is this:
>
>
>1. DMPlexSetChart() <-- Input nCells + nVerts
>2. DMPlexSetConeSize() <-- Input ConeSize for each point in [0,nCells)
>3. DMSetUp() – Allocates memory internally.
>4. DMPlexGetCones() --> Gives you the memory onto which to write the
>cell connectivity.
>5. *Write connectivity*
>6. DMPlexReorderCell() <-- For each point in [0,nCells)
>
>
> I'm in a situation where the memory given by step (4) is available a priori.
> I was hoping to skip steps 2, 3, and 4 with something like a
> "DMPlexSetCones()", but such an API does not exist.
> My current workaround is to implement steps 2 through 4 as always and have
> double the memory allocated in the interim (my instance + DM's internal
> instance).
> I was thinking of looking up the name of the struct member and assigning my
> buffer to it directly, but I can't get past the flag check in DMSetUp()
> during later calls to DMPlexGetCones() or DMPlexGetTransitiveClosure().
>

I do not understand the "flag check". What is that?

We could make a DMPlexSetCones(), but as you point out, the workflow for
DMSetUp() would have to change. Where does the memory come from for your
connectivity?
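
Just so we are looking at the same thing, here is a compressed sketch of the
workflow you listed (steps 1-6), assuming every cell has the same number of
vertices and the connectivity is already in local numbering. Names and sizes
are placeholders, not a drop-in implementation.

```c
#include <petscdmplex.h>

/* Cells are points [0, nCells), vertices are points [nCells, nCells + nVerts).
 * conn[] has length nCells*nCorners and holds local vertex indices 0..nVerts-1. */
static PetscErrorCode BuildPlexSketch(MPI_Comm comm, PetscInt dim, PetscInt nCells, PetscInt nVerts, PetscInt nCorners, const PetscInt conn[], DM *dm)
{
  PetscInt *cones, c, v;

  PetscFunctionBeginUser;
  PetscCall(DMCreate(comm, dm));
  PetscCall(DMSetType(*dm, DMPLEX));
  PetscCall(DMSetDimension(*dm, dim));
  PetscCall(DMPlexSetChart(*dm, 0, nCells + nVerts));                            /* step 1 */
  for (c = 0; c < nCells; ++c) PetscCall(DMPlexSetConeSize(*dm, c, nCorners));   /* step 2 */
  PetscCall(DMSetUp(*dm));                                                       /* step 3: allocates cone storage */
  PetscCall(DMPlexGetCones(*dm, &cones));                                        /* step 4: internal cone array    */
  for (c = 0; c < nCells; ++c)                                                   /* step 5: copy connectivity in,  */
    for (v = 0; v < nCorners; ++v)                                               /*         offsetting vertices    */
      cones[c * nCorners + v] = nCells + conn[c * nCorners + v];
  for (c = 0; c < nCells; ++c) PetscCall(DMPlexReorderCell(*dm, c, &cones[c * nCorners])); /* step 6 */
  PetscCall(DMPlexSymmetrize(*dm)); /* build supports from the cones */
  PetscCall(DMPlexStratify(*dm));   /* label points by topological depth */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

Note that DMPlexSymmetrize() and DMPlexStratify() still have to run after the
cones are in place, or traversal calls like DMPlexGetTransitiveClosure() will
not work.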

  Thanks,

Matt


>
> Sincerely:
>
> *J.A. Ferrand*
>
> Embry-Riddle Aeronautical University - Daytona Beach - FL
> Ph.D. Candidate, Aerospace Engineering
>
> M.Sc. Aerospace Engineering
>
> B.Sc. Aerospace Engineering
>
> B.Sc. Computational Mathematics
>
>
> *Phone:* (386)-843-1829
>
> *Email(s):* ferra...@my.erau.edu
>
> jesus.ferr...@gmail.com
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
  



Re: [petsc-users] PETSc-GPU

2024-04-26 Thread Matthew Knepley
On Fri, Apr 26, 2024 at 7:23 AM Karthikeyan Chockalingam - STFC UKRI via
petsc-users  wrote:

>
> Hello,
>
>
>
> When PETSc is installed with GPU support, will it run only on GPUs or can
> it be run on CPUs (without GPUs)? Currently, PETSc crashes when run on CPUs.
>

It should run on both. Can you send the crash? I think we did fix a problem
with it eagerly initializing GPUs when they were absent.

  Thanks,

 Matt


> Thank you.
>
>
>
> Best regards,
>
> Karthik.
>
>
>
> --
>
> *Karthik Chockalingam, Ph.D.*
>
> Senior Research Software Engineer
>
> High Performance Systems Engineering Group
>
> Hartree Centre | Science and Technology Facilities Council
>
> karthikeyan.chockalin...@stfc.ac.uk
>
>
>
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
  


