ot scheduled there.
> And, what is the situation of oversubscribing? Could you give some
> examples?
>
Some MPI implementations perform extremely poorly when the number of
processes exceeds the number of cores. This is called oversubscription.
Thanks,
Matt
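Whether oversubscription is even permitted depends on the MPI launcher. As a sketch (the executable name `./my_petsc_app` is a placeholder, and the flag shown is Open MPI-specific; MPICH behaves differently):

```shell
# On a hypothetical 4-core machine, Open MPI refuses to start more ranks
# than cores by default:
mpiexec -n 8 ./my_petsc_app
# Explicitly allowing oversubscription (Open MPI only) lets 8 ranks share
# 4 cores -- this is the regime where some implementations busy-wait and
# performance collapses:
mpiexec --oversubscribe -n 8 ./my_petsc_app
```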
> Thank you!
>
On Wed, Jun 15, 2022 at 4:21 AM Runfeng Jin wrote:
> Hi!
> I use the same machine, the same nodes, and the same number of processors
> per node. And I have tested many times, so this does not seem to be an
> accidental result. But your points do inspire me. I use Global Arrays'
> communicator when solving matrix A, and just MPI_CO
0 1 2 3 4 5 6 7
> 8 4 5 6 7 8 9 10 11
>
>
> So I'm now trying to compile my code with PETSc 3.16; maybe it
> solves the problem of the rotation order of nodes.
>
> Thank you and have a good day,
>
> Sami,
>
> --
> Dr. Sami BEN ELHAJ SALAH
> Ingénie
On Sat, Jun 11, 2022 at 8:43 PM Samuel Estes
wrote:
> I'm sorry, would you mind clarifying? I think my email was so long and
> rambling that it's tough for me to understand which part was being
> answered.
>
> On Sat, Jun 11, 2022 at 7:38 PM Matthew Knepley wrote:
>
On Sat, Jun 11, 2022 at 8:32 PM Samuel Estes
wrote:
> Hello,
>
> My question concerns preallocation for Mats in adaptive FEM problems. When
> the grid refines, I destroy the old matrix and create a new one of the
> appropriate (larger size). When the grid “un-refines” I just use the same
> (extra
ld:
1) Download the Matrix Market format
2) Use Mat test ex72 to read that matrix + vector and output them in
PETSc binary format
3) Use KSP ex10 to read the PETSc binary format and test your solver
Thanks,
Matt
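The conversion in step 2 can also be scripted directly for vectors: the Vec half of PETSc's binary format is a big-endian `int32` class id (1211214) and length, followed by the `float64` values. A minimal sketch, assuming a real-valued, double-precision, 32-bit-index PETSc build (other builds change the layout), with an arbitrary filename:

```python
# Write a NumPy array as a PETSc binary Vec that KSP ex10-style readers
# can load, then read it back to check the round trip.
import numpy as np

VEC_FILE_CLASSID = 1211214  # PETSc's Vec file header magic

def write_petsc_vec(filename, v):
    v = np.asarray(v, dtype=np.float64)
    with open(filename, "wb") as f:
        # Header: class id and vector length, big-endian int32.
        np.array([VEC_FILE_CLASSID, v.size], dtype=">i4").tofile(f)
        # Payload: the entries, big-endian float64.
        v.astype(">f8").tofile(f)

def read_petsc_vec(filename):
    with open(filename, "rb") as f:
        classid, n = np.fromfile(f, dtype=">i4", count=2)
        assert classid == VEC_FILE_CLASSID
        return np.fromfile(f, dtype=">f8", count=n)

write_petsc_vec("b.petsc", [1.0, 2.0, 3.0])
print(read_petsc_vec("b.petsc"))  # round-trips the data
```

The Mat format is analogous but adds row lengths and column indices, so using `MatView`/ex72 as in step 2 is still the safer route for matrices.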
> Best Regards,
>
> NILTON SANTOS
>
>
> On Fri, 1
On Fri, Jun 10, 2022 at 1:15 PM NILTON SANTOS VALDIVIA <
100442...@alumnos.uc3m.es> wrote:
> Hello,
>
> I was trying to load a sparse matrix from a .MAT file (and solve the
> linear system), even though I have extracted the matrix A and the vector b
> from this suite *https://sparse.tamu.ed
On Thu, Jun 9, 2022 at 5:20 PM Jorti, Zakariae via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Hi,
>
> I am solving a non-linear problem that has 5 unknowns {ni, T, E, B, V}, and
> for the preconditioning part, I am using a FieldSplit preconditioner. At
> the last fieldsplit/level, we are left w
On Wed, Jun 8, 2022 at 11:24 AM Sami BEN ELHAJ SALAH <
sami.ben-elhaj-sa...@ensma.fr> wrote:
> Yes, the file "sami.vtu" is loaded correctly in paraview and I have the
> good output like you.
>
> In my code, I tried with the same command given in your last answer and I
> still have the wrong .vtu f
On Tue, Jun 7, 2022 at 9:51 AM wang yuqi wrote:
> Hi, Dear developer:
>
> I encountered the following problems when I run my code with PETSC-3.5.2:
>
>
>
> [46]PETSC ERROR:
>
>
> [46]PETSC ERROR: Caught signal number 11 SEGV
On Fri, Jun 3, 2022 at 9:09 AM Arne Morten Kvarving via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Hi!
>
> I have a Chorin pressure correction solver with consistent pressure
> update, i.e.
> pressure solve is based on the Schur complement
>
> E = -A10*ainv(A00)*A01
>
> with A10 = divergence,
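For reference, the operator above can be formed densely in a toy setting. A small NumPy sketch with stand-in blocks (random A00 and A01, and a zero pressure-pressure block as in a Stokes-like system; none of this is the poster's actual discretization):

```python
# Dense sketch of the pressure Schur complement E = -A10 * inv(A00) * A01
# for a block system [[A00, A01], [A10, 0]].
import numpy as np

rng = np.random.default_rng(0)
A00 = np.diag(rng.uniform(1.0, 2.0, 4))  # velocity block (diagonal, SPD here)
A01 = rng.standard_normal((4, 2))        # discrete gradient (stand-in)
A10 = A01.T                              # discrete divergence = gradient^T

E = -A10 @ np.linalg.inv(A00) @ A01

# Block elimination of the full system gives the same matrix:
# S = 0 - A10 inv(A00) A01, so S == E.
S = np.zeros((2, 2)) - A10 @ np.linalg.solve(A00, A01)
print(np.allclose(E, S))  # True
```

In practice E is never formed explicitly; it is applied matrix-free inside the pressure solve, which is why its conditioning (and the choice of A00 solver) matters.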
e being performed.
Thanks,
Matt
> The ksp_monitor out for this running (included 15 iterations) using 36 MPI
> processes and a file with the memory bandwidth information (testSpeed) are
> also attached. We can provide our C++ script if it is needed.
>
> Thanks a lot!
> Best,
>
On Thu, Jun 2, 2022 at 8:59 AM Patrick Sanan
wrote:
> Thanks, Barry and Changqing! That seems reasonable to me, so I'll make an
> MR with that change.
>
Hi Patrick,
In the MR, could you add that option to all places we internally use
Preallocator? I think we mean it for those.
Thanks,
For any configure error, you need to send configure.log
Thanks,
Matt
On Thu, Jun 2, 2022 at 5:38 AM hamid badi wrote:
> Hi,
>
> I want to compile petsc with openblas & mumps (sequential) under mingw64.
> To do so, I compiled openblas and mumps without any problem. But when it
> comes to
openMP threads or many MPI
> processes) are attached.
>
>
> Thank you!
> Best,
> Lidia
>
> On 31.05.2022 15:21, Matthew Knepley wrote:
>
> I have looked at the local logs. First, you have run problems of size 12
> and 24. As a rule of thumb, you need 10,000
> v
h/index.html>
>
>
>
> On 29 May 2022 at 18:02, Sami BEN ELHAJ SALAH <
> sami.ben-elhaj-sa...@ensma.fr> wrote:
>
> Hi Matthew,
> Thank you for this example. It seems exactly what I am looking for.
> Thank you again for your help and have a good day.
> Sami
On Tue, May 31, 2022 at 10:28 AM Ye Changqing
wrote:
> Dear developers of PETSc,
>
> I encountered a problem when using the DMStag module. The program runs
> perfectly in serial, while errors are thrown in parallel
> (using mpiexec). Some rows in Mat cannot be accessed in local p
led internally.
>>
>> Thanks,
>>
>> Matt
>>
>>
>>> Thanks,
>>> Mike
>>>
>>>
>>> I will also point out that Toby has created a nice example showing how
>>>> to create an SF for halo exchange between lo
ode currently cannot find PetscSFCreateRemoteOffsets().
>
I believe if you pass in NULL for remoteOffsets, that function will be
called internally.
Thanks,
Matt
> Thanks,
> Mike
>
> On Tue, May 24, 2022 at 8:46 PM, Matthew Knepley wrote:
>
>> I will also point out that Toby has crea
On Tue, May 31, 2022 at 1:02 AM 冯宏磊 <12132...@mail.sustech.edu.cn> wrote:
> my code is below:
> ierr = PetscViewerCreate(PETSC_COMM_WORLD,&h5);CHKERRQ(ierr);
> ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"explicit.h5",
> FILE_MODE_WRITE, &h5);CHKERRQ(ierr);
> ierr = PetscObjectSetName((PetscObject
I have looked at the local logs. First, you have run problems of size 12
and 24. As a rule of thumb, you need 10,000
variables per process in order to see good speedup.
Thanks,
Matt
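The rule of thumb above can be expressed as arithmetic: with roughly 10,000 unknowns needed per process, a problem of size n supports at most n // 10000 processes before communication dominates. A trivial sketch (the 10,000 figure is the heuristic from the message, not a hard limit):

```python
# Upper bound on the useful process count for strong scaling, per the
# ~10,000-unknowns-per-process rule of thumb.
def max_useful_processes(n_dof, dof_per_proc=10_000):
    return max(1, n_dof // dof_per_proc)

print(max_useful_processes(24))         # 1 -> sizes 12 and 24 cannot scale
print(max_useful_processes(1_000_000))  # 100
```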
On Tue, May 31, 2022 at 8:19 AM Matthew Knepley wrote:
> On Tue, May 31, 2022 at 7:39 AM Lidia wr
so threads should not change
> anything).
>
> As Matt said, it is best to start with a PETSc example that does something
> like what you want (parallel linear solve, see src/ksp/ksp/tutorials for
> examples), and then add your code to it.
> That way you get the basic infrastruct
On Mon, May 30, 2022 at 10:12 PM 冯宏磊 <12132...@mail.sustech.edu.cn> wrote:
> Hey there
> I'm a new user of PETSc. In my application, I want to read and write an
> HDF5 file in parallel, but I have only found an example of serial reading
> and writing. How can I read and write HDF5 in parallel? Can you give me a c
On Mon, May 30, 2022 at 10:12 PM Lidia wrote:
> Dear colleagues,
>
> Is there anyone here who has solved big sparse linear systems using PETSc?
>
There are lots of publications with this kind of data. Here is one recent
one: https://arxiv.org/abs/2204.01722
> We have found NO performance improveme
On Sat, May 28, 2022 at 2:19 PM Matthew Knepley wrote:
> On Sat, May 28, 2022 at 1:35 PM Sami BEN ELHAJ SALAH <
> sami.ben-elhaj-sa...@ensma.fr> wrote:
>
>> Hi Matthew,
>>
>> Thank you for your response.
>>
>> I don't have that. My DM object i
S)
> Institut Pprime - ISAE - ENSMA
> Mobile: 06.62.51.26.74
> Email: sami.ben-elhaj-sa...@ensma.fr
> www.samibenelhajsalah.com
> <https://samiben91.github.io/samibenelhajsalah/index.html>
>
>
>
> On 27 May 2022 at 20:45, Matthew Knepley wrote:
>
> On F
On Fri, May 27, 2022 at 9:42 AM Sami BEN ELHAJ SALAH <
sami.ben-elhaj-sa...@ensma.fr> wrote:
> Hello Isaac,
>
> Thank you for your reply!
>
> Let me confirm that when I use DMCreateMatrix() with the orig_dm, I get my
> jacobian_matrix. Also, I have succeeded in solving my system and my solution
> wa
I will also point out that Toby has created a nice example showing how to
create an SF for halo exchange between local vectors.
https://gitlab.com/petsc/petsc/-/merge_requests/5267
Thanks,
Matt
On Sun, May 22, 2022 at 9:47 PM Matthew Knepley wrote:
> On Sun, May 22, 2022 at 4:28
On Sun, May 22, 2022 at 4:28 PM Mike Michell wrote:
> Thanks for the reply. The diagram makes sense and is helpful for
> understanding 1D representation.
>
> However, something is still unclear. From your diagram, the number of
> roots per process seems to vary according to run arguments, such as
On Fri, May 20, 2022 at 4:45 PM Mike Michell wrote:
> Thanks for the reply.
>
> > "What I want to do is to exchange data (probably just MPI_Reduce)" which
> confuses me, because halo exchange is a point-to-point exchange and not a
> reduction. Can you clarify?
> PetscSFReduceBegin/End seems to b
you,
>
> -Alfredo
>
> On Thu, May 19, 2022 at 12:31 PM Matthew Knepley
> wrote:
>
>> On Thu, May 19, 2022 at 7:27 AM Alfredo J Duarte Gomez <
>> aduar...@utexas.edu> wrote:
>>
>>> Good afternoon PETSC users,
>>>
>>> I am looking for s
On Thu, May 19, 2022 at 7:27 AM Alfredo J Duarte Gomez
wrote:
> Good afternoon PETSC users,
>
> I am looking for some suggestions on preconditioners/solvers.
>
> Currently, I have a custom preconditioner that solves 4 independent
> systems, let's call them A,B,C, and D.
>
> A is an advective, dif
On Tue, May 17, 2022 at 6:47 PM Toby Isaac wrote:
> A leaf point is attached to a root point (in a star forest there are only
> leaves and roots), so that means that a root point would be the point that
> owns a degree of freedom and a leaf point would have a ghost value.
>
> For a "point SF" of
On Mon, May 16, 2022 at 6:48 AM Mark Adams wrote:
> You generally want to use
> https://petsc.org/main/docs/manualpages/TS/TSMonitorSet/ for
> something like this.
> TSSetPostStep is for diagnostics.
> There are differences between the two but I don't recall them.
>
Yes, I think this belongs in
On Mon, May 16, 2022 at 5:03 AM Yang Zongze wrote:
> Hi,
>
>
>
> I am solving a Linear system with LU factorization. But failed with the
> following error.
>
> Is there some suggestions on debugging this error? Thanks!
>
This appears to be inside MUMPS. I would recommend two things:
1) Get a st
ing I-node routines
> maximum iterations=50, maximum function evaluations=1
> tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
> total number of function evaluations=20
> norm schedule ALWAYS
> Jacobian is built using a DMDA local Jacobian
> problem ex10 on 2
Your subdomain solves do not appear to be producing descent whatsoever.
Possible reasons:
1) Your subdomain Jacobians are wrong (this is usually the problem)
2) You have some global coupling field for which local solves give no
descent. (For this you want nonlinear elimination I think)
Tha
On Thu, May 12, 2022 at 9:09 AM Karabelas, Elias (
elias.karabe...@uni-graz.at) wrote:
> Dear Team,
>
> I ran into some issues using Petsc with Boomeramg and FieldSplit as PC on
> the ARCHER2 cluster.
>
> These are my options for solving a Navier-Stokes-like system and it ran
> fine on other clus
On Fri, May 6, 2022 at 9:28 AM Quentin Chevalier <
quentin.cheval...@polytechnique.edu> wrote:
> Sorry for forgetting the list. Making two matrices was more of a
> precaution than a carefully thought-out strategy.
>
> It would seem the MWE as I provided it above (with a setDimensions to
> reduce calc
On Tue, May 3, 2022 at 3:28 PM Barry Smith wrote:
>
> A difficult question with no easy answers.
>
> First, do you have a restart system so you can save your state just
> before your "bad behavior" and run experiments easily at the bad point?
>
> You could try to use SLEPc to compute the fi
obust {J}acobian
lagging in {N}ewton-type methods},
year = {2013},
booktitle = {International Conference on Mathematics and Computational
Methods Applied to Nuclear Science and Engineering},
pages = {2554--2565},
petsc_uses={KSP},
}
Thanks,
Matt
> Qi
>
>
> O
On Tue, May 3, 2022 at 2:58 AM Pierre Seize wrote:
> Hi,
>
> If I may, is this what you want ?
>
> https://petsc.org/main/docs/manualpages/SNES/SNESSetLagJacobian.html
Yes, this is a good suggestion.
Also, you could implement an approximation to the Jacobian.
You could then improve it at each
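A minimal serial sketch of the lagging idea referenced above (this is not PETSc's implementation; the test problem, the lag value, and all names are arbitrary stand-ins):

```python
# Newton's method with a lagged Jacobian: the Jacobian is rebuilt only
# every `lag` iterations and reused in between, mirroring the effect of
# SNESSetLagJacobian.
import numpy as np

def newton_lagged(F, J, x0, lag=2, tol=1e-10, maxit=50):
    x = np.asarray(x0, dtype=float)
    Jmat = None
    for k in range(maxit):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        if k % lag == 0:           # refresh the Jacobian only sometimes
            Jmat = J(x)
        x = x - np.linalg.solve(Jmat, r)
    return x

# Stand-in problem: intersection of a circle and a line, root (sqrt2, sqrt2).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
x = newton_lagged(F, J, [1.0, 2.0], lag=3)
print(x)  # close to (sqrt(2), sqrt(2))
```

The trade-off is fewer Jacobian assemblies (and factorizations) against more, cheaper iterations; lagging drops the convergence rate from quadratic toward linear.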
to initialize all variables to 0. Uninitialized variables can
be NaN. That is the first place I would look. You
can usually find that with compiler warnings.
Thanks,
Matt
> Interrogating the optimized version of the code now…
>
>
> On Mon, May 2, 2022 at 11:11 AM Matthew Knepley
2.html
>
That turned out to be a bug in their code.
Thanks,
Matt
> On Mon, May 2, 2022 at 7:56 AM Barry Smith wrote:
>
>>
>>
>> On May 2, 2022, at 8:12 AM, Matthew Knepley wrote:
>>
>> On Mon, May 2, 2022 at 12:23 AM Ramakrishnan Thirumalaisamy &l
On Mon, May 2, 2022 at 12:23 PM Matteo Semplice <
matteo.sempl...@uninsubria.it> wrote:
> Thanks!
>
> On 02/05/2022 18:07, Matthew Knepley wrote:
>
> On Mon, May 2, 2022 at 11:25 AM Matteo Semplice <
> matteo.sempl...@uninsubria.it> wrote:
>
>> Hi.
>>
On Mon, May 2, 2022 at 11:25 AM Matteo Semplice <
matteo.sempl...@uninsubria.it> wrote:
> Hi.
>
> I know that when I create a DMDA I can select periodic b.c. per grid
> direction.
>
> I am facing a PDE with 2 dofs per node in which one dof has periodic
> b.c. in the x direction and the other one p
On Mon, May 2, 2022 at 12:23 AM Ramakrishnan Thirumalaisamy <
rthirumalaisam1...@sdsu.edu> wrote:
> Thank you. I have a couple of questions. I am solving the low Mach
> Navier-Stokes system using a projection preconditioner (pc_shell type) with
> GMRES being the outer solver and Richardson being t
,
we would have to be careful that nothing looked directly at the data.
2) We could reverse the points storage in-place. This is a little more
intrusive, but everything would work seamlessly.
It would take more work to do this in parallel, but not all that
much.
Thanks,
On Fri, Apr 29, 2022 at 8:27 AM Mike Michell wrote:
>
> Thanks for the answers and I agree. Creating the dual mesh when the code
> starts and using that dm will be the easiest way.
> But it is confusing how to achieve that. The entire DAG of the original
> mesh should change, except for vertices. How
to dm object. Basically,
> my vector objects (x, y, vel) are not seen from dm viewer & relevant output
> file. Do you have any recommendations?
>
> Thanks,
> Mike
>
> On Tue, Apr 26, 2022 at 6:33 PM, Matthew Knepley wrote:
>
>> On Tue, Apr 26, 2022 at 7:27 PM Mike Michell
lv);
will give you a Vec with the local data in it that can be addressed by mesh
point (global vectors too). Also you would be able
to communicate this data if the mesh is redistributed, or replicate this
data if you overlap cells in parallel.
Thanks,
Matt
> Thanks,
> Mike
>
>
ver, the file is okay if I print to "sol.vtu".
> From "sol.vtu" I can see the entire field with rank. Is using .vtu format
> preferred by petsc?
>
VTK is generally for debugging, but it should work. I will take a look.
VTU and HDF5 are the preferred formats.
Thanks,
parallel, since those checks
will only work in serial.
I have fixed the code, and added a parallel test. I have attached the new
file, but it is also in this MR:
https://gitlab.com/petsc/petsc/-/merge_requests/5173
Thanks,
Matt
> Thanks,
> Mike
>
> On Tue, Apr 26, 2022, AM
On Tue, Apr 26, 2022 at 8:52 AM Kirill Volyanskiy
wrote:
> Hello,
> There is no VTK format part in the code of the VecView
> (src/vec/vec/interface/vector.c) function. Therefore TSMonitorSolutionVTK
> doesn't work either.
>
Yes, VTK requires a mesh, so viewing a generic vector will not produce a
On Mon, Apr 25, 2022 at 9:41 PM Mike Michell wrote:
> Dear PETSc developer team,
>
> I'm trying to learn DMPlex to build a parallel finite volume code in 2D &
> 3D. More specifically, I want to read a grid from .msh file by Gmsh.
> For practice, I modified /dm/impls/plex/ex1f90.F90 case to read &
On Sun, Apr 24, 2022 at 7:36 AM Flavio Riche wrote:
> Hi,
>
> I am new to petsc. When I configure PETSC with
>
> ./configure complex --with-scalar-type=complex --with-cc=gcc
> --with-cxx=g++ --with-fc=gfortran --download-scalapack --download-mumps
> --download-fftw --download-mpich --download-fbl
On Wed, Apr 20, 2022 at 5:13 PM Phlipot, Greg
wrote:
> Hello,
>
> When using TS with the option TS_EXACT_FINALTIME_MATCHSTEP to force TS
> to stop at the final time, I'm seeing the adaptive step controller
> choose smaller time steps than the minimum time step that is set with
> TSAdaptGetStepLim
How did any Fortran tests compile in the CI?
Thanks,
Matt
On Mon, Apr 18, 2022 at 4:09 PM Satish Balay via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> It's deprecated - but the removal in the Fortran interface was not
> intentional. So it's added in
> https://gitlab.com/petsc/petsc/-/commi
e it is functioning. I think no one
bothered to really check it out.
Matt
> For example
>
> $ brew install valgrind
> valgrind: Linux is required for this software.
> Error: valgrind: An unsatisfied requirement failed this build.
>
>
> On Apr 17, 2022, at 9:25 PM,
On Sun, Apr 17, 2022 at 2:59 PM Sanjay Govindjee wrote:
> Codesigning is not the issue. My gdb is properly codesigned (here are my
> synopsized instructions based off the page you referenced but without the
> extraneous details http://feap.berkeley.edu/wiki/index.php?title=GDB).
>
> I think thi
On Fri, Apr 15, 2022 at 7:07 PM Jennifer Ellen Fromm
wrote:
> Thank you for your reply, with petsc4py the only error message I get is:
>
> Traceback (most recent call last):
> File "../../../exhume-fenics-prototype/demos/exhume_poisson.py", line
> 228, in
> solveKSP(dR_b,R_b,u_p, method=LI
ains
>> -pc_gasm_overlap 4
>> Inner subdomain:
>> 0 1 2 3 4
>> Outer subdomain:
>> 0 1 2 3 4 5 6 7 8
>> Inner subdomain:
>> 5 6 7 8
>> Outer subdomain:
>> 0 1 2 3 4 5 6 7 8
>>
>> Thanks,
>> Pierre
>>
>> Thank you very m
nerate an
>> overlap algebraically which is equivalent to the overlap you would have
>> gotten geometrically.
>> If you know that “geometric” overlap (or want to use a custom definition
>> of overlap), you could use
>> https://petsc.org/release/docs/manualpages/PC/PCA
On Wed, Apr 13, 2022 at 9:11 AM Mark Adams wrote:
>
>
> On Wed, Apr 13, 2022 at 8:56 AM Matthew Knepley wrote:
>
>> On Wed, Apr 13, 2022 at 6:42 AM Mark Adams wrote:
>>
>>> No, without overlap you have, let say:
>>> core 1: 1:32, 1:32
>>> co
On Wed, Apr 13, 2022 at 6:42 AM Mark Adams wrote:
> No, without overlap you have, let say:
> core 1: 1:32, 1:32
> core 2: 33:64, 33:64
>
> Overlap will increase the size of each domain so you get:
> core 1: 1:33, 1:33
> core 2: 32:65, 32:65
>
I do not think this is correct. Here is the
l.
>
Even here you do not get edge-nested meshes.
Matt
> Best regards,
> Ce
>
>
>
> On Tue, Apr 12, 2022 at 18:47, Matthew Knepley wrote:
>
>> On Tue, Apr 12, 2022 at 2:10 AM Ce Qin wrote:
>>
>>> Thanks for your reply, Matthew.
>>>
>>> One more
, for custom things, turning off the automatic stuff might be the best
option.
Thanks,
Matt
> Thanks, best, Berend.
>
>
>
> On 4/12/22 12:49, Matthew Knepley wrote:
> > On Tue, Apr 12, 2022 at 2:50 AM Berend van Wachem
> > mailto:berend.vanwac...@ovgu.de>&
bute_overlap - The size of the overlap halo
from https://petsc.org/main/docs/manualpages/DM/DMSetFromOptions.html
Thanks,
Matt
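As a sketch, that option is typically passed on the command line at launch (here `./ex1` stands in for any DMPlex-based executable that calls DMSetFromOptions() before distribution; the overlap width of 1 is just an example):

```shell
# Distribute a DMPlex over 4 ranks with a one-cell overlap halo.
mpiexec -n 4 ./ex1 -dm_distribute -dm_distribute_overlap 1
```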
> Many thanks, best regards,
>
> Berend.
>
>
>
>
> On 4/11/22 16:23, Matthew Knepley wrote:
> > On Wed, Apr 6, 2022 at 9:41 AM Bere
On Mon, Apr 11, 2022 at 2:33 PM Jean Marques
wrote:
> Thank you very much for your inputs.
>
> Matthew, this LS is a part of a rSVD algorithm (Halko et al, SIAM Review,
> 2009), hence I need to compute direct and adjoints system solutions.
>
The reason I asked was to understand whether direct so
out you have.
Also, the call to DMPlexDistribute() here (and the Partitioner calls) are
now superfluous.
Thanks,
Matt
> Many thanks for looking into this, best regards,
> Berend.
>
>
>
> On 4/4/22 23:05, Matthew Knepley wrote:
> > On Mon, Apr 4, 2022 at 3:36
On Fri, Apr 1, 2022 at 10:14 AM Ce Qin wrote:
> Dear all,
>
> I want to implement the adaptive finite element method using the DMPlex
> interface. So I would like to know whether DMPlex supports local (and
> hierarchical) refinement of tetrahedral elements. I found that there is an
> adaptation
On Sat, Apr 9, 2022 at 7:41 PM Jean Marques
wrote:
> Hi all,
>
> This may be a naive question, and I hope this is the right place to ask
> about it.
> I need to solve a direct linear system with a sparse matrix R, then an
> adjoint system the hermitian of R.
>
> I use a petsc4py, so what I do is
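In a dense NumPy sketch the direct/adjoint pair looks like this (R is a random complex stand-in for the sparse matrix in the question; with a factored sparse operator the same factorization serves both solves, but the algebra is identical):

```python
# Solve the direct system R x = b, then the adjoint system R^H y = c
# with the Hermitian transpose of the same matrix.
import numpy as np

rng = np.random.default_rng(1)
n = 5
R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal(n) + 0j
c = rng.standard_normal(n) + 0j

x = np.linalg.solve(R, b)            # direct solve
y = np.linalg.solve(R.conj().T, c)   # adjoint (Hermitian-transpose) solve

print(np.allclose(R @ x, b), np.allclose(R.conj().T @ y, c))  # True True
```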
On Fri, Apr 8, 2022 at 12:57 PM Aleksandra Grudskaia <
agru...@mpa-garching.mpg.de> wrote:
> Dear PETSC team,
>
> Sometimes I get the error
>
> Internal error 2 in QAMD : Schur size expected: 0 Real: 1
>
This is an internal error in MUMPS, so we cannot control it. I would
submit this to the MUMP
On Thu, Apr 7, 2022 at 8:16 AM 高亚贺 via petsc-users
wrote:
> Dear Mr./Ms.,
>
>
> I have used ‘DMCreateMatrix’ to create a matrix K, and also
> ‘DMCreateGlobalVector’ to create two vectors U (to be solved) and F
> (right-hand side), i.e. KU = F. Now, I want to add some complex constr
On Thu, Apr 7, 2022 at 6:12 AM Gabriela Nečasová wrote:
> Dear PETSc team,
>
> I would like to ask you a question about the matrix preallocation.
> I am using the routine MatMPIAIJSetPreallocation().
>
> Example: The matrix A has the size 18 x 18 with 168 nonzeros:
> A =
> 106.21 -91.667
On Mon, Apr 4, 2022 at 3:36 PM Berend van Wachem
wrote:
> Dear Petsc team,
>
> Since about 2 years we have been using Petsc with DMPlex, but since
> upgrading our code to Petsc-3.17.0 something has broken.
>
> First we generate a DM from a DMPlex with DMPlexCreateFromFile or
> creating one with
On Sat, Apr 2, 2022 at 8:59 PM Bhargav Subramanya <
bhargav.subrama...@kaust.edu.sa> wrote:
> Dear All,
>
> I am trying to solve Ax = b in parallel using MUMPS, where x is composed
> of velocity, pressure, and temperature. There is a null space due to the
> homogeneous Neumann pressure boundary co
you consider it in the first way it makes
> sense that it would be nxn.
>
The idea here is that the internal structure of P does not matter. It has
the same interface as the matrix A, so from your point of view they are
identical.
Thanks,
Matt
> On Fri, Apr 1, 2022 at 12:00 PM Ma
lues again
> a second time to actually set the values of the parallel Mat you actually
> use to solve the system?
>
Yes.
Thanks,
Matt
> On Fri, Apr 1, 2022 at 11:50 AM Matthew Knepley wrote:
>
>> On Fri, Apr 1, 2022 at 12:45 PM Samuel Estes
>> wrote:
>>
< rEnd. So if you know (r, c) for each
nonzero, you know whether it is in the diagonal block.
Thanks,
Matt
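The rule stated above can be sketched as a preallocation counter. This assumes a square matrix where the diagonal block is the owned column range, matching MatMPIAIJ's convention; the ownership range and the nonzero list below are invented for illustration:

```python
# Classify nonzeros (r, c) of locally owned rows into the diagonal block
# (rStart <= c < rEnd) or the off-diagonal block, producing the per-row
# counts that MatMPIAIJSetPreallocation takes as d_nnz and o_nnz.
rStart, rEnd = 4, 8                       # this rank owns rows [4, 8)
nonzeros = [(4, 0), (4, 5), (5, 5), (5, 9), (7, 4), (7, 7), (7, 11)]

d_nnz = [0] * (rEnd - rStart)             # diagonal-block counts per row
o_nnz = [0] * (rEnd - rStart)             # off-diagonal-block counts
for r, c in nonzeros:
    if rStart <= c < rEnd:
        d_nnz[r - rStart] += 1
    else:
        o_nnz[r - rStart] += 1

print(d_nnz, o_nnz)  # [1, 1, 0, 2] [1, 1, 0, 1]
```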
> On Fri, Apr 1, 2022 at 11:34 AM Matthew Knepley wrote:
>
>> On Fri, Apr 1, 2022 at 12:27 PM Samuel Estes
>> wrote:
>>
>>> Hi,
>>>
>
On Fri, Apr 1, 2022 at 12:27 PM Samuel Estes
wrote:
> Hi,
>
> I have a problem in which I know (roughly) the number of non-zero entries
> for each row of a matrix but I don't have a convenient way of determining
> whether they belong to the diagonal or off-diagonal part of the parallel
> matrix.
> std::cerr << vecSize << '\t' << ierr << '\n';
> local_vec = PETSC_NULL;
> }
>
> which should set local_vec to PETSC_NULL as soon as it is no longer
> in use.
>
You must be
/home/roland/Downloads/git-files/petsc/src/vec/vec/interface/rvector.c:1780
>
> I do not understand why it tries to access the vector, even though it has
> been set to PETSC_NULL in the previous free-call.
>
> What code is setting that pointer to NULL?
Thanks,
Matt
> Regar
On Thu, Mar 31, 2022 at 10:11 AM Medane TCHAKOROM <
medane.tchako...@univ-fcomte.fr> wrote:
> Hello,
>
> I got one issue with MatMult method that I do not understand.
>
> Whenever I multiply Matrix A by vector b (as shown below), the printed
> result
>
> show a value with an exponent that is far a
fferent PETSC_ARCH configures, and switch at runtime with that variable.
Thanks,
Matt
> Regards,
> Roland Richter
>
> Am 31.03.22 um 15:35 schrieb Matthew Knepley:
>
> On Thu, Mar 31, 2022 at 9:01 AM Roland Richter
> wrote:
>
>> Hei,
>>
>> Thank
anks,
Matt
> Regards,
>
> Roland Richter
> Am 31.03.22 um 12:14 schrieb Matthew Knepley:
>
> On Thu, Mar 31, 2022 at 5:58 AM Roland Richter
> wrote:
>
>> Hei,
>>
>> For a project I wanted to combine boost::odeint for timestepping and
>> PETSc-based
On Thu, Mar 31, 2022 at 5:58 AM Roland Richter
wrote:
> Hei,
>
> For a project I wanted to combine boost::odeint for timestepping and
> PETSc-based vectors and matrices for calculating the right hand side. As
> comparison for both timing and correctness I set up an armadillo-based
> right hand si
On Wed, Mar 23, 2022 at 11:09 AM Joauma Marichal <
joauma.maric...@uclouvain.be> wrote:
> Hello,
>
> I sent an email last week about an issue I had with DMSwarm but did not
> get an answer yet. If there is any other information needed or anything I
> could try to solve it, I would be happy to do t
> On Tue, Mar 22, 2022 at 1:21 PM Matthew Knepley wrote:
>
>> On Tue, Mar 22, 2022 at 4:16 PM Sam Guo wrote:
>>
>>> Here is one memory comparison (memory in MB)
>>>        np=1  np=2  np=4  np=8  np=16
>>> shell  1614  1720  1874  1673  1248
>>> PETSc(using full
On Tue, Mar 22, 2022 at 4:16 PM Sam Guo wrote:
> Here is one memory comparison (memory in MB):
>                                  np=1  np=2  np=4  np=8  np=16
> shell                            1614  1720  1874  1673  1248
> PETSc (using full matrix)        2108  2260  2364  2215  1734
> PETSc (using symmetric matrix)   1750  2100  2189  2094  1727
> Those are the total water mark memo
>
>
>
> Thanks again!
>
Great! I am happy everything is working.
Matt
> Marco Cisternino
>
>
>
>
>
> From: Matthew Knepley
> Sent: Tuesday, March 22, 2022 15:22
> To: Marco Cisternino
> Cc: Barry Smith ; petsc-users@mcs.anl.gov
> Subject: Re
On Tue, Mar 22, 2022 at 9:55 AM Marco Cisternino <
marco.cistern...@optimad.it> wrote:
> Thank you Barry!
> No, no reason for FGMRES (some old tests showed shorter wall-times
> relative to GMRES), I’m going to use GMRES.
> I tried GMRES with GAMG using PCSVD on the coarser level on real cases,
> l
On Mon, Mar 21, 2022 at 11:22 AM Ferrand, Jesus A.
wrote:
> Greetings.
>
> I am having trouble exporting a vertex-based solution field to ParaView
> when I run my PETSc script in parallel (see screenshots). The smoothly
> changing field is produced by my serial runs whereas the "messed up" one is
ull space components can be introduced by the
rest of the preconditioner, but when I use range-space smoothers and
local interpolation it tends to be much better for me. Maybe it is just my
problems.
Thanks,
Matt
> Thank you all.
>
>
>
> Marco Cisternino
>
>
>
>
On Mon, Mar 21, 2022 at 12:06 PM Mark Adams wrote:
> The solution for Neumann problems can "float away" if the constant is not
> controlled in some way because floating point errors can introduce it even
> if your RHS is exactly orthogonal to it.
>
> You should use a special coarse grid solver fo
id that you can add to for off-processor values and then
> you could use the CPU communication in DM.
>
>
> It would be GPU communication, not CPU.
>
>Matt
>
>
> On Thu, Mar 17, 2022 at 7:19 PM Matthew Knepley wrote:
>
> On Thu, Mar 17, 2022 at 4:46 PM Sajid
unication in DM.
>
It would be GPU communication, not CPU.
Matt
> On Thu, Mar 17, 2022 at 7:19 PM Matthew Knepley wrote:
>
>> On Thu, Mar 17, 2022 at 4:46 PM Sajid Ali Syed wrote:
>>
>>> Hi PETSc-developers,
>>>
>>> Is it possible to use VecSetVa
On Thu, Mar 17, 2022 at 4:46 PM Sajid Ali Syed wrote:
> Hi PETSc-developers,
>
> Is it possible to use VecSetValues with distributed-memory CUDA & Kokkos
> vectors from the device, i.e. can I call VecSetValues with GPU memory
> pointers and expect PETSc to figure out how to stash on the device it
ou would install it anywhere else. Then install PETSc in the
container.
I have done that for another project and got it to work.
Thanks,
Matt
> Cheers,
>
>
>
> Ernesto.
>
>
>
> From: Matthew Knepley
> Sent: Wednesday, March 16, 2022 5:45 AM
> To:
On Wed, Mar 16, 2022 at 1:04 AM Ernesto Prudencio via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Hi.
>
>
>
> I have an application that uses MKL for some convolution operations. Such
> MKL functionality uses, I suppose, BLAS/LAPACK underneath.
>
>
>
> This same application of mine also uses P