Mark,
It may be best to try jumping to the latest PETSc 3.12. ParMETIS had some
difficult issues with matrices we started providing to it in the last year, and
the code that handles those problems may not be in 3.11.
If the problem persists in 3.12 then I would start with checking with
v
il update function,
> assuming the result will be passed into the matrix operation automatically?
>
> You update the information in the context associated with the shell matrix.
> No need to destroy it.
>
> Thanks,
>
> Matt
>
> Thanks,
> Yuyun
>
side global variables!)
>
> After I create such a shell matrix, can I use it like a regular matrix in KSP
> and utilize preconditioners?
>
> Thanks!
> Yuyun
> From: petsc-users on behalf of Yuyun Yang
>
> Sent: Sunday, February 16, 2020 3:12 AM
> To: Smith, Ba
Yuyun,
If you are speaking about using a finite difference stencil on a structured
grid where you provide the Jacobian vector products yourself by looping over
the grid doing the stencil operation, we unfortunately do not have exactly that
kind of example.
But it is actually not diff
Richard,
It is likely that for these problems some of the integers become too large
for the int variable to hold them, thus they overflow and become negative.
You should make a new PETSC_ARCH configuration of PETSc that uses the
configure option --with-64-bit-indices, this will c
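For example, the reconfigure step might look like this (the arch name is arbitrary; add back whatever other configure options you normally use):

```shell
# Build a separate PETSc configuration with 64-bit integer indices;
# "arch-64idx" is just an example PETSC_ARCH name.
cd $PETSC_DIR
./configure PETSC_ARCH=arch-64idx --with-64-bit-indices
make PETSC_ARCH=arch-64idx all
```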
Given the 2040, either you or MUMPS is running out of communicators. Do you
use your own communicators in your code, and are you freeing them when you don't
need them?
If it is not your code then it is MUMPS that is running out, and you should
contact them directly
RECURSIVE SUBROU
Note that you can add -snes_fd_operator and get Newton's method with a
preconditioner built from the Picard matrix.
Barry
> On Feb 10, 2020, at 11:16 AM, Jed Brown wrote:
>
> Olek Niewiarowski writes:
>
>> Barry,
>> Thank you for your help and detailed suggestions. I will try to impl
ion methods. But I will probably need to look through a number of
> literatures before laying my hands on those (or bother you with more
> questions!). Anyway, thanks again for your kind help.
>
>
> All the best,
> Hao
>
>> On Feb 8, 2020, at 8:02 AM, Smith, Barr
.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecScatterBegin 84 1.0 5.2800e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> KSPSetUp 4 1.0 1.4765e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
> KSPSolve 1 1.0 1.8514e+00 1.0 4.31e+09 1.0 0.0e+00 0.0e+00
> 0.0e+00 85
4 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecNormalize 75 1.0 1.8462e-01 1.0 2.51e+08 1.0 0.0e+00 0.0e+00
> 0.0e+00 3 3 0 0 0 3 3 0 0 0 1360
> KSPSetUp 4 1.0 1.1341e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.
anks,
>
> Alexander (Olek) Niewiarowski
> PhD Candidate, Civil & Environmental Engineering
> Princeton University, 2020
> Cell: +1 (610) 393-2978
> From: Matthew Knepley
> Sent: Thursday, February 6, 2020 5:33
> To: Olek Niewiarowski
> Cc: Smith, Barry F. ; pets
pport unpreconditioned in
> LEFT/RIGHT (either way). Is it possible to do that (output unpreconditioned
> residual) in PETSc at all?
-ksp_monitor_true_residual
You can also run GMRES (and some other
methods) with right preconditioning, -ksp_pc_side right, then the residual
computed is by the a
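For example (./app is a placeholder for your executable):

```shell
# Print the true (unpreconditioned) residual at each iteration:
./app -ksp_monitor_true_residual
# Or use right preconditioning, where the residual GMRES computes is
# already the unpreconditioned one:
./app -ksp_type gmres -ksp_pc_side right -ksp_monitor
```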
lp you do this.
>
> Thanks,
>
> Matt
>
> On Wed, Feb 5, 2020 at 1:36 AM Smith, Barry F. via petsc-users
> wrote:
>
> I am not sure of everything in your email but it sounds like you want to
> use a "Picard" iteration to solve [K(u) − k a a^T] Δu = −F(u).
o additional memory?
-ksp_type gmres or bcgs -pc_type jacobi (the sor won't work because the
zero diagonals) It will not be good preconditioner. Are you sure you don't
have additional memory for the preconditioner? A good preconditioner might
require up to 5 to 6 the
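Concretely, the suggested low-memory options would be passed as (./app is a placeholder for your executable):

```shell
# Jacobi preconditioning avoids SOR's problem with zero diagonals:
./app -ksp_type gmres -pc_type jacobi
# or
./app -ksp_type bcgs -pc_type jacobi
```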
I am not sure of everything in your email but it sounds like you want to use
a "Picard" iteration to solve [K(u) − k a a^T] Δu = −F(u). That is, solve
A(u^{n}) (u^{n+1} - u^{n}) = F(u^{n}) - A(u^{n})u^{n} where A(u) = K(u) -
k a a^T
PETSc provides code to this with SNESSetPicard() (see the manual p
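A minimal sketch of that setup; FormFunction and FormPicardMatrix are hypothetical user callbacks computing F(u) and A(u) = K(u) - k a a^T (the SNESSetPicard() argument order is from the current manual page):

```c
#include <petscsnes.h>

/* Hypothetical user callbacks: compute F(u) and A(u) = K(u) - k a a^T. */
extern PetscErrorCode FormFunction(SNES, Vec, Vec, void *);
extern PetscErrorCode FormPicardMatrix(SNES, Vec, Mat, Mat, void *);

PetscErrorCode SetupPicard(SNES snes, Vec r, Mat A, void *ctx)
{
  PetscErrorCode ierr;
  /* SNES then iterates A(u^n)(u^{n+1} - u^n) = F(u^n) - A(u^n) u^n */
  ierr = SNESSetPicard(snes, r, FormFunction, A, A, FormPicardMatrix, ctx);CHKERRQ(ierr);
  ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);
  return 0;
}
```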
I think this is a Python-Matlab question, not specifically related to PETSc
in any way. Googling python matrix hdf5 matlab there are mentions of h5py
library that can be used to write out sparse matrices in Matlab HDF5 format.
Which could presumably be read by PETSc. PETSc can also read in th
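As a sketch with h5py (the dataset names "data", "indices", "indptr", "shape" are my own choice, not a layout PETSc or MATLAB mandates; adjust them to whatever your reader expects):

```python
# Write a SciPy CSR sparse matrix to an HDF5 file and read it back.
import numpy as np
import h5py
from scipy.sparse import csr_matrix

A = csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))

with h5py.File("matrix.h5", "w") as f:
    f.create_dataset("data", data=A.data)       # nonzero values
    f.create_dataset("indices", data=A.indices) # column indices
    f.create_dataset("indptr", data=A.indptr)   # row pointers
    f.create_dataset("shape", data=np.array(A.shape))

# Round-trip check: rebuild the matrix from the file.
with h5py.File("matrix.h5", "r") as f:
    B = csr_matrix((f["data"][:], f["indices"][:], f["indptr"][:]),
                   shape=tuple(f["shape"][:]))
assert (A != B).nnz == 0
```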
> On Feb 4, 2020, at 12:41 PM, Hao DONG wrote:
>
> Dear all,
>
>
> I have a few questions about the implementation of diagonal ILU PC in PETSc.
> I want to solve a very simple system with KSP (in parallel), the nature of
> the system (finite difference time-harmonic Maxwell) is probably no
and it form logs, it is required 459 MB and 52 MB for matrix and
> vector storage respectively. After summing of all objects for which memory is
> allocated we get only 517 MB.
>
> Thank you again for your time! Have a nice day.
>
> Kind regards,
> Dmitry
>
>
GMRES can also, by default, require about 35 work vectors if it reaches the
full restart. You can set a smaller restart with -ksp_gmres_restart 15, for
example, but this can also hurt the convergence of GMRES dramatically. People
sometimes use the KSPBCGS algorithm since it does not require all
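For example:

```shell
# Shorter restart => fewer work vectors, but possibly slower convergence:
./app -ksp_type gmres -ksp_gmres_restart 15
# Or BiCGStab, which keeps only a handful of work vectors:
./app -ksp_type bcgs
```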
You might find this option useful.
--with-packages-download-dir=
Skip network download of package tarballs and locate them in specified
dir. If not found in dir, print package URL - so it can be obtained manually.
This generates a list of URLs to download so you don't need to
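A sketch of the offline workflow (the path and package choice are examples):

```shell
# First run prints the URLs of any tarballs missing from the directory:
./configure --download-mumps --with-packages-download-dir=/tmp/tarballs
# Fetch the printed URLs into /tmp/tarballs on a connected machine,
# then rerun the same ./configure command.
```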
https://gitlab.com/petsc/petsc/-/merge_requests/2494
This will only turn off the hypre batch build if it is a KNL system. It will be
added to the maint branch.
Barry
> On Jan 31, 2020, at 11:58 AM, Tomas Mondragon
> wrote:
>
> Hypre problem resolved. PETSc commit 05f86fb made in August 05, 201
MatGetSubMatrix() and then do the product on the sub matrix then VecSum
Barry
> On Jan 30, 2020, at 3:02 PM, Jeremy Theler wrote:
>
> Sorry if this is basic, but I cannot figure out how to do it in
> parallel and I'd rather not say how I do it in single-processor mode
> because I would
As Jed would say
--with-lgrind=0
> On Jan 30, 2020, at 2:49 PM, Fande Kong wrote:
>
>
> Hi All,
>
> It looks like a bug for me.
>
> PETSc was still trying to detect lgrind even though we set "--with-lgrind=0". The
> configuration log is attached. Any way to disable lgrind detection?
>
> Thanks
igure solved
> my problem.
> So I attached the associated log files named as
> configure_openblas_64-bit-indices.log and test_openblas_64-bit-indices.log
>
>
> All operations were performed with barry/2020-01-15/support-default-integer-8
> version of PETSc.
>
>
> Kin
ory"?
>
> Barry
>
>
> >
> > Thanks,
> > Sam
> >
> > On Mon, Jan 20, 2020 at 4:06 PM Smith, Barry F. wrote:
> >
> > Sam,
> >
> > I am not sure what your goal is but PETSc error return codes are error
> > r
because of the unknown error it could be that the releasing
of the memory causes a real crash.
Is your main concern the case when you use PETSc for a large problem and it
errors because it is "out of memory"?
Barry
>
> Thanks,
> Sam
>
> On Mon, Jan 20, 2020 at 4:
nload package OPENBLAS from:
>>> git://https://github.com/xianyi/OpenBLAS.git
>>> * If URL specified manually - perhaps there is a typo?
>>> * If your network is disconnected - please reconnect and rerun ./configure
>>> * Or perhaps you have a firewall blocking the download
and use the configure option:
> --download-openblas=/yourselectedlocation
> Could not locate downloaded package OPENBLAS in
> /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages
>
> But I checked the last location (.../externalpac
Sam,
I am not sure what your goal is, but PETSc error return codes are error
return codes, not exceptions. They mean that something catastrophic happened and
there is no recovery.
Note that PETSc solvers do not return nonzero error codes on failure to
converge etc. You call, for exam
aborting MPI_COMM_WORLD (comm=0x4400), error 50152059, comm rank 0
>
> error analysis -
>
> [0] on DESKTOP-R88IMOB
> ./ex5f aborted the job. abort code 50152059
>
> error analysis -
> Completed test examples
>
> Kind regards,
> Dmitry Melnichuk
Dmitry,
I have completed and tested the branch
barry/2020-01-15/support-default-integer-8 it is undergoing testing now
https://gitlab.com/petsc/petsc/merge_requests/2456
Please give it a try. Note that MPI has no support for integer promotion so
YOU must ensure that any MPI calls
) for functions */
> if (useFerr) {
> - OutputFortranToken( fout, 7, "integer" );
> + OutputFortranToken( fout, 7, "PetscErrorCode" );
> OutputFortranToken( fout, 1, errArgNameParm);
> } else if (is_function) {
> OutputFortranToken( fout, 7, ArgTo
That is superlu_dist and hypre.
Yes, but both backends are rather primitive and will be a bit of a struggle to
use.
For superlu_dist you need to get the branch
barry/fix-superlu_dist-py-for-gpus and rebase it against master
I only recommend trying them if you are adventuresome. Not
Are you increasing your problem size with the number of ranks, or is it the
same size problem?
It could also be out of memory issues.
No error message is printed, which is not standard. It should first print a
message explaining why it failed.
Are you sure all the libraries were rebuilt?
R
> On Wed, Jan 15, 2020 at 18:56, Matthew Knepley wrote:
> I think that Mark is suggesting that no command line arguments are getting in.
>
> Timothee,
>
> Can you use any command line arguments?
>
> Thanks,
>
> Matt
>
> On Wed, Jan 15
Working on it now; may be doable
> On Jan 15, 2020, at 11:55 AM, Matthew Knepley wrote:
>
> On Wed, Jan 15, 2020 at 10:26 AM Дмитрий Мельничук
> wrote:
> > And I'm not sure why you are having to use PetscInt for ierr. All PETSc
> > routines should be using 'PetscErrorCode' for ierr.
>
>
Should still work. Run in the debugger and put a break point in
snessetoptionsprefix_ and see what it is trying to do
Barry
> On Jan 15, 2020, at 8:58 AM, Timothée Nicolas
> wrote:
>
> Hi, thanks for your answer,
>
> I'm using Petsc version 3.10.4
>
> Timothée
>
> Le mer. 15 janv. 20
Works for me with PETSc 3.12; what version of PETSc are you using?
program main
#include
use petsc
implicit none
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
PetscErrorCode ierr
SNES snes1
call PetscInitialize(PETSC_NULL_C
ges in parmetis between the two PETSc releases are these below,
> but I don’t see how they could cause issues
>
> kl-18448:pkg-parmetis szampini$ git log -2
> commit ab4fedc6db1f2e3b506be136e3710fcf89ce16ea (HEAD -> master, tag:
> v4.0.3-p5, origin/master, origin/dalcinl/rando
Yes, with, for example, MATMPIAIJ, the matrix entries are distributed among
the processes; first verify that you are using a MPI matrix, not Seq, since Seq
will keep an entire copy on each process.
But the parallel matrices do come with some overhead for metadata. So for
small matrices li
9 2019 +0300
GKlib: Use gk_randint32() to define the RandomInRange() macro
On Jan 9, 2020, at 4:31 AM, Smith, Barry F. via petsc-users
wrote:
This is extremely worrisome:
==23361== Use of uninitialised value of size 8
==23361==at 0x847E939: gk_randint64 (random.c:99)
Since PETSc does not use that format, there of course has to be a time when
you have duplicate memory.
Barry
> On Jan 9, 2020, at 12:47 PM, Sam Guo wrote:
>
> Dear PETSc dev team,
> Suppose I have the matrix already in triplet format: int[] I, int[] J,
> double[] A, Is possi
https://www.mcs.anl.gov/petsc/documentation/changes/38.html
> On Jan 8, 2020, at 9:22 PM, TAY wee-beng wrote:
>
> Hi,
>
> After upgrading to the newer ver of PETSc 3.8.3, I got these error during
> compile in VS2008 with Intel Fortran:
>
> call PCMGSetLevels(pc,mg_lvl,PETSC_NULL_OBJECT,ierr)
This is extremely worrisome:
==23361== Use of uninitialised value of size 8
==23361==at 0x847E939: gk_randint64 (random.c:99)
==23361==by 0x847EF88: gk_randint32 (random.c:128)
==23361==by 0x81EBF0B: libparmetis__Match_Global (in
/space/hpc-home/trianas/petsc-3.12.3/arch-linux2-c-
Try the debugger.
> On Jan 8, 2020, at 4:01 PM, Anthony Paul Haas wrote:
>
> Hello,
>
> I am using Petsc 3.7.6.0. with Fortran code and I am getting a segmentation
> violation for the following line:
>
> call
> PetscOptionsGetBool(PETSC_NULL_CHARACTER,"-use_mumps_lu",flg_mumps_lu,flg,se
Yeah, this is an annoying feature of DMDA and PCMG in PETSc. Some coarse grid
ranges and particular parallel layouts won't work with geometric multigrid. You
are using 314 on the coarse and 628 on the fine grid. Try changing them by
1 and start with one process.
Barry
> On Jan 8
> On Jan 7, 2020, at 8:59 AM, Mark Adams wrote:
>
> I’m not sure what the compilers, and C++ are doing here
>
> On Tue, Jan 7, 2020 at 9:17 AM Кудров Илья wrote:
> However, after configuring
>
> cout<<1. + 1.*PETSC_i<
> outputs (1, 0) instead of (1, 1).
Where after configure? PETSC_
Do you reset the initial timestep? Otherwise the second solve thinks it is
at the end. Also you may need to reset the iteration number
Something like
ierr = TSSetTime(appctx->ts, 0);CHKERRQ(ierr);
ierr = TSSetStepNumber(appctx->ts, 0);CHKERRQ(ierr);
ierr = TSSetTimeStep(appctx->ts,
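Completed, the reset might look like this (dt0 is a hypothetical variable holding the original initial step size, and appctx comes from the surrounding application code):

```c
/* Rewind the TS so a second TSSolve() starts from t = 0 again. */
ierr = TSSetTime(appctx->ts, 0.0);CHKERRQ(ierr);
ierr = TSSetStepNumber(appctx->ts, 0);CHKERRQ(ierr);
ierr = TSSetTimeStep(appctx->ts, dt0);CHKERRQ(ierr);
```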
h one (or which revision), I wil check it.
>
>
> Sent: Wednesday, December 25, 2019 at 00:53
> From: "Smith, Barry F."
> To: "Marius Buerkle"
> Cc: "Mark Adams" , "petsc-usersmcs.anl.gov"
>
> Subject: Re: [petsc-users]
There are no leaks, but it appears what is happening is that rather than
recycling the memory PETSc returns to the system, the system is allocating
new space as needed. Since the old PETSc pages are never used again this should
be harmless.
Barry
> On Dec 24, 2019, at 9:47 AM, Marius
Thank you for the full and detailed report. The memory leak could be
anywhere, but my guess is that it is in the interface between PETSc and hypre.
The first thing to check is if PETSc memory keeps increasing. The simplest
way to do this is run your code 3 independent times with -malloc_debug
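For example (./app is a placeholder; vary the amount of work between the runs and compare the memory reported at the end):

```shell
# -malloc_debug makes PETSc report any unfreed PETSc memory at
# PetscFinalize(); if that number grows with the run length, the leak
# is in PETSc (or PETSc's use of hypre), otherwise look elsewhere.
./app -malloc_debug
```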
Can you please send us the exact code and data file that causes the crash?
And any options.
There are bugs in Metis/Parmetis that we need to track down and eliminate
since it is so central to PETSc's work flow.
Barry
> On Dec 18, 2019, at 4:21 AM, Eda Oktay wrote:
>
> Hi all,
>
uerkle wrote:
>
> Hi,
>
> Is it actually possible to submit a pull (merge) request ? I followed the
> petsc wiki but this didn't work.
>
> Best
> Marius
>
>
> Sent: Thursday, December 5, 2019 at 07:45
> Von: "Marius Buerkle"
PETSC_ARCH to find the
configuration file AND use the compilers from the configuration file. It works
for me, please let me know if you have trouble.
Barry
> On Dec 8, 2019, at 10:38 PM, Yingjie Wu wrote:
>
> Thank you very much for your help.
> My programs are as follow
>
There is something missing in the cmake process that is causing needed
libraries not to be linked.
Please email your program and your CMake stuff (files you use) so we can
reproduce the problem and find a fix.
Barry
> On Dec 8, 2019, at 10:06 PM, Yingjie Wu wrote:
>
> Hi,
> Thank
an build and run C programs
> with MPI.
>
> I have -with-log=0 because there is an overhead if many small objects
> are created, which is my case.
>
> As for the fortran, I will remove --with-fortran=0
>
> On 12/5/2019 11:53 PM, Smith, Barry F. wrote:
>> Can you
Can you actually build and run C++ programs with the MPI?
Executing: mpicxx -o /tmp/petsc-Llvze6/config.setCompilers/conftest.exe
-fopenmp -fPIC /tmp/petsc-Llvze6/config.setCompilers/conftest.o
Possible ERROR while running linker: exit code 1
stderr:
/usr/lib/gcc/x86_64-pc-cygwin/7.4.0/
set it to the value it would get if it wasn't an edge, then the
> derivative isn't preserved anymore.
>
> This is where I get stuck.
>
> Ellen
>
>
> On 12/5/19 10:16 AM, Smith, Barry F. wrote:
>>
>> Are you using cell-centered or vertex
Hmm, for DMDA and DMStag it should not have a limit (certain ranges of
values are better optimized than others but more optimizations may be done).
For DMPLEX in theory again it should be what ever you like (again larger
values may require more optimization in our code to get really grea
Are you using cell-centered or vertex-centered discretization (it makes a
slight difference)?
Our model is to use DM_BOUNDARY_MIRROR DMBoundaryType. This means that
u_first_real_grid_point - u_its_ghost_point = 0 (since DMGlobalToLocal will
automatically put into the physical ghost locat
Hmm, MPICH and OpenMPI have also passed this info in their compilers; perhaps
this is a newer version of clang that no longer tolerates these options.
I think we need to strip out those options as a guess.
> On Dec 4, 2019, at 2:18 PM, Balay, Satish wrote:
>
> Yes - this is a link time opti
It still
> outputs "Elemental matrix (explicit ordering)" to StdOut, which is kind of
> annoying. Is there any way to turn this off?
>
>
> From: "Smith, Barry F."
> To: "Marius Buerkle"
> Cc: "petsc-users@mcs.anl.gov"
> Subject: Re:
will the following time steps reuse the Jacobian built
> at the first time step?
>
> Best,
> Li
>
>
>
> On Tue, Dec 3, 2019 at 12:10 AM Smith, Barry F. wrote:
>
>
> > On Dec 2, 2019, at 2:30 PM, Li Luo wrote:
> >
> > -snes_mf fails to converge
Sorry about this. The numerical values between C and Fortran got out of sync.
I've attached a patch file you can apply with
patch -p1 < format.patch
or you can use the branch https://gitlab.com/petsc/petsc/merge_requests/2346
Barry
> On Dec 3, 2019, at 1:10 AM, Marius Buerkle wrot
euse it forever.
You can also try -snes_mf -snes_lag_jacobian -2 which should compute the
Jacobian once, use that original one to build the preconditioner once and reuse
the same preconditioner but use the matrix free to define the operator.
Barry
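That is, something like:

```shell
# Operator applied matrix-free; Jacobian assembled once (lag of -2) and
# its preconditioner reused for the rest of the solve:
./app -snes_mf -snes_lag_jacobian -2
```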
>
> Regards,
> Li
>
378
> Number of rows 756
> 0 1188
> 1 1188
> 2 1188
> 3 1188
> 4 1188
> 5 1188
> ...
>
> Is this normal?
> When using MCFD, is there any difference using mpiaij and mpibaij?
>
> Best,
> Li
>
>
How many colors is it requiring? And how long is the MatGetColoring()
taking? Are you running in parallel? The MatGetColoring() MATCOLORINGSL uses a
sequential coloring algorithm so if your matrix is large and parallel the
coloring will take a long time. The parallel colorings are MATCOLO
I would first run with -ksp_monitor_true_residual -ksp_converged_reason to
make sure that those "very fast" cases are actually converging. In those runs
also use -ksp_view to see what the GAMG parameters are, and the -info
option to have it print details on the solution process.
Ba
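That is, run with:

```shell
./app -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -info
```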
> On Nov 28, 2019, at 7:07 PM, baikadi pranay wrote:
>
> Hello PETSc users,
>
> I have a sparse matrix built and I want to output the matrix for viewing in
> matlab. However i'm having difficulty outputting the matrix. I am writing my
> program in Fortran90 and I've included the following
> I am basically trying to solve a finite element problem, which is why in 3D I
> have 7 non-zero diagonals that are quite far apart from one another. In 2D I
> only have 5 non-zero diagonals that are less far apart. So is it normal that
> the set up time is around 400 times greater in the 3D
I agree this is confusing. https://gitlab.com/petsc/petsc/merge_requests/2331
the flag PETSC_HAVE_MPI will no longer be set when MPI is not used (only MPIUNI
is used).
Barry
The code API still has MPI* in it with MPI but they are stubs that just
handle the sequential code and do not r
"No, I have an unstructured mesh that increases in resolution away from the
center of the cuboid. See Figure: 5 in the ArXiv paper
https://arxiv.org/pdf/1907.02604.pdf for a slice through the midplane of the
cuboid. Given this type of mesh, will dmplex do a cuboidal domain
decomposition?"
You can possibly use the PETSc object AO (see AOCreate()) to manage the
reordering. The non-contiguous order you start with is the application ordering
and the new contiguous ordering is the petsc ordering. Note you will likely
need to reorder the cell vertex or edge numbers as well.
Bar
rch-linux2-cxx-opt/externalpackages/pastix_5.2.3/src/sopalin/src/sopalin_thread.c:548:
> undefined reference to `hwloc_bitmap_asprintf'
>
> Any idea is appreciated. I can attach configure.log as needed.
>
> Giang
>
>
> On Thu, Nov 7, 2019 at 12:18 AM hg wrote:
> Hi Ba
For a while I had put in an incorrect URL in the download location.
Perhaps you are using PETSc 3.12.0 and need to use 3.12.1 from
https://www.mcs.anl.gov/petsc/download/index.html
Otherwise please send configure.log
> On Nov 19, 2019, at 4:40 AM, Santiago Andres Triana via petsc-
> On Nov 17, 2019, at 5:32 PM, Zhang, Hong via petsc-users
> wrote:
>
> TSSetTimeStep()
>
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/TS/TSSetTimeStep.html#TSSetTimeStep
>
> If you want to decide the step size by yourself, make sure that the
> adaptivity is turned off, e
ld be done at the beginning of the RHS function?
>
> On Tue, Nov 12, 2019 at 3:41 PM Smith, Barry F. wrote:
>
>
> > On Nov 12, 2019, at 2:09 PM, Gideon Simpson
> > wrote:
> >
> > So this might be a resolution/another question. Part of the reason to us
you should not be
changing values and thus should use read().
>
> On Tue, Nov 12, 2019 at 10:43 AM Smith, Barry F. wrote:
>
> For any vector you only read you should use the read version.
>
> Sometimes the vector may not be locked and hence the other routine can be
ll do the check. If you call it on
local ghosted vectors it doesn't check if the vector is locked since the
ghosted version is a copy of the true locked vector.
Barry
>
> On Tue, Nov 12, 2019 at 12:33 AM Smith, Barry F. wrote:
>
>
> > On Nov 11, 2019, at 7:00 PM, Gide
> On Nov 11, 2019, at 7:00 PM, Gideon Simpson via petsc-users
> wrote:
>
> I noticed that when I am solving a problem with the ts and I am *not* using a
> da, if I want to use an implicit time stepping routine:
> 1. I have to explicitly provide the Jacobian
Yes
> 2. When I do provide th
Mark,
What are you using for KSP rtol ? It looks like 1.e-1 from
> 0 KSP Residual norm 2.654593713313e-03
> ...
> 41 KSP Residual norm 2.515907124549e-04
What about SNES stol, are you setting that?
> Line search: Ended due to ynorm < stol*xnorm (1.047067861804e-0
Make sure you have the latest PETSc and MUMPS installed; they have fixed
bugs in MUMPS over time.
Hanging locations are best found with a debugger; there is really no other
way. If you have a parallel debugger like DDT use it. If you don't you can use
the PETSc option -start_in_debugger
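For example, under MPI (mpiexec and ./app are placeholders for your launcher and executable):

```shell
# Each rank pops up in its own debugger window; when the run hangs,
# interrupt it and inspect the stack trace on each rank.
mpiexec -n 4 ./app -start_in_debugger
```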
setaffinity: Invalid argument only happens when I launch the job with
> sbatch. Running without scheduler is fine. I think this has something to do
> with pastix.
>
> Giang
>
>
> On Wed, Nov 6, 2019 at 4:37 AM Smith, Barry F. wrote:
>
> Google finds this
> https:/
Google finds this
https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186
> On Nov 5, 2019, at 7:01 PM, Matthew Knepley via petsc-users
> wrote:
>
> I have no idea. That is a good question for the PasTix list.
>
> Thanks,
>
> Matt
>
> On Tue, Nov 5, 201
is pretty large (mesh is 12,001x 301). I am also attaching the
> output of the code in case that could provide more info. Do you know how I
> should proceed?
>
> Thanks,
>
> Anthony
>
> On Mon, Nov 4, 2019 at 1:46 PM Smith, Barry F. wrote:
>
>
>
>
> On Nov 4, 2019, at 2:14 PM, Anthony Paul Haas via petsc-users
> wrote:
>
> Hello,
>
> I ran into an issue while using Mumps from Petsc. I got the following error
> (see below please). Somebody suggested that I compile Petsc with
> --with-64-bit-indices=1. Will that suffice?
Current
It works for me. Please send a complete code that fails.
> On Nov 3, 2019, at 11:41 PM, Emmanuel Ayala via petsc-users
> wrote:
>
> Hi everyone, thanks in advance.
>
> I have three parallel vectors: A, B and C. A and B have different sizes, and
> C must contain these two vectors (MatL
> On Nov 1, 2019, at 4:50 PM, Zhang, Junchao via petsc-users
> wrote:
>
> I know nothing about Vec FFTW,
You are lucky :-)
> but if you can provide hdf5 files in your test, I will see if I can reproduce
> it.
> --Junchao Zhang
>
>
> On Fri, Nov 1, 2019 at 2:08 PM Sajid Ali via petsc-us
that the older OpenMPI
worked fine.
Barry
>
>> Am 01.11.2019 um 16:24 schrieb Smith, Barry F. :
>>
>>
>> Certain OpenMPI versions have bugs where even when you properly duplicate
>> and then free communicators it eventually "runs out of communicators&quo
Certain OpenMPI versions have bugs where even when you properly duplicate and
then free communicators it eventually "runs out of communicators". This is
definitely a bug and was fixed in later OpenMPI versions. We wasted a lot of
time tracking down this bug in the past. By now it is an o
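The safe pattern, for reference, as a sketch in plain MPI:

```c
#include <mpi.h>

/* Duplicate a communicator for private use and free it when done, so
 * the (finite) pool of communicators is recycled rather than exhausted. */
void use_private_comm(MPI_Comm comm)
{
  MPI_Comm mycomm;
  MPI_Comm_dup(comm, &mycomm);  /* private copy with an isolated tag space */
  /* ... communicate on mycomm ... */
  MPI_Comm_free(&mycomm);       /* release it back to the pool */
}
```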
The problem is that this change DOES use the preprocessor on the f90 file,
does it not? We need a rule that does not use the preprocessor.
Barry
> On Oct 29, 2019, at 10:50 AM, Matthew Knepley via petsc-users
> wrote:
>
> On Tue, Oct 29, 2019 at 11:38 AM Randall Mackie wrote:
> Hi M
This won't work as written for two reasons
1) the VecScatterCreateToAll() will just concatenate the values from each
process in a long array on each process, thus the resulting values will be
"scrambled" and it won't be practical to access the values (because the
parallel layout of DMDA ve
You would need to investigate if the Nvidia cuSPARSE package supports such a
format. If it does then it would be reasonably straightforward for you to hook
up the required interface from PETSc. If it does not then it is a massive job
to provide such code and you should see if any open source
st line. But I was probably mistaken - if it was inserted it would have
> been
> row 0: (0, 1.), (9, 0.)
>
> on the first line instead?
>
> Thibaut
>
>
>
> On 25/10/2019 14:41, Smith, Barry F. wrote:
>>
>>
>> > On Oct 24, 2019, at 5:09 AM,
will take your advice and look at reformulating my outer problem for a SNES
> (line search) solve.
>
> Cheers, Dave.
>
> On Fri, 25 Oct. 2019, 2:52 am Smith, Barry F., wrote:
>
> If you are "throwing things" away in computing the Jacobian then any
> expec
> On Oct 24, 2019, at 5:09 AM, Thibaut Appel wrote:
>
> Hi Matthew,
>
> Thanks for having a look, your example runs just like mine in Fortran.
>
> In serial, the value (0.0,0.0) was inserted whereas it shouldn't have.
I'm sorry, I don't see this for the serial case:
$ petscmpiexec -n 1 ./e
rew Newton solvers, especially when tackling
problems with potentially interesting nonlinearities.
Barry
> On Oct 14, 2019, at 8:18 PM, Dave Lee wrote:
>
> Hi Barry,
>
> I've replied inline:
>
> On Mon, Oct 14, 2019 at 4:07 PM Smith, Barry F. wrote:
>
> T
See bottom
> On Oct 14, 2019, at 1:12 PM, Justin Chang via petsc-users
> wrote:
>
> It might depend on your application, but for my stuff on maximum principles
> for advection-diffusion, I found RS to be much better than SS. Here’s the
> paper I wrote documenting the performance numbers I c
Thanks for the test case. There is a bug in the code; the check is not in
the correct place. I'll be working on a patch for 3.12
Barry
> On Oct 23, 2019, at 8:31 PM, Matthew Knepley via petsc-users
> wrote:
>
> On Tue, Oct 22, 2019 at 1:37 PM Thibaut Appel
> wrote:
> Hi both,
>
>
; and "Stash has 0 entries, uses 0 mallocs."
>
> If I run the same code with -test_mat_type aijcusparse, it takes forever to
> finish step 10. Does this step really involve moving data from host to
> devices? Do I need to have more changes to use aijcusparse other than ju