Barry:
>
> The symbolic factorization is taking more time with more processes while
> the numerical factorization is taking less time. So the symbolic
> factorization is limiting the scalability. Note that the numerical times
> are not great, but at least they get better.
>
Parallel symbolic factori
I have added these Fortran interfaces in the branch
barry/add-missing-apis/maint and merged it into the next branch for testing. When
it passes the tests I will merge it into maint, and it will then be available in
the next patch release.
Barry
> On Jun 27, 2016, at 7:50 AM, Constantin Nguye
The symbolic factorization is taking more time with more processes while the
numerical factorization is taking less time. So the symbolic factorization is
limiting the scalability. Note that the numerical times are not great, but at
least they get better.
Barry
> On Jun 27, 2016, at 7:59 PM, F
Faraz:
Direct sparse solvers are generally not scalable -- they are used for
ill-conditioned problems which cannot be solved by iterative methods.
Can you try sequential symbolic factorization instead of parallel, i.e.,
use the MUMPS default '-mat_mumps_icntl_28 1'?
Hong
Thanks for the quick respon
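For concreteness, a hedged sketch of how Hong's suggestion might look on the
command line; the executable name and the Cholesky/MUMPS options are assumptions,
not taken from this thread:

  ./my_app -pc_type cholesky -pc_factor_mat_solver_package mumps \
      -mat_mumps_icntl_28 1 -log_summary

Here -mat_mumps_icntl_28 1 asks MUMPS to perform the analysis (symbolic
factorization) phase sequentially instead of in parallel.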
I think it depends on the method. BLMVM is okay; it's only using the gradient
and not the Hessian. TRON might be okay as well, but I would have to check; it
should extract a submatrix.
Todd.
> On Jun 27, 2016, at 5:45 PM, Justin Chang wrote:
>
> Thanks all,
>
> Btw, does Tao's Hessian ev
Hi,
Does anybody have a simple example of HDF5 I/O for transient data with DMPlex?
I can't figure out how to handle time steps: there does not seem to be a way to
specify appending time steps when using PetscObjectViewFromOptions, and
PetscViewerHDF5PushGroup and seem to not have any effect whe
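For reference, the kind of sequence-number pattern in question, as a sketch only:
it assumes a Vec u attached to the DMPlex dm, uses PETSc 3.7-era names, and omits
error checking; it is not a verified answer to the appending question.

  PetscViewer viewer;
  PetscViewerHDF5Open(PETSC_COMM_WORLD, "solution.h5", FILE_MODE_WRITE, &viewer);
  for (step = 0; step < nsteps; step++) {
    /* ... advance the solution Vec u on the DMPlex dm ... */
    DMSetOutputSequenceNumber(dm, step, time); /* tag this write with a sequence number */
    VecView(u, viewer);                        /* write this step's data into the file */
  }
  PetscViewerDestroy(&viewer);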
> On Jun 27, 2016, at 5:45 PM, Justin Chang wrote:
>
> Thanks all,
>
> Btw, do Tao's Hessian evaluation routines also "cheat" the way the Jacobian
> routines do? Or is it fine to supply the Hessian only once (assuming it is
> independent of the solution)?
It is likely dependent on the spe
These are the only lines that matter
MatSolve            1 1.0 7.7200e+00 1.1 0.00e+00 0.0 2.6e+03 2.0e+04 3.0e+00  1  0 68  2  9   1  0 68  2  9     0
MatCholFctrSym      1 1.0 1.8439e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 29  0  0  0 15  29  0  0  0 15     0
MatCholFctrNum
Thanks all,
Btw, do Tao's Hessian evaluation routines also "cheat" the way the
Jacobian routines do? Or is it fine to supply the Hessian only once (assuming
it is independent of the solution)?
Thanks,
Justin
On Monday, June 27, 2016, Barry Smith wrote:
>
> There is the same issue with ODE i
With PETSc 3.7 you can do
PetscOptionsSetValue(NULL, "-log_summary", NULL); etc.
PetscInitializeNoArguments();
I certainly don't recommend it because it removes the flexibility of changing
options at the command line; any time you want to change something you need to
recompile.
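A minimal self-contained sketch of the pattern Barry describes (PETSc 3.7
signatures; the hard-coded option is exactly the inflexibility he warns about):

  #include <petscsys.h>

  int main(void)
  {
    PetscErrorCode ierr;

    /* Put the option into the global options database before initialization
       (first argument NULL selects the default database in the 3.7 API). */
    ierr = PetscOptionsSetValue(NULL, "-log_summary", NULL); if (ierr) return ierr;
    /* Initialize without reading argc/argv, so nothing can be overridden
       from the command line. */
    ierr = PetscInitializeNoArguments(); if (ierr) return ierr;

    /* ... create and use PETSc objects here ... */

    ierr = PetscFinalize();
    return ierr;
  }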
> On Jun 27, 2016, at 3:15 PM, Xiangdong wrote:
>
> Hello everyone,
>
> I am trying different values of da_overlap to see its effect on the nasm and
> aspin preconditioners. The code works fine with -da_overlap 0. However, when
> I change the option to -da_overlap 1, it crashes with the error messa
There is the same issue with ODE integrators for linear problems. The
solvers tromp on the Jacobian.
We should actually add an error indicator in these TAO/TS solvers: if the
"Jacobian" state value is not increased in the next time step/iteration, this
means the person did not supply th
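A sketch of the kind of check being suggested, using PETSc's object-state
counters; the callback invocation is schematic, not actual TS/TAO source, and
error handling is elided:

  PetscObjectState before, after;
  PetscObjectStateGet((PetscObject)J, &before);
  (*jacobianfn)(ts, t, U, J, Jpre, ctx);   /* user-supplied Jacobian callback (schematic) */
  PetscObjectStateGet((PetscObject)J, &after);
  if (after == before) SETERRQ(PetscObjectComm((PetscObject)J), PETSC_ERR_USER,
          "Jacobian routine did not appear to modify the matrix");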
Hello everyone,
I am trying different values of da_overlap to see its effect on the nasm and
aspin preconditioners. The code works fine with -da_overlap 0. However,
when I change the option to -da_overlap 1, it crashes with the error message
like "zero pivot row 12544 value 0 tolerance 2.2e-14". The opt
Hi Justin,
I will have to look into the TAO semismooth solvers. The TAO
solvers probably "cheated" and modified the Jacobian matrix rather
than extracting submatrices and shifting the diagonal or using a
matrix-free version.
Note: the TAO interior-point and semismooth methods start from
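To illustrate the alternative Todd mentions, a sketch with PETSc 3.7 names; the
index set 'active' marking the active variables and the shift 'delta' are
hypothetical, not from the thread:

  Mat Hsub;
  /* extract the rows/columns of the Hessian for the active variables ... */
  MatGetSubMatrix(H, active, active, MAT_INITIAL_MATRIX, &Hsub);
  /* ... and shift the diagonal of the copy instead of modifying H itself */
  MatShift(Hsub, delta);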
So I figured it out. I had to explicitly form the Tao Gradient/Constraints
and Jacobian. I couldn't just "pre-process" the gradient Vec and Jacobian
Mat through SNESComputeXXX. Attached is the updated file and makefile.
My question now is, why exactly is this the case? This preprocessing
strategy
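For context, "explicitly forming" them here means registering the callbacks with
Tao directly; a sketch with 3.7-era names, where FormGradient, FormConstraints,
FormJacobian, and the user context are hypothetical user routines:

  TaoSetGradientRoutine(tao, FormGradient, &user);
  TaoSetConstraintsRoutine(tao, c, FormConstraints, &user);
  TaoSetJacobianRoutine(tao, J, J, FormJacobian, &user);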
On 06/26/2016 11:25 PM, Satish Balay wrote:
> On Sun, 26 Jun 2016, Antonio Trande wrote:
No, the tests did not pass; they fail with:
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
probably memory access out of range
Here is the build log from Fedora 24 (64-bit):
Hi,
I'm trying to use ISGetTotalIndices in a Fortran program, but I get an
undefined reference error when I compile it: "undefined reference to
`isgettotalindices_'".
I've also tried to replace ISGetIndices with ISGetTotalIndices and also
ISGetNonlocalIndices in this example
http://www
On 06/26/2016 11:25 PM, Satish Balay wrote:
> On Sun, 26 Jun 2016, Antonio Trande wrote:
>
>> On 06/26/2016 02:24 AM, Satish Balay wrote:
>>> On Sat, 25 Jun 2016, Antonio Trande wrote:
>>>
On 06/25/2016 09:41 PM, Antonio Trande wrote:
> On 06/25/2016 03:59 PM, Matthew Knepley wrote:
>