Send configure.log and make.log to petsc-ma...@mcs.anl.gov
Barry
> On Feb 5, 2019, at 10:48 PM, Yaxiong Chen wrote:
>
> Since MUMPS and ScaLAPACK are already installed on my computer, I only ran
> ./configure with --download-superlu_dist.
> After everything was done, I received the error:
Since MUMPS and ScaLAPACK are already installed on my computer, I only ran
./configure with --download-superlu_dist.
After everything was done, I received the error:
dyld: lazy symbol binding failed: Symbol not found:
_MatSolverTypeRegister_SuperLU_DIST
Referenced from:
Run ./configure with --download-superlu_dist --download-mumps
--download-scalapack, and then you can use either -pc_factor_mat_solver_type
superlu_dist or -pc_factor_mat_solver_type mumps.
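For example (the executable name ./ex1 is a placeholder for your own program;
-ksp_type preonly -pc_type lu are the usual companions of
-pc_factor_mat_solver_type for a direct solve):

  ./configure --download-superlu_dist --download-mumps --download-scalapack
  ./ex1 -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type superlu_dist
  ./ex1 -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps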
Good luck
> On Feb 5, 2019, at 9:29 PM, Yaxiong Chen wrote:
>
> Also, I found the solving time is also shorter when I use the direct
> solver (0.432 s vs 4.332 s). Is this due to the small scale of the system?
> When I have a very large (e.g., 10*10) system, can I expect the iterative
> solver to be faster?
<< It sounds like the default
Hi Justin,
Typically, power grid distribution systems have a radial structure
(unless it is an urban area) that leads to a more or less staircase-type
matrix, so a MatLoad() or VecLoad() would presumably just split the
stairs, akin to a 1-D PDE. However, as you pointed out, it
> On Feb 5, 2019, at 5:06 PM, Sanjay Kharche via petsc-users
> wrote:
>
>
> Hi
>
> I use two MPI clusters (cluster 1 and 2). Whereas the PETSc binary files I
> generate can be read on cluster 1, I get errors doing so on cluster 2. I also
> output vts files corresponding to each binary
Dolfin will need this PR to work with any PETSc 3.10.
https://bitbucket.org/fenics-project/dolfin/pull-requests/508/jed-petsc-310/diff
It's been chillin' there for a couple of months; it appears that much of
the Dolfin development effort has moved to DolfinX and Firedrake.
"Huck, Moritz" writes:
Hi
I use two MPI clusters (cluster 1 and 2). Whereas the PETSc binary files I
generate can be read on cluster 1, I get errors doing so on cluster 2. I also
output vts files corresponding to each binary file output, and it appears that
both clusters do produce meaningful results. I use ver
On Mon, Feb 4, 2019 at 4:17 PM Yaxiong Chen wrote:
> Hi Mark,
>
> Will the parameters in MatMPIAIJSetPreallocation influence the
> following part?
> do i = mystart, nelem, nproc
>   call ptSystem%getElementalMAT(i, Ae, auxRHSe, idx)
>   ne = size(idx)
>
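Yes: those preallocation arguments matter for exactly that loop. If the
per-row estimates are too low, each MatSetValues() call that overflows them
forces extra mallocs during assembly (or an error, if
MAT_NEW_NONZERO_ALLOCATION_ERR is set), which is the usual cause of very slow
assembly. A minimal sketch of the pattern, written against the C API for
brevity (the Fortran calls take the same arguments plus an ierr; n, d_nz,
o_nz, ne, idx, and Ae are placeholders mirroring the loop above):

  Mat A;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
  /* d_nz/o_nz: per-row nonzero upper bounds for the diagonal and
     off-diagonal blocks; exact per-row counts can replace the NULLs */
  ierr = MatMPIAIJSetPreallocation(A, d_nz, NULL, o_nz, NULL);CHKERRQ(ierr);
  /* ... the elemental loop above, one MatSetValues() per element ... */
  ierr = MatSetValues(A, ne, idx, ne, idx, Ae, ADD_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);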
On Tue, Feb 5, 2019 at 11:13 AM Andrew Parker
wrote:
> On Tue, 5 Feb 2019 at 15:27, Matthew Knepley wrote:
>
>> On Tue, Feb 5, 2019 at 9:47 AM Andrew Parker via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>>
>>> Does anyone have an MWE for DMPlexCreateCGNS to use in parallel? Ideally,
>>>
> On Feb 5, 2019, at 10:32 AM, Yaxiong Chen wrote:
>
> Thanks Barry,
> I will explore how to partition for parallel computation later. But for now
> I still have some confusion about the sequential operation.
> I compared PETSc and MUMPS. In both cases, the subroutine for generating
> elemental
I am using the FEniCS library for my PDE discretization.
I cannot compile FEniCS with PETSc 3.10.3.
I will locate the exact error tomorrow.
From: Jed Brown
Sent: Tuesday, February 5, 2019 15:30:38
To: Huck, Moritz; Smith, Barry F.; Abhyankar, Shrirang
OK, I'll add two things: (i) a TSEventSetPostEventTimeStep(), and (ii) a flag
that allows the user to select either the previous time-step or the original
time-step. For most users, the flag should suffice. Users who know how the
dynamics will behave after the event can set the
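If that lands, usage would presumably look like the line below. To be clear,
this call does not exist yet: the name is the one proposed above, and the
signature is a guess.

  /* hypothetical: fix the step size TS takes right after an event fires */
  ierr = TSEventSetPostEventTimeStep(ts, post_event_dt);CHKERRQ(ierr);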
I would stay away from eigen estimates in the solver (but give us the
spectra to look at), so set -pc_gamg_agg_nsmooths 0 and use sor.
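Spelled out as options, that suggestion is roughly the following (the
mg_levels_ prefix is how GAMG's level smoothers are reached; richardson+sor
avoids the eigenvalue estimate that chebyshev needs, though the exact prefix
depends on how your solvers are nested):

  -pc_type gamg -pc_gamg_agg_nsmooths 0 \
  -mg_levels_ksp_type richardson -mg_levels_pc_type sor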
Applications that have lived on direct solvers can add all sorts of crap like
penalty terms.
sor seemed to work OK, so I'd check the coarse grids in GAMG. Test with
Andrew Parker writes:
> Thanks, so you would suggest a flat vector storing u, v, w (or indeed x, y,
> z) or interleaved, and then construct Eigen types on the fly?
Interleaved if you want to use Eigen types in the same memory, or if
your code (like most applications) benefits more from memory
Thanks, so you would suggest a flat vector storing u, v, w (or indeed x, y,
z) or interleaved, and then construct Eigen types on the fly? Can I ask, is
that because Vec cannot store user-defined types (as in, it's not
templatable)?
Thanks,
Andy
On Tue, 5 Feb 2019 at 14:22, Jed Brown wrote:
>
Fazlul Huq via petsc-users writes:
> Hello PETSc Developers,
>
> Maybe this is a trivial question!
>
> I usually run PETSc code from the Home/petsc-3.10.2 directory. The other day
> I tried to run the code from the Documents/petsc directory but I can't. As
> far as I can recall, I have installed PETSc in
On Tue, 5 Feb 2019, Fazlul Huq via petsc-users wrote:
> Hello PETSc Developers,
>
> Maybe this is a trivial question!
>
> I usually run PETSc code from the Home/petsc-3.10.2 directory. The other day
> I tried to run the code from the Documents/petsc directory but I can't.
I don't know what you mean here -
"Huck, Moritz via petsc-users" writes:
> @Shri
> The system is very stiff, but the stiffness is handled well by ARKIMEX.
>
> I'm using PETSc 3.10 (I cannot use 3.10.3 at the moment due to
> compatibility with a third-party library),
What compatibility problem is this? 3.10.3 should be (binary and
My suggestion is to use PETSc like usual and inside your
residual/Jacobian evaluation, for each cell or batch of cells, create
Eigen objects. For 2D or 3D sizes, it won't matter much whether you make
them share memory with the PETSc Vec -- the Eigen types should mostly
exist in registers.
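A minimal sketch of that pattern, assuming interleaved (u, v, w) storage, a
real double-precision PETSc build (so PetscScalar is double), and placeholder
names (ResidualLoop, ncell, and the 2.0*uc "physics" are all made up):

  #include <petscvec.h>
  #include <Eigen/Dense>

  PetscErrorCode ResidualLoop(Vec U, Vec F, PetscInt ncell)
  {
    const PetscScalar *u;
    PetscScalar       *f;
    PetscErrorCode    ierr;

    ierr = VecGetArrayRead(U, &u);CHKERRQ(ierr);
    ierr = VecGetArray(F, &f);CHKERRQ(ierr);
    for (PetscInt c = 0; c < ncell; ++c) {
      // Map this cell's (u,v,w) triple in place: no copy, and fixed-size
      // Eigen objects like Vector3d mostly live in registers.
      Eigen::Map<const Eigen::Vector3d> uc(u + 3*c);
      Eigen::Map<Eigen::Vector3d>       fc(f + 3*c);
      fc = 2.0*uc; // stand-in for the real per-cell residual
    }
    ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
    ierr = VecRestoreArrayRead(U, &u);CHKERRQ(ierr);
    return 0;
  }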
Andrew
@Shri
The system is very stiff, but the stiffness is handled well by ARKIMEX.
I'm using PETSc 3.10 (I cannot use 3.10.3 at the moment due to compatibility
with a third-party library), and no special options for TS.
Writing an MWE might take some time; if it is necessary, I will write something
at the
Dear Matt,
Thank you for answering. I hadn't noticed that part. But I still can't
find a matrix called "small", and there are some examples testing with
"medium". Am I missing something?
Thanks,
Eda
Matthew Knepley wrote the following on Tue, 5 Feb 2019 at 13:35:
> On Tue, Feb 5, 2019 at 4:44
Hi,
I am new to PETSc and am trying to run some examples in the mat folder.
I cannot run the ones that need to load a matrix from a file. Here are the
examples:
For ex12 in the mat folder:
args: -f0 ${wPETSC_DIR}/share/petsc/datafiles/matrices/ns-real-int32-float64
requires: double !complex
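Those args:/requires: lines are the example's test-harness metadata, not a
shell command. Run by hand, the equivalent would be something like the line
below (assuming PETSC_DIR points at your installation and the datafile was
installed under share/petsc/datafiles/matrices):

  ./ex12 -f0 $PETSC_DIR/share/petsc/datafiles/matrices/ns-real-int32-float64

The requires: line only says the test is skipped unless PETSc was built with
real double-precision scalars.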
Dear Matt,
Thanks for your reply; now it's also clear to me why matArray is not destroyed,
unlike Amat.
Do you think that explicitly declaring matArray as a (Fortran) pointer (at line
113) and then using "matArray(1) => Amat" would be equivalent? It would
certainly be clearer from the