s/local/spack/software/gcc-7.5.0/cuda-10.1.243-v4ymjqcrr7f72qfiuzsstuy5jiajbuey/lib64/stubs
> -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -lcuda
>
>
> You can see the `stubs` directory is not in rpath. We went to a lot of
> effort to achieve that. You need to double-check the reason.
>
idea to search for "stubs", since
the system might have the correct ones in other places. Shouldn't I do a
batch compile?
Thanks,
Fande
On Wed, Jan 26, 2022 at 1:49 PM Fande Kong wrote:
> Yes, please see the attached file.
>
> Fande
>
> On Wed, Jan 26, 2022 at 11:
bad has extra
>>
>> -L/apps/local/spack/software/gcc-7.5.0/cuda-10.1.243-v4ymjqcrr7f72qfiuzsstuy5jiajbuey/lib64/stubs
>> -lcuda
>>
>> good does not.
>>
>> Try removing the stubs directory and -lcuda from the bad
>> $PETSC_ARCH/lib/petsc/conf/variables and likely the bad will start working.
>
It seems I still get the same issue after removing the stubs directory and
-lcuda.
Thanks,
Fande
>
> Barry
>
> I never liked the stubs stuff.
>
> On Jan 25, 2022, at 11:29 PM, Fande Kong wrote:
>
2021 +
Config: fix CUDA library and header dirs
:04 04 187c86055adb80f53c1d0565a704fec43a96
ea1efd7f594fd5e8df54170bc1bc7b00f35e4d5f M config
Starting from this commit, the GPU did not work for me on our HPC system.
Thanks,
Fande
On Tue, Jan 25, 2022 at 7:18 PM Fande Kong wrote:
>
>
> (Jacob Fai - booss - oh - vitch)
>
> On Jan 21, 2022, at 12:01, Fande Kong wrote:
>
> Thanks Jacob,
>
> On Thu, Jan 20, 2022 at 6:25 PM Jacob Faibussowitsch
> wrote:
>
>> Segfault is caused by the following check at
>> src/sys/objects/device/impls/cupm/cupmdevic
make it possible to run a solve
> that doesn't use a GPU in PETSC_ARCH that supports GPUs, regardless of
> whether a GPU is actually present.
>
> Fande Kong writes:
>
> > I spoke too soon. It seems that we have trouble creating cuda/kokkos vecs
> > now. Go
e/sawtooth/moosegpu/scripts/../libmesh/installed/include/libmesh/petsc_vector.h:693
On Thu, Jan 20, 2022 at 1:09 PM Fande Kong wrote:
> Thanks, Jed,
>
> This worked!
>
> Fande
>
> On Wed, Jan 19, 2022 at 11:03 PM Jed Brown wrote:
>
>> Fande Kong writes:
>
Thanks, Jed,
This worked!
Fande
On Wed, Jan 19, 2022 at 11:03 PM Jed Brown wrote:
> Fande Kong writes:
>
> > On Wed, Jan 19, 2022 at 11:39 AM Jacob Faibussowitsch <
> jacob@gmail.com>
> > wrote:
> >
> >> Are you running on login nodes or
Thanks, Mark,
PETSc-main has no issue.
Fande
On Thu, Jan 20, 2022 at 9:14 AM Fande Kong wrote:
>
>
> On Thu, Jan 20, 2022 at 6:49 AM Mark Adams wrote:
>
>> Humm, I was not able to reproduce this on my Mac. Trying Crusher now.
>> Are you using main? or even a rec
in mpiaijkok.
>
> Thanks,
> Mark
>
> On Thu, Jan 20, 2022 at 12:12 AM Fande Kong wrote:
>
>> Hi All,
>>
>> It seems that mpiaijkok does not support 64-bit integers at this time. Do
>> we have any motivation for this? Or is it just a bug?
>>
>
Hi All,
It seems that mpiaijkok does not support 64-bit integers at this time. Do
we have any motivation for this? Or is it just a bug?
Thanks,
Fande
petsc/src/mat/impls/aij/mpi/kokkos/mpiaijkok.kokkos.cxx(306): error: a
value of type "MatColumnIndexType *" cannot be assigned to an entity of
ty
> code (cudaErrorStubLibrary) for it.
>
> Best regards,
>
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
>
> On Jan 19, 2022, at 12:07, Fande Kong wrote:
>
> Thanks, Jacob, and Junchao
>
> The log was attached. I am using Sawtooth at INL
> https:
Hi All,
I upgraded PETSc from 3.16.1 to the current main branch and suddenly got the
following error message:
2d_diffusion]$ ../../../moose_test-dbg -i 2d_diffusion_test.i
-use_gpu_aware_mpi 0 -gpu_mat_type aijcusparse -gpu_vec_type cuda
-log_view
[0]PETSC ERROR: - Error Message
>
> On Tue, 18 Jan 2022, Xiaoye S. Li wrote:
>
> > There was a merge error in the master branch. I fixed it today. Not sure
> > whether that's causing your problem. Can you try now?
> >
> > Sherry
> >
> > On Mon, Jan 17, 2022 at 11:55 AM Fande Kong
This is something I almost started a while ago.
https://gitlab.com/petsc/petsc/-/issues/852
It would be a very interesting addition to us.
Fande
> On Jan 12, 2022, at 12:04 AM, Barry Smith wrote:
>
>
> Why does it need to handle values?
>
>> On Jan 12, 2022, at 12:43 AM, Jed Brown w
On Thu, Nov 11, 2021 at 1:59 PM Matthew Knepley wrote:
> On Thu, Nov 11, 2021 at 3:44 PM Fande Kong wrote:
>
>> Thanks Matt,
>>
>> I understand completely, the actual error should be
>>
>> "
>> ln -s libHYPRE_parcsr_ls-2.20.0.so libHYPRE_parcsr_ls.so",
and then the second try would see that "libHYPRE_parcsr_ls.so" already existed.
Thanks,
Fande
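The race described above can be reproduced in a few lines of shell: a second plain `ln -s` fails once the link exists, while `ln -sfn` replaces it and is therefore safe to rerun from a parallel make. A minimal sketch; the file names are just the ones quoted in this thread:

```shell
cd "$(mktemp -d)"
touch libHYPRE_parcsr_ls-2.20.0.so

# First creation succeeds; a second identical `ln -s` fails with
# "File exists", which is the error a racing parallel make trips over.
ln -s libHYPRE_parcsr_ls-2.20.0.so libHYPRE_parcsr_ls.so
ln -s libHYPRE_parcsr_ls-2.20.0.so libHYPRE_parcsr_ls.so 2>/dev/null \
  || echo "second ln -s failed: link already exists"

# `ln -sfn` overwrites an existing link, so repeated invocations are harmless.
ln -sfn libHYPRE_parcsr_ls-2.20.0.so libHYPRE_parcsr_ls.so
readlink libHYPRE_parcsr_ls.so
```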
On Thu, Nov 11, 2021 at 1:29 PM Matthew Knepley wrote:
> On Thu, Nov 11, 2021 at 3:25 PM Fande Kong wrote:
>
>> Thanks, Satish
>>
>> "--with-make-np=
>
> Barry
>
>
> On Nov 9, 2021, at 6:10 PM, Fande Kong wrote:
>
> Hi All,
>
> We encountered a configuration error when running PETSc configure on an
> HPC system. I went through the log file but could not find much. The log
> file was attached.
>
> Any thoughts?
>
> Thanks for your help, as always.
>
> Fande
The "if (a->keepnonzeropattern)" branch does not change ilen, so
A->ops->assemblyend will be fine. It would help if you made sure that
elements have been inserted for these rows before you call MatZeroRows.
However, I am not sure it is necessary to call A->ops->assemblyend if we
already require a
There are some statements from the MUMPS user manual
http://mumps.enseeiht.fr/doc/userguide_5.3.5.pdf
"
A full 64-bit integer version can be obtained compiling MUMPS with C
preprocessing flag -DINTSIZE64 and Fortran compiler option -i8,
-fdefault-integer-8 or something equivalent depending on your com
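Following the manual excerpt above, a full 64-bit build amounts to passing those two flags when compiling MUMPS. A hedged sketch; `OPTC`/`OPTF` are the conventional MUMPS make variables, but the exact names depend on the Make.inc template shipped with your version, so this is a config fragment, not a verified recipe:

```shell
# Build MUMPS in full 64-bit integer mode with the flags quoted from the
# user guide; use -i8 instead of -fdefault-integer-8 for Intel Fortran.
make alllib OPTC="-O2 -DINTSIZE64" OPTF="-O2 -fdefault-integer-8"
```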
3-10/handle-pie-flag-conda/release *and send its
> configure.log if it fails.
>
>Thanks
>
> Barry
>
>
> On Mar 10, 2021, at 5:59 PM, Fande Kong wrote:
>
> Do not know what the fix should look like, but this works for me
>
>
> @staticmethod
> @@
output.find('ignoring option') >= 0 or output.find('ignored') >= 0 or
output.find('argument unused') >= 0 or output.find('not supported') >=
0 or
# When checking for the existence of 'attribute'
output.find('is
c/lib -lmpi -lpmpi"
>
MPI cannot generate an executable because we took out "-pie".
Thanks,
Fande
>
> etc.. [don't know if you really need LDFLAGS options]
>
> Satish
>
> On Wed, 10 Mar 2021, Fande Kong wrote:
>
> > I guess it was encoded in
g a main
> executable
> Rejecting C linker flag -dynamiclib -single_module due to
>
> ld: warning: -pie being ignored. It is only used when linking a main
> executable
>
> This is the correct link command for the Mac but it is being rejected due
> to the warning mess
testpetsc/lib
> -L/Users/kongf/miniconda3/envs/testpetsc/lib
>
> Does conda compiler pick up '-pie' from this env variable? If so - perhaps
> its easier to just modify it?
>
> Or is it encoded in mpicc wrapper? [mpicc -show]
>
> Satish
>
> On Wed, 10 Mar 2021,
>
>
> Barry
>
>
> >
>
> > On Jan 13, 2021, at 12:32 PM, Fande Kong wrote:
> >
> > Hi All,
> >
> > I ran valgrind with mvapich-2.3.5 for a moose simulation. The
> motivation was that we have a few non-deterministic parallel sim
Hi All,
I ran valgrind with mvapich-2.3.5 for a moose simulation. The motivation
was that we have a few non-deterministic parallel simulations in moose. I
want to check if we have any memory issues. I got some complaints from
PetscAllreduceBarrierCheck
Thanks,
Fande
==98001== 88 (24 direct,
not easy to debug.
Thanks,
Fande
>
> Barry
>
>
>
> On Jan 12, 2021, at 10:41 AM, Fande Kong wrote:
>
> Hi All,
>
> I am curious about why we subtract 128 from the max value of tag? Can we
> directly use the max tag value?
>
> Thanks,
>
> Fande,
Hi All,
I am curious about why we subtract 128 from the max value of tag? Can we
directly use the max tag value?
Thanks,
Fande,
PetscErrorCode PetscCommGetNewTag(MPI_Comm comm,PetscMPIInt *tag)
{
PetscErrorCode ierr;
PetscCommCounter *counter;
PetscMPIInt *maxval,flg;
MPI_Co
> > On Dec 15, 2020, at 1:23 AM, Barry Smith wrote:
> > >
> > >
> > > No idea. Perhaps petscmpiexec could be modified so it only ran
> valgrind on the first 10 ranks? Not clear how to do that. Or valgrind
> should get a MR that removes this small arbitrary limit
Hi All,
I tried to use valgrind to check if the simulation is valgrind clean
because I saw some random communication fails during the simulation.
I tried this command-line
petscmpiexec -valgrind -n 576 ../../../moose-app-oprof -i input.i
-log_view -snes_view
But I got the following error mes
Hi All,
I cannot find the actual implementation of that, and
"-snes_no_convergence_test" does not have any effect for me.
Thanks,
Fande,
Oh, cool.
Thanks, Jose,
I will try that.
Fande,
On Mon, Aug 31, 2020 at 11:11 AM Jose E. Roman wrote:
> Call EPSMonitorCancel() before EPSMonitorSet().
> Jose
>
>
> > On Aug 31, 2020, at 18:33, Fande Kong wrote:
> >
> > Hi All,
> >
> >
Hi All,
There is a statement on API EPSMonitorSet:
"Sets an ADDITIONAL function to be called at every iteration to monitor the
error estimates for each requested eigenpair."
I was wondering how to replace the SLEPc EPS monitors instead of adding
one. I want to use only my monitor.
Thanks,
Fande,
Agreed, Barry.
A year ago, I enabled edge weights and vertex weights only for ParMETIS and
PTScotch. I did not do the same thing for Chaco, Party, etc.
It is straightforward to do that, and I could add an MR if needed.
Thanks,
Fande,
On Sun, Aug 30, 2020 at 4:20 PM Barry Smith wrote:
>
>
>
you test with a C compiler and it gets included in
> C++ source.
>
> Fande Kong writes:
>
> > Hi All,
> >
> > We (moose team) hit an error message when compiling PETSc, recently. The
> > error is related to "PETSC_HAVE_CLOSURE." Everything runs well if I am
Hi All,
We (moose team) hit an error message when compiling PETSc, recently. The
error is related to "PETSC_HAVE_CLOSURE." Everything runs well if I am
going to turn this flag off by making the following changes:
git diff
diff --git a/config/BuildSystem/config/utilities/closure.py
b/config/Build
For this particular case (one subdomain), it may be easy to fix in PETSc.
We could create a partitioning index filled with zeros.
Fande,
On Mon, Aug 17, 2020 at 5:04 PM Fande Kong wrote:
> IIRC, Chaco does not produce an arbitrary number of subdomains. The number
> needs to be li
IIRC, Chaco does not produce an arbitrary number of subdomains. The number
needs to be like 2^n.
ParMETIS and PTScotch are much better, and they are production-level code.
If there is no particular reason, I would like to suggest staying with
ParMETIS and PTScotch.
Thanks,
Fande,
On Fri, Aug
One alternative is to support a pluggable KSP/SNESReasonView system. We then
could hook up KSP/SNESReasonView_MOOSE.
We could also call our viewers from SNES/KSP"done"Solve if such a
system is not affordable. What are the final functions we should call,
where we guarantee SNES/KSP is already done
ould possibly make sense.
>>
>> *Chris Hewson*
>> Senior Reservoir Simulation Engineer
>> ResFrac
>> +1.587.575.9792
>>
>>
>> On Mon, Jul 20, 2020 at 12:41 PM Mark Adams wrote:
>>
>>>
>>>
>>> On Mon, Jul 20, 2020 at
The most frustrating part is that the issue is not reproducible.
Fande,
On Mon, Jul 20, 2020 at 12:36 PM Fande Kong wrote:
> Hi Mark,
>
> Just to be clear, I do not think it is related to GAMG or PtAP. It is a
> communication issue:
>
> Reran the same code, and I just go
ng GAMG.
>
> Chris: It sounds like you just have one matrix that you give to MUMPS. You
> seem to be creating a matrix in the middle of your run. Are you doing
> dynamic adaptivity?
>
> I think we generate unique tags for each operation but it sounds like
> maybe a message is get
What size integers do you use?
>
We are using 64-bit via "--with-64-bit-indices"
I am trying to catch the cause of this issue by running more simulations
with different configurations.
Thanks,
Fande,
Thanks,
> Mark
>
> On Mon, Jul 20, 2020 at 1:17 AM Fande Kong wrot
Fande Kong wrote:
> I am not entirely sure what is happening, but we encountered similar
> issues recently. It was not reproducible. It might occur at different
> stages, and errors could be weird other than "ctable stuff." Our code was
> Valgrind clean since every PR in moo
I am not entirely sure what is happening, but we encountered similar issues
recently. It was not reproducible. It might occur at different stages, and
errors could be weird other than "ctable stuff." Our code was Valgrind
clean since every PR in moose needs to go through rigorous Valgrind checks
b
Hi All,
I was doing a large-scale simulation using 12288 cores and had the
following error. The code ran fine with fewer than 12288 cores.
Any quick suggestions for tracking down this issue?
Thanks,
Fande,
[3342]PETSC ERROR: - Error Message
---
Thanks, Jed,
It is fascinating. I will try to check if I can do anything to have this
kind of improvement as well.
Thanks,
Fande,
On Fri, Jun 12, 2020 at 7:43 PM Jed Brown wrote:
> Jed Brown writes:
>
> > Fande Kong writes:
> >
> >>> There's a lot more
Thanks, Jed,
On Tue, Jun 9, 2020 at 3:19 PM Jed Brown wrote:
> Fande Kong writes:
>
> > Hi All,
> >
> > I am trying to interpret the results from "make stream" on two compute
> > nodes, where each node has 48 cores.
> >
> > If my calculatio
bit higher speedup
>
> Jacobian evaluations often have higher arithmetic intensity, but they
> may have MatSetValues(), which is slow because there is no arithmetic
> intensity, just memory motion.
>
Got it.
Thanks,
Fande,
>
>Barry
>
>
>
> On Jun 9, 2020, at 3:43 PM,
Hi All,
I am trying to interpret the results from "make stream" on two compute
nodes, where each node has 48 cores.
If my calculations are memory bandwidth limited, such as AMG, MatVec,
GMRES, etc..
The best speedup I could get is 16.6938 if I start from one core?? The
speedup for function evalua
Hi Mark,
This should help: -pc_factor_mat_solver_type superlu_dist
Thanks,
Fande
> On Apr 19, 2020, at 9:41 AM, Mark Adams wrote:
>
>
>>
>>
>> > > --download-superlu --download-superlu_dist
>>
>> You are installing with both superlu and superlu_dist. To verify - remove
>> superlu -
bs=4. What happens if you try aij with
> '-matload_block_size 1 -mat_no_inode true'?
> Hong
>
> ------
> *From:* petsc-users on behalf of Fande
> Kong
> *Sent:* Monday, March 30, 2020 12:25 PM
> *To:* PETSc users list
> *Subject:* [petsc
Thanks Jed,
I will try and let you know,
Thanks again!
Fande,
On Fri, Apr 3, 2020 at 4:29 PM Jed Brown wrote:
> Oh, you just want an initial guess for SNES? Does it work to pull out the
> SNES and SNESSetComputeInitialGuess?
>
> Fande Kong writes:
>
> > No. I am w
Thanks,
Fande,
On Fri, Apr 3, 2020 at 1:10 PM Jed Brown wrote:
> This sounds like you're talking about a starting procedure for a DAE (or
> near-singular ODE)?
>
> Fande Kong writes:
>
> > Hi All,
> >
> > TSSetSolution will set an initial condition for the current
Hi All,
TSSetSolution will set an initial condition for the current TSSolve(). What
should I do if I want to set an initial guess for the current solution that
is different from the initial condition? The initial guess is supposed to
be really close to the current solution, and then will accelera
Hi All,
There is a system of equations arising from the discretization of the 3D
incompressible Navier-Stokes equations using a finite element method. 4
unknowns are placed on each mesh point, and then there is a 4x4 saddle
point block on each mesh vertex. I was thinking of solving the linear
equations
In case someone wants to learn more about the hierarchical partitioning
algorithm, here is a reference:
https://arxiv.org/pdf/1809.02666.pdf
Thanks
Fande
> On Mar 25, 2020, at 5:18 PM, Mark Adams wrote:
>
>
>
>
>> On Wed, Mar 25, 2020 at 6:40 PM Fande Kong wro
On Wed, Mar 25, 2020 at 12:18 PM Mark Adams wrote:
> Also, a better test is see where streams pretty much saturates, then run
> that many processors per node and do the same test by increasing the nodes.
> This will tell you how well your network communication is doing.
>
> But this result has a
Hi Lin,
Do you have a Homebrew-installed MPI?
"
configure:6076: mpif90 -v >&5
mpifort for MPICH version 3.3
Reading specs from
/home/lin/.linuxbrew/Cellar/gcc/5.5.0_7/bin/../lib/gcc/x86_64-unknown-linux-gnu/5.5.0/specs
"
MOOSE environment package should carry everything you need: compiler, mpi,
A PR here https://gitlab.com/petsc/petsc/-/merge_requests/2612
On Wed, Mar 18, 2020 at 3:35 PM Fande Kong wrote:
> Thanks, Satish,
>
> I kept investigating this issue. Now I have more insights. The
> fundamental reason is that: Conda-compilers (installed by: conda install -c
7; in arg or 'sysroot/lib' in arg: continue
if not arg in lflags:
lflags.append(arg)
self.logPrint('Found library directory: '+arg, 4, 'compilers')
Should PETSc treat these libs as system-level libs?
Have a branch here: Fande-Kong/sk
Without touching the configuration file, the
option: --download-hypre-configure-arguments='LIBS="-lmpifort -lgfortran"',
also works.
Thanks, Satish,
Fande,
On Sat, Mar 14, 2020 at 4:37 PM Fande Kong wrote:
> OK. I finally got PETSc complied.
>
> "-lgfo
> Satish
> >
> > On Sat, 14 Mar 2020, Satish Balay via petsc-users wrote:
> >
> > > Its the same location as before. For some reason configure is not
> saving the relevant logs.
> > >
> > > I don't understand the saveLog() restoreLog() stuff
r libraries check worked fine
> without -lgfortran.
> >
> > But now - flbaslapack check is failing without it.
> >
> > To work arround - you can use option LIBS=-lgfortran
> >
> > Satish
> >
> > On Thu, 12 Mar 2020, Fande Kong wrote:
> >
>
On Fri, Feb 7, 2020 at 11:43 AM Victor Eijkhout
wrote:
>
>
> On Feb 7, 2020, at 12:31, Mark Adams wrote:
>
> BTW, one of my earliest talks, in grad school before I had any real
> results, was called "condition number does not matter"
>
>
> After you learn that the condition number gives an _upper
020 at 7:37 PM Fande Kong wrote:
>
>> Hi All,
>>
>> MOOSE team, Alex and I are working on some variable scaling techniques to
>> improve the condition number of the matrix of linear systems. The goal of
>> variable scaling is to make the diagonal of matrix as
to count eigenvalue clusters? For example, how many
eigenvalue clusters do we have in the attached images, respectively?
If you need more details, please let us know. Alex and I are happy to
provide any details you are interested in.
Thanks,
Fande Kong,
e:
> > > >
> > > > > The issue is:
> > > > >
> > > > > >>>
> > > > > [Errno 13] Permission denied: '/pbs/SLB'
> > > > > <<<
> > > > >
> > > > > Try re
Satish,
Do you have any suggestions for this?
Chris,
It would be helpful if you could share the PETSc configure log file with
us.
Fande,
-- Forwarded message -
From: Chris Thompson
Date: Tue, Dec 31, 2019 at 9:53 AM
Subject: Moose install troubleshooting help
To: moose-use
Did you try "--with-batch=1"? A suggestion was proposed by Satish earlier
(CCing here).
Fande,
On Wed, Dec 18, 2019 at 12:36 PM Tomas Mondragon <
tom.alex.mondra...@gmail.com> wrote:
> Yes, but now that I have tried this a couple of different ways with
> different --with-mpiexec options, I am be
Are you able to run your MPI code using " mpiexec_mpt -n 1 ./yourbinary"?
You need to use --with-mpiexec to specify exactly what command line you
can run, e.g., --with-mpiexec="mpirun -n 1".
I am also CCing this email to the PETSc developers, who may know the
answers to these questions.
Thanks,
Fande,
On M
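The advice above comes down to two commands: first verify the site launcher works at all, then hand that exact command line to configure. A hedged sketch using the MPT launcher named in this thread; substitute whatever launcher your system provides:

```shell
# 1. Check that the site launcher can run a serial MPI binary:
mpiexec_mpt -n 1 ./yourbinary

# 2. Hand that exact command line to PETSc's configure:
./configure --with-mpiexec="mpiexec_mpt -n 1" ...
```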
On Thu, Dec 5, 2019 at 12:34 PM Mark Adams wrote:
>
>
> On Thu, Dec 5, 2019 at 11:20 AM Eda Oktay wrote:
>
>> Hello all,
>>
>> I am trying to find edge cut information of ParMETIS and CHACO. When I
>> use ParMETIS,
>> MatPartitioningViewImbalance(part,partitioning)
>> works and it gives also num
a little bit the way in which ST is initialized, and
> maybe we modify this as well. It is not decided yet.
>
> Jose
>
>
> > On Nov 5, 2019, at 0:28, Fande Kong wrote:
> >
> > Thanks Jose,
> >
> > I think I understand now. Another question: what is t
On Mon, Apr 1, 2019 at 10:24 AM Matthew Knepley via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> On Mon, Apr 1, 2019 at 10:22 AM Yingjie Wu via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Dear PETSc developers:
>> Hi,
>>
>> I've been using -snes_mf_operator and I've customized a precon
Hi All,
Since PetscTable will be replaced by khash in the future somehow, it is
better to use khash for new implementations. I was wondering where I can
find some examples that use khash? Do we have any PETSc wrappers of khash?
Thanks,
Fande,
Hi Hong,
According to this PR
https://bitbucket.org/petsc/petsc/pull-requests/1061/a_selinger-feature-faster-scalable/diff
Should we set the scalable algorithm as default?
Thanks,
Fande Kong,
On Fri, Jan 11, 2019 at 10:34 AM Zhang, Hong via petsc-users <
petsc-users@mcs.anl.gov> wrote:
OK...,
Thanks for the words.
Fande,
On Thu, Jan 10, 2019 at 3:36 PM Matthew Knepley wrote:
> On Thu, Jan 10, 2019 at 5:31 PM Fande Kong wrote:
>
>> Thanks, Matt,
>>
>> And then what is the reason to remove PetscDataType? I am out of
>> curiosity.
>>
Hi All,
The second parameter is changed from PetscDataType to MPI_Datatype starting
from PETSc-3.9.x
Thanks,
Fande Kong,
Sorry, hit the wrong button.
On Fri, Dec 21, 2018 at 7:56 PM Fande Kong wrote:
>
>
> On Fri, Dec 21, 2018 at 9:44 AM Mark Adams wrote:
>
>> Also, you mentioned that you are using 10 levels. This is very strange
>> with GAMG. You can run with -info and grep on GAMG
ed. I understand you may still
>> want to reuse these data structures by default but for my simulation, the
>> preconditioner is fixed and there is no reason to keep the "c->ptap".
>>
>
>> It would be great, if we could have this optional functionality.
>
lt but for my simulation, the
preconditioner is fixed and there is no reason to keep the "c->ptap".
It would be great if we could have this optional functionality.
Fande Kong,
On Thu, Dec 20, 2018 at 9:45 PM Zhang, Hong wrote:
> We use nonscalable implementation as default, and swit
Use -ksp_view to confirm the options are actually set.
Fande
Sent from my iPhone
> On Oct 16, 2018, at 7:40 PM, Ellen M. Price
> wrote:
>
> Maybe a stupid suggestion, but sometimes I forget to call the
> *SetFromOptions function on my object, and then get confused when
> changing the options
The error messages may have nothing to do with PETSc or MOOSE.
They might come from a package used for MPI communication,
https://github.com/openucx/ucx. I have no experience with such things. It
may be helpful to contact your HPC administrator.
Thanks,
Fande,
On Tue, Oct 2, 2018 at 9:24 AM Matthew Knepley
leaves) then A^2 is dense.
> If you have particular stencils for A and P, then we could tell you the
> fill ratio.
>
> Fande Kong writes:
>
> > Hi All,
> >
> > I was wondering how much memory is required to get PtAP done? Do you have
> > any simple for
Hi All,
I was wondering how much memory is required to get PtAP done. Do you have
any simple formula for this, so that I can make an estimate?
Fande,
[132]PETSC ERROR: - Error Message
--
[132]PETSC ERROR: Out of me
t know much about finite
> element. Or am I still using a loop of KSP in PETSc? I'm a newcomer to
> PETSc; please give me some advice.
>
>
> Thanks,
>
> Yingjie
>
>
> Fande Kong wrote on Thu, Sep 27, 2018 at 12:25 AM:
>
>> I have implemented this algorithm in SLEPC. Tak
I have implemented this algorithm in SLEPc. Take a look at this example:
http://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex34.c.html
The motivation of the algorithm is also for neutron calculations (a
moose-based application).
Fande,
On Wed, Sep 26, 2018 at 10:02 AM Yingjie W
ay/petsc (maint=)
> $ git ls-files |grep mhypre.c
> src/mat/impls/hypre/mhypre.c
>
> So URL should be:
>
>
> http://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/hypre/mhypre.c.html#MatHYPRESetPreallocation
>
> Satish
>
> On Thu, 13 Sep 2018, Fande Kong wrote:
http://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/aij/hypre/mhypre.c.html#MatHYPRESetPreallocation
Fande
On Wed, Sep 5, 2018 at 9:54 AM Smith, Barry F. wrote:
>
> 2 should belong to one of the subdomains, either one is fine.
>
>Barry
>
>
> > On Sep 5, 2018, at 10:46 AM, Rossi, Simone wrote:
> >
> > I’m trying to setup GASM, but I’m probably misunderstanding something.
> >
> > If I have this m
Hi Barry,
I haven't had time to look into TS so far, but it is definitely
interesting. One simple question would be this: if I have a simple loop
over time steps, and SNES is called at each step, how hard would it be to
convert my code to use TS?
Any suggestions? Where should I start?
Fande,
On Thu,
This may help in this situation.
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatResetPreallocation.html
Fande,
On Thu, Jun 14, 2018 at 1:08 PM, Smith, Barry F. wrote:
>
>I am guessing your matrix has an "envelope" of nonzero values but the
> first time you fill the matr
Hi Satish,
A MOOSE user has trouble building Metis that is "downloaded" from a local
directory. Do you have any idea?
Vi,
Could you share "configure.log" with PETSc team?
Thanks,
Fande,
-- Forwarded message --
From: Vi Ha
Date: Wed, Jun 6, 2018 at 11:00 AM
Subject: Re: Inst
Hi Eric,
I am curious whether the parallel symbolic factorization is faster than
the sequential version. Do you have timings?
Fande,
On Tue, May 22, 2018 at 12:18 PM, Eric Chamberland <
eric.chamberl...@giref.ulaval.ca> wrote:
>
>
> On 22/05/18 02:03 PM, Smith, Barry F. wrote:
>
>>
>> Hmm, why wo
--with-blaslapack-lib=-mkl -L' + os.environ['MKLROOT'] + '/lib/intel64
works.
Fande,
On Thu, May 3, 2018 at 10:09 AM, Satish Balay wrote:
> Ok you are not 'building blaslapack' - but using mkl [as per
> configure.log].
>
> I'll have to check the issue. It might be something to do with using
>
The default git gives me:
Could not execute "['git', 'rev-parse', '--git-dir']"
when I am configuring PETSc.
The manually loaded git modules work just fine.
Fande,
On Wed, Apr 4, 2018 at 5:04 PM, Garvey, Cormac T
wrote:
> I though it was fixed, yes I will look into it again.
>
> Do you get an
On Tue, Apr 3, 2018 at 9:12 AM, Stefano Zampini
wrote:
>
> On Apr 3, 2018, at 4:58 PM, Satish Balay wrote:
>
> On Tue, 3 Apr 2018, Kong, Fande wrote:
>
> On Tue, Apr 3, 2018 at 1:17 AM, Smith, Barry F.
> wrote:
>
>
> Each external package definitely needs its own duplicated communicator;
> ca
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> Checking environment...
> >>> Traceback (most recent call last):
> >>>   File "./configure", line 10, in <module>
> >>>     execfile(os.path.join(os.path.dirname(__file__), 'config', 'configure.py'))
> >>>   File "./config/configure.py", line 206, in <module>
> >>>     log.write('PETSc install directory: '+petsc.destdir)
> >>> AttributeError: PETSc instance has no attribute 'destdir'
> >>>
> >>>
> >>>
> >>> SLEPc may be needed to synchronized for new changes in PETSc.
> >>>
> >>> Thanks,
> >>>
> >>> Fande Kong
> >>
> >
>
>