Hi, Derek,
Try applying this tiny (but dirty) patch to your version of PETSc to disable
the VecScatterMemcpyPlan optimization and see if it helps.
Thanks.
--Junchao Zhang
On Wed, Mar 20, 2019 at 6:33 PM Junchao Zhang <jczh...@mcs.anl.gov> wrote:
Did you see the warning with small scale runs? Is it possible to provide a
test code?
You mentioned "changing PETSc now would be pretty painful". Is it because it
will affect your performance (but not your code)? If so, could you try PETSc
master and run your code with or without -vecscatter_t
Trying to track down some memory corruption I'm seeing on larger scale runs
(3.5B+ unknowns). Was able to run Valgrind on it... and I'm seeing quite a
lot of uninitialized value errors coming from ghost updating. Here are
some of the traces:
==87695== Conditional jump or move depends on uninitialised value(s)
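For context, here is a minimal sketch of the ghost-update pattern these
traces point at, assuming the vector came from VecCreateGhost (the helper
name is a placeholder):

#include <petscvec.h>

/* Minimal sketch (assumes 'vg' came from VecCreateGhost). Valgrind flags
   reads of ghost slots that were never written; this forward scatter is
   the step that fills them from the owning processes. */
static PetscErrorCode UpdateGhosts(Vec vg)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = VecGhostUpdateBegin(vg, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecGhostUpdateEnd(vg, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}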
Forgot to mention that a long VecScatter time might also be due to local memory copies.
If the communication pattern has a large local-to-local (self-to-self) scatter,
which often happens thanks to locality, then the memory copy time is counted in
VecScatter. You can analyze your code's communication pattern
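For example, if you have access to the VecScatter, you can view it; a
minimal sketch, assuming 'scatter' is a VecScatter you created yourself:

#include <petscvec.h>

/* Hedged sketch: print a scatter's layout. The output shows which entries
   are copied locally and which are exchanged with other ranks. */
static PetscErrorCode ViewScatter(VecScatter scatter)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = VecScatterView(scatter, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}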
Sorry, I meant 20 cores on one node. OK, I will retry with -log_sync and come
back. Thanks for your help.
On Wed, Mar 20, 2019 at 2:43 PM Zhang, Junchao wrote:
>
>
> On Wed, Mar 20, 2019 at 4:18 PM Manuel Valera wrote:
>
>> Thanks for your answer. For example, I have a log for 200 cores across
On Wed, Mar 20, 2019 at 4:18 PM Manuel Valera <mvaler...@sdsu.edu> wrote:
Thanks for your answer. For example, I have a log for 200 cores across 10
nodes that reads:
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
See the "Mess AvgLen Reduct" number in each log stage. Mess is the total
number of messages sent in an event over all processes. AvgLen is average
message len. Reduct is the number of global reduction.
Each event like VecScatterBegin/End has a maximal execution time over all
processes, and
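If you want the log to separate a communication-heavy phase, you can wrap it
in its own log stage; a minimal sketch, with the stage name and the scatter
arguments as placeholders:

#include <petscvec.h>

/* Minimal sketch: run one ghost exchange inside its own log stage so that
   -log_view reports its Count/Time/Mess/AvgLen/Reduct separately.
   (In real code, register the stage once in your setup path rather than
   on every call.) */
static PetscErrorCode TimedGhostExchange(VecScatter scatter, Vec x, Vec y)
{
  PetscErrorCode ierr;
  PetscLogStage  stage;

  PetscFunctionBeginUser;
  ierr = PetscLogStageRegister("GhostExchange", &stage);CHKERRQ(ierr);
  ierr = PetscLogStagePush(stage);CHKERRQ(ierr);
  ierr = VecScatterBegin(scatter, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(scatter, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = PetscLogStagePop();CHKERRQ(ierr);
  PetscFunctionReturn(0);
}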
Hello,
I am working on timing my model, which we made MPI scalable using PETSc
DMDAs. I want to know more about the output log and how to calculate the
total communication time for my runs. So far I see we have "MPI Messages"
and "MPI Message Lengths" in the log, along with VecScatterEnd and
VecScatter
> On Mar 20, 2019, at 5:52 AM, Yingjie Wu via petsc-users
> wrote:
>
> Dear PETSc developers:
> Hi,
> Recently, I have been using PETSc to solve non-linear PDEs for thermodynamic problems.
> In the process of solving, I noticed the following two phenomena and hope to get
> some help and suggestions.
More precisely: something happens when upgrading the functions
MatPtAPNumeric_MPIAIJ_MPIAIJ and/or MatPtAPSymbolic_MPIAIJ_MPIAIJ.
Unfortunately, there are a lot of differences between the old and new
versions of these functions. I keep investigating but if you have any
idea, please let me know.
On Wed, Mar 20, 2019 at 8:30 AM Yingjie Wu via petsc-users
<petsc-users@mcs.anl.gov> wrote:
> Thank you very much for your reply.
> I think my statement may not be very clear. I want to know why the linear
> residual increases at gmres restart.
>
GMRES combines the functions in the Krylov subspace
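As a quick check, you can enlarge the restart length and see whether the
jumps move; a minimal sketch, assuming 'ksp' is your solver and 100 is just
an example value:

#include <petscksp.h>

/* Minimal sketch (assumes 'ksp' uses GMRES): enlarge the restart length
   from the default 30 to test whether the residual jumps always coincide
   with restarts. Command-line equivalent: -ksp_gmres_restart 100 */
static PetscErrorCode SetLargerRestart(KSP ksp)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = KSPGMRESSetRestart(ksp, 100);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}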
Hi all,
I used git bisect to determine when the memory requirement increased. I found
that the first "bad" commit is aa690a28a7284adb519c28cb44eae20a2c131c85.
Barry was right, this commit seems to be about an evolution of
MatPtAPSymbolic_MPIAIJ_MPIAIJ. You mentioned the option "-matptap_via
scalable"
Thank you very much for your reply.
I think my statement may not be very clear. I want to know why the linear
residual increases at gmres restart.
I think I should have no problem with the residual evaluation function,
because after setting a large GMRES restart, the results are also in line
with expectations
Do not add the ".c" extension. Read my answer: 'make DENEME_TEMIZ_ENYENI_FINAL'
> On 20 Mar 2019, at 11:15, Eda Oktay wrote:
>
> Before using mpicc, I just tried to compile with make DENEME_ENYENI-FINAL.c
> but it says there is nothing to do.
>
> On Wed, Mar 20, 2019, 12:39 PM Jose E. Roman wrote:
Before using mpicc, I just tried to compile with make DENEME_ENYENI-FINAL.c
but it says there is nothing to do.
On Wed, Mar 20, 2019, 12:39 PM Jose E. Roman wrote:
> You must compile your program with 'make DENEME_TEMIZ_ENYENI_FINAL' or
> just 'make', not with 'mpicc DENEME_TEMIZ_ENYENI_FINAL.c'
Mat objects in PETSc are parallel, meaning that the data structure is
distributed. You should use MatGetOwnershipRange() so that each process
accesses its local rows only.
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetOwnershipRange.html
This is very basic usage. You sh
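A minimal sketch of that pattern, taking the element-wise absolute value of
a matrix A into a duplicate B (the helper name is hypothetical):

#include <petscmat.h>

/* Hypothetical helper: element-wise absolute value of A, written into a
   duplicate B. Each process touches only the rows it owns, as reported by
   MatGetOwnershipRange(). */
static PetscErrorCode MatAbsLocal(Mat A, Mat *B)
{
  PetscErrorCode     ierr;
  PetscInt           rstart, rend, i, j, ncols;
  const PetscInt    *cols;
  const PetscScalar *vals;
  PetscScalar       *absvals;

  PetscFunctionBeginUser;
  ierr = MatDuplicate(A, MAT_DO_NOT_COPY_VALUES, B);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) { /* local rows only */
    ierr = MatGetRow(A, i, &ncols, &cols, &vals);CHKERRQ(ierr);
    ierr = PetscMalloc1(ncols, &absvals);CHKERRQ(ierr);
    for (j = 0; j < ncols; j++) absvals[j] = PetscAbsScalar(vals[j]);
    ierr = MatSetValues(*B, 1, &i, ncols, cols, absvals, INSERT_VALUES);CHKERRQ(ierr);
    ierr = PetscFree(absvals);CHKERRQ(ierr);
    ierr = MatRestoreRow(A, i, &ncols, &cols, &vals);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(*B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}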
You must compile your program with 'make DENEME_TEMIZ_ENYENI_FINAL' or just
'make', not with 'mpicc DENEME_TEMIZ_ENYENI_FINAL.c'
> On 20 Mar 2019, at 8:25, Eda Oktay via petsc-users
> wrote:
>
> Hello,
>
> I am trying to compile a parallel program DENEME_TEMIZ_ENYENI_FINAL.c in
> PETSc
Hello,
I wrote a code that computes the element-wise absolute value of a matrix. When I
run the code sequentially, it works. However, when I try to use it in
parallel with the same matrix, I get the following error:
[1]PETSC ERROR: Argument out of range
[1]PETSC ERROR: Only local rows
The absolute value
Hello,
I am trying to compile a parallel program DENEME_TEMIZ_ENYENI_FINAL.c in
PETSc. I wrote the following makefile but it says that there is nothing to
do with the program:
export CLINKER = gcc
DENEME_TEMIZ_ENYENI_FINAL: DENEME_TEMIZ_ENYENI_FINAL.o chkopts
	-${CLINKER} -o DENEME_TEMIZ_ENYENI_FINAL DENEME_TEMIZ_ENYENI_FINAL.o ${PETSC_LIB}
Dear Professor Roman,
I decided to install petsc-3.10.3 with slepc since I did it once
successfully. Thanks for your help.
Eda
On Tue, 19 Mar 2019 at 15:15, Jose E. Roman wrote:
> What is the output of 'make check' in $PETSC_DIR ?
>
> > El 19 mar 2019, a las 13:02, Eda Oktay escribió: