Hi,
There is no error now. Thank you so much.
Frank
On 10/5/2016 8:50 PM, Barry Smith wrote:
Sorry, as indicated in
http://www.mcs.anl.gov/petsc/documentation/changes/dev.html, in order to get the
previous behavior of
DMDACreate3d() you need to follow it with these two lines:
DMSetFromOptions(da);
DMSetUp(da);
Barry
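For concreteness, a minimal sketch of the sequence described above (the grid sizes, dof, stencil width, and variable names are placeholders, and error checking is omitted for brevity):

  #include <petscdmda.h>

  int main(int argc, char **argv)
  {
    DM  da;
    Vec g;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* Create the DMDA as before; with petsc-dev this alone no longer
       fully sets up the object. */
    DMDACreate3d(PETSC_COMM_WORLD,
                 DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                 DMDA_STENCIL_STAR, 16, 16, 16,
                 PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                 1, 1, NULL, NULL, NULL, &da);

    /* The two extra calls needed to recover the previous behavior */
    DMSetFromOptions(da);
    DMSetUp(da);

    /* Only after DMSetUp() is it safe to ask the DM for vectors */
    DMCreateGlobalVector(da, &g);

    VecDestroy(&g);
    DMDestroy(&da);
    PetscFinalize();
    return 0;
  }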
> On Oct 5, 2016, at 9:11 PM, Hengjie Wang
The message "Scalar value must be same on all processes, argument # 2" comes
up often when a Nan or Inf as gotten into the computation. The IEEE standard
for floating point operations defines that Nan != Nan;
I recommend running again with -fp_trap this should cause the code to stop
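Not from the original thread, just an illustrative sketch: in addition to running with -fp_trap (e.g. mpiexec -n 2 ./your_app -fp_trap, where your_app is a placeholder name), a computed norm can be checked explicitly before it is handed to a collective call. PetscIsInfOrNanReal() is the PETSc helper assumed here, the vector name r is hypothetical, and the fragment belongs inside a function returning PetscErrorCode.

  /* Guard a residual norm against Inf/NaN before it reaches a collective
     call; 'r' is a placeholder for whatever residual vector is in use. */
  PetscErrorCode ierr;
  PetscReal      nrm;

  ierr = VecNorm(r, NORM_2, &nrm);CHKERRQ(ierr);
  if (PetscIsInfOrNanReal(nrm)) {
    SETERRQ(PETSC_COMM_SELF, PETSC_ERR_FP, "Residual norm is Inf or NaN");
  }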
Hi folks,
I am trying to track down a bug that is sometimes triggered when solving a
singular system (Poisson + Neumann). It only seems to happen in parallel and
halfway through the run. I can provide detailed information about the
actual problem, but the error message I get boils down to this:
Hi,
I just tried .F90. It had the error. I attached the full error log.
Thank you.
Frank
On 10/5/2016 6:57 PM, Barry Smith wrote:
PETSc Fortran programs should always end with .F90, not .f90. Can you try
again with that name? The capital F is important.
Barry
On Oct 5, 2016, at 7:57
PETSc Fortran programs should always end with .F90, not .f90. Can you try again
with that name? The capital F is important.
Barry
> On Oct 5, 2016, at 7:57 PM, frank wrote:
>
> Hi,
>
> I updated PETSc to the latest version by pulling from the repo. Then I found
> one of
Hi,
I did.
I am using GNU compiler 5.4.0. I don't know if this matters.
Thank you
Frank
On 10/5/2016 6:08 PM, Matthew Knepley wrote:
On Wed, Oct 5, 2016 at 7:57 PM, frank wrote:
Hi,
I updated PETSc to the latest version by pulling from
On Wed, Oct 5, 2016 at 7:57 PM, frank wrote:
> Hi,
>
> I updated PETSc to the latest version by pulling from the repo. Then I found
> that one of my old codes, which worked before, now outputs errors.
> After debugging, I found that the error is caused by "DMCreateGlobalVector".
> I
Hi,
I updated PETSc to the latest version by pulling from the repo. Then I
found that one of my old codes, which worked before, now outputs errors.
After debugging, I found that the error is caused by "DMCreateGlobalVector".
I have attached a short program which can reproduce the error. This program
works
> On Oct 5, 2016, at 2:30 PM, Matthew Overholt wrote:
>
> Hi Petsc-Users,
>
> I am trying to understand an issue where PetscCommDuplicate() calls are
> taking an increasing percentage of time as I run a fixed-sized problem on
> more processes.
>
> I am using the FEM
On Wed, Oct 5, 2016 at 2:30 PM, Matthew Overholt
wrote:
> Hi Petsc-Users,
>
>
>
> I am trying to understand an issue where PetscCommDuplicate() calls are
> taking an increasing percentage of time as I run a fixed-sized problem on
> more processes.
>
>
>
> I am using the FEM
Hi Petsc-Users,
I am trying to understand an issue where PetscCommDuplicate() calls are
taking an increasing percentage of time as I run a fixed-sized problem on
more processes.
I am using the FEM to solve the steady-state heat transfer equation (K.x =
q) using a PC direct solver, like
On 5 October 2016 at 18:49, Matthew Knepley wrote:
> On Wed, Oct 5, 2016 at 11:19 AM, E. Tadeu wrote:
>
>> Matt,
>>
>> Do you know if there is any example of solving Navier Stokes using a
>> staggered approach by using a different DM object such as
On Wed, Oct 5, 2016 at 11:19 AM, E. Tadeu wrote:
> Matt,
>
> Do you know if there is any example of solving Navier Stokes using a
> staggered approach by using a different DM object such as DMPlex?
>
SNES ex62 can do P2/P1 Stokes, which is similar. Is that what you want to
Thanks to all. Your answers were very helpful.
Best
praveen
Matt,
Do you know if there is any example of solving Navier Stokes using a
staggered approach by using a different DM object such as DMPlex?
Thanks,
Edson
On Tue, Oct 4, 2016 at 11:12 PM, Matthew Knepley wrote:
> On Tue, Oct 4, 2016 at 9:02 PM, Somdeb Bandopadhyay
On Tue, Oct 4, 2016 at 9:47 PM, Somdeb Bandopadhyay
wrote:
> Hi again,
> Please allow me to explain in detail here:
>
> 1. I am using Zang's (JCP 1994) method for incompressible flow on a
> generalized collocated grid.
> 2. The main difference lies in the
On Wed, Oct 5, 2016 at 7:54 AM, Praveen C wrote:
> Dear all
>
> I am using DMDA and create a vector with
>
> DMCreateGlobalVector
>
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateLocalVector.html
Matt
> However, this does not have ghost values.
Praveen C writes:
> So I have to create a global vector AND a local vector using
> DMCreateLocalVector.
>
> Then I do DMGlobalToLocalBegin/End. Does this not lead to too much
> copying?
It's typically more efficient -- the solver gets to work with contiguous
vectors and
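For concreteness, a sketch of the global/local pattern discussed above, assuming a 3d DMDA named da that has already been set up, with error checking omitted; DMGetLocalVector()/DMRestoreLocalVector() borrow a work vector from the DM's pool rather than allocating a new one each time:

  Vec           g, l;
  PetscScalar ***a;                /* 3d DMDA with one dof assumed */

  DMCreateGlobalVector(da, &g);    /* owned, contiguous; what solvers see    */
  DMGetLocalVector(da, &l);        /* borrowed work vector with ghost points */

  DMGlobalToLocalBegin(da, g, INSERT_VALUES, l);
  DMGlobalToLocalEnd(da, g, INSERT_VALUES, l);

  DMDAVecGetArray(da, l, &a);      /* a[k][j][i] now includes ghost entries  */
  /* ... stencil computations using ghosted values ... */
  DMDAVecRestoreArray(da, l, &a);

  DMRestoreLocalVector(da, &l);
  VecDestroy(&g);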
丁老师:
> How to broadcast a double value to all the nodes in the cluster with
> PETSc?
>
MPI_Bcast().
Hong
Praveen:
DMGetLocalVector().
See petsc/src/snes/examples/tutorials/ex19.c
Hong
> So I have to create a global vector AND a local vector using
> DMCreateLocalVector.
>
> Then I do DMGlobalToLocalBegin/End. Does this not lead to too much
> copying? I see there is VecCreateGhost but no such thing
So I have to create a global vector AND a local vector using
DMCreateLocalVector.
Then I do DMGlobalToLocalBegin/End. Does this not lead to too much
copying? I see there is VecCreateGhost but no such thing for DMDA?
Best
praveen
PS: It would be nice if the reply-to was set to the mailing list. I
Praveen C writes:
> Dear all
>
> I am using DMDA and create a vector with
>
> DMCreateGlobalVector
>
>
> However, this does not have ghost values. How should I create a vector if I
> want to access ghost values?
That's what local vectors are for.
Dear all
I am using DMDA and create a vector with
DMCreateGlobalVector
However, this does not have ghost values. How should I create a vector if I
want to access ghost values?
Thanks
praveen
PETSc, by design, does not wrap any of the existing functionality of
MPI, so this would be accomplished with an MPI function like
MPI_Bcast().
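A minimal sketch of that suggestion, broadcasting one double from rank 0 to every process inside a PETSc program (the value itself is a placeholder):

  double      value = 0.0;
  PetscMPIInt rank;

  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  if (rank == 0) value = 3.14;     /* placeholder: only rank 0 knows it */
  MPI_Bcast(&value, 1, MPI_DOUBLE, 0, PETSC_COMM_WORLD);
  /* every rank now has value == 3.14 */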
On Wed, Oct 5, 2016 at 11:02 AM, 丁老师 wrote:
> Dear professor:
> How to broadcast a double value to all the nodes in the cluster
Dear professor:
How can I broadcast a double value to all the nodes in the cluster with PETSc?