> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
--
Zhengyong Ren
AUG Group, Institute of Geophysics
Department of Geosciences, ETH Zurich
NO H 47 Sonneggstrasse 5
CH-8092, Zürich, Switzerland
Tel: +41 44 633 37561
e-mail: zhengyong.ren at aug.ig.erdw.ethz.ch
Gmail: renzhengyong at gmail.com
-- next part --
An HTML attachment was scrubbed...
URL:
<http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20120403/3c60b66b/attachment.htm>
There are two linear solves (for 1 SNES and 2 SNES), so there are two MGSetUp
on each level. Then there are a total of 9 multigrid iterations (in both linear
solves together), hence 9 smoother applications on level 0 (level 0 means the
coarse grid solve). One smooth down and one smooth up on level 1, hence 18
total smooths.
-pc_mg_log doesn't have anything to do with DA or DMMG; it is part of the
basic PCMG. Are you sure you are calling SNESSetFromOptions()?
Barry
On Apr 3, 2012, at 6:56 PM, Yuqi Wu wrote:
> Hi, Mark,
>
> Thank you so much for your suggestion.
>
> The problem 1 is resolved by avoiding calli
Hi, Barry,
Thank you. If my program converges in two SNES iterations,
0 SNES norm 1.014991e+02, 0 KSP its (nan coarse its average), last norm
0.00e+00
1 SNES norm 9.925218e-05, 4 KSP its (5.25 coarse its average), last norm
2.268574e-06.
2 SNES norm 1.397282e-09, 5 KSP its (5.20 coarse its average)
RELATIVE.
>>
>> //***
>> Below setup of my preconditioner,
>>
>> /* set up the MG preconditioner */
>> ierr = SNESGetKSP(snes,&fineksp);CHKERRQ(ierr);
>> ierr = KSPGetPC(fineksp,&finepc);CHKERRQ(ierr);
>> ierr = PCSetType(finepc,PCMG);CHKERRQ(ierr);
>> ierr = PCMGSetType(finepc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr);
>> ierr = PCMGSetLevels(finepc,2,PETSC_NULL);CHKERRQ(ierr);
>> ierr = PCMGSetCycleType(finepc,PC_MG_CYCLE_V);CHKERRQ(ierr);
>> ierr = PCMGSetNumberSmoothUp(finepc,1);CHKERRQ(ierr);
>> ierr = PCMGSetNumberSmoothDown(finepc,1);CHKERRQ(ierr);
>> ierr = PCMGSetGalerkin(finepc,PETSC_TRUE);CHKERRQ(ierr);
>> ierr =
>> PCMGSetResidual(finepc,1,PCMGDefaultResidual,algebra->J);CHKERRQ(ierr);
>>
>> ierr = PCMGSetInterpolation(finepc,1,ctx->Interp);CHKERRQ(ierr);
>>
>> /* set up the coarse solve */
>> ierr = PCMGGetCoarseSolve(finepc,&ctx->coarseksp);CHKERRQ(ierr);
>> ierr = KSPSetOptionsPrefix(ctx->coarseksp,"coarse_");CHKERRQ(ierr);
>> ierr = KSPSetFromOptions(ctx->coarseksp);CHKERRQ(ierr);
>>
>> /* set up the fine grid smoother */
>> ierr = PCMGGetSmoother(finepc,1,&kspsmooth);CHKERRQ(ierr);
>> ierr = KSPSetType(kspsmooth, KSPRICHARDSON);CHKERRQ(ierr);
>> ierr = KSPGetPC(kspsmooth,&asmpc);CHKERRQ(ierr);
>> ierr = PCSetType(asmpc,PCASM);CHKERRQ(ierr);
>> ierr = PCASMSetOverlap(asmpc,0);CHKERRQ(ierr);
>> ierr =
>> PCASMSetLocalSubdomains(asmpc,1,&grid->df_global_asm,PETSC_NULL);CHKERRQ(ierr);
>
-- next part --
An embedded and charset-unspecified text was scrubbed...
Name: mg_info.txt
URL:
<http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20120403/4aac5bbd/attachment-0001.txt>
On Apr 3, 2012, at 3:25 PM, Yuqi Wu wrote:
> Dear All,
>
> I want to create two grid preconditioner for the linear Jacobian solve for
> the nonlinear problem. I am trying to use the inexact Newton as the nonlinear
> solver, and the fGMRES as the linear solve. For the preconditioner for the
>
ate either the operator or its
inverse. Other approaches would typically involve further knowledge of your
problem.
Dear All,
I want to create a two grid preconditioner for the linear Jacobian solve for the
nonlinear problem. I am trying to use inexact Newton as the nonlinear
solver and fGMRES as the linear solver. For the preconditioner for the
linear solve, I want to create a two level ASM preconditioner.
MatNullSpaceDestroy(&matnull);CHKERRQ(ierr);
On Tue, 3 Apr 2012, Anton Popov wrote:
> I support 100% what Barry said. Just get the work done. Cray and IBM Linux
> systems do not support ALL the system calls that PETSc uses. So it's always
> kind of a problem to purge petscconf.h manually between "configure" and
> "make" on their machines.
ates have been set and if not check the matrix. This should be easy
>> to do; I'll look at the ML code to clone the API.
>> >>
>> >> One thing to keep in mind is that diagonal scaling breaks the null
>> space (ie, the rigid body modes have to be scaled appropriately).
2012 10:07 PM
> > Subject: Re: [petsc-users] transfer vector data diagonally on DA
> >
> > On Sun, Apr 1, 2012 at 22:01, khalid ashraf wrote:
> > I want to transfer vector data diagonally in the DA grid like
> > for (k=zs; k<zs+zm; k++) {
> > for (j=ys; j<ys+ym; j++) {
> > for (i=xs; i<xs+xm; i++) {
> > if(i!=mx-1 || j!=my-1 || k!=mz-1){
> > u_new[k+1][j+1][i+1]=u[k][j][i];}
> > }}}
> >
> > Could you please suggest the best way to do it minimizing interprocessor
> assignments.
> >
> > Both are on the same DMDA?
> >
> > Communicate U to Ulocal (DMGlobalToLocalBegin/End) using a BOX stencil
> with width at least 1, get the global array u_new[][][] from UGlobalNew and
> the local arrays u[][][] from Ulocal, then assign u_new[k][j][i] =
> u[k-1][j-1][i-1].
> >> One thing to keep in mind is that diagonal scaling breaks the null space
> >> (ie, the rigid body modes have to be scaled appropriately). Who owns the
> >> diagonal scaling? If it is Mat then we might want MatSetNearNullSpace to
> >> do this, otherwise we should think of a good way to deal with this. It is
> >> very error prone to not do the right thing here, we should at least throw
> >> an error.
unknown
>> file
>> [0]PETSC ERROR: PETSC: Attaching gdb to ./defmod of pid 32384 on display
>> localhost:20.0 on machine nid10649
>> Unable to start debugger in xterm: No such file or directory
>> aborting job:
>> application called MPI_Abort(MPI_COMM_WORLD, 0) - process 0
>> _pmii_daemon(SIGCHLD): [NID 10649] [c23-3c0s6n1] [Mon Apr 2 13:06:48 2012]
>> PE 0 exit signal Aborted
>> Application 133198 exit codes: 134
>> Application 133198 resources: utime ~1s, stime ~0s
I think you are misunderstanding the global ordering when using DA vectors
in parallel. You might consider using VecSetValuesLocal() or
VecSetValuesStencil() instead of VecSetValues(). Read up in the users manual
and examples on PETSc global ordering.
Barry
On Apr 3, 2012, at 3:46 A
ace() is a lightweight accessor, it doesn't increment the
reference count and you don't have to restore it.
);
ierr = MatSetNearNullSpace(A,matnull);CHKERRQ(ierr);
ierr = MatNullSpaceDestroy(&matnull);CHKERRQ(ierr);