Hi again,
Please allow me to explain in detail here:
1. I am using Zang's (JCP 1994) method for incompressible flow on a
generalized collocated grid.
2. The main difference lies in the calculation of the grid matrix, for
which I am using the work of Gaitonde et al. (2002).
3. I want
Hi again Sir,
Thank you very much for the quick response. I am planning to
implement a multiphase algorithm on a collocated grid. I already wrote a C
code for the 2D case, but it wasn't very generalized. So for the final
version, I intend to use Python as a script to interact with PETSc ker
On Tue, Oct 4, 2016 at 9:02 PM, Somdeb Bandopadhyay wrote:
> Dear all,
> I want to write a solver for incompressible Navier-Stokes
> using Python and I want to use PETSc (particularly DMDA & KSP) for this.
> May I know if this type of work is feasible/already done?
>
How do you plan
Dear all,
I want to write a solver for incompressible Navier-Stokes using
Python and I want to use PETSc (particularly DMDA & KSP) for this. May I
know if this type of work is feasible/already done?
I intend to run my solver on a cluster and so am slightly
concerned about th
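
A minimal petsc4py sketch of the DMDA + KSP setup being asked about; the
grid size, the CG/GAMG choice, and the exact method names are illustrative
assumptions based on the petsc4py API, not details taken from this thread:

    import sys
    import petsc4py
    petsc4py.init(sys.argv)          # forward -ksp_*/-pc_* options from the command line
    from petsc4py import PETSc

    # 2D distributed structured grid (DMDA), one unknown per node, stencil width 1
    da = PETSc.DMDA().create([128, 128], dof=1, stencil_width=1)

    A = da.createMat()               # operator with the DMDA's parallel layout
    b = da.createGlobalVec()         # right-hand side
    x = da.createGlobalVec()         # solution

    # ... assemble the discretized operator into A and the source into b ...

    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    ksp.setType('cg')
    ksp.getPC().setType('gamg')      # placeholder; override at run time with -pc_type
    ksp.setFromOptions()
    ksp.solve(b, x)

Run under mpiexec, the same script works unchanged on a cluster, since the
DMDA distributes the grid across whatever communicator PETSc is started on.
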
On Tue, Oct 4, 2016 at 3:26 PM, frank wrote:
>
> On 10/04/2016 01:20 PM, Matthew Knepley wrote:
>
> On Tue, Oct 4, 2016 at 3:09 PM, frank wrote:
>
>> Hi Dave,
>>
>> Thank you for the reply.
>> What do you mean by the "nested calls to KSPSolve"?
>>
>
> KSPSolve is called again after redistributing the computation.
On Tue, Oct 4, 2016 at 3:09 PM, frank wrote:
> Hi Dave,
>
> Thank you for the reply.
> What do you mean by the "nested calls to KSPSolve"?
>
KSPSolve is called again after redistributing the computation.
> I tried to call KSPSolve twice, but the second solve converged in 0
> iterations. KSPSolve seems to remember the solution.
Hi Dave,
Thank you for the reply.
What do you mean by the "nested calls to KSPSolve"?
I tried to call KSPSolve twice, but the second solve converged in 0
iterations. KSPSolve seems to remember the solution. How can I force both
solves to start from the same initial guess?
Thank you.
Frank
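
One way to get that behaviour (sketched here with petsc4py for brevity; the
equivalent C calls are VecCopy, VecSet and KSPSetInitialGuessNonzero) is to
keep a copy of the initial guess and restore it before the second solve. The
ksp, b and x objects below are assumed to exist already:

    x0 = x.duplicate()
    x.copy(x0)                     # save the initial guess (VecCopy)
    ksp.solve(b, x)                # first solve overwrites x with the solution

    x0.copy(x)                     # restore the saved guess before re-solving
    # or: x.set(0.0) plus ksp.setInitialGuessNonzero(False) for a zero guess
    ksp.solve(b, x)                # second solve now starts from the same point
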
On Tuesday, 4 October 2016, frank wrote:
> Hi,
> This question is a follow-up to the thread "Question about memory usage in
> Multigrid preconditioner".
> I used to have the "Out of Memory (OOM)" problem when using the
> CG+Telescope MG solver with 32768 cores. Adding the "-matrap 0;
> -matptap_scalable" option did solve that problem.
-ksp_view in both cases?
> On Oct 4, 2016, at 1:13 PM, frank wrote:
>
> Hi,
>
> This question is a follow-up to the thread "Question about memory usage in
> Multigrid preconditioner".
> I used to have the "Out of Memory (OOM)" problem when using the CG+Telescope
> MG solver with 32768 cores.
On Tue, Oct 4, 2016 at 1:13 PM, frank wrote:
> Hi,
> This question is a follow-up to the thread "Question about memory usage in
> Multigrid preconditioner".
> I used to have the "Out of Memory (OOM)" problem when using the
> CG+Telescope MG solver with 32768 cores. Adding the "-matrap 0;
> -matptap_scalable" option did solve that problem.
Hi,
This question is a follow-up to the thread "Question about memory usage in
Multigrid preconditioner".
I used to have the "Out of Memory (OOM)" problem when using the
CG+Telescope MG solver with 32768 cores. Adding the "-matrap 0;
-matptap_scalable" option did solve that problem.
Then I test
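
For anyone searching the archive later, a hedged sketch of how those options
might be combined on an actual command line; the launcher and executable name
are placeholders, only -ksp_type cg, -matrap 0, -matptap_scalable and
-ksp_view correspond to things mentioned in this thread, and the multigrid /
telescope part of the preconditioner would need its own -pc_* options, which
are omitted here:

    mpiexec -n 32768 ./my_solver \
        -ksp_type cg \
        -matrap 0 \
        -matptap_scalable \
        -ksp_view
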
On Tue, Oct 4, 2016 at 10:43 AM, Jed Brown wrote:
> Matthew Knepley writes:
>
> > On Tue, Oct 4, 2016 at 10:23 AM, Jed Brown wrote:
> >
> >> Matthew Knepley writes:
> >>
> >> > On Mon, Oct 3, 2016 at 9:51 PM, Praveen C wrote:
> >> >
> >> >> DG for elliptic operators still makes a lot of sense
Matthew Knepley writes:
> On Tue, Oct 4, 2016 at 10:23 AM, Jed Brown wrote:
>
>> Matthew Knepley writes:
>>
>> > On Mon, Oct 3, 2016 at 9:51 PM, Praveen C wrote:
>> >
>> >> DG for elliptic operators still makes a lot of sense if you have
>> >>
>> >> problems with discontinuous coefficients
>> >
On Tue, Oct 4, 2016 at 10:23 AM, Jed Brown wrote:
> Matthew Knepley writes:
>
> > On Mon, Oct 3, 2016 at 9:51 PM, Praveen C wrote:
> >
> >> DG for elliptic operators still makes a lot of sense if you have
> >>
> >> problems with discontinuous coefficients
> >>
> >
> > This is thrown around a lot, but without justification.
Matthew Knepley writes:
> On Mon, Oct 3, 2016 at 9:51 PM, Praveen C wrote:
>
>> DG for elliptic operators still makes a lot of sense if you have
>>
>> problems with discontinuous coefficients
>>
>
> This is thrown around a lot, but without justification. Why is it better
> for discontinuous coefficients?
On Tue, Oct 4, 2016 at 2:39 AM, Sander Arens wrote:
> I think it would also be interesting to have something similar to TS ex25,
> but now with DMPlex and DG.
>
I think this would be my first target. I realize that the Laplacian is part
of it, so that Justin's suggestion of ex12 follows from that.
On Mon, Oct 3, 2016 at 9:51 PM, Praveen C wrote:
> DG for elliptic operators still makes a lot of sense if you have
>
> problems with discontinuous coefficients
>
This is thrown around a lot, but without justification. Why is it better
for discontinuous coefficients? The
solution is smoother than
I think it would also be interesting to have something similar to TS ex25,
but now with DMPlex and DG.
On 4 October 2016 at 04:57, Justin Chang wrote:
> Advection-diffusion equations. Perhaps SNES ex12 could be modified to
> include an advection term?
>
> On Mon, Oct 3, 2016 at 9:52 PM, Barry Sm