On Thu, 3 Dec 2009 16:40:10 -0600, Barry Smith bsmith at mcs.anl.gov wrote:
The vector has a pointer to the DM, so the VecView() for that derived vector class has access to the DM information. The same viewer object can be used with a bunch of different-sized Vecs since it gets the [...]
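A minimal sketch of the usage pattern described above, assuming the point is that a Vec obtained from a DM carries a reference back to that DM, so VecView() can produce structured output and one viewer can be reused for Vecs of different sizes. This uses the modern PETSc spelling (the 2009-era DA API differed) and omits error checking; the grid size is arbitrary:

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM  da;
  Vec u;

  PetscInitialize(&argc, &argv, NULL, NULL);
  DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
               DMDA_STENCIL_STAR, 16, 16, PETSC_DECIDE, PETSC_DECIDE,
               1, 1, NULL, NULL, &da);
  DMSetUp(da);
  DMCreateGlobalVector(da, &u);           /* u keeps a reference to da */
  VecSet(u, 1.0);
  /* VecView consults the DM attached to u, not the viewer, for layout */
  VecView(u, PETSC_VIEWER_STDOUT_WORLD);
  VecDestroy(&u);
  DMDestroy(&da);
  PetscFinalize();
  return 0;
}

The same viewer (here PETSC_VIEWER_STDOUT_WORLD, but a file viewer works the same way) can then be handed to vectors coming from differently sized DMs without reconfiguring it.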
On Fri, 4 Dec 2009 08:23:15 -0600, Barry Smith bsmith at mcs.anl.gov wrote:
Any ideas?
Maybe a -snes_mf has crept in.
I have a problem with a routine that evaluates a Jacobian matrix. The problem is that PETSc never enters the RHSJacobian() routine. I know that PETSc enters the [...]
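To make the hint concrete: -snes_mf tells SNES to apply the Jacobian matrix-free by finite-differencing the residual, so a user-supplied Jacobian routine is typically never called. A hedged sketch of how an RHS Jacobian is normally registered with TS, in the modern spelling (the 2009-era callback also took Mat* and MatStructure* arguments); error checking omitted:

#include <petscts.h>

static PetscErrorCode RHSJacobian(TS ts, PetscReal t, Vec u, Mat A, Mat B, void *ctx)
{
  PetscFunctionBeginUser;
  /* fill B (and A, if different) with dF/du evaluated at u */
  MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);
  PetscFunctionReturn(0);
}

/* registration, after the TS and the Jacobian Mat J exist:
 *   TSSetRHSJacobian(ts, J, J, RHSJacobian, NULL);
 * with -snes_mf in the options database, SNES substitutes a matrix-free
 * finite-difference Jacobian and this callback is bypassed. */

Running with -snes_mf_operator instead keeps the user Jacobian for building the preconditioner while applying the operator matrix-free.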
On Fri, 4 Dec 2009 08:52:44 -0600, Barry Smith bsmith at mcs.anl.gov wrote:
This is not accurate. The SAMRAI vector class does not implement it. Yes, this means the SAMRAI vector class cannot use any PETSc built-in matrix classes, but that is OK; it provides its own.
Right, so I would [...]
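For context on "it provides its own": the usual way a vector class that does not expose raw arrays still supplies a matrix is a shell matrix, whose action is written entirely in terms of Vec operations. A minimal sketch under that assumption (names like MyMult and user_ctx are illustrative, not from SAMRAI or this thread; error checking omitted):

#include <petscmat.h>

static PetscErrorCode MyMult(Mat A, Vec x, Vec y)
{
  void *ctx;

  PetscFunctionBeginUser;
  MatShellGetContext(A, &ctx);   /* application data supplied at creation */
  /* apply the operator using only Vec-level operations, never VecGetArray() */
  VecCopy(x, y);
  VecScale(y, 2.0);
  PetscFunctionReturn(0);
}

/* usage, assuming n local and N global rows/columns:
 *   Mat A;
 *   MatCreateShell(PETSC_COMM_WORLD, n, n, N, N, user_ctx, &A);
 *   MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMult);
 */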
Suggestion:
1) Discard PETSc
2) Develop a general Py{CL, CUDA, OpenMP-C} system that dispatches tasks onto GPUs and multi-core systems (generally we would have one Python process per compute node, and local parallelism would be handled by the low-level kernels running on the cores and/or GPUs).
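As one illustration of what the "low-level kernels" in item 2 might look like on the multi-core side (the OpenMP-C branch of Py{CL, CUDA, OpenMP-C}), here is a node-local routine that a per-node driver process could dispatch across cores; the name and interface are invented for illustration, and it compiles with any C99 compiler given -fopenmp:

#include <stddef.h>

/* y <- y + alpha*x over the node-local slice of the data */
void axpy_kernel(size_t n, double alpha, const double *x, double *y)
{
  #pragma omp parallel for
  for (size_t i = 0; i < n; i++)
    y[i] += alpha * x[i];
}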