VecLoadIntoVector and double dispatch

2009-12-04 Thread Jed Brown
On Thu, 3 Dec 2009 16:40:10 -0600, Barry Smith bsmith at mcs.anl.gov wrote: The vector has a pointer to the DM so the VecView() for that derived vector class has access to the DM information. The same viewer object can be used with a bunch of different sized Vecs since it gets the
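The design being discussed — a view operation that must dispatch on both the vector type and the viewer type, with the vector carrying a back-pointer to its DM — can be sketched as follows. This is an illustrative toy in Python, not PETSc source; the class and function names (`DM`, `Vec`, `vec_view`, the dispatch table) are hypothetical stand-ins for the real C machinery.

```python
# Illustrative sketch of VecView-style double dispatch (hypothetical names,
# not PETSc code). The vector keeps a pointer to the DM that created it, so
# the type-specific view routine can use grid information, and one viewer
# object can serve vectors of any size.

class DM:
    def __init__(self, shape):
        self.shape = shape  # structured-grid dimensions

class Vec:
    def __init__(self, data, dm=None):
        self.data = data
        self.dm = dm  # back-pointer to the DM, as described in the thread

class ASCIIViewer:
    name = "ascii"

class BinaryViewer:
    name = "binary"

# The "double" in double dispatch: the table is keyed on the pair
# (vector class, viewer class), not on either one alone.
_view_table = {}

def register_view(vec_cls, viewer_cls, fn):
    _view_table[(vec_cls, viewer_cls)] = fn

def vec_view(vec, viewer):
    # Look up the implementation for this exact (vector, viewer) pair;
    # an unregistered pair raises KeyError.
    return _view_table[(type(vec), type(viewer))](vec, viewer)

def view_dm_vec_ascii(vec, viewer):
    # Because vec.dm is available here, output can be laid out using the
    # grid shape; the viewer itself stores no size information.
    return f"grid {vec.dm.shape}: {vec.data}"

register_view(Vec, ASCIIViewer, view_dm_vec_ascii)
```

The key property is that neither the vector nor the viewer alone selects the implementation; registering a new (vector, viewer) pair requires touching neither existing class.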

Fwd: [petsc-maint #38493] TSSetIJacobian()

2009-12-04 Thread Barry Smith
); }/*.Petsc_Solve.*/

VecLoadIntoVector and double dispatch

2009-12-04 Thread Barry Smith
On Dec 4, 2009, at 2:58 AM, Jed Brown wrote: On Thu, 3 Dec 2009 16:40:10 -0600, Barry Smith bsmith at mcs.anl.gov wrote: The vector has a pointer to the DM so the VecView() for that derived vector class has access to the DM information. The same viewer object can be used with a bunch

VecLoadIntoVector and double dispatch

2009-12-04 Thread Matthew Knepley
(message body was an HTML attachment; no text preview available)

Fwd: [petsc-maint #38493] TSSetIJacobian()

2009-12-04 Thread Jed Brown
On Fri, 4 Dec 2009 08:23:15 -0600, Barry Smith bsmith at mcs.anl.gov wrote: Any ideas? Maybe a -snes_mf has crept in. I have a problem with a routine that evaluates a Jacobian matrix. The problem is that PETSc never enters the RHSJacobian() routine. I know that PETSc enters the
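The diagnosis offered in this thread — that a stray `-snes_mf` explains why the user's Jacobian routine is never entered — rests on what matrix-free mode does: the solver approximates Jacobian action by finite differences of the residual and skips the user callback entirely. A toy scalar sketch of that behavior (hypothetical code, not PETSc's SNES):

```python
# Toy illustration (not PETSc): with a matrix-free option in effect, the
# solver differences the residual F instead of calling the user's Jacobian
# routine, so that routine is never entered.

def newton_step(F, x, jacobian=None, mf=False, eps=1e-7):
    """One scalar Newton step, for illustration only."""
    if mf:
        # matrix-free: approximate F'(x) by a finite difference of F;
        # the jacobian callback is deliberately never invoked
        J = (F(x + eps) - F(x)) / eps
    else:
        # assembled path: the user's Jacobian routine is called
        J = jacobian(x)
    return x - F(x) / J
```

Instrumenting the callback shows the symptom from the report: with `mf=True` the Jacobian routine receives no calls at all, while the iterate is nearly identical to the assembled case.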

VecLoadIntoVector and double dispatch

2009-12-04 Thread Jed Brown
On Fri, 4 Dec 2009 08:52:44 -0600, Barry Smith bsmith at mcs.anl.gov wrote: This is not accurate. The SAMRAI vector class does not implement it. Yes, this means the SAMRAI vector class cannot use any PETSc built in matrix classes, but that is ok it provides its own. Right, so I would
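The constraint discussed here — a vector class (SAMRAI's) that does not implement raw array access, and therefore cannot be paired with built-in matrix classes, but is free to provide its own operators — can be sketched like this. Illustrative Python only; `ShellVec`, `get_array`, and `builtin_matmult` are hypothetical names, not SAMRAI or PETSc API.

```python
# Sketch of the constraint under discussion (hypothetical names): a vector
# exposing only whole-vector operations cannot feed kernels that need raw
# storage, but its whole-vector operations still work fine.

class ShellVec:
    """Vector exposing only whole-vector operations, no array access."""
    def __init__(self, entries):
        self._entries = list(entries)

    def axpy(self, alpha, other):
        # y <- alpha*x + y: the kind of operation this class does provide
        self._entries = [y + alpha * x
                         for y, x in zip(self._entries, other._entries)]
    # deliberately no get_array(): built-in kernels must refuse this type

def builtin_matmult(rows, vec):
    """A 'built-in' dense kernel that needs direct access to vec storage."""
    if not hasattr(vec, "get_array"):
        raise NotImplementedError("vector type does not expose its array")
    x = vec.get_array()
    return [sum(a * b for a, b in zip(row, x)) for row in rows]
```

A class like this would pair only with operators it supplies itself, which is exactly the trade-off described for the SAMRAI vectors.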

VecLoadIntoVector and double dispatch

2009-12-04 Thread Barry Smith
On Dec 4, 2009, at 10:31 AM, Jed Brown wrote: On Fri, 4 Dec 2009 08:52:44 -0600, Barry Smith bsmith at mcs.anl.gov wrote: This is not accurate. The SAMRAI vector class does not implement it. Yes, this means the SAMRAI vector class cannot use any PETSc built in matrix classes, but

since developing object oriented software is so cumbersome in C and we are all resistant to doing it in C++

2009-12-04 Thread Barry Smith
Suggestion: 1) Discard PETSc 2) Develop a general Py{CL, CUDA, OpenMP-C} system that dispatches tasks onto GPUs and multi-core systems (generally we would have one python process per compute node and local parallelism would be done via the low-level kernels to the cores and/or GPUs.)
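The dispatch idea in this proposal — one driver process per compute node that routes each task to GPU or multicore kernels — might look something like the following toy policy. This is a speculative sketch of the suggestion, not an existing system; the backend names and threshold are invented for illustration.

```python
# Hypothetical sketch of per-node task dispatch in the Py{CL, CUDA,
# OpenMP-C} proposal: the Python driver picks a low-level backend for
# each task; local parallelism lives inside the chosen kernel.

def pick_backend(task_size, available):
    """Toy policy: send large data-parallel tasks to a GPU backend when
    one is present on this node, otherwise fall back to multicore C
    kernels. Threshold and names are illustrative assumptions."""
    if task_size > 10_000 and "cuda" in available:
        return "cuda"
    if task_size > 10_000 and "opencl" in available:
        return "opencl"
    return "openmp-c"
```

In the proposed layout there would be one such driver per node, with the heavy numerical work done entirely inside the dispatched kernels.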