release Monday Dec 16th?

2008-12-11 Thread Matthew Knepley
Fine with me. Everything seems to be going well for me.

   Matt

On Thu, Dec 11, 2008 at 4:36 PM, Lisandro Dalcin dalcinl at gmail.com wrote:

 OK, fine with me...

 BTW, have you had any chance to look at and try the examples in tutorials/python?


 On Thu, Dec 11, 2008 at 7:50 PM, Barry Smith bsmith at mcs.anl.gov wrote:
 
Ok, we cannot put this off forever. Can we all make a commitment to
  making the PETSc 3.0 release
  on Monday Dec 16th?
 
Thanks
 
Barry
 
 



 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594




-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener


Request related to SLEPc

2008-12-12 Thread Matthew Knepley
I need to understand what you use this variable for. If you are just
linking, then you do not need it. If you want configure information, then
you need it in order to get anything, so specifying it in a file will not help.

   Matt

On Fri, Dec 12, 2008 at 9:01 AM, Jose E. Roman jroman at dsic.upv.es wrote:

 SLEPc's configure.py uses the value of $PETSC_ARCH in order to set up
 everything for installation. We never had a $SLEPC_ARCH variable because our
 configure.py does not add platform-dependent functionality.

 Now the problem comes when PETSc has been configured with --prefix and
 installed with make install. In that case, $PETSC_ARCH is no longer
 available and SLEPc's configure.py is in trouble.

 A simple workaround would be that PETSc's configure (or make install) would
 add a variable (e.g. PETSC_ARCH_NAME) in file petscvariables. We parse this
 file so the arch name would be readily available even if $PETSC_ARCH is
 undefined.

 Can someone do this? Other solutions are welcome.

 Thanks,
 Jose




-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener


Request related to SLEPc

2008-12-12 Thread Matthew Knepley
On Fri, Dec 12, 2008 at 9:32 AM, Lisandro Dalcin dalcinl at gmail.com wrote:

 I would like to add that, although the new buildsystem is by far better
 than the old one, PETSc has lost a nice feature: being able to be
 installed in a central location for multiple $PETSC_ARCH's. This
 feature is something I need, as I have to maintain the PETSc
 installation on our cluster, and I really need to have at least debug
 and optimized builds because our applications can also be built in the
 two modes.


I do not understand the problem. Here we follow the Linux install model. If
you want different versions, they are just in different directories.

  Matt



 Up to now, I'm using this rule: I pass to configure
 --prefix=/usr/local/petsc/3.0.0/$PETSC_ARCH. But then, after
 installation, the actual $PETSC_DIR should be passed as something like
 this: /usr/local/petsc/3.0.0/linux-gnu. In petsc4py I've tried to be
 smart: it can build against the build directory, against a standard
 install (I mean, when you pass --prefix=/path/to/petsc), or against my
 special rule (--prefix=/path/to/petsc/$PETSC_ARCH). Moreover, petsc4py
 can be built against MANY different $PETSC_ARCH's; this way, before
 running a script, you just setenv PETSC_ARCH=some-arch, and the Python
 import machinery will internally load the appropriate extension module.
 This is really, really nice, as I can run a small problem with debug
 libs, and next run a larger problem with optimized libs, just by
 exporting an environment variable.

 When using the PETSc makefiles for other C/C++ apps, my special
 install rule will not work the same as when building against the
 PETSc build directory. Of course, I believe it should be easy to make
 it work, but I'm thinking that many other users will run into the same
 need.


 Now, regarding the specific questions of Jose, I've noticed that the
 header petscconf.h has a line #define PETSC_ARCH_NAME XXX. I
 cannot figure out how this define is generated (autoconf stuff?), but
 if this define is guaranteed to be the same as the $PETSC_ARCH used to
 build PETSc, then Jose perhaps could use a regex to look for a
 meaningful $PETSC_ARCH value.
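
 A hypothetical illustration of the define mentioned above (the arch
 string is invented here, and exactly how configure quotes the value is
 an assumption); an installed petscconf.h might contain a line like:

 #define PETSC_ARCH_NAME "linux-gnu-c-debug"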


 On Fri, Dec 12, 2008 at 1:01 PM, Jose E. Roman jroman at dsic.upv.es wrote:
  SLEPc's configure.py uses the value of $PETSC_ARCH in order to setup
  everything for installation. We never had a $SLEPC_ARCH variable because
 our
  configure.py does not add platform-dependent functionality.
 
  Now the problem comes when PETSc has been configured with --prefix and
  installed with make install. In that case, $PETSC_ARCH is no longer
  available and SLEPc's configure.py is in trouble.
 
  A simple workaround would be that PETSc's configure (or make install)
 would
  add a variable (e.g. PETSC_ARCH_NAME) in file petscvariables. We parse
 this
  file so the arch name would be readily available even if $PETSC_ARCH is
  undefined.
 
  Can someone do this? Other solutions are welcome.
 
  Thanks,
  Jose
 
 



 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594




-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener


Request related to SLEPc

2008-12-12 Thread Matthew Knepley
If PETSC_ARCH is not set, I would just use the default arch generated by
configure, like we do in the PETSc configure.

  Matt

On Fri, Dec 12, 2008 at 11:29 AM, Jose E. Roman jroman at dsic.upv.es wrote:


 On 12/12/2008, Jose E. Roman wrote:

  SLEPc's configure.py uses the value of $PETSC_ARCH in order to setup
 everything for installation. We never had a $SLEPC_ARCH variable because our
 configure.py does not add platform-dependent functionality.

 Now the problem comes when PETSc has been configured with --prefix and
 installed with make install. In that case, $PETSC_ARCH is no longer
 available and SLEPc's configure.py is in trouble.

 A simple workaround would be that PETSc's configure (or make install)
 would add a variable (e.g. PETSC_ARCH_NAME) in file petscvariables. We parse
 this file so the arch name would be readily available even if $PETSC_ARCH is
 undefined.

 Can someone do this? Other solutions are welcome.

 Thanks,
 Jose


 Let me explain the situation a bit better.

 In SLEPc's configure.py we now (in slepc-dev) create a directory called
 $PETSC_ARCH in $SLEPC_DIR, then $PETSC_ARCH/lib contains the compiled SLEPc
 libraries and $PETSC_ARCH/conf contains log files and a slepcvariables
 file. After building, we allow 'make install' if a --prefix was specified in
 SLEPc's configure.py (this is not working in slepc-dev yet).

 The thing is that if $PETSC_ARCH is not set, then we cannot create the
 $PETSC_ARCH directory. I guess it would be ok to create the files in the
 root $SLEPC_DIR directory, but this may be confusing for users if they
 expect a $PETSC_ARCH directory, and more complicated for our makefiles.

 Any suggestion?

 The latest SLEPc snapshot is here:
 http://www.grycap.upv.es/slepc/download/distrib/pre/slepc-dev-081210.tgz

 Jose




-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener


how do you access the nonzero elements in SeqAIJ matrices

2008-08-04 Thread Matthew Knepley
On Mon, Aug 4, 2008 at 3:14 AM, Ahmed El Zein ahmed at azein.com wrote:
 I am working on a project where I would like to copy a sparse matrix in
 CSR format.

 I have tried MatGetRow() which works OK but I would really like to get
 pointers to the 3 arrays directly.

 I also tried MatGetRowIJ() which allows me to get the i and j arrays but
 I can't see how to access the nonzero elements.

You can use MatGetArray().

  Matt

 and finally I attempted to access the arrays directly like this:
 Mat_SeqAIJ  *a = (Mat_SeqAIJ*)A->data;
 MatScalar *val = a->a;
 PetscInt  *ptr = a->i;
 PetscInt  *ind = a->j;

 However when accessing directly I get different values for ptr and
 SIGSEGV when accessing val or ind.

 also I get a bogus number for a->nz (134630032 instead of 21)

 Can someone please explain what I am doing wrong?

 Ahmed





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
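
A hedged sketch of the suggestion above, pulling the CSR triple out of a
SeqAIJ matrix through the public interface instead of poking at Mat_SeqAIJ.
Names follow the PETSc of this era (MatGetArray() was later superseded by
MatSeqAIJGetArray()), and argument lists vary slightly between releases, so
check the man pages for your version:

PetscErrorCode DumpCSR(Mat A)
{
  PetscErrorCode ierr;
  PetscInt       n,*ia,*ja;
  PetscScalar    *vals;
  PetscTruth     done;

  /* row pointers ia[] and column indices ja[]; shift 0 gives C-style indexing */
  ierr = MatGetRowIJ(A,0,PETSC_FALSE,PETSC_FALSE,&n,&ia,&ja,&done);CHKERRQ(ierr);
  if (!done) SETERRQ(PETSC_ERR_SUP,"MatGetRowIJ() not supported for this matrix type");
  /* numerical values, ordered the same way as ja[] */
  ierr = MatGetArray(A,&vals);CHKERRQ(ierr);
  /* ia[0..n], ja[0..ia[n]-1] and vals[0..ia[n]-1] now describe the CSR data */
  ierr = MatRestoreArray(A,&vals);CHKERRQ(ierr);
  ierr = MatRestoreRowIJ(A,0,PETSC_FALSE,PETSC_FALSE,&n,&ia,&ja,&done);CHKERRQ(ierr);
  return 0;
}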




how do you access the nonzero elements in SeqAIJ matrices

2008-08-04 Thread Matthew Knepley
On Mon, Aug 4, 2008 at 9:58 AM, Ahmed El Zein ahmed at azein.com wrote:
 On Mon, 2008-08-04 at 04:17 -0500, Matthew Knepley wrote:
 On Mon, Aug 4, 2008 at 3:14 AM, Ahmed El Zein ahmed at azein.com wrote:
  I am working on a project where I would like to copy a sparse matrix in
  CSR format.
 
  I have tried MatGetRow() which works OK but I would really like to get
  pointers to the 3 arrays directly.
 
  I also tried MatGetRowIJ() which allows me to get the i and j arrays but
  I can't see how to access the nonzero elements.

 You can use MatGetArray().
 Thanks. The man page states that:
 The result of this routine is dependent on the underlying matrix data
 structure, and may not even work for certain matrix types.

 How do I find out which matrix types support it? and is there a method
 that works across all matrix types?

1) There is nothing that works for all matrix types

2) There should NEVER be something that works for all matrix types. That is
the point of using an interface, so we can use arbitrary implementation
structures underneath.

   Matt

 Thanks,
 Ahmed

   Matt

  and finally I attempted to access the arrays directly like this:
  Mat_SeqAIJ  *a = (Mat_SeqAIJ*)A->data;
  MatScalar *val = a->a;
  PetscInt  *ptr = a->i;
  PetscInt  *ind = a->j;
 
  However when accessing directly I get different values for ptr and
  SIGSEGV when accessing val or ind.
 
  also I get a bogus number for a->nz (134630032 instead of 21)
 What is the correct way to get the number of nonzeros?

 
  Can someone please explain what I am doing wrong?
 
  Ahmed
 
 








-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
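
On the follow-up question about the nonzero count: a hedged sketch using
MatGetInfo(), which is the type-independent query (field and flag names as
in the PETSc of this era):

MatInfo info;
ierr = MatGetInfo(A,MAT_LOCAL,&info);CHKERRQ(ierr);
/* info.nz_used is the number of stored nonzeros, returned as a double */
ierr = PetscPrintf(PETSC_COMM_SELF,"nonzeros: %g\n",info.nz_used);CHKERRQ(ierr);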




Is c2html a required external package?

2008-08-08 Thread Matthew Knepley
On Fri, Aug 8, 2008 at 12:09 AM, Shi Jin jinzishuai at yahoo.com wrote:
 Hi there,

 I was trying to build petsc-dev on an IBM AIX machine but failed at c2html.
 I tried not to use it with --with-c2html=0, but it seemed to still be
 needed. Why do we need it? I don't need to generate documentation for this
 test.
 Thanks.

It is for documentation, but we do not consider an installation with
broken documentation fully functional. Can you send
the configure.log?

  Thanks,

 Matt

 Shi

  --
 Shi Jin, PhD
-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




petsc and x-fem

2008-08-12 Thread Matthew Knepley
On Mon, Aug 4, 2008 at 6:49 AM, Techas techas at gmail.com wrote:

 Hi Matthew,

 I'm Sergio (X-FEM), we met some weeks ago in Davis.
 I'm playing a little bit with petsc to evaluate how much work it will
 take me to mount my x-fem code on petsc.


Sorry I am just replying now. I returned from Norway yesterday.



 My first question is: can you tell me a good example of the assembly
 of a global finite element matrix (distributed in n procs) based on a
 distributed mesh?


The answer is in two parts. If you have a structured grid, then the DA
construct can do everything and is very simple. You just call
MatSetValuesStencil() as in KSP ex2.
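
For instance, a hedged sketch of one row of a 2D 5-point Laplacian assembled
with MatSetValuesStencil(); the loop indices i and j and their bounds (from
DAGetCorners() in real code) are assumed:

MatStencil  row, col[5];
PetscScalar v[5];

row.i = i; row.j = j;
col[0].i = i;   col[0].j = j;   v[0] =  4.0;  /* diagonal       */
col[1].i = i-1; col[1].j = j;   v[1] = -1.0;  /* west neighbor  */
col[2].i = i+1; col[2].j = j;   v[2] = -1.0;  /* east neighbor  */
col[3].i = i;   col[3].j = j-1; v[3] = -1.0;  /* south neighbor */
col[4].i = i;   col[4].j = j+1; v[4] = -1.0;  /* north neighbor */
ierr = MatSetValuesStencil(A,1,&row,5,col,v,INSERT_VALUES);CHKERRQ(ierr);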

If you have an unstructured grid, we have new functionality (only in the
development version) in a Mesh object. This will be in the upcoming release,
but you can see it now in src/dm/mesh/sieve/problems/Bratu.hh. The function
to examine is Rhs_Unstructured() which forms the residual for this equation
on an unstructured mesh. Unfortunately, this is new and has little
documentation. Before the release, I will write some, but until then you
will have to ask me questions if you want to try it out.

  Thanks,

 Matt



 If you want me to write to the petsc list just tell me.

 thank you!
 Sergio.

 --
 Sergio Zlotnik, PhD
 Group of Dynamics of the Lithosphere -GDL-
 Department of Geophysics & Tectonics
 Institute of Earth Sciences - CSIC
 Sole Sabaris s/n
 Barcelona 08028 - SPAIN

 Tel: +34 93 409 54 10
 Fax: +34 93 411 00 12
 email: szlotnik at ija.csic.es

 Web page http://www.ija.csic.es/gt/sergioz/




-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener


PCApply_Shell ghosts...

2008-08-13 Thread Matthew Knepley
PETSc knows nothing about your domain, so it can't know what you might want
ghosted. I think the thing to do is use a VecScatter to map the input Vec to a
ghosted Vec (called a local vector in DA language).

  Matt
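
A minimal sketch of this suggestion inside a shell preconditioner, assuming
a DA-managed grid with the DA and a preallocated local (ghosted) vector kept
in the shell context; ShellCtx, da and xlocal are illustrative names, not
from this thread, and the callback signature follows the PETSc of this era,
where the raw context pointer is passed in:

PetscErrorCode PCApply_MyShell(void *ctx,Vec x,Vec y)
{
  ShellCtx      *shell = (ShellCtx*)ctx;
  PetscErrorCode ierr;

  /* scatter the global input vector into the ghosted local vector */
  ierr = DAGlobalToLocalBegin(shell->da,x,INSERT_VALUES,shell->xlocal);CHKERRQ(ierr);
  ierr = DAGlobalToLocalEnd(shell->da,x,INSERT_VALUES,shell->xlocal);CHKERRQ(ierr);
  /* ... apply the preconditioner using shell->xlocal, writing the result to y ... */
  return 0;
}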

On Wed, Aug 13, 2008 at 3:41 PM, Eric Chamberland
Eric.Chamberland at giref.ulaval.ca wrote:
 Hi,

 we are using PETSc in our code and we have a problem with, I think, the
 ghosted values that we expect.

 We developed our own preconditioner and tried to use it in a parallel
 environment.  With other preconditioners (PETSc built-in), everything works
 fine.  But with our home-made one, here is the problem that we have:

 When PCApply_Shell gives us the vectors, there are no ghost values in
 them...  In our code, we expect to always have these values...

 Not having the ghosts in the vectors passed by PCApply_Shell, is that
 normal behavior?

 Thanks for the attention,

 Eric





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




[petsc4py] What should be the 'default' communicator?

2008-08-25 Thread Matthew Knepley
I would still maintain that PETSC_COMM_WORLD is the correct default. There
are better paradigms for embarrassingly parallel operation, like Condor. PETSc
is intended for parallel, domain decomposition runs.

   Matt

On Mon, Aug 25, 2008 at 10:54 AM, Lisandro Dalcin dalcinl at gmail.com wrote:
 After working hard on mpi4py, this week I'll spend my time cleaning up
 and adding features to the new Cython-based petsc4py. Then, I'll be
 asking questions on this list and requesting advice.

 In all calls that create new PETSc objects, I've decided to make the
 'comm' argument optional. If the communicator is not passed,
 PETSC_COMM_WORLD is currently used. This is the approach PETSc uses in
 some C++ calls implemented through PetscPolymorphicFunction().

 But now I believe that is wrong, and that PETSC_COMM_SELF should be
 the default. Or perhaps even better, I should let users set the
 default communicator used by petsc4py to create new (parallel)
 objects.

 An anecdote: some time ago, a petsc4py user wrote a sequential code
 and created objects without passing communicator arguments; next he
 wanted to solve many of those problems in different worker processes
 in an embarrassingly parallel fashion and collect results at the
 master process. Of course, he ran into trouble. Then I asked him to
 initialize PETSc in such a way that PETSC_COMM_WORLD was actually
 PETSC_COMM_SELF (by setting the world comm before PetscInitialize()).
 This mostly works, but has a problem: we have lost the actual
 PETSC_COMM_WORLD, so we are not able to create a parallel object after
 PetscInitialize().

 Any thoughts?


 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
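
A hedged C sketch of the workaround described above (the user initializes
MPI first, then shrinks PETSc's world communicator before initialization);
after this, the true MPI_COMM_WORLD is no longer reachable through
PETSC_COMM_WORLD:

#include "petsc.h"

int main(int argc,char **argv)
{
  PetscErrorCode ierr;

  MPI_Init(&argc,&argv);            /* PETSc will not re-initialize MPI      */
  PETSC_COMM_WORLD = MPI_COMM_SELF; /* must happen before PetscInitialize()  */
  ierr = PetscInitialize(&argc,&argv,0,0);CHKERRQ(ierr);
  /* ... every "world" object created from here on is really sequential ...  */
  ierr = PetscFinalize();CHKERRQ(ierr);
  MPI_Finalize();                   /* ours to finalize, since we did MPI_Init() */
  return 0;
}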




[petsc4py] What should be the 'default' communicator?

2008-08-25 Thread Matthew Knepley
On Mon, Aug 25, 2008 at 1:22 PM, Lisandro Dalcin dalcinl at gmail.com wrote:
 On Mon, Aug 25, 2008 at 1:08 PM, Matthew Knepley knepley at gmail.com wrote:
 I would still maintain that PETSC_COMM_WORLD is the correct default. There
 are better paradigms for embarassingly parallel operation, like Condor. PETSc
 is intended for parallel, domain decomposition runs.

 Yes, you are completely right. But I believe that many people still
 use PETSc in a sequential way just because PETSc is full featured,
 well designed, easy to learn, etc. So, despite PETSc being intended
 for parallel, domain decomposition applications, many people are going
 to use it for sequential apps and embarrassingly parallel operations.

I agree that people will do this, I just don't agree that it should be
the default.

 To be honest, I've never looked too much at paradigms like Condor. But
 using them implies learning yet another framework. Another anecdote: a
 guy sent me a mail with questions about mpi4py for solving
 embarrassingly parallel problems. I asked why he was trying to use such
 a heavyweight approach. And then he answered that he was tired of the
 complications and performance of using a Grid-based approach, and that
 'mpiexec'-ing a Python script with some coordinating MPI calls was far
 easier to set up, extend, and maintain and had better overall running
 times than submitting jobs to The Grid.

That is my experience with grid software as well. However, in the
particular case of Condor, I disagree. It is fairly easy to set up and has
great features, like fault tolerance and automatic migration and balancing,
that make it much more useful than just MPI.

   Matt

 On Mon, Aug 25, 2008 at 10:54 AM, Lisandro Dalcin dalcinl at gmail.com 
 wrote:
 After working hard on mpi4py, this week I'll spend my time cleaning-up
 and adding features to the new Cython-based petsc4py. Then, I'll be
 asking questions to this list requesting for advise.

 In all calls that create new PETSc objects, I've decided to make the
 'comm' argument optional. If the communicator is not passed,
 PETSC_COMM_WORLD is currently used. This is the approach PETSc uses in
 some C++ calls implemented through PetscPolymorphicFunction().

 But now I believe that is wrong, and that PETSC_COMM_SELF should be
 the default. Or perhaps even better, I should let users set the
 default communicator used by petsc4py to create new (parallel)
 objects.

 An anecdote: some time ago, a petsc4py user wrote a sequential code
 and created objects without passing communicator arguments, next he
 wanted to solve many of those problems in different worker processes
 in a ambarrasingly parallel fashion and collect results at the
 master process. Of course, he run into trouble. Then I asked him to
 initialize PETSc in such a way that PETSC_COMM_WORLD was actually
 PETSC_COMM_SELF (by setting the world comm before PetscInitalize()).
 This mostly works, but has a problem: we have lost the actual
 PETSC_COMM_WORLD, so we are not able to create a parallel object after
 PetscInitialize().

 Any thoughts?


 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which
 their experiments lead.
 -- Norbert Wiener





 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




[petsc4py] What should be the 'default' communicator?

2008-08-25 Thread Matthew Knepley
I am cool with this.

  Matt

On Mon, Aug 25, 2008 at 4:48 PM, Barry Smith bsmith at mcs.anl.gov wrote:

   This is fine for me, except I vote against the setter/getter. Just let the
 power user access the variable PETSC_COMM_DEFAULT directly.

   Barry


 On Aug 25, 2008, at 4:43 PM, Lisandro Dalcin wrote:

 On Mon, Aug 25, 2008 at 6:22 PM, Matthew Knepley knepley at gmail.com
 wrote:

 I agree that people will do this, I just don't agree that it should be
 the default.

 Would you agree with the following:

 At petsc4py initialization (and after calling PetscInitialize()), I
 define PETSC_COMM_DEFAULT = PETSC_COMM_WORLD. All parallel PETSc
 objects created through petsc4py use PETSC_COMM_DEFAULT if the
 communicator is not explicitly passed as an argument. Additionally,
 I expose in petsc4py a getter/setter enabling users to change at ANY
 TIME the default communicator to use. With this approach, the world
 communicator will be the default, unless changed by the (power) user.



 To be honest, I've never looked too much at paradigms like Condor. But
 using them implies to learn yet another framework. Another anecdote: a
 guy sent me a mail with questions about mpi4py for solving a
 embarassingly parallel problems. I asked why he was trying to use such
 a heavy weight approach. And then he answered he was tired of the
 complications and performance of using a Grid-based approach, and that
 'mpiexec' a Python script with some coordinating MPI calls was far
 easier to setup, extend, and maintain and had better overall running
 times than submitting jobs to The Grid.

 That is my experience with grid software as well. However, in the
 particular case
 of Condor, I disagree. It is fairly easy to setup and has great
 features like fault
 tolerance, automatic migration and balancing, that make it much more
 useful
 that just MPI.

  Matt

 On Mon, Aug 25, 2008 at 10:54 AM, Lisandro Dalcin dalcinl at gmail.com
 wrote:

 After working hard on mpi4py, this week I'll spend my time cleaning-up
 and adding features to the new Cython-based petsc4py. Then, I'll be
 asking questions to this list requesting for advise.

 In all calls that create new PETSc objects, I've decided to make the
 'comm' argument optional. If the communicator is not passed,
 PETSC_COMM_WORLD is currently used. This is the approach PETSc uses in
 some C++ calls implemented through PetscPolymorphicFunction().

 But now I believe that is wrong, and that PETSC_COMM_SELF should be
 the default. Or perhaps even better, I should let users set the
 default communicator used by petsc4py to create new (parallel)
 objects.

 An anecdote: some time ago, a petsc4py user wrote a sequential code
 and created objects without passing communicator arguments, next he
 wanted to solve many of those problems in different worker processes
 in a ambarrasingly parallel fashion and collect results at the
 master process. Of course, he run into trouble. Then I asked him to
 initialize PETSc in such a way that PETSC_COMM_WORLD was actually
 PETSC_COMM_SELF (by setting the world comm before PetscInitalize()).
 This mostly works, but has a problem: we have lost the actual
 PETSC_COMM_WORLD, so we are not able to create a parallel object after
 PetscInitialize().

 Any thoughts?


 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which
 their experiments lead.
 -- Norbert Wiener





 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which
 their experiments lead.
 -- Norbert Wiener





 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594






-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




[petsc4py] What should be the 'default' communicator?

2008-08-25 Thread Matthew Knepley
On Mon, Aug 25, 2008 at 8:13 PM, Barry Smith bsmith at mcs.anl.gov wrote:

 On Aug 25, 2008, at 5:40 PM, Lisandro Dalcin wrote:

 On Mon, Aug 25, 2008 at 6:48 PM, Barry Smith bsmith at mcs.anl.gov wrote:

  This is fine for me, except I vote against the setter/getter. Just let
 the
 power user access the variable PETSC_COMM_DEFAULT directly.


 Barry, sorry, I do not completely understand your comments. All my
 concern about this is only relevant to petsc4py and not core PETSc.
 With that in mind, I prefer to hide PETSC_COMM_DEFAULT from users,
 and ask them to call the getter/setter routines. You know: in Python,
 changing module-level globals is a bit unsafe: a user could do:

 from petsc4py import PETSc
 PETSc.COMM_DEFAULT = None # or whatever

 and then later get a failure. Furthermore, the setter can be in
 charge of all the relevant error checking: the comm has to actually
 be an instance of the 'PETSc.Comm' type, and its communicator cannot be
 MPI_COMM_NULL.

   Can you not overload the assignment = to automatically call whatever
 fancy setter you want to use? Then you get the simplicity I crave and the
 safety you desire.

Yes, you can make it a 'property'.

  Matt

   Barry


 On Aug 25, 2008, at 4:43 PM, Lisandro Dalcin wrote:

 On Mon, Aug 25, 2008 at 6:22 PM, Matthew Knepley knepley at gmail.com
 wrote:

 I agree that people will do this, I just don't agree that it should be
 the default.

 Would you agree with the following:

 At petsc4py initialization (and after calling PetscInitialize()), I
 define PETSC_COMM_DEFAULT = PETSC_COMM_WORLD. All parallel PETSc
 objects created through petsc4py use PETSC_COMM_DEFAULT if the
 communicator is not explicitelly passed as an argument. Additionally,
 I expose in petsc4py a getter/setter enabling users to change at ANY
 TIME the default communicator to use. With this approach, the world
 communicator will be default, unless changed by the (power) user.



 To be honest, I've never looked too much at paradigms like Condor. But
 using them implies to learn yet another framework. Another anecdote: a
 guy sent me a mail with questions about mpi4py for solving a
 embarassingly parallel problems. I asked why he was trying to use such
 a heavy weight approach. And then he answered he was tired of the
 complications and performance of using a Grid-based approach, and that
 'mpiexec' a Python script with some coordinating MPI calls was far
 easier to setup, extend, and maintain and had better overall running
 times than submitting jobs to The Grid.

 That is my experience with grid software as well. However, in the
 particular case
 of Condor, I disagree. It is fairly easy to setup and has great
 features like fault
 tolerance, automatic migration and balancing, that make it much more
 useful
 that just MPI.

 Matt

 On Mon, Aug 25, 2008 at 10:54 AM, Lisandro Dalcin dalcinl at gmail.com
 wrote:

 After working hard on mpi4py, this week I'll spend my time
 cleaning-up
 and adding features to the new Cython-based petsc4py. Then, I'll be
 asking questions to this list requesting for advise.

 In all calls that create new PETSc objects, I've decided to make the
 'comm' argument optional. If the communicator is not passed,
 PETSC_COMM_WORLD is currently used. This is the approach PETSc uses
 in
 some C++ calls implemented through PetscPolymorphicFunction().

 But now I believe that is wrong, and that PETSC_COMM_SELF should be
 the default. Or perhaps even better, I should let users set the
 default communicator used by petsc4py to create new (parallel)
 objects.

 An anecdote: some time ago, a petsc4py user wrote a sequential code
 and created objects without passing communicator arguments, next he
 wanted to solve many of those problems in different worker processes
 in a ambarrasingly parallel fashion and collect results at the
 master process. Of course, he run into trouble. Then I asked him to
 initialize PETSc in such a way that PETSC_COMM_WORLD was actually
 PETSC_COMM_SELF (by setting the world comm before PetscInitalize()).
 This mostly works, but has a problem: we have lost the actual
 PETSC_COMM_WORLD, so we are not able to create a parallel object
 after
 PetscInitialize().

 Any thoughts?


 --
  Lisandro Dalcín
  ---
  Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
  Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
  Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
  PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which
 their experiments lead.
 -- Norbert Wiener





 --
  Lisandro Dalcín
  ---
  Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
  Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
  Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)

blocked index sets

2008-08-27 Thread Matthew Knepley
There is no concept of global for IS. They are purely serial. AO is the only
global construct with indices.

  Matt

On Wed, Aug 27, 2008 at 10:09 AM, Lisandro Dalcin dalcinl at gmail.com wrote:
 I believe we have to review the interface of ISBlock. Currently,
 ISBlockGetSize() returns the number of LOCAL block indices. This is not
 consistent with other naming conventions for getting local and global
 sizes. I propose to change this to the following:

 1) change: ISBlockGetSize() returns the number of global blocks
 2) addition: ISBlockGetLocalSize() returns the number of local blocks

 Comments?


 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
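
The proposal, written out as hedged prototypes mirroring the existing
ISGetSize()/ISGetLocalSize() pair (illustrative sketches of the suggested
interface, not a released API):

/* proposed change: number of blocks across the IS's communicator */
PetscErrorCode ISBlockGetSize(IS is,PetscInt *N);
/* proposed addition: number of blocks stored on this process */
PetscErrorCode ISBlockGetLocalSize(IS is,PetscInt *n);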




blocked index sets

2008-08-27 Thread Matthew Knepley
On Wed, Aug 27, 2008 at 12:06 PM, Lisandro Dalcin dalcinl at gmail.com wrote:
 So, do we all agree my proposed fix should be pushed? I'll wait for
 Matt's comments/complaints...

I complain that IS is a fake parallel object. However, if
GetSize/GetLocalSize already do this, then yes, we should change the
ISBlock version as well.

   Matt

 On Wed, Aug 27, 2008 at 1:13 PM, Barry Smith bsmith at mcs.anl.gov wrote:

 On Aug 27, 2008, at 10:23 AM, Matthew Knepley wrote:

 There is no concept of global for IS. They are purely serial. AO is the
 only
 global construct with indices.

   This is kind of true, and maybe used to be completely true. But IS does
  have a communicator, and that communicator can be MPI_COMM_WORLD or
  any parallel communicator.  In other words, the IS is evolving to be an
  object that can be parallel in the same sense as vecs or mats.

   There are already ISGetSize() and ISGetLocalSize(), so it sure makes sense
  to have the same paradigm for ISBlockGetSize().


   Barry

 Originally IS had no parallel concept, then we added the ISGetSize/LocalSize
 but forgot to do it for the ISBlock...




  Matt

 On Wed, Aug 27, 2008 at 10:09 AM, Lisandro Dalcin dalcinl at gmail.com
 wrote:

 I believe we have to review the interface of ISBlock. Currently,
 ISBlockGetSize() return the number of LOCAL block indices. This is not
 consistent with other naming conventions for getting local and glocal
 sizes. I propose to change this to the following

 1) change: ISBlockGetSize() returns the number global blocks
 2) addition:  ISBlockGetLocalSize() return the number of local blocks

 Comments?


 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which
 their experiments lead.
 -- Norbert Wiener






 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




blocked index sets

2008-08-27 Thread Matthew Knepley
On Wed, Aug 27, 2008 at 1:08 PM, Barry Smith bsmith at mcs.anl.gov wrote:

 On Aug 27, 2008, at 12:13 PM, Matthew Knepley wrote:

 On Wed, Aug 27, 2008 at 12:06 PM, Lisandro Dalcin dalcinl at gmail.com
 wrote:

 So, Do all us agree my proposed fix should be pushed? I'll wait for
 Matt comments/complaints...

 I complain that IS is a fake parallel object.

  In what way is IS a fake parallel object? Maybe we can resolve
 your concern about it being a fake parallel object.

I said fake because it has only a single collective operation (or arguably
not even one with communication). We even use IS in situations where
GetSize() makes no sense, or at least we are not careful about the comm
used. Also, none of the other functions work over the comm. For instance,
ISPermutation() and ISSorted() do not look at the indices collectively.

  Matt

   Barry


 However, if
 GetSize/GetLocalSize already
 do this, then yes we should change the ISBlock version as well.

  Matt

 On Wed, Aug 27, 2008 at 1:13 PM, Barry Smith bsmith at mcs.anl.gov wrote:

 On Aug 27, 2008, at 10:23 AM, Matthew Knepley wrote:

 There is no concept of global for IS. They are purely serial. AO is the
 only
 global construct with indices.

  This is kind of true, and maybe used to be completely true. But IS does
 have a communicator and that communicator can be MPI_COMM_WORLD or
 any parallel communicator.  In other words the IS is evolving to be an
 object
 that can be parallel in the same sense as vecs or mats

  There are already ISGetSize() and ISGetLocalSize() so it sure makes
 sense
 to have the same paradgm for the ISGetBlockSize().


  Barry

 Originally IS had no parallel concept, then we added the
 ISGetSize/LocalSize
 but forgot to do it for the ISBlock...




 Matt

 On Wed, Aug 27, 2008 at 10:09 AM, Lisandro Dalcin dalcinl at gmail.com
 wrote:

 I believe we have to review the interface of ISBlock. Currently,
 ISBlockGetSize() return the number of LOCAL block indices. This is not
 consistent with other naming conventions for getting local and glocal
 sizes. I propose to change this to the following

 1) change: ISBlockGetSize() returns the number global blocks
 2) addition:  ISBlockGetLocalSize() return the number of local blocks

 Comments?


 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which
 their experiments lead.
 -- Norbert Wiener






 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which
 their experiments lead.
 -- Norbert Wiener






-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




blocked index sets

2008-08-27 Thread Matthew Knepley
On Wed, Aug 27, 2008 at 2:10 PM, Barry Smith bsmith at mcs.anl.gov wrote:
   Even if an object (class) has NO collective operations, if, when you use
 that object, you must have partners on all other processes in an MPI_Comm,
 then I think it is a good approach to have that be a parallel object that
 shares the comm.

I will not suggest that we go back on IS now. However, I am not sure I buy
the above argument. I see IS as just managing a list of integers, and maybe
reporting some local properties. All the parallel actions are done by
different objects, like Scatter or Mat. This is different from KSP or Vec,
which have natural parallel actions.

   Matt

  Barry
-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




Block matrices and Schur complement preconditioning

2008-08-27 Thread Matthew Knepley
Just so I can remove this from my todo mail pile: it appears Barry is
doing this. The actual Schur stuff is a new KSP thing, so we do not link
Mat to KSP, but it should work as you want.

   Matt

On Mon, Aug 11, 2008 at 4:16 PM, Jed Brown jed at 59a2.org wrote:
 For context, I'm thinking of LNKS optimization or coupled indefinite
 systems, perhaps with many fields, where the Jacobian is applied matrix-free
 (using a MatShell in my usage).  In these circumstances, it isn't useful to
 actually form all the blocks in the Jacobian when constructing the
 preconditioning matrix, just those which are the operator or preconditioning
 matrix in a KSP inside the Schur complement/reduced space preconditioner.
 In order to make the preconditioner slightly more generic, it would be nice
 to have a matrix type which is really just a wrapper for the blocks which
 are actually needed by the preconditioner.  For a simple concrete example,
 consider a mixed discretization of the Stokes problem  J = [A B'; B 0]  where
 J is applied matrix-free.  For preconditioning, we'll need an approximate
 Schur complement  S = -B \tilde{A^{-1}} B'  where  \tilde{A^{-1}}  may be
 V-cycle of AMG applied to  \hat{A}  (an approximation to  A) which is the
 only matrix which needs to be actually formed.  Normally the pressure mass
 matrix  M  would also be formed to precondition  S.  Now we could define the
 preconditioning matrix  P = [\hat{A} 0; 0 M], but I don't like it.

 Of course  \hat{A}  and  M  need not have the same matrix type, but it seems
 logical to assemble them in the Jacobian assembly function.  What I
 currently do is to put them in the PCShell context and assemble them in
 PCSetUp(), but this means that the PC needs access to the problem
 description.  We could put them in P = [\hat{A} 0; 0 M] (of type MatShell)
 and extract the pieces with MatGetSubMatrix(), but it seems to me that
 MatGetSubMatrix() ought to be able to succeed with any valid pair of IS.

 So perhaps it would be useful to have another function MatGetSubBlock()
 which just takes a pair of integers and returns the block if it is
 available.  That is, we could have MatGetSubBlock(P,0,0,Ahat) give the
 explicit viscosity matrix and MatGetSubBlock(J,1,0,B) give a MatShell which
 implements B, but MatGetSubBlock(P,1,0,Bhat) return NULL since an explicit
 form of that matrix is not available (or it could return the MatShell B).
 On the other hand, what should MatGetSubBlock(P,1,1,C) give?  Logically, it
 should be the zero matrix or NULL, but the preconditioner needs to get
 access to M somehow.  So it looks like we are back to the original situation
 where the assembly code needs to know details about the preconditioner or
 the preconditioner needs to know how to assemble the matrices it needs.  We
 can get slightly more separation by using P as a container for those
 matrices that the preconditioner would need, but now it's looking like P
 isn't a matrix at all (it wouldn't really make sense to implement
 MatMult()).

 Any ideas for a better way?

 Jed




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
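
Jed's MatGetSubBlock() idea, written out as a hedged prototype; this is
purely illustrative, and no such call exists in PETSc at this point:

/* return block (i,j) of a blocked operator in *block, or a NULL Mat when
   no explicit (or shell) form of that block is available */
PetscErrorCode MatGetSubBlock(Mat P,PetscInt i,PetscInt j,Mat *block);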




Fortran integer declaration in petscdef.h

2008-04-07 Thread Matthew Knepley
On Mon, Apr 7, 2008 at 9:48 AM, Thomas DE-SOZA thomas.de-soza at edf.fr wrote:

 Hi,

 I was wondering if in $PETSC_DIR/include/finclude/petscdef.h :

 #if defined(PETSC_HAVE_MPIUNI)
 #define MPI_Comm PetscFortranInt
 #define PetscMPIInt PetscFortranInt
 #else
 #define MPI_Comm integer
 #define PetscMPIInt integer
 #endif

 the integer declaration should not be changed to something dependent on
 PETSC_SIZEOF_INT.

This is an internal type we use to represent communicators in our own mini-MPI
implementation. Our choice merely limits the number of unique communicators
in the program to 2^32 on a 32-bit machine. This is not usually an obstacle.

  Matt

 Regards,

 Thomas
-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




Preconditioning for saddle point problems

2008-04-29 Thread Matthew Knepley
1) I believe the Wathen-Elman-Silvester stuff is the best off-the-shelf
option out there. I love the review:

 A.C. de Niet and F.W. Wubs, Two preconditioners for saddle point
 problems in fluid flows, Int. J. Num. Meth. Fluids 2007; 54:355-377

2) Note in there that Augmented Lagrangian preconditioning (a la Axelsson)
works even better, but the system is harder to invert. I like these because
you only need one field in the formulation, instead of having a mixed
system. This is detailed in Brenner & Scott (Iterated Penalty method).

3) If you want a mixed system, there is new code in PETSc (PCFieldSplit) to
do exactly what you want, and it works automatically with DAs. If you do it
by hand, you provide explicitly the IS for each field; see the sketch below.

  Matt
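
A hedged sketch of point 3, wiring a velocity/pressure split into
PCFieldSplit by hand. Here isu and isp are assumed, already-built index
sets for the two fields, and the call signatures follow the development
PETSc of this era (later releases added a field-name argument to
PCFieldSplitSetIS()):

ierr = PCSetType(pc,PCFIELDSPLIT);CHKERRQ(ierr);
ierr = PCFieldSplitSetIS(pc,isu);CHKERRQ(ierr); /* velocity unknowns */
ierr = PCFieldSplitSetIS(pc,isp);CHKERRQ(ierr); /* pressure unknowns */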

On Tue, Apr 29, 2008 at 10:14 AM, Jed Brown jed at 59a2.org wrote:
 On Tue 2008-04-29 10:44, Lisandro Dalcin wrote:
   Well, I've worked hard on similar methods, but for incompressible NS
   equations (pressure-convection preconditioners, Elman et al.). I
   abandoned this research temporarily, as I was not able to get decent
   results. However, for Stokes flow it seems to work indeed, though I never
   studied this seriously.

 My experiments with the Stokes problem show that it takes about four times
 as long to solve the indefinite Stokes system as it takes to solve a
 poisson problem with the same number of degrees of freedom.  For instance,
 in 3D with half a million degrees of freedom, the Stokes problem takes 2
 minutes on my laptop while the poisson problem takes 30 seconds (both are
 using algebraic multigrid as the preconditioner).  Note that these tests
 are for a Chebyshev spectral method where the (unformed because it is
 dense) system matrix is applied via DCT, but a low-order finite difference
 or finite element approximation on the collocation nodes is used to obtain
 a sparse matrix with equivalent spectral properties, to which AMG is
 applied.  With a finite difference discretization
 (src/ksp/ksp/examples/tutorials/ex22.c) the same sized 3D poisson problem
 takes 13 seconds with AMG and 8 with geometric multigrid.  This is not a
 surprise since the conditioning of the spectral system is much worse,
 O(p^4) versus O(n^2), since the collocation nodes are quadratically
 clustered.

 I've read Elman et al. 2002 ``Performance and analysis of saddle point
 preconditioners for the discrete steady-state Navier-Stokes equations''
 but I haven't implemented anything there since I'm mostly interested in
 slow flow.  Did your method work well for the Stokes problem, but poorly
 for NS?  I found that performance was quite dependent on the number of
 iterations at each level and the strength of the viscous preconditioner.
 I thought my approach was completely naïve, but it seems to work reasonably
 well.  Certainly it is much faster than SPAI/ParaSails, which is the
 alternative.


   I'll comment on the degree of abstraction I could achieve. In my base
   FEM code, I have a global [F, G; D C] matrix (I use stabilized
   methods) built from standard linear elements and partitioned across
   processors in a way inherited from the mesh partitioner (metis). So the
   F, G, D, C entries are all 'interleaved' at each proc.

   In order to extract the blocks as parallel matrices from the global
   saddle-point parallel matrix, I used MatGetSubMatrix; for this I
   needed to build two index sets, for momentum eqs and continuity eqs,
   local at each proc but in global numbering. Those index sets are the
   only input required (apart from the global matrix) to build the
   preconditioner.

 This seems like the right approach.  I am extending my collocation
 approach to an hp-element version, so the code you wrote might be very
 helpful.  How difficult would it be to extend to the case where the
 matrices could be MatShell?  That is, to form the preconditioners, we only
 need entries for approximations S' and F' to S and F respectively; the
 rest can be MatShell.  In my case, F' is a finite difference or Q1 finite
 element discretization on the collocation nodes and S' is the mass matrix
 (which is the identity for collocation).

 Would it be useful for me to strip my code down to make an example?  It's
 not parallel since it does DCTs of the entire domain, but it is a
 spectrally accurate, fully iterative solver for the 3D Stokes problem with
 nonlinear rheology.  I certainly learned a lot about PETSc while writing
 it, and there aren't any examples which do something similar.

  Jed




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




Preconditioning for saddle point problems

2008-04-29 Thread Matthew Knepley
On Tue, Apr 29, 2008 at 12:28 PM, Boyce Griffith griffith at cims.nyu.edu 
wrote:

 Hi, Matt et al. --

 Do people ever use standard projection methods as preconditioners for these 
 kinds of problems?

 I have been playing around with doing this in the context of a staggered grid 
 (MAC) finite difference scheme.  It is probably not much of a surprise, but 
 for problems where an exact projection method is actually an exact Stokes 
 solver (e.g., in the case of periodic boundary conditions), one can obtain 
 convergence with a single application of the projection preconditioner when 
 it is paired up with FGMRES.  I'm still working on implementing physical 
 boundaries and local mesh refinement for this formulation, so it isn't clear 
 how well this approach works for less trivial situations.

If I understand you correctly, Wathen and Golub have a paper on this.
Basically, it says using

  / \hat{A}  B \
  \  B^T     0 /

as a preconditioner is great since all the eigenvalues for the
constraint are preserved.

  Matt


 Thanks,

 -- Boyce




 Matthew Knepley wrote:

  1) I believe the Wathen-Elman-Silvester stuff is the best out there of the
 shelf. I love the review
 
  A.C. de Niet and F.W. Wubs Two preconditioners for saddle point
  problems in fluid flows
  Int. J. Num. Meth. Fluids 2007: 54: 355-377
 
  2) Note in there that Augmented Lagrangian preconditioning (ala Axelsson) 
  works
 even better, but the system is harder to invert. I like these
  because you only need
 one field in the formulation, instead of having a mixed system.
  This is detailed in
 Brenner & Scott (Iterated Penalty method).
 
  3) If you want a mixed system, there is new code in PETSc
  (PCFieldSplit) to do exactly
 what you want, and it works automatically with DAs. If you do it
  by hand, you provide
 explicitly the IS for each field.
 
   Matt
 
  On Tue, Apr 29, 2008 at 10:14 AM, Jed Brown jed at 59a2.org wrote:
 
   On Tue 2008-04-29 10:44, Lisandro Dalcin wrote:
 Well, I've worked hard on similar methods, but for incompressible NS
 equations (pressure-convection preconditioners, Elman et al.). I
 abandoned temporarily this research, but I was not able to get decent
 results. However, for Stokes flow it seens to work endeed, but never
 studied this seriously.
  
My experiments with the Stokes problem shows that it takes about four 
   times as
long to solve the indefinite Stokes system as it takes to solve a poisson
problem with the same number of degrees of freedom.  For instance, in 3D 
   with
half a million degrees of freedom, the Stokes problem takes 2 minutes on 
   my
laptop while the poisson problem takes 30 seconds (both are using 
   algebraic
multigrid as the preconditioner).  Note that these tests are for a 
   Chebyshev
spectral method where the (unformed because it is dense) system matrix is
applied via DCT, but a low-order finite difference or finite element
approximation on the collocation nodes is used to obtain a sparse matrix 
   with
equivalent spectral properties, to which AMG is applied.  With a finite
difference discretization (src/ksp/ksp/examples/tutorials/ex22.c) the 
   same sized
3D poisson problem takes 13 seconds with AMG and 8 with geometric 
   multigrid.
This is not a surprise since the conditioning of the spectral system is 
   much
worse, O(p^4) versus O(n^2), since the collocation nodes are 
   quadratically
clustered.
  
I've read Elman et al. 2002 ``Performance and analysis of saddle point
preconditioners for the discrete steady-state Navier-Stokes equations'' 
   but I
haven't implemented anything there since I'm mostly interested in slow 
   flow.
Did your method work well for the Stokes problem, but poorly for NS?  I 
   found
that performance was quite dependent on the number of iterations at each 
   level
and the strength of the viscous preconditioner.  I thought my approach 
   was
    completely naïve, but it seems to work reasonably well.  Certainly it is
   much
faster than SPAI/ParaSails which is the alternative.
  
  
 I'll comment you the degree of abstraction I could achieve. In my base
 FEM code, I have a global [F, G; D C] matrix (I use stabilized
 methods) built from standard linear elements and partitioned across
 processors in a way inherited by the mesh partitioner (metis). So the
 F, G, D, C entries are all 'interleaved' at each proc.

 In order to extract the blocks as parallel matrices from the goblal
 saddle-point parallel matrix, I used MatGetSubmatrix, for this I
 needed to build two index set for momentum eqs and continuity eqs
 local at each proc but in global numbering. Those index set are the
 only input required (apart from the global matrix) to build the
 preconditioner.
  
This seems like the right approach.  I am extending my collocation 
   approach

Preconditioning for saddle point problems

2008-04-29 Thread Matthew Knepley
On Tue, Apr 29, 2008 at 12:54 PM, Boyce Griffith griffith at cims.nyu.edu 
wrote:


  Matthew Knepley wrote:

  On Tue, Apr 29, 2008 at 12:28 PM, Boyce Griffith griffith at cims.nyu.edu
 wrote:
 
 
   Hi, Matt et al. --
  
   Do people ever use standard projection methods as preconditioners for
 these kinds of problems?
  
   I have been playing around with doing this in the context of a staggered
 grid (MAC) finite difference scheme.  It is probably not much of a surprise,
 but for problems where an exact projection method is actually an exact
 Stokes solver (e.g., in the case of periodic boundary conditions), one can
 obtain convergence with a single application of the projection
 preconditioner when it is paired up with FGMRES.  I'm still working on
 implementing physical boundaries and local mesh refinement for this
 formulation, so it isn't clear how well this approach works for less trivial
 situations.
  
 
  If I understand you correctly, Wathen and Golub have a paper on this.
  Basically, it says using
 
   / \hat AB \
   \ B^T  0 /
 
  as a preconditioner is great since all the eigenvalues for the
  constraint are preserved.
 

  Hi, Matt --

  Are you referring to Golub & Wathen, SIAM J. Sci. Comput. 1998?  I think

Could be. It sounds right.

 they are doing something different.  I am solving the time-dependent Stokes
 equations, and am preconditioning via a fully second-order accurate version
 of the Kim-Moin projection method, i.e., following the approach of Brown,
 Cortez, and Minion, J. Comput. Phys. 2001.

These all look different, but I think they are really the same thing. It's
also the same as what Vivek Sarin does. All of them project exactly onto the
constraint manifold. They only differ in how A is preconditioned. I mention
Wathen & Golub because in their analysis you can use any preconditioner for
A, which is the most general. However, they do not give a prescription for
inverting the preconditioner, which Vivek does (in O(N) time and space).

  Matt

  (Note that at this point, I am not trying to treat the advection terms
 implicitly; this is really just a warm-up to doing implicit timestepping for
 fluid-structure interaction.)

  -- Boyce
-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




adding SNESSetLinearSolve()

2007-11-01 Thread Matthew Knepley
Actually, making the regularization parameter independent for each
process is much more efficient. Gene Golub had a poster on this at
the last SIAM CSE meeting.

   Matt

On Nov 1, 2007 9:24 AM, Lisandro Dalcin dalcinl at gmail.com wrote:
 On 10/31/07, Barry Smith bsmith at mcs.anl.gov wrote:
Lisandro,
 A followup to our previous discussion. It sounds to me like you
  are actually solving an n+1 unknown nonlinear problem where the
  special unknown is kept secret from SNES and managed somehow by the
  application code?

 That's exactly the case. Furthermore, this 'special' unknown is just a
 regularization parameter which tends to zero as the nonlinear solution
 is reached. Unfortunately, this unknown is coupled with all the other
 degrees of freedom, thus generating a full dense row and a dense column
 in the Jacobian matrix. But fortunately, the special unknown is just
 a single scalar, so computing the Schur complement is feasible, but
 requires two linear solves with the other 'sparse' block of the
 Jacobian.
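
 Spelled out (symbols chosen here just for illustration, with J the sparse
 block, b and c the dense column and row, d the scalar diagonal entry, and
 \lambda the special unknown):

   \[ \begin{pmatrix} J & b \\ c^{T} & d \end{pmatrix}
      \begin{pmatrix} x \\ \lambda \end{pmatrix} =
      \begin{pmatrix} f \\ g \end{pmatrix}, \qquad
      (d - c^{T}J^{-1}b)\,\lambda = g - c^{T}J^{-1}f, \qquad
      x = J^{-1}(f - b\lambda), \]

 so eliminating the scalar costs exactly the two solves with J (for J^{-1}b
 and J^{-1}f) mentioned above.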

You can guess how I feel about this :-).

 Yes, of course. I agree that the PETSc API must be consistent and clean.
 But I also feel that sometimes I need more features. Please remember I
 use PETSc exclusively from Python, and there it is so easy to manage
 complicated application setups. But at some point I need more
 low-level support from PETSc to make it work.

 For example, I would love to have SNESSetPresolve/SNESSetPostSolve and
 SNESSetPreStep/SNESSetPostStep, and perhaps a
 SNESSetPreLinearSolve/SNESSetPostLinearSolve. Of course, this makes the
 API grow with features that are not frequently needed.

  PETSc/SNES is supposed
  to be good enough to allow you to feed it the ENTIRE nonlinear problem
  in a way that would allow as efficient a solution as if you handled the
  special unknown specially.

 Even for my particular case? Can I get around the issue with the full
 dense rows and columns?

  In particular for this problem the intended
  solution is to use the PETSc DMComposite object via DMCompositeCreate().
  You may want to look at this construct to see if it is suitable
  for your friend's needs, and see what we need to add. (Note that though
  DMComposite can be used with DMMG it does not have to be; it can be used
  directly with a SNES also.)
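
  For reference, a sketch of the DMComposite idiom for exactly this
  bordered-unknown situation (da is an assumed existing DA; the
  AddDA/AddArray calls follow the petsc-dev API of that era, which later
  releases renamed):

    /* Pack a DA-distributed field together with one redundant scalar
       (e.g. the regularization parameter), owned by rank 0. */
    DMComposite pack;
    ierr = DMCompositeCreate(PETSC_COMM_WORLD, &pack);CHKERRQ(ierr);
    ierr = DMCompositeAddDA(pack, da);CHKERRQ(ierr);
    ierr = DMCompositeAddArray(pack, 0, 1);CHKERRQ(ierr); /* one extra global scalar */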

 I'll definitely take a look ASAP.

 Regards,

 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




error checking macros with optimization

2007-11-23 Thread Matthew Knepley
On Nov 23, 2007 10:25 AM, Lisandro Dalcin dalcinl at gmail.com wrote:
 I would like to propose some changes to the error checking macros for optimized builds.

 1.- SETERRQXXX: define them as

 #define SETERRQ[1|2|..](ierr,...)   return ierr

I think this is fine. However for it to matter, you need 2.

 2.- CHKERRQ: define them as

 #define CHKERRQ(ierr)  if (ierr) return ierr

 For (1), there should be no performance impact. For (2), the extra check
 at almost every line of PETSc source code could impact performance, but
 do any of you have a clear idea of how much?

This is a big problem because it completely blows the pipeline and disrupts
speculative execution (unless you always guess correctly). We could try
testing it, but someone will always complain. Optimized is supposed to be
as fast as possible, and this will be slower.
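
A sketch of how such macros could keep the error branch off the hot path
(the names here are illustrative and the __builtin_expect hint is
GCC-specific; this is not what PETSc actually ships):

/* Hypothetical optimized-build definitions with a branch-prediction hint,
   so the (almost never taken) error path does not disturb the pipeline. */
#if defined(__GNUC__)
#define PETSC_UNLIKELY(cond) __builtin_expect(!!(cond), 0)
#else
#define PETSC_UNLIKELY(cond) (cond)
#endif

#define SETERRQ1(ierr, msg, arg) return (ierr)  /* message dropped in optimized builds */
#define CHKERRQ(ierr)            do { if (PETSC_UNLIKELY(ierr)) return (ierr); } while (0)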

   Matt

 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




move maint to bin/maint

2007-11-24 Thread Matthew Knepley
Are they all executables?

  Matt

On 11/23/07, Barry Smith bsmith at mcs.anl.gov wrote:

   Can we move maint to bin/maint?

Barry




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




move maint to bin/maint

2007-11-24 Thread Matthew Knepley
I thought he meant LFS.

  Matt

On Nov 24, 2007 3:52 PM, Barry Smith bsmith at mcs.anl.gov wrote:


   Seems unlikely that it would have a rule of no subdirectories for bin,
 especially since the H stands for hierarchical.

Barry



 On Nov 24, 2007, at 3:49 PM, Dmitry Karpeev wrote:
 From what I know, HFS used to be the default filesystem for Mac OS  (of some
 old variety?) and
 was later replaced by HFS+, which, presumably, is still being used by OS X.

 Dmitry.

  On 11/24/07, Barry Smith bsmith at mcs.anl.gov wrote:
 
 HFS? The first 6 pages of a google search don't point to anything
  relevant, so clearly HFS cannot be important :-).
 
  Barry
 
  I think maint stuff doesn't get installed anyway :-)
 
  On Nov 24, 2007, at 2:34 PM, Lisandro Dalcin wrote:
 
   In case you plan later to 'install' something there in standard
   locations, remember that HFS prohibits subdirs inside '/bin'.
  
   On 11/24/07, Barry Smith  bsmith at mcs.anl.gov wrote:
  
 Can we move maint to bin/maint?
  
  Barry
  
  
  
  
   --
   Lisandro Dalcín
   ---
   Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
   Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
   Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
   PTLC - Güemes 3450, (3000) Santa Fe, Argentina
   Tel/Fax: +54-(0)342-451.1594
  
 
 






-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




warn message in log summary

2007-11-26 Thread Matthew Knepley
On Nov 26, 2007 4:19 PM, Lisandro Dalcin dalcinl at gmail.com wrote:
 On 11/26/07, Barry Smith bsmith at mcs.anl.gov wrote:
 I've looked long and hard for a PETSc bug that would cause this
  problem.
  No luck. It seems to happen mostly (only?) on certain machines.

 Oops! I've found a possible source of the problem, at least for my
 case! Those negative times I got, of order -1e9, in fact originated
 from premature returns due to CHKERRQ macros.

 As I was doing Python unittesting, I was making calls generating
 error, and catching exceptions, in order to check the error was
 correctly set.

 However, this way of using PETSc is not safe at all; in general PETSc
 does not always recover correctly after an error, and this seems to be
 especially true for the log machinery.

 After surfing the code and hacking PetscLogPrintSummary(), I added a
 check (eventInfo[event].depth == 0) in order to skip reductions of
 time values for 'unterminated' events. This worked as expected: the
 event info did not show up and the warning was not generated...

 Could this be a possible 'fix' for this issue??

There is, but it is more painful than the disease. We would have to protect
all SETERRQ() statements by freeing all resources (like events, memory, ...).
This is a huge job, and better handled by exception mechanisms.

  Matt

 Richard... Are you completely sure the negative timings you were
 getting are not related to an error being silenced because of a
 missing CHKERRQ macro???



  On Nov 26, 2007, at 11:03 AM, Lisandro Dalcin wrote:
 
   I even get consistent time deltas using 'gettimeofday' on my box!!
    Perhaps PETSc has some bug somewhere?? What do you think??
  
   On 11/26/07, Richard Tran Mills rmills at ornl.gov wrote:
   Lisandro,
  
   Unfortunately, I see the same negative timings problem on the Cray
   XT3/4
   systems when I configure PETSc to use MPI_Wtime() for all its
   timings.  So
   that doesn't necessarily fix anything...
  
   --Richard
  
   Lisandro Dalcin wrote:
  
    Perhaps PETSc should use MPI_Wtime as the default timer. If a better one
    is available, then use it. But then MPIUNI has to also provide a
    useful default implementation.
  
    Running a simple test, like this (MPICH2):
  
    #include <stdio.h>
    #include <mpi.h>

    int main(void)
    {
      int i;
      double t0[100], t1[100];
      MPI_Init(0, 0);
      for (i = 0; i < 100; i++) {
        t0[i] = MPI_Wtime();
        t1[i] = MPI_Wtime();
      }
      for (i = 0; i < 100; i++) {
        printf("t0=%e, t1=%e, dt=%e\n", t0[i], t1[i], t1[i] - t0[i]);
      }
      MPI_Finalize();
      return 0;
    }
  
   and in the SAME box I get the PETSc warning, it consistently gives
   me
   positive time deltas of the order of MPI_Wtick()...
  
  
  
  
   --
    Lisandro Dalcín
    ---
    Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
    Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
    Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
    PTLC - Güemes 3450, (3000) Santa Fe, Argentina
   Tel/Fax: +54-(0)342-451.1594
  
 
 


 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




warn message in log summary

2007-11-26 Thread Matthew Knepley
That seems fine.

  Matt

On Nov 26, 2007 4:56 PM, Lisandro Dalcin dalcinl at gmail.com wrote:
 On 11/26/07, Matthew Knepley knepley at gmail.com wrote:
  On Nov 26, 2007 4:19 PM, Lisandro Dalcin dalcinl at gmail.com wrote:
   After surfing the code and hacking PetscLogPrintSummary(), I added a
   check (eventInfo[event].depth == 0) in order to skip reductions of
   time values for 'unterminated' events. This worked as expected: the
   event info did not show up and the warning was not generated...
  
   Could this be a possible 'fix' for this issue??
 
  There is, but it is more painful than the disease. We would have to protect
   all SETERRQ() statements by freeing all resources (like events, memory, ...).
   This is a huge job, and better handled by exception mechanisms.

 Matt, not sure if you understood me.. I know that reworking PETSc
 error management is not so easy... What I was asking for is skipping,
 at -log_summary time, all those events that were not 'finalized', that
 is, whose 'depth' count is not zero... I believe this should fix the
 issue with negative timings... Or we could perhaps issue a real
 warning, saying that some event was not properly finalized...
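
 A minimal sketch of the guard being proposed, using the eventInfo/depth
 fields named above (the surrounding loop is schematic, not the actual
 PetscLogPrintSummary() source):

   /* inside PetscLogPrintSummary()'s per-event reduction loop (schematic) */
   for (event = 0; event < numEvents; event++) {
     if (eventInfo[event].depth != 0) continue; /* begun but never ended: skip its timings */
     /* ... reduce times across ranks and print the row for this event ... */
   }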



  
On Nov 26, 2007, at 11:03 AM, Lisandro Dalcin wrote:
   
 I even get consistent time deltas using 'gettimeofday' on my box!!
  Perhaps PETSc has some bug somewhere?? What do you think??

 On 11/26/07, Richard Tran Mills rmills at ornl.gov wrote:
 Lisandro,

 Unfortunately, I see the same negative timings problem on the Cray
 XT3/4
 systems when I configure PETSc to use MPI_Wtime() for all its
 timings.  So
 that doesn't necessarily fix anything...

 --Richard

 Lisandro Dalcin wrote:

  Perhaps PETSc should use MPI_Wtime as the default timer. If a better one
  is available, then use it. But then MPIUNI has to also provide a
  useful default implementation.

  Running a simple test, like this (MPICH2):

  #include <stdio.h>
  #include <mpi.h>

  int main(void)
  {
   int i;
   double t0[100], t1[100];
   MPI_Init(0, 0);
   for (i = 0; i < 100; i++) {
     t0[i] = MPI_Wtime();
     t1[i] = MPI_Wtime();
   }
   for (i = 0; i < 100; i++) {
     printf("t0=%e, t1=%e, dt=%e\n", t0[i], t1[i], t1[i] - t0[i]);
   }
   MPI_Finalize();
   return 0;
  }

 and in the SAME box I get the PETSc warning, it consistently gives
 me
 positive time deltas of the order of MPI_Wtick()...




 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594

   
   
  
  
   --
    Lisandro Dalcín
    ---
    Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
    Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
    Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
    PTLC - Güemes 3450, (3000) Santa Fe, Argentina
   Tel/Fax: +54-(0)342-451.1594
  
  
 
 
 
  --
  What most experimenters take for granted before they begin their
  experiments is infinitely more interesting than any results to which
  their experiments lead.
  -- Norbert Wiener
 
 


 --

 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




Compiling on Solaris 10 for X86

2007-05-10 Thread Matthew Knepley
Nothing is wrong with the PETSc build or libraries. However, your make
(probably not gmake) has a problem with the make rule that cleans up files
in bmake/common/rules:359. You can just ignore it (or tell us what is wrong).

  Thanks,

 Matt

On 5/10/07, Yi-Feng Adam Zhang Adam.Zhang at sun.com wrote:
 Hi all,

 Recently I am compiling the PETSc 2.3.2 -P10 on Solaris 10 for X86 with
 GCC and Sun's f90.
 Here is the command I give to configure.py:
 ./config/configure.py --download-f-blas-lapack=1 --with-mpi=0

 Yes. I don't use MPI intentionally, because I want to make sure everything
 is OK before using MPI.
 When I input the command "make all", everything looks fine.
 But when I try to use "make test" to verify, I get the following error:
 --
 -bash-3.00# more test_log_solaris2.10-c-debug
 Running test examples to verify correct installation
 C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1
 MPI process
 Graphics example src/snes/examples/tutorials/ex19 run successfully with
 1 MPI process
 sh: test: argument expected
 *** Error code 1 (ignored)
 The following command caused the error:
 if test -e ex19.exe; then /usr/bin/rm -f ex19.exe; fi
 sh: test: argument expected
 *** Error code 1 (ignored)
 The following command caused the error:
 if test -e ex19.ilk; then /usr/bin/rm -f ex19.ilk; fi
 sh: test: argument expected
 *** Error code 1 (ignored)
 The following command caused the error:
 if test -e ex19.pdb; then /usr/bin/rm -f ex19.pdb; fi
 sh: test: argument expected
 *** Error code 1 (ignored)
 The following command caused the error:
 if test -e ex19.tds; then /usr/bin/rm -f ex19.tds; fi
 Fortran example src/snes/examples/tutorials/ex5f run successfully with 1
 MPI process
 sh: test: argument expected
 *** Error code 1 (ignored)
 The following command caused the error:
 if test -e ex5f.exe; then /usr/bin/rm -f ex5f.exe; fi
 sh: test: argument expected
 *** Error code 1 (ignored)
 The following command caused the error:
 if test -e ex5f.ilk; then /usr/bin/rm -f ex5f.ilk; fi
 sh: test: argument expected
 *** Error code 1 (ignored)
 The following command caused the error:
 if test -e ex5f.pdb; then /usr/bin/rm -f ex5f.pdb; fi
 sh: test: argument expected
 *** Error code 1 (ignored)
 The following command caused the error:
 if test -e ex5f.tds; then /usr/bin/rm -f ex5f.tds; fi
 Completed test examples
 ---

 Does anyone have any suggestion?  Thanks in advance!

 Regards,
 Adam




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




Compiling on Solaris 10 for X86

2007-05-10 Thread Matthew Knepley
On 5/10/07, Adam Zhang Adam.Zhang at sun.com wrote:
 Thank you for the input. I will try to use gmake later.

 If this compilation is OK, can I release this as a Solaris package on
 the internet?

I guess. What is it for?

  Thanks,

Matt

 Regards,
 Adam


 Matthew Knepley wrote:
  Nothing is wrong with the PETSc build or libraries. However, your make
  (probably not gmake) has a problem with the make rule that cleans up
  files
  in bmake/common/rules:359. You can just ignore it (or tell us what is
  wrong).
 
   Thanks,
 
  Matt
 
  On 5/10/07, Yi-Feng Adam Zhang Adam.Zhang at sun.com wrote:
  Hi all,
 
  Recently I am compiling the PETSc 2.3.2 -P10 on Solaris 10 for X86 with
  GCC and Sun's f90.
  Here is the command I give to configure.py:
  ./config/configure.py --download-f-blas-lapack=1 --with-mpi=0
 
   Yes. I don't use MPI intentionally, because I want to make sure everything
   is OK before using MPI.
   When I input the command "make all", everything looks fine.
   But when I try to use "make test" to verify, I get the following error:
  --
  -bash-3.00# more test_log_solaris2.10-c-debug
  Running test examples to verify correct installation
  C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1
  MPI process
  Graphics example src/snes/examples/tutorials/ex19 run successfully with
  1 MPI process
  sh: test: argument expected
  *** Error code 1 (ignored)
  The following command caused the error:
  if test -e ex19.exe; then /usr/bin/rm -f ex19.exe; fi
  sh: test: argument expected
  *** Error code 1 (ignored)
  The following command caused the error:
  if test -e ex19.ilk; then /usr/bin/rm -f ex19.ilk; fi
  sh: test: argument expected
  *** Error code 1 (ignored)
  The following command caused the error:
  if test -e ex19.pdb; then /usr/bin/rm -f ex19.pdb; fi
  sh: test: argument expected
  *** Error code 1 (ignored)
  The following command caused the error:
  if test -e ex19.tds; then /usr/bin/rm -f ex19.tds; fi
  Fortran example src/snes/examples/tutorials/ex5f run successfully with 1
  MPI process
  sh: test: argument expected
  *** Error code 1 (ignored)
  The following command caused the error:
  if test -e ex5f.exe; then /usr/bin/rm -f ex5f.exe; fi
  sh: test: argument expected
  *** Error code 1 (ignored)
  The following command caused the error:
  if test -e ex5f.ilk; then /usr/bin/rm -f ex5f.ilk; fi
  sh: test: argument expected
  *** Error code 1 (ignored)
  The following command caused the error:
  if test -e ex5f.pdb; then /usr/bin/rm -f ex5f.pdb; fi
  sh: test: argument expected
  *** Error code 1 (ignored)
  The following command caused the error:
  if test -e ex5f.tds; then /usr/bin/rm -f ex5f.tds; fi
  Completed test examples
  ---
 
  Does anyone have any suggestion?  Thanks in advance!
 
  Regards,
  Adam
 
 
 
 




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




DDM with PETSc

2007-03-31 Thread Matthew Knepley
I would look at some PETSc examples, for instance

http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/ksp/ksp/examples/tutorials/ex32.c.html

   Matt

On 3/31/07, Waad Subber wsubber at connect.carleton.ca wrote:
 Hello everyone:

 I am new to PETSc. Reading the tutorials, I understand that PETSc
 supports domain decomposition with additive Schwarz and iterative
 substructuring (balancing Neumann-Neumann). I am looking for some
 example codes involving these domain decomposition methods for 1D and
 2D  PDE (preferable linear PDE) so that I can get started. Can anyone
 kindly point me to the right place where I can find them.

 Thanks :)

 waad




-- 
One trouble is that despite this system, anyone who reads journals widely
and critically is forced to realize that there are scarcely any bars to eventual
publication. There seems to be no study too fragmented, no hypothesis too
trivial, no literature citation too biased or too egotistical, no design too
warped, no methodology too bungled, no presentation of results too
inaccurate, too obscure, and too contradictory, no analysis too self-serving,
no argument too circular, no conclusions too trifling or too unjustified, and
no grammar and syntax too offensive for a paper to end up in print. --
Drummond Rennie




changes to PETSc-dev bmake system and library locations

2007-06-10 Thread Matthew Knepley
This is very vague concerning the structure of externalpackages. I cannot tell
where libraries are supposed to end up, and how/when/why they might be moved.
It seems that the directory information coming from PETSc/package.py
has also changed.

   Matt

On 6/8/07, Barry Smith bsmith at mcs.anl.gov wrote:

   PETSc-dev users,

 After picking Satish's brain, I have made a set of changes to
 petsc-dev related to compiling and linking programs.

Goal:  Support the GNU; config/configure.py; make; make install model
 including all external packages PETSc builds for you. After make install
 PETSC_ARCH should not be needed.

Constraints:
 * Allow skipping the make install step and yet having everything
 fully functional even with shared and dynamic libraries
 * Allow multiple builds in the non-make install approach which you can
 switch between by changing PETSC_ARCH
 * Not require any file links
 * A system that does not mix generated files and non-generated in the same
 directory in $PETSC_DIR
 * A system no more complicated than the previous version.

   Solution:

 In place, before make install

 petsc-dev/include  same as now
  /bin  same as now
  /conf basically the same as bmake/common was
  $PETSC_ARCH/include   generated includes: petscconf.h petscfix.h 
 ..
  lib   generated libraries
  bin   generated programs
  conf  basically the same as bmake/$PETSC_ARCH/
   except not the include files

 After make install

 prefix/include  all includes
   /bin  all programs, including mpiexec, mpicc if 
 generated
   /conf the stuff previous in bmake/common and 
 bmake/$PETSC_ARCH
   /lib  the libraries, including from external 
 packages

 The whole trick is that in the PETSc bmake files (now conf files :-)) the 
 $PETSC_ARCH/
 disappears in the make install version.

 I have fixed the external packages MPI.py, BlasLapack.py and Chaco.py but the 
 others
 need to be modified to stick their libraries and includes in the new correct 
 place.

 The only change you should need in your makefiles is to replace
 "include ${PETSC_DIR}/bmake/common/base" with
 "include ${PETSC_DIR}/conf/base".
 Bug reports to petsc-maint at mcs.anl.gov; questions to petsc-dev at
 mcs.anl.gov.

Barry




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




changes to PETSc-dev bmake system and library locations

2007-06-11 Thread Matthew Knepley
Maybe you have an old BuildSystem?

  Matt

On 6/11/07, Todd Munson tmunson at mcs.anl.gov wrote:

 I just downloaded the new updates and tried configuring the development
 version and get the following python errors...It may be a problem on my
 side.  I'll try with a new clone of the repository...

 akita % ./config/configure.py --with-c++
  =================================================================
  Configuring PETSc to compile on your system
  =================================================================
  *****************************************************************
  UNABLE to FIND MODULE for config/configure.py
  -----------------------------------------------------------------
  No module named Fiat
  *****************************************************************

   File ./config/configure.py, line 175, in petsc_configure
 framework =
 config.framework.Framework(sys.argv[1:]+['--configModules=PETSc.Configure','--optionsModule=PETSc.compilerOptions'],
 loadArgDB = 0)
   File
 /sandbox/tmunson/projects/petsc-dev/python/BuildSystem/config/framework.py,
 line 91, in __init__
 self.createChildren()
   File
 /sandbox/tmunson/projects/petsc-dev/python/BuildSystem/config/framework.py,
 line 317, in createChildren
 self.getChild(moduleName)
   File
 /sandbox/tmunson/projects/petsc-dev/python/BuildSystem/config/framework.py,
 line 302, in getChild
 config.setupDependencies(self)
   File /sandbox/tmunson/projects/petsc-dev/python/PETSc/Configure.py,
 line 44, in setupDependencies
 utilityObj  =
 self.framework.require('PETSc.'+d+'.'+utilityName, self)
   File
 /sandbox/tmunson/projects/petsc-dev/python/BuildSystem/config/framework.py,
 line 322, in require
 config = self.getChild(moduleName, keywordArgs)
   File
 /sandbox/tmunson/projects/petsc-dev/python/BuildSystem/config/framework.py,
 line 302, in getChild
 config.setupDependencies(self)
   File /sandbox/tmunson/projects/petsc-dev/python/PETSc/packages/FFC.py,
 line 11, in setupDependencies
 self.fiat = self.framework.require('config.packages.Fiat', self)
   File
 /sandbox/tmunson/projects/petsc-dev/python/BuildSystem/config/framework.py,
 line 322, in require
 config = self.getChild(moduleName, keywordArgs)
   File
 /sandbox/tmunson/projects/petsc-dev/python/BuildSystem/config/framework.py,
 line 275, in getChild
  type   = __import__(moduleName, globals(), locals(), ['Configure']).Configure
 akita %



 !+-+!+-+!+-+!+-+!+-+!+-+!+-+!+-+!+-+!+-+!+-+!+-+!+-+!+-+!+-+!+-+!
 Todd Munson  (630) 252-4279  office
 Argonne National Laboratory  (630) 252-5986  fax
 9700 S. Cass Ave.tmunson at mcs.anl.gov
 Argonne, IL 60439http://www.mcs.anl.gov/~tmunson


 On Sun, 10 Jun 2007, Barry Smith wrote:

 
 
  On Sun, 10 Jun 2007, Matthew Knepley wrote:
 
   This is very vague concerning the structure of externalpackages. I cannot
   tell where libraries are supposed to end up, and how/when/why they might
   be moved.
 
They end up in
 
 $PETSC_ARCH/lib (same place as PETSc libraries, without a make install) 
  and
 prefix/lib  (with a make install)
 
   It seems that the directory information coming from PETSc/package.py
   has also changed.
 
 Yes. package.py had a /lib hardwired to the end of the install directory
  returned by the particular package. Now the particular packages set the
  entire path where the library goes (that is, /lib is not automatically
  appended).
 
 Barry
 
  
 Matt
  
   On 6/8/07, Barry Smith bsmith at mcs.anl.gov wrote:
   
  PETSc-dev users,
   
After picking Satish's brain, I have made a set of changes to
petsc-dev related to compiling and linking programs.
   
   Goal:  Support the GNU; config/configure.py; make; make install model
including all external packages PETSc builds for you. After make 
install
PETSC_ARCH should not be needed.
   
   Constraints:
* Allow skipping the make install step and yet having everything
fully functional even with shared and dynamic libraries
* Allow multiple builds in the non-make install approach which you can
switch between by changing PETSC_ARCH
* Not require any file links
* A system that does not mix generated files and non-generated in the 
same
directory in $PETSC_DIR
* A system no more complicated then the previous version.
   
  Solution:
   
In place, before make install
   
petsc-dev/include  same as now
 /bin  same as now
 /conf basically the same as bmake/common 
was
 $PETSC_ARCH/include   generated includes: petscconf.h
petscfix.h ..
 lib   generated libraries
 bin   generated programs
 conf

PETSc sparsity

2007-06-15 Thread Matthew Knepley
Also, I have talked to Wolfgang many times about this. I am a firm
believer in eliminating the boundary values during assembly at the
element level. PETSc provides an easy mechanism for this. By default,
all negative indices in calls to VecSetValues and MatSetValues are
ignored.
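
As a sketch of that mechanism (nloc, rows[], elemMat[], and isDirichlet()
are hypothetical application names, not PETSc API):

  /* Element-level assembly that eliminates Dirichlet rows/columns by
     negating their global indices before insertion. */
  PetscInt i;
  for (i = 0; i < nloc; i++) {
    if (isDirichlet(rows[i])) rows[i] = -1; /* negative indices are silently dropped */
  }
  ierr = MatSetValues(A, nloc, rows, nloc, rows, elemMat, ADD_VALUES);CHKERRQ(ierr);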

  Matt

On 6/15/07, Mark Adams adams at pppl.gov wrote:
 Just a note: my way is much simpler - it's two lines of code in a loop
 over the boundary nodes, followed by a MatAssemblyBegin/End, and you
 don't have to deal with parallel issues explicitly - PETSc does.  For
 my FE codes the cost of this (dumb) way is negligible; PETSc
 implements these methods pretty well.

 Mark

 On Jun 15, 2007, at 4:28 PM, Toby Young wrote:

 
 
  Barry,
 
  Thank you for an interesting response.
 
For algorithms that require dealing with the sparsity structure of
  the matrix we generally just include the appropriate private
  include file
  for the matrix format and access the data directly in the
  underlying format.
 
   Can you please elaborate. What do you mean by the appropriate private
   include file for the matrix? Sorry, I got lost there.
 
  Best,
Toby
 
  -
 
  Toby D. Young (Adiunkt)
  Department of Computational Science
  Institute of Fundamental Technological Research
  Polish Academy of Science
  Room 206, ul. Swietokrzyska 21
  00-049 Warszawa, POLAND
 

 **
 Mark Adams Ph.D.   Columbia University
 289 Engineering TerraceMC 4701
 New York NY 10027
 adams at pppl.govwww.columbia.edu/~ma2325
 voice: 212.854.4485  fax: 212.854.8257
 **





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




invitation to give talk at SDSC summer institute

2007-06-20 Thread Matthew Knepley
I am not going to ICIAM anymore (since international travel has become
impossible), so I am available I guess.

  Matt

On 6/20/07, Barry Smith bsmith at mcs.anl.gov wrote:

  Amit,

Thank you for the invitation to speak. Unfortunately I am not available
 at that time, and I think a couple of the other ANL PETSc folks are
 also not available. I am sending this out to petsc-dev to
 see if anyone is available; I know many of them could give
 an excellent presentation.

Good luck,


   Barry

 On Tue, 19 Jun 2007, Amitava Majumdar wrote:

  Barry,
  SDSC hosts an HPC-oriented summer institute every year. We bring in about 30
  graduate students from all over the US for a week and we provide funding for
  their travel, lodging etc for the week. We have speakers from within SDSC
  and outside SDSC (national labs, other universities, vendors). This year
  the workshop is on the week of July 16th. On July 17th morning we
  would like to invite you to give a talk on PETSc (may be focused on linear
  system solvers). We provide travel and hotel expenses for speakers.
  Please let us know if you, or someone else from PETSc group, would be
  interested and available. The talk is an hour and a half.
  Thanks.
 
 




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




test fail (because of bad makefile?)

2007-07-06 Thread Matthew Knepley
Pushed a fix.

  Matt

On 7/6/07, Lisandro Dalcin dalcinl at gmail.com wrote:
 Can someone look at this?

 testexamples_C in: /u/dalcinl/Devel/PETSc/petsc-dev/src/dm/adda
 make[4]: *** No rule to make target `testexamples_C', needed by `tree'.  Stop.

 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




configure.py script

2007-07-17 Thread Matthew Knepley
We do generate makefiles, just not all of them. Automake is an unreadable,
impenetrable mess that should never be used by anyone. Thus, we don't use it.

  Thanks,

 Matt

On 7/17/07, Sumit Vaidya sumit_vaidya at persistent.co.in wrote:




 Hi,



 I have downloaded the version 2.3.3-p3.

  There is one file under the config folder, viz. configure.py. I am building
  the library on Red Hat Linux 9.



 I saw some makefiles already there in the downloaded package.

  Then what is the use of the configure.py script? I have no idea about Python
  code, but I redirected the output of configure and came to know it is
  testing some folders.



  As per my knowledge, a configure script is used to generate makefiles.

  What is the use of the configure script in the PETSc 2.3.3-p3 version?



 Waiting for your reply,

 Sumit



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




AOApplicationToPetscPermuteInt() help needed

2007-07-17 Thread Matthew Knepley
Done.

  Matt

On 7/17/07, Barry Smith bsmith at mcs.anl.gov wrote:

   Will whoever wrote the manual pages for all the
 AOApplicationToPetscPermute*() routines please add more
 documentation explaining what they do? Maybe some algebraic
 formula or something: "Permutes an array of blocks of reals
 in the PETSc ordering to the application-defined ordering."
 means nothing to me. How long is array[], and what does permute
 mean in this context?

Barry





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




COVERITY static source code analysis

2007-07-19 Thread Matthew Knepley
I'm for it. Barry, do you want to mail scan-admin at coverity.com?

  Matt

On 7/19/07, Lisandro Dalcin dalcinl at gmail.com wrote:
 Have any of you ever considered asking for PETSc to be included here, as
 it is an open source project?

 http://scan.coverity.com/index.html

 From many sources (mainly related to Python), it seems the results are
 impressive.

 Regards,

 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




Cray and PETSc

2007-08-23 Thread Matthew Knepley
On 8/23/07, Adrian Tate adrian at cray.com wrote:

 Hi Barry

 Thanks for your reply.

 We expect to provide the new full releases of PETSc as soon as we
 possibly can. Obviously there is something of a lag because Cray needs to
 test, package and release the library, but we'll try to do so with minimal
 delay. Obviously any heads-up you can give us in advance will help reduce the
 lag. With respect to patch versions, it is going to be very difficult for
 Cray to release every

The PETSc development repository is completely open. I recommend you setup
a test build which points to that, along with your release build. That way, when
we release, you will have absolutely nothing to do.

  Thanks,

 Matt

 Thanks!
 Adrian

 -Original Message-
 From: Barry Smith [mailto:bsmith at mcs.anl.gov]
 Sent: Monday, August 20, 2007 11:12 AM
 To: Adrian Tate
 Cc: petsc-developers at mcs.anl.gov; petsc-dev at mcs.anl.gov
 Subject: Re: Cray and PETSc


Adrian,

  Thank you for the inquiry.


 On Thu, 16 Aug 2007, Adrian Tate wrote:

  Hello Barry
 
  I'm not sure we met in person - I am the lead of the libraries group
 at Cray. We decided some time ago that our iterative solver strategy
 would be to leverage PETSc, and to hopefully provide some Cray
 specific tunings to improve performance of the KSP solvers (largely
 through our custom sparse BLAS and some parallel tuning for our
 interconnect). I believe that John Lewis has been in communication
 with you with respect to the tuning of Sparse BLAS and their
 integration with the PETSc build.
 
 We are, however, considering packaging and supplying PETSc along with
 the scientific library that we provide (libSci). This allows for
 better integration of our internal kernels and also it means that we
 are no longer requiring users (including benchmarkers and internal
 users) to build their own PETSc. I see from your online page that
 doing so is acceptable as long as we use a copyright switch during
 configure. By applying this switch, do we make our PETSc library
 unsupported from your perspective?

    The GNU copyright code is tiny; it would not affect the usability
  or support of PETSc.

  We do not expect to be able to
 provide anywhere near the degree of support that your team provide,
 and I was hoping to supply a pre-built library whose users could still
 seek assistance through your normal support channels - is this
 realistic?
 
Yes.

    The two issues that concern us with pre-packaged versions of PETSc are
 1) keeping up to date on our releases. We generally make two releases
 a year and much prefer that users use the most recently release.
 If they are using an older release it means we are less able to help
 them.
 2) keeping up to date on our patches. We may make several bug patches
 to each release. Users with a pre-packaged version have trouble keeping
 up with the source code patches we provide.

 
 
 
 
  Also, I would be interested to know your degree of interest in the
 Cray-specific modifications that we make - would you prefer those to
 be channeled back into the PETSc library?

    If they involve directly changing PETSc code, we much prefer to get
  them channeled back into the master copy of the source code. That makes it
  much easier to debug user code. If it is auxiliary code, like a faster
  ddot() etc., then it is more appropriate not to try to include it.

 Any other comments you have
 on the way that Cray can contribute to the PETSc project, I would be
 very glad to hear.

    PETSc has a variety of tuning factors that could theoretically be
 set optimally for a particular machine. These range from simple things
 like compiler options, to a choice between C and Fortran versions of
 the same code (what we call Fortran kernels), to different loop
 unrollings (in the inline.h file), even something like PetscMemzero(),
 which has five possible forms. Currently we do not tune these
 or even have a test harness for selecting a good tuning. One thing
 you could do is determine good choices for these
 options on your machines. Just as a simple example, on some Linux systems
 the basic libraries are just compiled with the GNU compiler, hence the
 system memset() is not particularly effective. A version of PETSc
 compiled using a Fortran memset may be much faster, or Intel provides
 its own _intel_fast_memset() which is better. I've seen a few percent
 increase in performance of entire nonlinear PDE solver applications
 just from using the non-default memset(). [This was specifically on
 an Itanium system, but is likely on other configurations also.]
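
 As an illustration of what a tiny piece of such a test harness might look
 like, a hypothetical micro-benchmark (not PETSc code; it reuses MPI_Wtime
 for portable timing):

 #include <stdio.h>
 #include <string.h>
 #include <mpi.h>

 int main(int argc, char **argv)
 {
   enum { NDOUBLES = 1 << 21, REPS = 50 }; /* ~16 MB buffer, 50 repetitions */
   static double buf[NDOUBLES];
   double        t0, t1;
   int           i;

   MPI_Init(&argc, &argv);
   t0 = MPI_Wtime();
   for (i = 0; i < REPS; i++) memset(buf, 0, sizeof(buf));
   t1 = MPI_Wtime();
   printf("memset: %g s per 16 MB zeroing\n", (t1 - t0) / REPS);
   MPI_Finalize();
   return 0;
 }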


Barry


 
 
 
  Regards,
 
 
  Adrian Tate
 
  ---
 
  Technical Lead
 
  Math Software
  Cray Inc.
 
  (206) 349 5868
 
 



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




rename ISLocalToGlobalMapping?

2007-08-27 Thread Matthew Knepley
It would probably be better. AOMapping and ISLocalToGlobalMapping have
about the same interface.

  Matt

On 8/27/07, Barry Smith bsmith at mcs.anl.gov wrote:

 Should AO and ISLocalToGlobalMapping be merged into
 a PetscMapping class?

 struct _p_ISLocalToGlobalMapping{
   PETSCHEADER(int);
   PetscInt n;  /* number of local indices */
   PetscInt *indices;   /* global index of each local index */
   PetscInt globalstart;/* first global referenced in indices */
   PetscInt globalend;  /* last + 1 global referenced in indices */
   PetscInt *globals;   /* local index for each global index between
 start and end */
 };

 typedef struct {
   PetscInt N;
   PetscInt *app;   /* app[i] is the partner for petsc[appPerm[i]] */
   PetscInt *appPerm;
   PetscInt *petsc; /* petsc[j] is the partner for app[petscPerm[j]] */
   PetscInt *petscPerm;
 } AO_Mapping;

 typedef struct {
   PetscInt N;
   PetscInt *app,*petsc;  /* app[i] is the partner for the ith PETSc slot */
  /* petsc[j] is the partner for the jth app slot */
 } AO_Basic;
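
 For context, a sketch of how the two classes are driven (ltog and ao are
 assumed to already exist; error handling abbreviated):

   /* Map three local indices to global numbering, then renumber the global
      application indices into PETSc ordering (in place). */
   PetscInt idxLocal[3] = {0, 1, 2};
   PetscInt idxGlobal[3];

   ierr = ISLocalToGlobalMappingApply(ltog, 3, idxLocal, idxGlobal);CHKERRQ(ierr);
   ierr = AOApplicationToPetsc(ao, 3, idxGlobal);CHKERRQ(ierr);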


   Barry



 On Mon, 27 Aug 2007, Matthew Knepley wrote:

  On 8/27/07, Barry Smith bsmith at mcs.anl.gov wrote:
  
 Lisandro,
  
   Sounds fine to me. ISLocalToGlobalMapping - LGMapping
 
  If we are getting picky, I like long names, but I would get rid of IS
  since it seems
  more like implementation to me.
 
  BUT, AO is called AO, not AOMapping? Shouldn't it be AOMapping?
 (Then the AO_Mapping needs to be changed. Why it is called
  Mapping, Matt, I do not know.)
 
  The default AO implementation has the semantic guarantee that it is a
  permutation.
  The Mapping implementation allows subsets of the index space.
 
Matt
 
  Barry
  
  
   On Thu, 16 Aug 2007, Lisandro Dalcin wrote:
  
    Did you ever think about the possibility of renaming
    ISLocalToGlobalMapping to something shorter? IMHO it is a painfully
    long name.
   
In petsc4py, I call this LGMapping, because ISLocalToGlobalMapping, in
my view, is not an IS, and its usage is similar to AOMapping.
   
   
   
  
  
 
 
 




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




KSPSkipConverged

2007-08-28 Thread Matthew Knepley
Yes, definitely. Go ahead and push it.

  Matt

On 8/28/07, Lisandro Dalcin dalcinl at gmail.com wrote:
 Does it make sense to change KSPSkipConverged to return
 KSP_CONVERGED_ITS if  iternum==maxit ?

 KSP_DIVERGED_ITS means convergence failure, but IMHO, KSPSkipConverged
 should not imply convergence failure (this has implications in SNES).
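
 A sketch of the proposed change (this needs PETSc's private KSP header for
 ksp->max_it, and the signature follows the 2.3.3-era convergence-test
 interface):

   PetscErrorCode KSPSkipConverged(KSP ksp, PetscInt n, PetscReal rnorm,
                                   KSPConvergedReason *reason, void *ctx)
   {
     *reason = KSP_CONVERGED_ITERATING;
     if (n >= ksp->max_it) *reason = KSP_CONVERGED_ITS; /* cap reached: convergence, not failure */
     return 0;
   }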


 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




KSPSkipConverged

2007-08-28 Thread Matthew Knepley
On 8/28/07, Lisandro Dalcin dalcinl at gmail.com wrote:
 On 8/28/07, Matthew Knepley knepley at gmail.com wrote:
  Yes, definitely. Go ahead and push it.

 I started to try this by implementing it first in petsc4py with
 petsc-2.3.3-p4, by solving a trivial SPD diagonal system { A_ii =
 1/(i+1) } with no PC and maxit=5. Below are the results; some things
 seem broken.

 I think I will do the following:

 1- Correct things in release-2.3.3: KSPs should not set
 KSP_DIVERGED_ITS if the convergence test returned other than
 KSP_CONVERGED_ITERATING (all the GMRES variants, RICHARDSON and TCQMR
 seem to do this). It also seems that I have to review KSP type GLTR (it
 stopped at iteration 4 and not 5, as it should have).

 2- Modify KSPSkipConverged and push to petsc-dev. Or perhaps we can
 also push this to release-2.3.3? The previous way is rather buggy,
 especially in conjunction with KSP_NORM_NO.

Sounds good. I will ask Todd about gltr since it might be supposed to
do something
funny. You really do not want to look at it.

  Matt

 Below are the results (petsc4py is a nice tool for test/debug, isn't it?)


 tfqmr  - CONVERGED_ITS- iters:   5, ||r|| = 4.881889e-03,
 x0,N-1=0.96,9.978373
 minres - CONVERGED_ITS- iters:   5, ||r|| = 5.356222e-02,
 x0,N-1=1.09,9.701171
 fgmres - DIVERGED_ITS - iters:   5, ||r|| = 5.356222e-02,
 x0,N-1=1.09,9.701171
 stcg   - CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
 x0,N-1=1.52,9.771470
 qcg- CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
 x0,N-1=-1.52,-9.771470
 cg - CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
 x0,N-1=1.52,9.771470
 lgmres - DIVERGED_ITS - iters:   5, ||r|| = 5.356222e-02,
 x0,N-1=1.09,9.701171
 cgne   - CONVERGED_ITS- iters:   5, ||r|| = 7.192229e-02,
 x0,N-1=1.00,7.116166
 chebychev  - DIVERGED_ITS - iters:   5, ||r|| = 2.591834e+00,
 x0,N-1=0.636559,0.708271
 cgs- CONVERGED_ITS- iters:   5, ||r|| = 1.457830e-03,
 x0,N-1=1.00,9.994777
 bicg   - CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
 x0,N-1=1.52,9.771470
 lsqr   - CONVERGED_ITS- iters:   5, ||r|| = 4.612376e-01,
 x0,N-1=1.00,7.116166
 gltr   - CONVERGED_ITS- iters:   4, ||r|| = 5.629995e-02,
 x0,N-1=1.52,9.771470
 tcqmr  - DIVERGED_ITS - iters:   5, ||r|| = 0.00e+00,
 x0,N-1=1.09,9.701171
 bcgs   - CONVERGED_ITS- iters:   5, ||r|| = 2.566301e-03,
 x0,N-1=0.999703,9.982615
 cr - CONVERGED_ITS- iters:   5, ||r|| = 5.356222e-02,
 x0,N-1=1.09,9.701171
 symmlq - CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
 x0,N-1=1.52,9.771470
 bcgsl  - CONVERGED_ITS- iters:   5, ||r|| = 1.232695e-03,
 x0,N-1=0.88,9.999315
 lcd- DIVERGED_ITS - iters:   5, ||r|| = 5.629995e-02,
 x0,N-1=1.52,9.771470
 preonly- CONVERGED_ITS- iters:   1, ||r|| = 0.00e+00,
 x0,N-1=1.00,1.00
 gmres  - DIVERGED_ITS - iters:   5, ||r|| = 5.356222e-02,
 x0,N-1=1.09,9.701171
 richardson - DIVERGED_ITS - iters:   5, ||r|| = 1.215430e+00,
 x0,N-1=1.00,4.095100




 
  On 8/28/07, Lisandro Dalcin dalcinl at gmail.com wrote:
   Does it make sense to change KSPSkipConverged to return
   KSP_CONVERGED_ITS if  iternum==maxit ?
  
   KSP_DIVERGED_ITS means convergence failure, but IMHO, KSPSkipConverged
   should not imply convergence failure (this has implications in SNES).
  
  
   --
    Lisandro Dalcín
    ---
    Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
    Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
    Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
    PTLC - Güemes 3450, (3000) Santa Fe, Argentina
   Tel/Fax: +54-(0)342-451.1594
  
  
 
 
  --
  What most experimenters take for granted before they begin their
  experiments is infinitely more interesting than any results to which
  their experiments lead.
  -- Norbert Wiener
 
 


 --
 Lisandro Dalcín
 ---
 Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
 Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
 Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
 PTLC - Güemes 3450, (3000) Santa Fe, Argentina
 Tel/Fax: +54-(0)342-451.1594




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener




KSPSkipConverged

2007-08-28 Thread Matthew Knepley
On 8/28/07, Lisandro Dalcin dalcinl at gmail.com wrote:
 On 8/28/07, Matthew Knepley knepley at gmail.com wrote:
  Todd's response was that there are reasons it can stop early, like happy
  breakdown, so we really need to test this with the dev version to track
  down this behavior.

  OK, my test script just looped over all available KSP types, but some
  of them are special purpose. However, GLTR does not seem to be stopping
  because of happy breakdown; it always stops at maxit-1 (with my skip
  converged), so perhaps there is a problem with the loop index.

Okay, should be fixed in dev.

 Matt, from your previous mail, something is not clear to me:

 Can I modify KSPSkipConverged for release-2.3.3?

Yes, that seems fine. Have you made a patch to release before?

  Thanks,

Matt

 
Matt
 
  On 8/28/07, Matthew Knepley knepley at gmail.com wrote:
   On 8/28/07, Lisandro Dalcin dalcinl at gmail.com wrote:
On 8/28/07, Matthew Knepley knepley at gmail.com wrote:
 Yes, definitely. Go ahead and push it.
   
I started to try this by implementing first on petsc4py with
petsc-2.3.3-p4, by solving a trivial SPD diagonal system { A_ii =
1/(i+1) } with no PC and maxit=5. Below the results, some things seems
broken.
   
I think I will do the following:
   
1- Correct things in release-2.3.3: KSPs should not set
KSP_DIVERGED_ITS if the convergence test returned other than
KSP_CONVERGED_ITERATING (all the GMRES variants, RICHARDSON and TCQMR seem to do
this). It also seems that I have to review KSP type GLTR (it stopped
at iteration 4 and not 5, as it should have).

2- Modify KSPSkipConverged and push to petsc-dev. Or perhaps we can
also push this to release-2.3.3? The previous way is rather buggy,
especially in conjunction with KSP_NORM_NO.
  
   Sounds good. I will ask Todd about gltr since it might be supposed to
   do something
   funny. You really do not want to look at it.
  
 Matt
  
Below are the results (petsc4py is a nice tool for test/debug, isn't it?)
   
   
tfqmr  - CONVERGED_ITS- iters:   5, ||r|| = 4.881889e-03,
x0,N-1=0.96,9.978373
minres - CONVERGED_ITS- iters:   5, ||r|| = 5.356222e-02,
x0,N-1=1.09,9.701171
fgmres - DIVERGED_ITS - iters:   5, ||r|| = 5.356222e-02,
x0,N-1=1.09,9.701171
stcg   - CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
x0,N-1=1.52,9.771470
qcg- CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
x0,N-1=-1.52,-9.771470
cg - CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
x0,N-1=1.52,9.771470
lgmres - DIVERGED_ITS - iters:   5, ||r|| = 5.356222e-02,
x0,N-1=1.09,9.701171
cgne   - CONVERGED_ITS- iters:   5, ||r|| = 7.192229e-02,
x0,N-1=1.00,7.116166
chebychev  - DIVERGED_ITS - iters:   5, ||r|| = 2.591834e+00,
x0,N-1=0.636559,0.708271
cgs- CONVERGED_ITS- iters:   5, ||r|| = 1.457830e-03,
x0,N-1=1.00,9.994777
bicg   - CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
x0,N-1=1.52,9.771470
lsqr   - CONVERGED_ITS- iters:   5, ||r|| = 4.612376e-01,
x0,N-1=1.00,7.116166
gltr   - CONVERGED_ITS- iters:   4, ||r|| = 5.629995e-02,
x0,N-1=1.52,9.771470
tcqmr  - DIVERGED_ITS - iters:   5, ||r|| = 0.00e+00,
x0,N-1=1.09,9.701171
bcgs   - CONVERGED_ITS- iters:   5, ||r|| = 2.566301e-03,
x0,N-1=0.999703,9.982615
cr - CONVERGED_ITS- iters:   5, ||r|| = 5.356222e-02,
x0,N-1=1.09,9.701171
symmlq - CONVERGED_ITS- iters:   5, ||r|| = 5.629995e-02,
x0,N-1=1.52,9.771470
bcgsl  - CONVERGED_ITS- iters:   5, ||r|| = 1.232695e-03,
x0,N-1=0.88,9.999315
lcd- DIVERGED_ITS - iters:   5, ||r|| = 5.629995e-02,
x0,N-1=1.52,9.771470
preonly- CONVERGED_ITS- iters:   1, ||r|| = 0.00e+00,
x0,N-1=1.00,1.00
gmres  - DIVERGED_ITS - iters:   5, ||r|| = 5.356222e-02,
x0,N-1=1.09,9.701171
richardson - DIVERGED_ITS - iters:   5, ||r|| = 1.215430e+00,
x0,N-1=1.00,4.095100
   
   
   
   

 On 8/28/07, Lisandro Dalcin dalcinl at gmail.com wrote:
  Does it make sense to change KSPSkipConverged to return
  KSP_CONVERGED_ITS if  iternum==maxit ?
 
  KSP_DIVERGED_ITS means convergence failure, but IMHO, 
  KSPSkipConverged
  should not imply convergence failure (this has implications in 
  SNES).
 
 
  --
   Lisandro Dalcín
   ---
   Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
   Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
   Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
   PTLC - Güemes 3450, (3000) Santa Fe, Argentina

PETSc/ridgeSim 3D scaling on BG/L

2006-04-02 Thread Matthew Knepley
That's awesome: 80% true efficiency for 1K nodes on under
1 million unknowns. That's only 1,000 unknowns per proc. If
we were cranking the problem size, I am sure you'd be up
in the 90s. Hopefully this summer we can deliver you much
improved serial performance as well. BTW, I want to talk
sometime about the subduction benchmark. Van Keken talked
about it at Purdue, so I got a diametrically opposed viewpoint
from a sympathetic person. It was eye opening.

   Matt

On 4/2/06, Richard Katz katz at ldeo.columbia.edu wrote:

 Hi All,

 Thought you might be interested to see this.

 The simulation is finite-volume, steady-state, non-Newtonian CFD in 3D.
 And, of course, it is done using PETSc (version 2.3.1).

 How does this compare to other PETSc-based simulations?

 Cheers
 Rich

 Barry -- I had a problem with the jobs that we discussed but I'm
 hoping to run them properly tonight.






--
Failure has a thousand explanations. Success doesn't need one -- Sir Alec
Guinness


BuildSystem question

2005-09-25 Thread Matthew Knepley
S V N Vishwanathan vishy at mail.rsise.anu.edu.au writes:

 Hi!

 I am tying myself up in knots in trying to figure out your BuildSystem
 and how to adapt it to LINEAL. PETSc is one primary component of LINEAL
 but going forward there will be others (viz. TAO, OOQP). 

 I want my users to be exposed to one and only one config + make
 step if possible. A typical user interaction should be as follows:

 1) If PETSc exists under $LINEAL_DIR/externalpackages and is configured
and built with the right switches then we just build LINEAL.

 Good.

 2) If the user wishes to use their own custom PETSc then they just
specify --with-petsc-dir= and I should be able to pick it up from
there. 

 Also good.

 3) If neither 1 nor 2 is true then I should offer to download, configure
    and build PETSc (much akin to what the current PETSc build scripts
    do).

 Exactly.

 I am unclear about where external packages required by PETSc should
 go. Many packages (like PETSc, TAO, OOQP) might want to use some common
 external package (like MPICH). But it seems like a wasted effort if we
 replicate the PETSc build scripts which already install these packages
 into $PETSC_DIR/externalpackages.

PETSc does have --with-external-packages-dir, which defaults to
$PETSC_DIR/externalpackages, but you can initialize it to whatever you want.

 When I try to grab the framework object from RDict.db and use a require
 method on it, it seems to bomb because many of the scripts in
 $PETSC_DIR/python/PETSc seem to make an implicit assumption that the
 configure scripts are run from $PETSC_DIR (a bad assumption I think, esp
 given that you have the PETSC_DIR variable available to you).

We would be willing to work on this with you. That requirement has been
discussed. The point of view from the other side is that users (very) often
get confused and we would like to do as much as possible to make sure they
are doing the correct thing (like installing from PETSC_DIR). Also, we
never assume that PETSC_DIR is defined during configure. That said, I'm
sure these outright dependencies can be fixed.

 The lack of documentation adds to my woes. I don't want to be a cry
 baby, but I did spend considerable time and effort trying to figure
 things out and still don't have a good feel for how different scripts
 interact. 

  I tried to document the configure (Python) part as much as I could. I
know the make part is not documented, but it is an artifact of an earlier
age when make systems were considered the equivalent of hammers, which
are rarely documented. Feel free to mail questions.

 So my question is simple: if you were in my position, i.e. designing the
 build scripts for LINEAL, how would you approach the problem?

  I would decide up front whether I wanted to use:

  a) make

  b) autotools

  c) something totally new

From this basic design decision flows almost every other decision. Also, it
gives you an idea of what kind of investment is necessary. I will try and
indicate why I chose c). The kind of work involved in making a new system
is different and appeals to me. If you use a), generally you spend lots of time
writing shell and working out bizarre tricks with stamp files to get flow
control. If you choose b), you spend lots of time understanding someone else's
system, and for autotools that means even more, even more baroque shell.
With c), you have to write much more and work things out, but you can fix
many things and perhaps introduce a more powerful paradigm. However, all
projects are different and all options should be considered every time.

 Matt
-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guinness




BuildSystem question

2005-09-26 Thread Matthew Knepley
S V N Vishwanathan vishy at mail.rsise.anu.edu.au writes:
 Ideally, I would like to use your BuildSystem to hack up a quick build
 for Numerix. Then as we add and integrate more tools in, I would work on
 making it fancier. 

  Cool, we can do that.

 Here is the plan I came up with (after sleeping over it for a night):

 1) I assume that the PETSc Framework object can be made modular, i.e. it can
    be invoked as long as PETSC_DIR is set and points to a sane
    directory.

  Yes. I currently have nothing that does this, so you sending bug reports
would be fine for now, and I will fix what is wrong.

 2) I will create a new Configure object for Numerix and add it to the
PETSc framework using require(). My configure script will also set
the PETSC_DIR env variable to a sane directory. 

  Okay.

 3) All options that are passed to my build are first passed to the PETSc
build. I also pass it
--with-external-packages-dir=$LINEAL_DIR/externalpackages 

  Actually, the first thing that happens is all options are stored in the
RDict, so all you have to do is instantiate the Petsc part with the same
RDict. This is what happens with the ParMetis build for instance.

 4) Any extra flags which are not handled by PETSc (or are shared between
packages) will be handled and passed to appropriate builds by the
Numerix configure. 

  See above answer.

 Does this make sense?

 How do we handle the case when say PETSc libs are installed by using a
 package manager like rpm or apt?

  If they follow the PETSc build process (which they should), the RDict
will be right there waiting for you. Otherwise, we will have to smack them
around for a while. There is code to look for the libraries/includes, but
it also wants the RDict right now.

 I guess my problem at a high level is as follows:

 What is the best approach to take for a package which depends on PETSc
 to use 
 a) The BuildSystem module
 b) The PETSc framework object itself

 I am sorry if some of my questions are incoherent. I am still trying to
 figure out how to structure our build and I realize it is not an easy
 task. 

  Hmmm, I do not quite understand the question. The framework object does
configuration, and other parts of BuildSystem do build. I admit that the
build stuff is not as good or as mature, but I think it is harder to do.

  Matt
-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guiness




tracking petsc

2005-11-03 Thread Matthew Knepley
Simon Burton simon at arrowtheory.com writes:

 On Thu, 03 Nov 2005 17:49:46 -0600
 Matthew Knepley knepley at mcs.anl.gov wrote:

 
 Simon Burton simon at arrowtheory.com writes:
 
   This was patched in 2.3.0, but maybe you have an unpatched version. It 
 is
 easy to check. The symbol comes from 

 We have been tracking your bitkeeper repository. I don't know if this is
 sane or not because we want to release something (in a few weeks)
 and that basically means it is up to us to somehow stabilize PETSc,
 or at least bless some particular version of PETSc.

 No, that doesn't sound right either: fixes that you guys make will go into
 the bk repository, so... I guess what I am asking is, are we just insane to
 try and do this ? Or, is it possible to maybe branch your repository so
 we can stabilize PETSc for some kind of beta release ?

  This is our preference. Why? It is just as stable as tarballs because you can
clone to any particular revision, BUT it's more flexible because you can easily
pull any fixes or upgrades.

 Sounds like we just need to make it clear that petsc is on the move
 and that you guys are completely responsive to bug reports, etc.

 We are going to build a petsc doesn't work FAQ, but, gee i wish your email
 list (petsc-maint) was archived somewhere...

  The DOE really frowns on this I believe. We discuss a lot of DOE internal stuff
on it. However, if there were an archived list to Cc, we could talk about doing
that.

 If you start getting 10 more support queries a day can you handle that?
 What about 50 ? Maybe I can also act as a gateway, if I tell our users to 
 come to
 me first.

  Well, we get anywhere from 50-200 per day, so I think we can probably handle 
some more. In
my experience there are a lot up front with a new community, but it rapidly 
tails off.

   Matt
-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guiness




more inlining ?

2005-11-03 Thread Matthew Knepley
Barry Smith bsmith at mcs.anl.gov writes:

   Should we inline VecSetValues, MatSetValues, and friends?

If 

  1) at least gcc supports it (we will have to maintain the #define for
 those that are not C99 I think)

  2) gdb handles it
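
For reference, a minimal sketch of the C99-inline-with-#define-fallback
pattern being discussed here (the macro and function names are illustrative,
not actual PETSc definitions):

  #include <petscvec.h>

  #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
  #define MY_INLINE static inline   /* real C99 inline */
  #else
  #define MY_INLINE static          /* fallback for pre-C99 compilers */
  #endif

  MY_INLINE PetscErrorCode VecSetValueInlined(Vec v, PetscInt i, PetscScalar a, InsertMode mode)
  {
    /* the one-entry case the compiler can then inline away */
    return VecSetValues(v, 1, &i, &a, mode);
  }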

I also have another proposal. Dmitry and I are going to convert the
prior code gen framework from ASE to pure Python. I consider ASE a failed
experiment, but the code gen is still useful. I think we can handle some
of these situations that do not occur in user code with a generation
framework. For instance, stuff for different block sizes.

   Matt
-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guiness




ufuncs, iterators

2005-08-18 Thread Matthew Knepley
Simon Burton simon at arrowtheory.com writes:

 Hi,

 Along the lines of python's numarray [1], we need some way of
 operating pointwise (and inner/outer operations) on Mat/Vec objects. 

 In particular, we need things like the following:
 (a) v=add.reduce(m)   (sum along rows/cols of a Mat to produce a Vec)

  This seems to be handled.

 (b) m=add.outer(v1,v2)(sum of all elements of two Vecs to produce a 
 Mat)

  This should be in the MatDense() interface specifically since it produces a dense 
matrix. In fact,
I thought it was always preferable not to form the matrix explicitly. I would 
have to see how
it is used.

 (c) m=add(m1,m2)  (pointwise sum)

  We have MatAXPY()

 (d) m=exp(m)  (pointwise exp)

  We could add pointwise operations just like the VecPointwise*().

 Matt
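
(For concreteness, a hedged sketch of how (a) and (c) fall out of the existing
interface, and what (d) could look like with direct array access. Error
checking is omitted; A, X, Y are assumed to be assembled AIJ matrices, and
MatSeqAIJGetArray() restricts the (d) sketch to sequential AIJ storage:)

  /* (a) v = add.reduce(m): row sums as a product with a vector of ones */
  Vec ones, rowsum;
  MatCreateVecs(A, &ones, &rowsum);
  VecSet(ones, 1.0);
  MatMult(A, ones, rowsum);                  /* rowsum_i = sum_j A_ij */

  /* (c) pointwise sum: Y <- Y + X, assuming the matrices share a pattern */
  MatAXPY(Y, 1.0, X, SAME_NONZERO_PATTERN);

  /* (d) pointwise exp on the stored nonzeros; structural zeros stay zero,
     unlike a dense exp, so this is only the sparse analogue */
  MatInfo      info;
  PetscScalar *vals;
  PetscInt     k, nz;
  MatGetInfo(A, MAT_LOCAL, &info);
  nz = (PetscInt)info.nz_used;
  MatSeqAIJGetArray(A, &vals);
  for (k = 0; k < nz; k++) vals[k] = PetscExpScalar(vals[k]);
  MatSeqAIJRestoreArray(A, &vals);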

 Right now we can do (a) and (b) using dgemm and single column/row matrices 
 with just 1s,
 and we can do (c) using SetValues. But for (d): it looks like we will be 
 writing a c-loop ?

 And I don't think the PETSc interface to dgemm (MatMatMult*) is general 
 enough to do (b)
 (we need one of the products to be added to the other product, so beta=1).

 It would be easy enough to use numarray to shadow the PETSc arrays (using 
 *GetArray)
 and then make numarray do the work, but this seems doomed to failure because
 numarray is inherently dense.

 Simon.


 [1]: http://stsdas.stsci.edu/numarray/numarray-1.3.html/node35.html

 -- 
 Simon Burton, B.Sc.
 Licensed PO Box 8066
 ANU Canberra 2601
 Australia
 Ph. 61 02 6249 6940
 http://arrowtheory.com 




-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guiness




ufuncs, iterators

2005-08-19 Thread Matthew Knepley
S V N Vishwanathan vishy at mail.rsise.anu.edu.au writes:

 Hi!

 BTW: possibly related note, are you using dense matrices sometimes
 to represent just 2-d arrays; that is, not as representations of linear
 operators. If so, I do not think this is the correct approach! Conceptually
 PETSc Mat's are linear operators; I think it would be a big mistake to 
 overload them as 2-d arrays also. The correct approach is to use the 
 DACreate2d() construct for handling 2-d arrays; with the DA the values
 are stored into Vecs but there is additional information about 
 the 2-d array structure; it can be decomposed nicely in parallel and
 one can set/access values with the usual two i,j indices. Of course if 
 they are being used as operators, ignore this. 

 Simon OK, I will think about this. But what would be the strategy for
 Simon doing all these linear algebra operations ? Swap to/from Mat/DA
 Simon objects ?

 In Matlab, matrix == 2-d array (in my opinion a terrible
 design decision); in PETSc, matrix (dense) != 2-d array; they are
 completely different beasts mathematically

 I am not sure I understand the fine difference. As far as we are
 concerned, all the operations which we are doing (point wise addition,
 addition, multiplication etc.) are on the linear operator. But it might
 be that my thought process is conditioned by years of Matlab/Octave
 use. Can you maybe make this more explicit i.e. which situations would
 you use a DA array and when would you use a Mat object?

  Linear operators are members of the space of linear maps between two
vector spaces. They, for instance, have a characteristic behavior under
coordinate transformations. 2-d arrays are just lists of values. They need
not have any transformation properties. In fact, linear operators also
have spectral characteristics, and analytic behaviors. Using a PETSc Mat
just to store a collection of values is wrong. This goes back to the
modeling for your equations.
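
One way to make the distinction concrete: under a change of basis P, an
operator and a vector transform with definite laws, while a bare 2-d array of
values carries no such law at all:

  A' = P^{-1} A P, \qquad v' = P^{-1} v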

 Matt
-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guiness




ufuncs, iterators

2005-08-19 Thread Matthew Knepley
Simon Burton simon at arrowtheory.com writes:

 On Thu, 18 Aug 2005 19:57:02 -0500
 Matthew Knepley knepley at mcs.anl.gov wrote:

 
  (d) m=exp(m)   (pointwise exp)
 
   We could add pointwise operations just like the VecPointwise*().
 
  Matt

 Yes, we need MatPointwiseMult as well.

 Should I make a start on this ?

  I am thinking more about what Barry said. The VecPointwise*() operations
can be given a solid mathematical interpretation in terms of spinor operations.
However, I do not see anything like that for the Mat stuff yet. We need to
understand the mathematics better.

   Matt
-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guiness




ufuncs, iterators

2005-08-20 Thread Matthew Knepley
Simon Burton simon at arrowtheory.com writes:

 Oh, that should probably read:

 exp( -1/2\sigma^{2} ||x1_{i} - x2_{j}||_{2}^{2}) 

 And when we vectorize this operation:

 ||x1_i - x2_j||^2 = ||x1_i||^2 + ||x2_j||^2 - 2*(x1_i,x2_j)

 and the last term is the ip matrix.

 It seems that this expression satisfies the linear operator
 requirement (it's invariant under linear isometries).

  I am looking again, and it appears to be the normalized graph laplacian.
This is indeed a linear operator which we need to support.
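
For reference, the vectorized kernel under discussion, written out cleanly
(assuming the usual Gaussian convention of -1/(2\sigma^2) in the exponent):

  K_{ij} = \exp\left( -\frac{1}{2\sigma^2} \|x1_i - x2_j\|_2^2 \right),
  \qquad \|x1_i - x2_j\|_2^2 = \|x1_i\|^2 + \|x2_j\|^2 - 2\,(x1_i, x2_j)

so K can be assembled from the two vectors of squared norms plus the inner
product matrix ip.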

 Matt

 Simon.


 On Thu, 18 Aug 2005 23:12:21 -0500 (CDT)
 Barry Smith bsmith at mcs.anl.gov wrote:

 
   In terms of exp( -1/2\sigma^{2} ||x_{i} - x_{j}||_{2}^{2})  
 what are they?
 
Thanks
 
Barry
 
 
 On Fri, 19 Aug 2005, Simon Burton wrote:
 
  On Thu, 18 Aug 2005 22:49:29 -0500 (CDT)
  Barry Smith bsmith at mcs.anl.gov wrote:
  
   
 What is x1, x2 and ip?
   
  Barry
  
  x1 and x2 are 2-arrays; their rows are the 'sample' vectors.
  ip is the matrix of all inner products from x1 and x2.
  
  Simon.
  
  
  
 


 -- 
 Simon Burton, B.Sc.
 Licensed PO Box 8066
 ANU Canberra 2601
 Australia
 Ph. 61 02 6249 6940
 http://arrowtheory.com 




-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guiness




Bitkeeper going commercial-only

2005-04-12 Thread Matthew Knepley
Aron Ahmadia aron.ahmadia at gmail.com writes:

 So this is really crappy news, I really liked BK.  How is this going to
 affect PETSc and what are your future short-term and long-term plans?

 I have to figure out what I'm doing with COW, and I was wondering how you
 guys were dealing with this.

  They have promised us free licenses. In fact, it's pretty easy for any
academic, open source project to get a free license. It turns out Victor
already had one for 2 years now. My guess is once all the Linux kernel
assholes stop caring about BK, it will go back to being completely free.
Commence beating numbskull, ignorant kernel programmers

 Matt
-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guiness




--download-prometheus working again

2005-04-20 Thread Matthew Knepley
Barry Smith bsmith at mcs.anl.gov writes:

   After endless futzing I've gotten --download-prometheus working
 with both --with-clanguage=C and C++ with petsc-dev on my Mac. Should work 
 everywhere :-).

   Satish, could you try adding it to a nightly build? You also need 
 --download-parmetis. Thanks

  Sweet. This will hold us for a few thousand processors.

 Matt
-- 
Failure has a thousand explanations. Success doesn't need one -- Sir Alec 
Guiness




[petsc-dev] make test error

2012-05-01 Thread Matthew Knepley
On Tue, May 1, 2012 at 3:28 PM, John Mousel john.mousel at gmail.com wrote:

 I just pulled petsc-dev and am getting the following error in the tests.


Pushed fix.

  Matt


 John


 Running test examples to verify correct installation
 Using PETSC_DIR=/home/vchivukula/NumericalLibraries/petsc-dev and
 PETSC_ARCH=linux-intel
 Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI
 process
 See http://www.mcs.anl.gov/petsc/documentation/faq.html
 lid velocity = 0.0016, prandtl # = 1, grashof # = 1
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
 probably memory access out of range
 [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
 [0]PETSC ERROR: or see
  http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
  [0]PETSC ERROR: or try
 http://valgrind.org on GNU/linux and Apple Mac OS X to find memory
 corruption errors
 [0]PETSC ERROR: likely location of problem given in stack below
 [0]PETSC ERROR: -  Stack Frames
 
 [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not
 available,
 [0]PETSC ERROR:   INSTEAD the line number of the start of the function
 [0]PETSC ERROR:   is given.
 [0]PETSC ERROR: [0] PetscSectionCreateGlobalSection line 440
 src/vec/vec/impls/seq/vsection.c
 [0]PETSC ERROR: [0] DMGetDefaultGlobalSection line 2624
 src/dm/interface/dm.c
 [0]PETSC ERROR: [0] DMGetDefaultSF line 2656 src/dm/interface/dm.c
 [0]PETSC ERROR: [0] DMGlobalToLocalBegin line 1012 src/dm/interface/dm.c
 [0]PETSC ERROR: [0] DMDAFunction line 177 src/dm/impls/da/da2.c
 [0]PETSC ERROR: [0] DM user function line 0 unknownunknown
 [0]PETSC ERROR: [0] DMComputeFunction line 1896 src/dm/interface/dm.c
 [0]PETSC ERROR: [0] SNESDefaultComputeFunction_DMLegacy line 379
 src/snes/utils/dmsnes.c
 [0]PETSC ERROR: [0] SNES user function line 0 unknownunknown
 [0]PETSC ERROR: [0] SNESComputeFunction line 1821 src/snes/interface/snes.c
 [0]PETSC ERROR: [0] SNESSolve_LS line 143 src/snes/impls/ls/ls.c
 [0]PETSC ERROR: [0] SNESSolve line 3395 src/snes/interface/snes.c




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] passing a function pointer?

2012-05-03 Thread Matthew Knepley
On Thu, May 3, 2012 at 10:33 PM, Cinstance cinstance at gmail.com wrote:

 I am using PETSC as the linear solver for a CFD project. It is part of an
 integrated framework, and the project is developed under MS Windows.

 I use PETSC as prebuild libs in Visual Studio, so I cannot debug into
 PETSC code (at least for now I haven't figured out how to). What make it
 worse is that the framework ignored all the PetscPrintf messages. This
 leaves me in total darkness.

 The framework has its own printf-derived method. Is there a way to pass it
 as a function pointer to petsc for it to use for printing stuff? Does it
 have a way to pass on this?


You really do not want to do this. PetscPrintf() is just a thin wrapper
around printf().

However, after years of development, this is the huge, waving red flag
telling you to change
development practice, since it is sapping 90% of your useful energy. If the
company does
not realize that, quit. They are not world-beaters.

Matt
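
(For completeness: PETSc does expose one hook here, the replaceable global
function pointer PetscVFPrintf through which PetscPrintf() output flows. A
minimal sketch, where framework_print is a hypothetical printf-style entry
point in the user's framework:)

  #include <petscsys.h>
  #include <stdarg.h>
  #include <stdio.h>

  extern void framework_print(const char *msg);  /* hypothetical framework hook */

  static PetscErrorCode MyVFPrintf(FILE *fd, const char format[], va_list argp)
  {
    char buf[8192];
    (void)fd;                                    /* route everything to the framework */
    vsnprintf(buf, sizeof(buf), format, argp);
    framework_print(buf);
    return 0;
  }

  /* set early, e.g. right after PetscInitialize(): */
  PetscVFPrintf = MyVFPrintf;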


 Thanks,
 Lesong

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Petsc VecCopy code.

2012-05-04 Thread Matthew Knepley
On Fri, May 4, 2012 at 2:27 PM, Yoo, Andy yoo2 at llnl.gov wrote:

 Hi,


 I wrote a two-dimensional MatVec routine (
 http://dl.acm.org/citation.cfm?id=2063469) a while ago.


Any chance of getting that into PETSc proper?


 

 Now, I am noticing there is a small memory leak that appears to come from
 VecCopy. It is small, as it only appears after about 30 MatVecs.

 I looked at the Petsc sources, but was not able to locate the default copy
 function for VecMPI.


 Can you please point me to the right direction?


It's here:

http://petsc.cs.iit.edu/petsc/petsc-dev/annotate/95406d5a1c14/src/vec/vec/impls/seq/bvec2.c#l190

set here in the vtable:

http://petsc.cs.iit.edu/petsc/petsc-dev/annotate/95406d5a1c14/src/vec/vec/impls/mpi/pbvec.c#l111

   Matt
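
In other words, VecCopy() just dispatches through the per-type function
table, so the copy actually run for a VECMPI vector is the sequential one
applied to the local array. A simplified sketch of the dispatch, paraphrased
rather than quoted from the source:

  /* not verbatim: the guts of VecCopy() after its argument checking */
  PetscErrorCode ierr;
  ierr = (*x->ops->copy)(x, y);CHKERRQ(ierr);   /* vtable slot set in pbvec.c */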




 Thank you for your help in advance.


 Andy Yoo, Ph.D.

 Center for Applied Scientific Computing

 Lawrence Livermore National Laboratory





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Error during KSPDestroy

2012-05-06 Thread Matthew Knepley
On Sun, May 6, 2012 at 7:28 AM, Alexander Grayver
agrayver at gfz-potsdam.de wrote:

 Hello,

 I use KSP and random rhs to compute largest singular value:


1) Is this the whole program? If not, this can be caused by memory
corruption somewhere else. This is what I suspect.

2) You can put in CHKMEMQ; throughout the code to find exactly where the
memory corruption happens.

   Matt
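
(For concreteness, a minimal sketch of suggestion 2; each CHKMEMQ runs the
malloc validation at that point, so the first one to fail brackets the
corruption. This requires a debug build:)

  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  CHKMEMQ;
  ierr = KSPComputeExtremeSingularValues(ksp, &smax, &smin);CHKERRQ(ierr);
  CHKMEMQ;
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);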


  ! create solver and set options for singular value estimation
   call KSPCreate(MPI_COMM_WORLD,ksp,ierr);CHKERRQ(ierr)
   call KSPSetType(ksp,KSPGMRES,ierr);CHKERRQ(ierr)
   call KSPSetTolerances(ksp,solvertol,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_DOUBLE_PRECISION,its,ierr);CHKERRQ(ierr)
   call KSPGMRESSetRestart(ksp, its, ierr);CHKERRQ(ierr)
   call KSPSetComputeSingularValues(ksp, flg, ierr);CHKERRQ(ierr)
   call KSPSetFromOptions(ksp,ierr);CHKERRQ(ierr)

   ! generate random RHS
   call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr)
   call PetscRandomSetFromOptions(rctx,ierr)
   call VecSetRandom(b,rctx,ierr)

   !no preconditioning
   call KSPGetPC(ksp,pc,ierr);CHKERRQ(ierr)
   call PCSetType(pc,PCNONE,ierr);CHKERRQ(ierr)
   call KSPSetOperators(ksp,A,A,SAME_PRECONDITIONER,ierr);CHKERRQ(ierr)
   !solve system
   call KSPSolve(ksp,b,x,ierr);CHKERRQ(ierr)
   call KSPComputeExtremeSingularValues(ksp, smax, smin, ierr);CHKERRQ(ierr)

   call KSPDestroy(ksp,ierr);CHKERRQ(ierr)

 However it crashes:

  [1]PETSC ERROR: ------------------------------------------------------------------------
  [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
  probably memory access out of range
  [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
  [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
  [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory
  corruption errors
  [1]PETSC ERROR: PetscMallocValidate: error detected at
   PetscDefaultSignalHandler() line 157 in /home/lib/petsc-dev1/src/sys/error/signal.c
  [1]PETSC ERROR: Memory at address 0x4aa3f00 is corrupted
  [1]PETSC ERROR: Probably write past beginning or end of array
  [1]PETSC ERROR: Last intact block allocated in KSPSetUp_GMRES() line 73 in
  /home/lib/petsc-dev1/src/ksp/ksp/impls/gmres/gmres.c
  [1]PETSC ERROR: --------------------- Error Message ------------------------------------
  [1]PETSC ERROR: Memory corruption!
  [1]PETSC ERROR:  !
  [1]PETSC ERROR: ------------------------------------------------------------------------
  [1]PETSC ERROR: Petsc Development HG revision:
  f3c119f7ddbfee243b51907a90acab15127ccb39  HG Date: Sun Apr 29 21:37:29
  2012 -0500
  [1]PETSC ERROR: See docs/changes/index.html for recent updates.
  [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
  [1]PETSC ERROR: See docs/index.html for manual pages.
  [1]PETSC ERROR: ------------------------------------------------------------------------
  [1]PETSC ERROR: /home/prog on a openmpi-i named node207 by user Sun May  6
  12:58:24 2012
  [1]PETSC ERROR: Libraries linked from /home/lib/petsc-dev1/openmpi-intel-complex-debug-f/lib
  [1]PETSC ERROR: Configure run at Mon Apr 30 10:20:49 2012
  [1]PETSC ERROR: Configure options --with-blacs-include=/opt/intel/Compiler/11.1/072/mkl/include
  --with-blacs-lib=/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_blacs_openmpi_lp64.a
  --with-blas-lapack-lib=[/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_lp64.a,/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_thread.a,/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_core.a,/opt/intel/Compiler/11.1/072/lib/intel64/libiomp5.a]
  --with-fortran-interfaces=1 --with-mpi-dir=/opt/mpi/intel/openmpi-1.4.2
  --with-petsc-arch=openmpi-intel-complex-debug-f --with-precision=double
  --with-scalapack-include=/opt/intel/Compiler/11.1/072/mkl/include
  --with-scalapack-lib=/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_scalapack_lp64.a
  --with-scalar-type=complex --with-x=0
  PETSC_ARCH=openmpi-intel-complex-debug-f
  [1]PETSC ERROR: ------------------------------------------------------------------------
  [1]PETSC ERROR: PetscMallocValidate() line 138 in
  /home/lib/petsc-dev1/src/sys/memory/mtr.c
  [1]PETSC ERROR: PetscDefaultSignalHandler() line 157 in
  /home/lib/petsc-dev1/src/sys/error/signal.c


 Call stack from debugger:

 opal_memory_ptmalloc2_int_free, FP=7fffd4765300
 opal_memory_ptmalloc2_free_hook, FP=7fffd4765330
 PetscFreeAlign,  FP=7fffd4765370
 PetscTrFreeDefault,  FP=7fffd4765520
 KSPReset_GMRES,  FP=7fffd4765740
 KSPReset,FP=7fffd4765840
 KSPDestroy,  FP=7fffd47659a0
 kspdestroy_, FP=7fffd47659d0


 Any ideas?

 Thanks.

 --
 Regards,
 Alexander




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting 

[petsc-dev] Error during KSPDestroy

2012-05-06 Thread Matthew Knepley
On Sun, May 6, 2012 at 8:42 AM, Alexander Grayver
agrayver at gfz-potsdam.de wrote:

 On 06.05.2012 14:27, Matthew Knepley wrote:

 On Sun, May 6, 2012 at 7:28 AM, Alexander Grayver agrayver at gfz-potsdam.de
  wrote:

 Hello,

 I use KSP and random rhs to compute largest singular value:


  1) Is this the whole program? If not, this can be caused by memory
 corruption somewhere else. This is what I suspect.


 Matt,

 I can reproduce error using attached test programm and this matrix (7 mb):
 http://dl.dropbox.com/u/60982984/A.dat


I run it fine with the latest petsc-dev:

  1.405802e+00

Can you valgrind it on your machine?

Matt




  2) You can put in CHKMEMQ; throughout the code to find exactly where the
 memory corruption happens.

 Matt


  ! create solver and set options for singular value estimation
  call KSPCreate(MPI_COMM_WORLD,ksp,ierr);CHKERRQ(ierr)
  call KSPSetType(ksp,KSPGMRES,ierr);CHKERRQ(ierr)
  call
 KSPSetTolerances(ksp,solvertol,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_DOUBLE_PRECISION,its,ierr);CHKERRQ(ierr)
  call KSPGMRESSetRestart(ksp, its, ierr);CHKERRQ(ierr)
  call KSPSetComputeSingularValues(ksp, flg, ierr);CHKERRQ(ierr)
  call KSPSetFromOptions(ksp,ierr);CHKERRQ(ierr)

  ! generate random RHS
  call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr)
  call PetscRandomSetFromOptions(rctx,ierr)
  call VecSetRandom(b,rctx,ierr)

  !no preconditioning
  call KSPGetPC(ksp,pc,ierr);CHKERRQ(ierr)
  call PCSetType(pc,PCNONE,ierr);CHKERRQ(ierr)
  call KSPSetOperators(ksp,A,A,SAME_PRECONDITIONER,ierr);CHKERRQ(ierr)
  !solve system
  call KSPSolve(ksp,b,x,ierr);CHKERRQ(ierr)
  call KSPComputeExtremeSingularValues(ksp, smax, smin, ierr);CHKERRQ(ierr)

  call KSPDestroy(ksp,ierr);CHKERRQ(ierr)

 However it crashes:

 [1]PETSC ERROR:
 
 [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
 probably memory access out of range
 [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
 [1]PETSC ERROR: or see
 http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSCERROR: 
 or try
 http://valgrind.org on GNU/linux and Apple Mac OS X to find memory
 corruption errors
 [1]PETSC ERROR: PetscMallocValidate: error detected at
  PetscDefaultSignalHandler() line 157 in
 /home/lib/petsc-dev1/src/sys/error/signal.c
 [1]PETSC ERROR: Memory at address 0x4aa3f00 is corrupted
 [1]PETSC ERROR: Probably write past beginning or end of array
 [1]PETSC ERROR: Last intact block allocated in KSPSetUp_GMRES() line 73
 in /home/lib/petsc-dev1/src/ksp/ksp/impls/gmres/gmres.c
 [1]PETSC ERROR: - Error Message
 
 [1]PETSC ERROR: Memory corruption!
 [1]PETSC ERROR:  !
 [1]PETSC ERROR:
 
 [1]PETSC ERROR: Petsc Development HG revision:
 f3c119f7ddbfee243b51907a90acab15127ccb39  HG Date: Sun Apr 29 21:37:29 2012
 -0500
 [1]PETSC ERROR: See docs/changes/index.html for recent updates.
 [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
 [1]PETSC ERROR: See docs/index.html for manual pages.
 [1]PETSC ERROR:
 
 [1]PETSC ERROR: /home/prog on a openmpi-i named node207 by user Sun May
  6 12:58:24 2012
 [1]PETSC ERROR: Libraries linked from
 /home/lib/petsc-dev1/openmpi-intel-complex-debug-f/lib
 [1]PETSC ERROR: Configure run at Mon Apr 30 10:20:49 2012
 [1]PETSC ERROR: Configure options
 --with-blacs-include=/opt/intel/Compiler/11.1/072/mkl/include
 --with-blacs-lib=/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_blacs_openmpi_lp64.a
 --with-blas-lapack-lib=[/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_lp64.a,/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_thread.a,/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_core.a,/opt/intel/Compiler/11.1/072/lib/intel64/libiomp5.a]
 --with-fortran-interfaces=1 --with-mpi-dir=/opt/mpi/intel/openmpi-1.4.2
 --with-petsc-arch=openmpi-intel-complex-debug-f --with-precision=double
 --with-scalapack-include=/opt/intel/Compiler/11.1/072/mkl/include
 --with-scalapack-lib=/opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_scalapack_lp64.a
 --with-scalar-type=complex --with-x=0
 PETSC_ARCH=openmpi-intel-complex-debug-f
 [1]PETSC ERROR:
 
 [1]PETSC ERROR: PetscMallocValidate() line 138 in
 /home/lib/petsc-dev1/src/sys/memory/mtr.c
 [1]PETSC ERROR: PetscDefaultSignalHandler() line 157 in
 /home/lib/petsc-dev1/src/sys/error/signal.c


 Call stack from debugger:

 opal_memory_ptmalloc2_int_free, FP=7fffd4765300
 opal_memory_ptmalloc2_free_hook, FP=7fffd4765330
 PetscFreeAlign,  FP=7fffd4765370
 PetscTrFreeDefault,  FP=7fffd4765520
 KSPReset_GMRES,  FP=7fffd4765740
 KSPReset,FP=7fffd4765840
 KSPDestroy,  FP=7fffd47659a0
 kspdestroy_, FP

[petsc-dev] Error during KSPDestroy

2012-05-06 Thread Matthew Knepley
On Sun, May 6, 2012 at 9:24 AM, Alexander Grayver
agrayver at gfz-potsdam.de wrote:

 Hm, valgrind gives a lot of output like that (see full log in previous
 message):


Can you run this with --download-f-blas-lapack? This sounds much more like
an MKL bug.

   Matt


 ==20287== Invalid read of size 8
 ==20287==at 0x1AE79DA1: mkl_lapack_dlasq3 (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_lapack.so)
 ==20287==by 0x5CF7AE5: mkl_lapack_dlasq3 (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_thread.so)
 ==20287==by 0x1AE79617: mkl_lapack_dlasq2 (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_lapack.so)
 ==20287==by 0x5CF7A15: mkl_lapack_dlasq2 (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_thread.so)
 ==20287==by 0x1AA3E72A: mkl_lapack_dlasq1 (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_lapack.so)
 ==20287==by 0x5CF79C7: mkl_lapack_dlasq1 (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_thread.so)
 ==20287==by 0x1AC44D6C: mkl_lapack_zbdsqr (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_lapack.so)
 ==20287==by 0x5CFFEF8: mkl_lapack_zbdsqr (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_thread.so)
 ==20287==by 0x1AC7D989: mkl_lapack_zgesvd (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_lapack.so)
 ==20287==by 0x5D021C0: mkl_lapack_zgesvd (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_thread.so)
 ==20287==by 0x5899E43: ZGESVD (in
 /opt/intel/Compiler/11.1/072/mkl/lib/em64t/libmkl_intel_lp64.so)
 ==20287==by 0x697017: KSPComputeExtremeSingularValues_GMRES
 (gmreig.c:46)
 ==20287==by 0x69EFBA: KSPComputeExtremeSingularValues (itfunc.c:47)
 ==20287==by 0x4509BC: main (solveTest.c:62)
 ==20287==  Address 0x11363d48 is not stack'd, malloc'd or (recently) free'd


 On 06.05.2012 15:21, Alexander Grayver wrote:

 On 06.05.2012 15:07, Matthew Knepley wrote:

Hello,

 I use KSP and random rhs to compute largest singular value:


  1) Is this the whole program? If not, this can be caused by memory
 corruption somewhere else. This is what I suspect.


 Matt,

 I can reproduce error using attached test programm and this matrix (7 mb):
 http://dl.dropbox.com/u/60982984/A.dat


  I run it fine with the latest petsc-dev:

1.405802e+00

  Can you valgrind it on your machine?


 I did:
 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.log.%p
 /solveTest -ksp_monitor_true_residual -log_summary -mat_type aij -ksp_rtol
 1.0e-10 -malloc off

 The error is better constrained:

 ==20287== Invalid read of size 8
 ==20287==at 0x7874B4C: opal_os_path (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libopen-pal.so.0.0.0)
 ==20287==by 0x75F2E27: orte_session_dir_finalize (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libopen-rte.so.0.0.0)
 ==20287==by 0x76012E8: orte_errmgr_base_error_abort (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libopen-rte.so.0.0.0)
 ==20287==by 0x73396E9: ompi_mpi_abort (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libmpi.so.0.0.2)
 ==20287==by 0x734F36E: PMPI_Abort (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libmpi.so.0.0.2)
 ==20287==by 0x7499AB: PetscDefaultSignalHandler (signal.c:169)
 ==20287==by 0x749267: PetscSignalHandler_Private (signal.c:53)
 ==20287==by 0x924B9DF: ??? (in /lib64/libc-2.11.1.so)
 ==20287==by 0x535D9E: VecDestroyVecs (vector.c:653)
 ==20287==by 0x68B61D: KSPReset_GMRES (gmres.c:258)
 ==20287==by 0x6A9D39: KSPReset (itfunc.c:733)
 ==20287==by 0x6AA839: KSPDestroy (itfunc.c:780)
 ==20287==by 0x4509F8: main (solveTest.c:66)
 ==20287==  Address 0xbde4860 is 0 bytes inside a block of size 2 alloc'd
 ==20287==at 0x4C26B9B: malloc (vg_replace_malloc.c:263)
 ==20287==by 0x92876DF: vasprintf (in /lib64/libc-2.11.1.so)
 ==20287==by 0x9266C67: asprintf (in /lib64/libc-2.11.1.so)
 ==20287==by 0x75F1701: orte_util_convert_vpid_to_string (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libopen-rte.so.0.0.0)
 ==20287==by 0x75F2D4A: orte_session_dir_finalize (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libopen-rte.so.0.0.0)
 ==20287==by 0x76012E8: orte_errmgr_base_error_abort (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libopen-rte.so.0.0.0)
 ==20287==by 0x73396E9: ompi_mpi_abort (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libmpi.so.0.0.2)
 ==20287==by 0x734F36E: PMPI_Abort (in
 /opt/mpi/intel/openmpi-1.4.2/lib/libmpi.so.0.0.2)
 ==20287==by 0x7499AB: PetscDefaultSignalHandler (signal.c:169)
 ==20287==by 0x749267: PetscSignalHandler_Private (signal.c:53)
 ==20287==by 0x924B9DF: ??? (in /lib64/libc-2.11.1.so)
 ==20287==by 0x535D9E: VecDestroyVecs (vector.c:653)
 ==20287==by 0x68B61D: KSPReset_GMRES (gmres.c:258)
 ==20287==by 0x6A9D39: KSPReset (itfunc.c:733)
 ==20287==by 0x6AA839: KSPDestroy (itfunc.c:780)
 ==20287==by 0x4509F8: main (solveTest.c:66)

 Full log is attached.

 Important.
 If I comment this line:
 KSPComputeExtremeSingularValues(ksp, maxx, minx);

 It works

[petsc-dev] DM RefineLevel and CoarsenLevel

2012-05-06 Thread Matthew Knepley
That sounds fine to me.

  Matt

On Sun, May 6, 2012 at 3:44 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 Should the refinement level be copied over by DMCoarsen (and the coarsen
 level be copied by DMRefine)?

 It's useful for diagnostics to be able to define a universal level. If I
 use PCMG and -snes_grid_sequence, there is effectively a sequence like

 DMCreate(comm,&dm0);      // r=0,c=0
 DMRefine(dm0,comm,&dmf);  // r=1,c=0
 DMCoarsen(dmf,comm,&dmc); // r=0,c=1


 I would like a way to identify dmc as being on the same level as dm0.
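
(A hedged sketch of the universal level this would enable, assuming the
copying is in place and using DMGetRefineLevel()/DMGetCoarsenLevel() as the
accessors:)

  /* universal level = times refined - times coarsened; with the copying,
     dm0 -> 0-0 = 0, dmf -> 1-0 = 1, dmc -> 1-1 = 0 (same level as dm0) */
  PetscInt r, c, level;
  DMGetRefineLevel(dm, &r);
  DMGetCoarsenLevel(dm, &c);
  level = r - c;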




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] DM RefineLevel and CoarsenLevel

2012-05-06 Thread Matthew Knepley
On Sun, May 6, 2012 at 5:57 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 Now the next round:

 For semi-coarsening, we used to have stuff like -da_refine_hierarchy_x
 1,1,3 -da_refine_hierarchy_y 2,2,1 -da_refine_hierarchy_z 2,2,1. Two
 changes make this harder now:

 1. You essentially got rid of DMRefineHierarchy (it's not called any
 more), so each call to DMRefine and DMCoarsen has to figure out where they
 are.


This is a huge mistake. The only way unstructured stuff works is through
this interface. That is why I added it.

   Matt


 2. Since the coarse DMs are not reused by PCMG, but instead created again
 using DMCoarsen, we have to figure out how to reverse the refinement
 process so that the same coarse grids get reconstructed again.

 I added a DMRefineHook so that we can port data the other way and I
 modified DMCoarsen_DA and DMRefine_DA to not call DMDACreate{1,2,3}d
 because it eagerly calls DMSetFromOptions before we can set the
 refinement/coarsen level. Unless someone stops me, I'm also going to add
 coarsen_{x,y,z} fields to DM_DA because the refinement ratio may have
 nothing to do with the coarsening ratio.

 I have no idea how to expose semi-coarsening through a C API other than to
 hold the refinement/coarsening path arrays in each DM_DA so that
 refinement/coarsening steps can be retraced.

 On Sun, May 6, 2012 at 1:57 PM, Barry Smith bsmith at mcs.anl.gov wrote:


  Fine

 On May 6, 2012, at 2:44 PM, Jed Brown wrote:

  Should the refinement level be copied over by DMCoarsen (and the
 coarsen level be copied by DMRefine)?
 
  It's useful for diagnostics to be able to define a universal level. If
 I use PCMG and -snes_grid_sequence, there is effectively a sequence like
 
   DMCreate(comm,&dm0);      // r=0,c=0
   DMRefine(dm0,comm,&dmf);  // r=1,c=0
   DMCoarsen(dmf,comm,&dmc); // r=0,c=1
 
 
  I would like a way to identify dmc as being on the same level as dm0.





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Attaching a near null space to an IS

2012-05-06 Thread Matthew Knepley
On Sun, May 6, 2012 at 7:15 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 Matt introduced this concept, he says the IS is a better place to attach
 things.

 http://petsc.cs.iit.edu/petsc/petsc-dev/rev/2ad289ac99e0

 I don't understand why the IS is better (because it's mostly immutable?).
 I'm worried that putting it there is going to be fragile because the near
 null space is not a property of an IS at all.


It's not a property of your matrix either, or you would not need me to tell
you. It's a property of the operator. The operator
is defined by the DM. The near null space is actually a property of a
suboperator, defined by the DM using a field (we are
not allowing arbitrary divisions). The representation of a field in PETSc
is an IS (
http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/DM/DMCreateFieldIS.html)
so it makes sense to attach field information to the IS. Moreover, it
makes a hell of a lat more sense to attach an auxiliary operator (like L_p)
to this IS than to a matrix.

Furthermore, this scheme is completely workable in a nested context. The
user can specify the IS, or pull out the DM IS
and play with it, without a bunch of cumbersome copies hanging around that
we do not want and can't destroy. That is what
would happen with persistent submatrices. Lastly, I am running this for
PyLith and it works great.

   Matt
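
(A minimal sketch of the composition scheme being described; the key string
"nearnullspace" is an assumption taken from the changeset above, so treat it
as illustrative:)

  /* attach a near null space to the IS describing a field; a solver that
     knows the convention can PetscObjectQuery() it back out later */
  MatNullSpace nsp;
  MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);
  PetscObjectCompose((PetscObject)is, "nearnullspace", (PetscObject)nsp);
  MatNullSpaceDestroy(&nsp);   /* the IS now holds its own reference */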

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Attaching a near null space to an IS

2012-05-06 Thread Matthew Knepley
On Sun, May 6, 2012 at 8:34 PM, Barry Smith bsmith at mcs.anl.gov wrote:


  I think it belongs in the DM (or the matrix if it was put in the matrix).
 When you select a subpart of the DM that new DM should contain the near
 null space for the subpart.  The DM associated with the field defining IS
 is where all properties should go, not the IS itself. Of course, we really
 have not yet solidified the getting of the new DM for the field yet, maybe
 after the release.


I do not necessarily disagree in principle, but right now we don't give
smaller DMs, we give ISes. So I think my proposal is the
one that works now.

Second, I think we should be very careful using opaque objects for this. I
consider this a failure when I did it for parts of the system
for preconditioning, which was later changed to ISes in FieldSplit. Talking
to PETSc, the Lord of Linear Algebra, in sets of integers
is the right thing to do. It allows other people to stick their own stuff
in place of ours easily, without junking it up with a lots of
wrapper objects.

   Matt



   Barry

 On May 6, 2012, at 6:30 PM, Matthew Knepley wrote:

  On Sun, May 6, 2012 at 7:15 PM, Jed Brown jedbrown at mcs.anl.gov wrote:
  Matt introduced this concept, he says the IS is a better place to attach
 things.
 
  http://petsc.cs.iit.edu/petsc/petsc-dev/rev/2ad289ac99e0
 
  I don't understand why the IS is better (because it's mostly
 immutable?). I'm worried that putting it there is going to be fragile
 because the near null space is not a property of an IS at all.
 
   It's not a property of your matrix either, or you would not need me to
  tell you. It's a property of the operator. The operator
  is defined by the DM. The near null space is actually a property of a
 suboperator, defined by the DM using a field (we are
  not allowing arbitrary divisions). The representation of a field in
 PETSc is an IS (
 http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/DM/DMCreateFieldIS.html)
 so it makes sense to attach field information to the IS. Moreover, it
  makes a hell of a lot more sense to attach an auxiliary operator (like
 L_p) to this IS than to a matrix.
 
  Furthermore, this scheme is completely workable in a nested context. The
 user can specify the IS, or pull out the DM IS
  and play with it, without a bunch of cumbersome copies hanging around
 that we do not want and can't destroy. That is what
  would happen with persistent submatrices. Lastly, I am running this for
 PyLith and it works great.
 
 Matt
 
  --
  What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
  -- Norbert Wiener




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Error during KSPDestroy

2012-05-07 Thread Matthew Knepley
On Mon, May 7, 2012 at 3:16 AM, Alexander Grayver
agrayver at gfz-potsdam.de wrote:

 On 06.05.2012 22:24, Barry Smith wrote:

   Alexander,

  I cannot reproduce this on my mac with 3 different blas/lapack.


 Barry,

 I'm surprised. I ran it on my home PC with ubuntu and PETSc configured
 from scratch as following:
 --download-mpich --with-fortran-interfaces=1 --download-scalapack
 --download-blacs --with-scalar-type=complex --download-blas-lapack
 --with-precision=double

 And it's still there.
 Please note that all my numbers are complex.


I just ran this with complex instead of real. I get

sh: 1.688716e+00

and no crash.

Matt


   Could you please run the case below but with
 --download-f-blas-lapack   (you forgot the -f last time)? Send us the
 valgrind results. This will tell us the exact line number in dlasq3() that
 is triggering the bad read.


 I did:
 ./configure --with-petsc-arch=openmpi-intel-complex-debug-c
 --download-scalapack --download-blacs --download-f-blas-lapack
 --with-precision=double --with-scalar-type=complex

 And then valgrind program. The first message from log:

 ==27656== Invalid write of size 8
 ==27656==at 0x15A8E9E: dlasq2_ (dlasq2.f:215)
 ==27656==by 0x15A83A4: dlasq1_ (dlasq1.f:135)
 ==27656==by 0x158ACEC: zbdsqr_ (zbdsqr.f:225)
 ==27656==by 0x154EC27: zgesvd_ (zgesvd.f:2038)
 ==27656==by 0x695DD3: KSPComputeExtremeSingularValues_GMRES (gmreig.c:46)
 ==27656==by 0x69DD76: KSPComputeExtremeSingularValues (itfunc.c:47)
 ==27656==by 0x44E98C: main (solveTest.c:62)
 ==27656==  Address 0xfad2d98 is 8 bytes before a block of size 832 alloc'd
 ==27656==at 0x4C25D66: memalign (vg_replace_malloc.c:694)
 ==27656==by 0x4B642B: PetscMallocAlign (mal.c:30)
 ==27656==by 0x687775: KSPSetUp_GMRES (gmres.c:73)
 ==27656==by 0x69FE4A: KSPSetUp (itfunc.c:239)
 ==27656==by 0x6A2058: KSPSolve (itfunc.c:402)
 ==27656==by 0x44E969: main (solveTest.c:61)

 Please find full log attached.

  Barry


 On May 6, 2012, at 9:16 AM, Alexander Grayver wrote:

  On 06.05.2012 15:34, Matthew Knepley wrote:

 On Sun, May 6, 2012 at 9:24 AM, Alexander Grayver agrayver at gfz-potsdam.de wrote:
 Hm, valgrind gives a lot of output like that (see full log in previous
 message):

 Can you run this with --download-f-blas-lapack? This sounds much more
 like an MKL bug.

 I did:
 --download-scalapack --download-blacs --download-blas-lapack
 --with-precision=double --with-scalar-type=complex

 The error is still there. I checked ldd solveTest, mkl is not used for
 sure. This is not an MKL problem I guess:

 ==13600== Invalid read of size 8
 ==13600==at 0x58636AF: dlasq3_ (in /usr/local/lib/liblapack.so.3.2.2)
 ==13600==by 0x5862C84: dlasq2_ (in /usr/local/lib/liblapack.so.3.2.2)
 ==13600==by 0x5861F2C: dlasq1_ (in /usr/local/lib/liblapack.so.3.2.2)
 ==13600==by 0x571A479: zbdsqr_ (in /usr/local/lib/liblapack.so.3.2.2)
 ==13600==by 0x57466A7: zgesvd_ (in /usr/local/lib/liblapack.so.3.2.2)
 ==13600==by 0x694687: KSPComputeExtremeSingularValues_GMRES (gmreig.c:46)
 ==13600==by 0x69C62A: KSPComputeExtremeSingularValues (itfunc.c:47)
 ==13600==by 0x44E02C: main (solveTest.c:62)
 ==13600==  Address 0x10826b90 is 16 bytes before a block of size 832
 alloc'd
 ==13600==at 0x4C25D66: memalign (vg_replace_malloc.c:694)
 ==13600==by 0x4B5ACB: PetscMallocAlign (mal.c:30)
 ==13600==by 0x686181: KSPSetUp_GMRES (gmres.c:73)
 ==13600==by 0x69E6FE: KSPSetUp (itfunc.c:239)
 ==13600==by 0x6A090C: KSPSolve (itfunc.c:402)
 ==13600==by 0x44E009: main (solveTest.c:61)

  The weird thing is that it gives the correct result, so zgesvd works
 fine.

 And also running this program with 10 iterations in valgrind doesn't
  produce the error. The log above is with 100 iterations.
 Without valgrind the error is always there.

 --
 Regards,
 Alexander



 --
 Regards,
 Alexander




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] discrepancy with bcgs, mpi, cusp, and no preconditioning

2012-05-10 Thread Matthew Knepley
On Thu, May 10, 2012 at 3:45 PM, Chetan Jhurani chetan.jhurani at 
gmail.com wrote:

 Hi wizards of -ops- indirections,

 I'm trying to use bcgs without preconditioning, for now, and
 the iterations using -vec_type cusp -mat_type mpiaijcusp don't
 match serial or non-GPU options.  I've attached the test program
 and the 4 outputs (serial/parallel + CPU/GPU).  All this is
 with petsc-dev downloaded now and real scalars.

 Only the parallel GPU results are different starting from
 third residual norm seen in results.txt.  The other three match
 one another.  Am I doing something wrong?

 fbcgs (bcgs with -ksp_bcgs_flexible) works fine with all the
 serial/parallel or CPU/GPU options I've tried.

 Let me know if you need the matrix, rhs, and initial guess
 binary files that are read in by the test program.


That would be great. This looks like a bug that should be tracked down.

Matt


 Thanks,

 Chetan




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Spelling Chebyshev

2012-05-10 Thread Matthew Knepley
On Thu, May 10, 2012 at 4:27 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Thu, May 10, 2012 at 3:23 PM, Sean Farley sean at mcs.anl.gov wrote:

 It would seem to me that the best way to fix this is to just spell his
 name the way he did: Пафнутий Чебышёв. All we need to do is
 update PETSc to use unicode strings. How hard could that be?


 Dmitry, can you translate the help strings and documentation to Russian
 for the 3.3 release? Thanks.


I think Dmitry did this years ago, but it made typing options in too hard.

  Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] As if we need convincing

2012-05-11 Thread Matthew Knepley
http://www.250bpm.com/blog:4

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] SNESDMComputeJacobian()

2012-05-15 Thread Matthew Knepley
On Tue, May 15, 2012 at 10:11 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 - ptr - pointer to a structure that must have a DM as its first entry.
   This ptr must have been passed into SNESDMComputeFunction() as
   the context.

 This is considered very bad form now since resolution-dependent
 information in the function context tends to make it not work with grid
 sequencing or FAS. Are we ready to remove it yet?


How does the DM info get there? I thought we were still using this.

   Matt
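
For reference, a minimal sketch of the legacy convention being quoted (every
field after the leading DM is illustrative):

  typedef struct {
    DM        dm;   /* must be first: the callback casts the context to DM* */
    PetscReal nu;   /* ...user parameters follow... */
  } AppCtx;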

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] patch for block size in VecLoad_HDF5

2012-05-16 Thread Matthew Knepley
On Wed, May 16, 2012 at 6:27 PM, Blaise Bourdin bourdin at lsu.edu wrote:

 Hi,

 ex19 in src/vec/vec/examples/tutorials is still broken, since changing
 block size of a Vec is no longer supported.
 The mechanism in VecLoad_HDF5 was to rely on an undocumented flag
 (-vecload_block_size)

 Shall we require that the block size be set _before_ calling VecLoad (in
 which case, I will push a patch to VecLoad_HDF5), or should the block size
 be obtained from the hdf5 files and set?


That is horrible. Require that the input vector have that blocksize and get
rid of that option. The user can just
as easily use that option to set the blocksize before entry.

   Matt
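
(A minimal sketch of the convention being proposed, with the block size set
on the vector before VecLoad(); the file and dataset names are illustrative:)

  Vec         v;
  PetscViewer viewer;
  VecCreate(PETSC_COMM_WORLD, &v);
  PetscObjectSetName((PetscObject)v, "x");   /* HDF5 dataset to read */
  VecSetBlockSize(v, 3);                     /* caller declares the block size up front */
  PetscViewerHDF5Open(PETSC_COMM_WORLD, "data.h5", FILE_MODE_READ, &viewer);
  VecLoad(v, viewer);                        /* no -vecload_block_size needed */
  PetscViewerDestroy(&viewer);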



 Blaise

 --
 Department of Mathematics and Center for Computation  Technology
 Louisiana State University, Baton Rouge, LA 70803, USA
 Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276
 http://www.math.lsu.edu/~bourdin










-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] A PETSc question on scicomp I can't completely answer (behavior of PETScSetxxx)

2012-05-23 Thread Matthew Knepley
On Wed, May 23, 2012 at 7:23 AM, Aron Ahmadia aron at ahmadia.net wrote:


 http://scicomp.stackexchange.com/questions/2303/petscs-xxxsetxxx-methods-own-pointer-or-copy-values

 My understanding is that Set for the most part acts like a
 pass-by-reference, and my answer reflects that (
 http://scicomp.stackexchange.com/a/2311/9)

 If there's a better answer than that, Jed or Matt should post it or fix
 mine :)


Replied.

   Matt


 A




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Multigrid is confusing

2012-05-24 Thread Matthew Knepley
On Thu, May 24, 2012 at 4:16 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Thu, May 24, 2012 at 3:10 PM, Barry Smith bsmith at mcs.anl.gov wrote:

 Absolutely. And if it turns out to be too much of a pain to write and
 maintain such a kernel there is something wrong with our programming model
 and code development system. The right system should make all the
 complexity fall away; when the complexity becomes too much of a bear you
 know you have the wrong system.


 So do we manage blocks using internal C++ templates, templates in C, C
 generated using some other system (m4 anyone?), or something else entirely?


Yes! Finally, we acknowledge that this is a problem.

1) C++ templates are not a solution to anything. ANYTHING.

2) I am assuming templates in C would work somewhat like a templating
engine.
I tried this for the last TOMS paper with Andy. It was just not a big
payoff for
the work put in, and definitely did not justify incorporating another
package.

3) I prefer C generated from another system, like the one I use for FEM
(which I am
not attached to). We will definitely need this for GPU kernels, and I
am guessing
thread kernels if they are going to be worth something.

Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Need a CFLAGS that does is *not* included in the link

2012-05-24 Thread Matthew Knepley
On Thu, May 24, 2012 at 10:09 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 Building PETSc with clang++ produces a warning about compiling *.c as C++
 being deprecated. To silence the warning, we would need to pass -x c++ to
 the compiler, but NOT to the linker. CFLAGS is currently also passed to the
 linker. Is this something we want to fix?


According to autoconf, CFLAGS goes with both. CPPFLAGS is only for the
compiler.

   Matt


 Clang used to SEGV when -x c++ is passed to the linker. Now (latest SVN)
 it just interprets the object file as C++ source (which obviously produces
 a ton of garbage).

 http://llvm.org/bugs/show_bug.cgi?id=12924




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Multigrid is confusing

2012-05-25 Thread Matthew Knepley
On Thu, May 24, 2012 at 10:25 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Thu, May 24, 2012 at 9:17 PM, Barry Smith bsmith at mcs.anl.gov wrote:

   Sarcasm on --
   Yes, of course. It is essentially done, write it up and I want it on my
 desk by 8 AM Friday.
  -- Sarcasm off

   So what if we know the primitives. How does it give us the language
 to express these kernels and the tools to do the processing?


 I hate C++ templates as much as the next guy, but I think they can do this
 without insanity. I think we could also do it with a handful of macro hacks
 and C inline functions.

 Sure, we might be able to make a cool DSL for expressing this stuff, but
 I'm afraid we'd end up spending more time and debugging confusion making it
 play with everything else than we'd gain by having the DSL in the first
 place.

 I could be totally wrong.


I think it's wrong, but not for the obvious reason. You can do very elegant
things with macros, but
they always end up being thrown away when you have to maintain them and
have others understand
them.

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] ctetgen into its own repository

2012-05-25 Thread Matthew Knepley
On Fri, May 25, 2012 at 4:03 PM, Barry Smith bsmith at mcs.anl.gov wrote:


Because of the tetgen license we cannot include ctetgen directly in the
 PETSc tarball.  Thus we have forked it off into its own repository and it
 is available for users as --download-ctetgen.  Developers may choose to hg
 clone the repository directly into petsc-dev/externalpackages if they have
 any need to work on the ctetgen source code.


How can the fucking license say that we cannot include ASCII code in our
goddamn release? What if I
take a picture of the code and include the JPG? I already put in a
configure option to turn it on. This is
ridiculous. Are you sure that we can include the letter P? I think Sesame
Street has rights to that.

Matt



   Barry





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] OpenMP compiler options

2012-05-29 Thread Matthew Knepley
On Tue, May 29, 2012 at 3:52 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 The OpenMP flags do not definitively identify that OpenMP is used. In
 particular, IBM XL interprets Cray's option -h omp as being equivalent to
 -soname omp, then silently ignores the OpenMP pragmas. We can perhaps
 fix this instance by moving -qsmp up in the list, but we may eventually
 need to move it to compilerOptions.py.


Move it up, and add it to the comment. And people think OpenMP is the easy
way?

   Matt


   def configureLibrary(self):
     ''' Checks for -fopenmp compiler flag'''
     ''' Needs to check if OpenMP actually exists and works '''
     self.setCompilers.pushLanguage('C')
     #
     for flag in ["-fopenmp", # Gnu
                  "-h omp",   # Cray
                  "-mp",      # Portland Group
                  "-Qopenmp", # Intel windows
                  "-openmp",  # Intel
                  "",         # Empty, if compiler automatically accepts openmp
                  "-xopenmp", # Sun
                  "+Oopenmp", # HP
                  "-qsmp",    # IBM XL C/C++
                  "/openmp"   # Microsoft Visual Studio
                  ]:
       if self.setCompilers.checkCompilerFlag(flag):
         ompflag = flag
         break
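
(One concrete way to do the "actually exists and works" check: compile a
snippet that refuses to build unless the flag really enabled OpenMP, since a
conforming compiler must define _OPENMP when it is on. A hedged sketch of
such a configure test body:)

  /* a silently-ignored flag (the XL "-h omp" case) leaves _OPENMP
     undefined, so this test fails to compile and the flag is rejected */
  #if !defined(_OPENMP)
  #error OpenMP not enabled by this flag
  #endif
  #include <omp.h>
  int main(void) { return omp_get_max_threads() > 0 ? 0 : 1; }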




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] [petsc-users] VECMPICUSP with ghosted vector

2012-02-06 Thread Matthew Knepley
On Mon, Feb 6, 2012 at 11:09 AM, Barry Smith bsmith at mcs.anl.gov wrote:



   Fredrik,

This question belongs on petsc-dev at mcs.anl.gov since it involves
 additions/extensions to PETSc so I am moving the discussion over to there.

We have not done the required work to have ghosted vectors work with
 CUSP yet, so this will require some additions to PETSc. We can help you
 with that process but since the PETSc team does not have a CUSP person
 developing PETSc full time you will need to actually contribute some code, but
 I'll try to guide you in the right direction.

 The first observation is that ghosted vectors in PETSc are actually
 handled with largely the same code as VECMPI vectors (with just no ghost
 points by default) so in theory little work needs to be done to get the
 functionality you need. What makes the needed changes non-trivial is the
 current interface where one calls VecCreateGhost() to create the vectors.
 This is one of our easy interfaces and it is somewhat legacy in that
 there is no way to control the types of the vectors since it creates
 everything about the vector in one step.   Note that we have the same
 issues with regard to the pthread versions of the PETSc vectors and
 ghosting.
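
 For concreteness, that interface is used like this; the sizes and ghost
 indices below are illustrative only:

   Vec            xg,xl;
   PetscInt       nlocal = 4,nghost = 2,ghosts[2] = {0,3};
   PetscErrorCode ierr;

   ierr = VecCreateGhost(PETSC_COMM_WORLD,nlocal,PETSC_DECIDE,nghost,ghosts,&xg);CHKERRQ(ierr);
   /* ... VecSetValues() on xg ... */
   ierr = VecGhostUpdateBegin(xg,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
   ierr = VecGhostUpdateEnd(xg,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
   ierr = VecGhostGetLocalForm(xg,&xl);CHKERRQ(ierr); /* xl exposes owned + ghost entries */
   ierr = VecGhostRestoreLocalForm(xg,&xl);CHKERRQ(ierr);
   ierr = VecDestroy(&xg);CHKERRQ(ierr);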

So before we even talk about what code to change/add we need to decide
 on the interface.  Presumably you want to be able to decide at runtime
 whether to use regular VECMPI, VECMPICUSP or VECMPIPTHREAD  in your ghosted
 vectors. How do we get that information in there? An additional argument to
 VecCreateGhost() (ugly?)? Options database (by calling VecSetFromOptions()
 ?), other ways?  So for example one could have:

 VecCreateGhost(..)
 VecSetFromOptions(..)

 to set the specific type cusp or pthread? What about

 VecCreateGhost(..)
 VecSetType(..,VECMPICUSP);

 which as you note doesn't currently work. Note that the PTHREAD version
 needs to do its own memory allocation so essentially has to undo much of
 what VecCreateGhost() already did, is that a bad thing?

 Or do we get rid of VecCreateGhost() completely and change the model to
 something like

 VecCreate()
 VecSetType()
 VecSetGhosted()

 or

 VecCreate()
 VecSetTypeFromOptions()
 VecSetGhosted()

 or even

 VecCreate()
 VecSetGhosted()   which will just default to regular MPI ghosted.

 this model allows a clean implementation that doesn't require undoing
 previously built internals.


I am for the second model, just absorbing ghosting into the current
implementations.
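
Spelled out, the second model reads something like this (a sketch:
VecSetGhosted() is the call proposed in this thread, it does not exist yet):

  Vec      X;
  PetscInt nlocal = 4,nghost = 2,ghosts[2] = {0,3}; /* illustrative */

  ierr = VecCreate(PETSC_COMM_WORLD,&X);CHKERRQ(ierr);
  ierr = VecSetSizes(X,nlocal,PETSC_DECIDE);CHKERRQ(ierr);
  ierr = VecSetType(X,VECMPICUSP);CHKERRQ(ierr);       /* or VecSetFromOptions() */
  ierr = VecSetGhosted(X,nghost,ghosts);CHKERRQ(ierr); /* HYPOTHETICAL: proposed above */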

Matt



 Everyone chime in with observations so we can figure out any
 refactorizations needed.

   Barry





 On Feb 6, 2012, at 8:33 AM, Fredrik Heffer Valdmanis wrote:

  Hi,
 
  In FEniCS, we use ghosted vectors for parallel computations, with the
 functions
 
  VecCreateGhost
  VecGhostGetLocalForm
 
  As I am integrating the new GPU-based vectors and matrices in FEniCS, I
 want the ghosted vectors to be of type VECMPICUSP. I have tried to do this
 by calling VecSetType after creating the vector, but that makes
 VecGhostGetLocalForm give an error.
 
  Is it possible to set the type to be mpicusp when using ghost vectors,
 without changing much of the code? If so, how?
 
  If not, how would you recommend I proceed to work with mpicusp vectors
 in this context?
 
  Thanks!
 
  --
  Fredrik




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] [petsc-users] VECMPICUSP with ghosted vector

2012-02-06 Thread Matthew Knepley
On Mon, Feb 6, 2012 at 12:40 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Mon, Feb 6, 2012 at 20:35, Matthew Knepley knepley at gmail.com wrote:

 VecCreate()
 VecSetType()
 VecSetGhosted()


 Is there a compelling reason for memory to be allocated before VecSetUp()?
 Deferring it would make it easier to specify what memory to use or specify
 ghosting in any order.

 My preference is that ghosting is orthogonal to type, to the extent that
 it would make sense for the storage format (non-contiguous AMR grids might
 not be).


I don't like this because it would mean calling VecSetUp() all over the
place. Couldn't the ghosting flag be on the same
level as the sizes?

  Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] [petsc-users] VECMPICUSP with ghosted vector

2012-02-06 Thread Matthew Knepley
On Mon, Feb 6, 2012 at 12:47 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Mon, Feb 6, 2012 at 21:42, Matthew Knepley knepley at gmail.com wrote:

 I don't like this because it would mean calling VecSetUp() all over the
 place. Couldn't the ghosting flag be on the same
 level as the sizes?


 Maybe VecSetUp() is wrong because that would imply collective. This memory
 allocation is simple and need not be collective.

 Ghosting information is an array, so placing it in VecSetSizes() would
 seem unnatural to me. I wouldn't really want
 VecSetGhosts(Vec,PetscInt,const PetscInt*) to be order-dependent with
 respect to VecSetType(), but maybe the VecSetUp() would be too messy.


I needed to be more specific. I think VecSetSizes(local, global, ghost)
would work. Then VecSetGhostIndices() can be called anytime,
and even remapped.
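
That is (again a sketch: the extra VecSetSizes() argument and
VecSetGhostIndices() are proposals in this thread, not existing calls):

  ierr = VecCreate(PETSC_COMM_WORLD,&X);CHKERRQ(ierr);
  ierr = VecSetSizes(X,nlocal,PETSC_DECIDE,nghost);CHKERRQ(ierr); /* HYPOTHETICAL 4th argument */
  ierr = VecSetType(X,VECMPICUSP);CHKERRQ(ierr);
  ierr = VecSetGhostIndices(X,ghosts);CHKERRQ(ierr);              /* HYPOTHETICAL; can be remapped later */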

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] [petsc-users] VECMPICUSP with ghosted vector

2012-02-06 Thread Matthew Knepley
On Mon, Feb 6, 2012 at 1:11 PM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Feb 6, 2012, at 12:47 PM, Jed Brown wrote:

  On Mon, Feb 6, 2012 at 21:42, Matthew Knepley knepley at gmail.com wrote:
  I don't like this because it would mean calling VecSetUp() all over the
 place. Couldn't the ghosting flag be on the same
  level as the sizes?
 
  Maybe VecSetUp() is wrong because that would imply collective. This
 memory allocation is simple and need not be collective.
 
  Ghosting information is an array, so placing it in VecSetSizes() would
 seem unnatural to me. I wouldn't really want
 VecSetGhosts(Vec,PetscInt,const PetscInt*) to be order-dependent with
 respect to VecSetType(), but maybe the VecSetUp() would be too messy.

Only some vectors support ghosting, so the usual PETSc way (like with
 KSPGMRESRestart()) is to call the specific setting routines ONLY AFTER
 the type has been set.  Otherwise all kinds of oddball type specific stuff
 needs to be cached in the object and then pulled out later; possible but is
 that desirable? Who decides what can be set before the type and what can be
 set after? Having a single rule, anything appropriate for a subset of the
 types must be set after the type is set is a nice simple model.

   On the other hand you could argue that ALL vector types should support
 ghosting as a natural thing (with sequential vectors just have 0 length
 ghosts conceptually) then it would be desirable to allow setting the ghost
 information in any ordering.


I will argue this.


   Sadly we now pretty much require MatSetUp() or a
 MatXXXSetPreallocation() to be called, so why not have VecSetUp()
 always called?


Because I don't think we need it and it is another layer of complication
for the user and us. I think
we could make it work where it was called automatically when necessary, but
that adds another
headache for maintenance and extension.

Matt


   We have not converged yet,

Barry





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] [petsc-users] VECMPICUSP with ghosted vector

2012-02-06 Thread Matthew Knepley
On Mon, Feb 6, 2012 at 1:23 PM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Feb 6, 2012, at 1:14 PM, Matthew Knepley wrote:

  On Mon, Feb 6, 2012 at 1:11 PM, Barry Smith bsmith at mcs.anl.gov wrote:
 
  On Feb 6, 2012, at 12:47 PM, Jed Brown wrote:
 
   On Mon, Feb 6, 2012 at 21:42, Matthew Knepley knepley at gmail.com
 wrote:
   I don't like this because it would mean calling VecSetUp() all over
 the place. Couldn't the ghosting flag be on the same
   level as the sizes?
  
   Maybe VecSetUp() is wrong because that would imply collective. This
 memory allocation is simple and need not be collective.
  
   Ghosting information is an array, so placing it in VecSetSizes() would
 seem unnatural to me. I wouldn't really want
 VecSetGhosts(Vec,PetscInt,const PetscInt*) to be order-dependent with
 respect to VecSetType(), but maybe the VecSetUp() would be too messy.
 
Only some vectors support ghosting, so the usual PETSc way (like with
 KSPGMRESRestart()) is to call the specific setting routines ONLY AFTER
 the type has been set.  Otherwise all kinds of oddball type specific stuff
 needs to be cached in the object and then pulled out later; possible but is
 that desirable? Who decides what can be set before the type and what can be
 set after? Having a single rule, anything appropriate for a subset of the
 types must be set after the type is set is a nice simple model.
 
On the other hand you could argue that ALL vector types should support
 ghosting as a natural thing (with sequential vectors just have 0 length
 ghosts conceptually) then it would be desirable to allow setting the ghost
 information in any ordering.
 
  I will argue this.

Ok, then just like VecSetSizes() we stash this information if given
 before the type is set and use it when the type is set.  However if it is
 set after the type is set (and after the sizes are set) then we need to
 destroy the old datastructure and build a new one which means messier code.
   By instead actually allocating the data structure at VecSetUp() the code
 is cleaner because we never need to take down and rebuild a data structure
 and yet order doesn't matter.  Users WILL need to call VecSetUp() before
 VecSetValues() and possibly a few other things like they do with Mat now.


We just disallow setting it after the type, just like sizes. I don't see
the argument against this.

   Matt



   Barry

 
Sadly we now pretty much require MatSetUp() or a
 MatXXXSetPreallocation() to be called so why not always have VecSetUp()
 always called?
 
  Because I don't think we need it and it is another layer of complication
 for the user and us. I think
  we could make it work where it was called automatically when necessary,
 but that adds another
  headache for maintenance and extension.
 
  Matt
 
We have not converged yet,
 
 Barry
 
 
 
 
 
  --
  What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
  -- Norbert Wiener




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] [petsc-users] VECMPICUSP with ghosted vector

2012-02-06 Thread Matthew Knepley
On Mon, Feb 6, 2012 at 1:30 PM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Feb 6, 2012, at 1:27 PM, Matthew Knepley wrote:

  On Mon, Feb 6, 2012 at 1:23 PM, Barry Smith bsmith at mcs.anl.gov wrote:
 
  On Feb 6, 2012, at 1:14 PM, Matthew Knepley wrote:
 
   On Mon, Feb 6, 2012 at 1:11 PM, Barry Smith bsmith at mcs.anl.gov
 wrote:
  
   On Feb 6, 2012, at 12:47 PM, Jed Brown wrote:
  
On Mon, Feb 6, 2012 at 21:42, Matthew Knepley knepley at gmail.com
 wrote:
I don't like this because it would mean calling VecSetUp() all over
 the place. Couldn't the ghosting flag be on the same
level as the sizes?
   
Maybe VecSetUp() is wrong because that would imply collective. This
 memory allocation is simple and need not be collective.
   
Ghosting information is an array, so placing it in VecSetSizes()
 would seem unnatural to me. I wouldn't really want
 VecSetGhosts(Vec,PetscInt,const PetscInt*) to be order-dependent with
 respect to VecSetType(), but maybe the VecSetUp() would be too messy.
  
 Only some vectors support ghosting, so the usual PETSc way (like
 with KSPGMRESRestart()) is to call the specific setting routines ONLY
 AFTER the type has been set.  Otherwise all kinds of oddball type specific
 stuff needs to be cached in the object and then pulled out later; possible
 but is that desirable? Who decides what can be set before the type and what
 can be set after? Having a single rule, anything appropriate for a subset
 of the types must be set after the type is set is a nice simple model.
  
 On the other hand you could argue that ALL vector types should
 support ghosting as a natural thing (with sequential vectors just have 0
 length ghosts conceptually) then it would be desirable to allow setting the
 ghost information in any ordering.
  
   I will argue this.
 
Ok, then just like VecSetSizes() we stash this information if given
 before the type is set and use it when the type is set.  However if it is
 set after the type is set (and after the sizes are set) then we need to
 destroy the old datastructure and build a new one which means messier code.
   By instead actually allocating the data structure at VecSetUp() the code
 is cleaner because we never need to take down and rebuild a data structure
 and yet order doesn't matter.  Users WILL need to call VecSetUp() before
 VecSetValues() and possibly a few other things like they do with Mat now.
 
  We just disallow setting it after the type, just like sizes. I don't see
 the argument against this.

We allow setting the sizes after the type.


Okay, so the current semantics are: VecSetSizes() wipes out the old Vec and
creates one of the right size. I am fine
with that, and would just add a ghost size. I don't think this complicates
what is already there much at all.

   Matt



 
 Matt
 
 
Barry
 
  
 Sadly we now pretty much require MatSetUp() or a
 MatXXXSetPreallocation() to be called so why not always have VecSetUp()
 always called?
  
    Because I don't think we need it and it is another layer of
 complication for the user and us. I think
   we could make it work where it was called automatically when
 necessary, but that adds another
   headache for maintenance and extension.
  
   Matt
  
 We have not converged yet,
  
  Barry
  
  
  
  
  
   --
   What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
   -- Norbert Wiener
 
 
 
 
  --
  What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
  -- Norbert Wiener




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] [petsc-users] VECMPICUSP with ghosted vector

2012-02-06 Thread Matthew Knepley
On Mon, Feb 6, 2012 at 1:42 PM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Feb 6, 2012, at 1:39 PM, Matthew Knepley wrote:

  On Mon, Feb 6, 2012 at 1:30 PM, Barry Smith bsmith at mcs.anl.gov wrote:
 
  On Feb 6, 2012, at 1:27 PM, Matthew Knepley wrote:
 
   On Mon, Feb 6, 2012 at 1:23 PM, Barry Smith bsmith at mcs.anl.gov
 wrote:
  
   On Feb 6, 2012, at 1:14 PM, Matthew Knepley wrote:
  
On Mon, Feb 6, 2012 at 1:11 PM, Barry Smith bsmith at mcs.anl.gov
 wrote:
   
On Feb 6, 2012, at 12:47 PM, Jed Brown wrote:
   
 On Mon, Feb 6, 2012 at 21:42, Matthew Knepley knepley at gmail.com
 wrote:
 I don't like this because it would mean calling VecSetUp() all
 over the place. Couldn't the ghosting flag be on the same
 level as the sizes?

 Maybe VecSetUp() is wrong because that would imply collective.
 This memory allocation is simple and need not be collective.

 Ghosting information is an array, so placing it in VecSetSizes()
 would seem unnatural to me. I wouldn't really want
 VecSetGhosts(Vec,PetscInt,const PetscInt*) to be order-dependent with
 respect to VecSetType(), but maybe the VecSetUp() would be too messy.
   
  Only some vectors support ghosting, so the usual PETSc way (like
 with KSPGMRESRestart()) is to call the specific setting routines ONLY
 AFTER the type has been set.  Otherwise all kinds of oddball type specific
 stuff needs to be cached in the object and then pulled out later; possible
 but is that desirable? Who decides what can be set before the type and what
 can be set after? Having a single rule, anything appropriate for a subset
 of the types must be set after the type is set is a nice simple model.
   
  On the other hand you could argue that ALL vector types should
 support ghosting as a natural thing (with sequential vectors just have 0
 length ghosts conceptually) then it would be desirable to allow setting the
 ghost information in any ordering.
   
I will argue this.
  
 Ok, then just like VecSetSizes() we stash this information if given
 before the type is set and use it when the type is set.  However if it is
 set after the type is set (and after the sizes are set) then we need to
 destroy the old datastructure and build a new one which means messier code.
   By instead actually allocating the data structure at VecSetUp() the code
 is cleaner because we never need to take down and rebuild a data structure
 and yet order doesn't matter.  Users WILL need to call VecSetUp() before
 VecSetValues() and possibly a few other things like they do with Mat now.
  
   We just disallow setting it after the type, just like sizes. I don't
 see the argument against this.
 
We allow setting the sizes after the type.
 
  Okay, so the current semantics are: VecSetSizes() wipes out the old Vec
 and creates one of the right size.

   No, if the vector has already been built then VecSetSizes() errors out;
 it does not build a vector of the new size. If the vector type has been set
 but the sizes not set then VecSetSizes() triggers actually building the
 data structures.


Okay, so this is already confusing. Having SetSizes() store the sizes is
correct, since they are common to every type,
as I think ghosting should be. The weird part is SetSizes() calling SetUp
(actually its equivalent ops->create). I don't
think removing this will break much code since most people rely on SetType
or SetFromOptions.

   Matt



   Barry

  I am fine
  with that, and would just add a ghost size. I don't think this
 complicates what is already there much at all.
 
 Matt
 
 
  
  Matt
  
  
 Barry
  
   
  Sadly we now pretty much require MatSetUp() or a
 MatXXXSetPreallocation() to be called so why not always have VecSetUp()
 always called?
   
 Because I don't think we need it and it is another layer of
 complication for the user and us. I think
we could make it work where it was called automatically when
 necessary, but that adds another
headache for maintenance and extension.
   
Matt
   
  We have not converged yet,
   
   Barry
   
   
   
   
   
--
What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
-- Norbert Wiener
  
  
  
  
   --
   What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
   -- Norbert Wiener
 
 
 
 
  --
  What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
  -- Norbert Wiener




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

[petsc-dev] petsc-dev on bitbucket

2012-02-07 Thread Matthew Knepley
On Tue, Feb 7, 2012 at 8:14 PM, Sean Farley sean at mcs.anl.gov wrote:

 Yoo, I've switched petsc-dev over to bitbucket since petsc.cs.iit.edu is
 down. Pull/push from https://bitbucket.org/petsc/petsc-dev

 Sean - Barry impersonating me


I have pushed to bring it up to date with the latest master.

   Matt


 Since petsc.cs is down, I don't have everyone's public SSH key. For now,
 if you need push access, just sign up for a free bitbucket account (can
 also use OpenID) and reply to this email with your bitbucket username.
 Barry, Matt, Jed, and Peter have write access already. Hong (hzhang),
 Dmitry (karpeev), and Lois (curfman) should have write access as long as
 the username in parentheses is your correct bitbucket name.

 Also, BuildSystem is here:

 https://bitbucket.org/petsc/buildsystem

 where the same people as above have write access.

 P.S. - To change your current default push url in mercurial, just edit
 $PETSC_DIR/.hg/hgrc

 [paths]
 default = https://bitbucket.org/petsc/petsc-dev

 or if you uploaded your public SSH key to bitbucket already

 [paths]
 default = ssh://hg at bitbucket.org/petsc/petsc-dev

 More info here:

 http://confluence.atlassian.com/display/BITBUCKET/Using+the+SSH+protocol+with+bitbucket

 P.P.S - While we're at it, here is a nice shortcut in mercurial. Place
 this in your ~/.hgrc

 [extensions]
 schemes =

 [schemes]
 petsc = ssh://hg at bitbucket.org/petsc

 You can now clone like so: `hg clone petsc://petsc-dev` or `hg clone
 petsc://buildsystem`




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] FAS with TS

2012-02-07 Thread Matthew Knepley
On Tue, Feb 7, 2012 at 11:50 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Wed, Feb 8, 2012 at 04:19, Peter Brune prbrune at gmail.com wrote:



 On Tue, Feb 7, 2012 at 6:40 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 Suppose I want to solve a finite time step problem using FAS. The time
 step will be defined something like

  static PetscErrorCode SNESTSFormFunction_ARKIMEX(SNES snes,Vec X,Vec F,TS ts)
  {
    TS_ARKIMEX *ark = (TS_ARKIMEX*)ts->data;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = VecAXPBYPCZ(ark->Ydot,-ark->shift,ark->shift,0,ark->Z,X);CHKERRQ(ierr); /* Ydot = shift*(X-Z) */
    ierr = TSComputeIFunction(ts,ark->stage_time,X,ark->Ydot,F,ark->imex);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }


 where ark->Z was determined from previous stages and ark->Ydot is a work
 vector. So FAS creates some hierarchy, but it isn't very forthcoming about
 showing the hierarchy to the user. But I need a way to

 1. Restrict ark-Z to the coarse level and obtain the work vector
 ark-Ydot somewhere that I can reuse it. The user might also have to
 restrict auxiliary variables to coarse grids. It's conceivable that they
 would want to do this in a different way from the restriction/prolongation
 for the solution variables. For example, if you were using upwinded R and P
 for a hyperbolic problem, but you needed to restrict coefficients (like
 bathymetry). Or they might want to sample differently out of some file. So
 I think we need an auxiliary transfer callback that is at least called
 every top-level SNESSolve() and optionally also called in every transfer
 (because it could be homogenizing coefficient structure that is a nonlinear
 function of state).


 The restriction (injection, etc) might be one step too hard at this
 point.  We have to define the problem at each level in FAS and grid
 sequencing so we might as well make it easy.  Is it possible to have the
 callback for the SNES point to things that have to be prolonged or
 restricted?  This could be called both before FAS and before grid
 sequencing to get the coefficients up to the next level.  We could provide
 some sort of default that goes through a number of user-provided vectors on
 a level given vectors on another level, and inject them.  A user with
 user-created multilevel coefficients could provide their own similar
 callbacks.


 So coefficients usually won't live on the same DM as the solution. This
 case with TS is easier in that regard, but it can't be the sole driver of
 design. There needs to be a general function to perform level setup.



I just want to point out that Jed envisions that coefficients (and maybe
subproblems, etc) cannot be accommodated on the
same DM. I agree. However, this silly idea that we can make DMs all over
the place with no cost, like DAs, if they contain
all the mesh information, is just wrong. I think this is a good argument
for having both a topology object and a DM handling
layout/solver information. What is the counter-argument?

Matt



 /* called by SNESSetUp_FAS() before restricting a nonlinear solution to a
 coarser level (but usually only used the first time unless this is doing
 solution-dependent homogenization) */
 typedef PetscErrorCode (*SNESFASRestrictHook)(SNES fine,Vec Xfine,SNES
 coarse);
 PetscErrorCode SNESFASSetRestrictHook(SNES fine,SNESFASRestrictHook
 hook,void *ctx);

 /* called in SNESSolve() each time a new solution is prolongated */
 typedef PetscErrorCode (*SNESGridSequenceInterpolateHook)(SNES coarse,Vec
 Xcoarse,SNES fine);
 PetscErrorCode SNESGridSequenceSetInterpolateHook(SNES
 coarse,SNESGridSequenceInterpolateHook hook,void *ctx);
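
  A user hook matching the proposed restrict signature might look roughly
  like this (a sketch: the hook itself is only proposed above, though
  SNESGetDM(), DMCreateInterpolation(), MatRestrict(), and
  PetscObjectQuery()/PetscObjectCompose() are existing calls; the
  "bathymetry" name is illustrative):

  static PetscErrorCode RestrictBathymetry(SNES fine,Vec Xfine,SNES coarse)
  {
    DM             dmf,dmc;
    Mat            R;
    Vec            rscale,bf,bc;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = SNESGetDM(fine,&dmf);CHKERRQ(ierr);
    ierr = SNESGetDM(coarse,&dmc);CHKERRQ(ierr);
    /* For this sketch the coefficient is stashed on the solution DM; the
       discussion above notes it will often live on a different DM. */
    ierr = PetscObjectQuery((PetscObject)dmf,"bathymetry",(PetscObject*)&bf);CHKERRQ(ierr);
    ierr = DMCreateGlobalVector(dmc,&bc);CHKERRQ(ierr);
    ierr = DMCreateInterpolation(dmc,dmf,&R,&rscale);CHKERRQ(ierr);
    ierr = MatRestrict(R,bf,bc);CHKERRQ(ierr);
    ierr = PetscObjectCompose((PetscObject)dmc,"bathymetry",(PetscObject)bc);CHKERRQ(ierr);
    ierr = VecDestroy(&bc);CHKERRQ(ierr); /* the coarse DM now holds a reference */
    ierr = VecDestroy(&rscale);CHKERRQ(ierr);
    ierr = MatDestroy(&R);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }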


 I think we also need to add state to DMCreateInterpolation() so that we
 can make it nonlinear when appropriate.



 2. Put the DM from this SNES into the TS that calls the user's
 IFunction. The easiest way is to have TSPushDM(TS,DM) that just jams it in
 there, caching the old DM (so the user's call to TSGetDM() works
 correctly), and TSPopDM(TS). This is ugly and is scary if the user does
 something weird like TSGetSolution() (returning the state at the beginning
 of the step, not the current value at the stage). This is something that
 doesn't make semantic sense except maybe if they have some weird
 diagnostics, but TSGetIJacobian() might be used for caching, and I'm scared
 of the semantics it would involve to support this sort of caching.


 This option seems like it would be scary complex.  I don't like it.


 I think it's likely simpler than the others, at least as a first
 implementation. I'm not wild about it either, but I'm not sure the
 alternatives are better.




 The alternative is to make different TSs for each level and somehow
 locate the correct TS (maybe cache it in the SNES hierarchy). I think this
 could be quite confusing to restrict all the members to the coarse grid.
 But having a multi-level TS object may eventually be sensible because
 multilevel time integration schemes 

[petsc-dev] FAS with TS

2012-02-08 Thread Matthew Knepley
On Wed, Feb 8, 2012 at 12:35 AM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Wed, Feb 8, 2012 at 08:57, Matthew Knepley knepley at gmail.com wrote:

 I just want to point out that Jed envisions that coefficients (and maybe
 subproblems, etc) cannot be accommodated on the
 same DM. I agree. However, this silly idea that we can make DMs all over
 the place with no cost, like DAs, if they contain
 all the mesh information, is just wrong. I think this is a good argument
 for having both a topology object and a DM handling
 layout/solver information. What is the counter-argument?


 Why can't we have multiple DMs that internally share topology? Then each
 implementation can share or not share as much as they like. Some DMs might
 also share topological information between levels. I don't think it makes
 sense to encode a specific sharing model into the type system.


So you would have some weird call to that DM that says make another DM and
share the internal state? That sounds
error prone and hard to inspect from outside. Things we complain about when
other people do them.
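
For concreteness, the call being debated would be something like this (the
name is hypothetical; any clone-style DM API that shares topology would do):

  DM dmAux;
  /* HYPOTHETICAL: new DM that shares dm's topology internally but is free
     to carry its own layout for coefficients */
  ierr = DMCloneTopology(dm,&dmAux);CHKERRQ(ierr);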

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] petsc-dev on bitbucket

2012-02-08 Thread Matthew Knepley
On Wed, Feb 8, 2012 at 7:54 AM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Feb 8, 2012, at 6:43 AM, Satish Balay wrote:

  On Tue, 7 Feb 2012, Barry Smith wrote:
 
 
  On Feb 7, 2012, at 9:09 PM, Sean Farley wrote:
 
  I'm sure Jed (or Matt in his prime) could have run over to IIT and
 restarted the machine in less time than this :-)
 
  Sure, and like everybody else they would have had to wait outside
 until they had keys :-)
 
   Those guys are very resourceful; I cannot imagine a simple locked door
 would be an issue for them.
 
Barry
 
Besides who the heck set up the machine so it cannot be started
 remotely? Should have used an Apple machine :-)
 
 
  It was a human error [when you tell something to shutdown - it should
 not automatically restart].
 
  yeah - if we installed server infrastructure with remote admin feature
  - then it could have been powered up remotely [from the remote
  management console or something like that..]

Isn't that a basic Linux thing, start on LAN signal?

 
  looks like folks [Sean,Matt,Barry] are happy with bitbucket.

Not me. I'm not happy with it.  I prefer the PETSc machine, bitbucket
 is just a back up when the PETSc machine goes down. If the PETSc machine is
 back up then we switch the master repository back.


What is wrong? Not enough freedom to mess up the machine? I don't feel like
pushing 2 places.

  Matt



   Barry

 
  Sean - you'll have to transfer all repos and keys to the new site.
 
  For now - I've removed petsc-dev and BuildSystem from petsc.cs.iit -
  and will plan a phased shutdown of the machine - as soon as you can
  find new home for all repos.
 
  Satish




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] petsc-dev on bitbucket

2012-02-08 Thread Matthew Knepley
On Wed, Feb 8, 2012 at 9:39 AM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Feb 8, 2012, at 9:06 AM, Matthew Knepley wrote:

  On Wed, Feb 8, 2012 at 7:54 AM, Barry Smith bsmith at mcs.anl.gov wrote:
 
  On Feb 8, 2012, at 6:43 AM, Satish Balay wrote:
 
   On Tue, 7 Feb 2012, Barry Smith wrote:
  
  
   On Feb 7, 2012, at 9:09 PM, Sean Farley wrote:
  
   I'm sure Jed (or Matt in his prime) could have run over to IIT and
 restarted the machine in less time than this :-)
  
   Sure, and like everybody else they would have had to wait outside
 until they had keys :-)
  
 Those guys are very resourceful; I cannot imagine a simple locked
 door would be an issue for them.
  
 Barry
  
 Besides who the heck set up the machine so it cannot be started
 remotely? Should have used an Apple machine :-)
  
  
   It was a human error [when you tell something to shutdown - it should
 not automatically restart].
  
   yeah - if we installed server infrastructure with remote admin feature
   - then it could have been powered up remotely [from the remote
   management console or something like that..]
 
  Isn't that a basic Linux thing, start on LAN signal?
 
  
   looks like folks [Sean,Matt,Barry] are happy with bitbucket.
 
Not me. I'm not happy with it.  I prefer the PETSc machine, bitbucket
 is just a back up when the PETSc machine goes down. If the PETSc machine is
 back up then we switch the master repository back.
 
  What is wrong? Not enough freedom to mess up the machine? I don't feel
 like pushing 2 places.

    Push two places manually? WTF, presumably Mercurial is feature-rich
 enough that you could automate the whole process of pushing to 2 places?

   Ok, I need to understand more how bitbucket handles a hierarchy of
 different repositories


There is only a 1-level hierarchy based on a top level account. Sean
created 'petsc' for our stuff. We can create many, so that
we have 'petsc-release', 'petsc-private', etc. if we want. Of course, I
want traditional hierarchy, and will file a feature request.


 with different permissions


There is an access control list for each repository, with read/write/admin
permissions. In addition, you can mark each repo
public or private.


 in different parts


Did not understand this part of the question.


 and have a hierarchy of managers of the repositories


I don't know why we need a hierarchy of managers, but we can have
individual managers with admin priv.


 and adding new repositories.


You can create new repos or import existing ones.


 I don't want to just have haphazard creation of new repositories without a
 proper relationship between them.


They are grouped by top level account

Matt


   Barry


   Barry

 
Matt
 
 
Barry
 
  
   Sean - you'll have to transfer all repos and keys to the new site.
  
   For now - I've removed petsc-dev and BuildSystem from petsc.cs.iit -
   and will plan a phased shutdown of the machine - as soon as you can
   find new home for all repos.
  
   Satish
 
 
 
 
  --
  What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
  -- Norbert Wiener




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] petsc-dev on bitbucket

2012-02-08 Thread Matthew Knepley
On Wed, Feb 8, 2012 at 10:54 AM, Sean Farley sean at mcs.anl.gov wrote:

 There is only a 1-level hierarchy based on a top level account. Sean
 created 'petsc' for our stuff. We can create many, so that
 we have 'petsc-release', 'petsc-private', etc. if we want. Of course, I
 want traditional hierarchy, and will file a feature request.


 If you phrase it like that (traditional hierarchy), then it will fall on
 deaf ears. The most I could see them adding is a way to create repo groups
 based on user groups (which exist currently). If you want your own personal
 collection of repos right-fucking-now, then fork the repos you want into
 your own account, like so:

 https://bitbucket.org/seanfarley/petsc-dev

 The nice thing about this is that you can tell where it was forked from:
 (fork of petsc / petsc-dev)


No I mean I want

  petsc/releases/petsc-3.1
  petsc/tools/parsing/BarrysNewHTMLMunger

Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] petsc-dev on bitbucket

2012-02-08 Thread Matthew Knepley
On Wed, Feb 8, 2012 at 11:32 AM, Sean Farley sean at mcs.anl.gov wrote:

 No I mean I want

   petsc/releases/petsc-3.1
   petsc/tools/parsing/BarrysNewHTMLMunger


 Sounds like you want a traditional filesystem hierarchy and like I said
 before, your request will fall on deaf ears. This is the same request that
 mercurial-dev gets from people switching from subversion: "I only want to
 check out a subdirectory, not the whole project!" to which the common
 response is, "You should rethink your 'project' if you only want a
 subdirectory." Take a look at subrepos and subpaths:


I guess I completely understand the logic for not checking out part of a
tree. There are consistency issues, and management
of related changes. However, I am just talking about grouping repositories.
I would be alright with any grouping strategy, be it
directories, tags, etc. I think this is orthogonal to Mercurial.

   Matt


 http://mercurial.selenic.com/wiki/Subrepository
 http://mercurial.selenic.com/wiki/SubrepoRemappingPlan

 and maybe projrc:


 http://mercurial.selenic.com/wiki/ProjrcExtension?action=showredirect=SubpathsExtension

 For what it's worth, I never liked petsc-dev being in a different folder
 than petsc-3.x. It is not confusing to have them all like so:

 petsc/petsc-3.1
 petsc/petsc-dev




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] MatMatMult gives different results

2012-02-09 Thread Matthew Knepley
On Thu, Feb 9, 2012 at 2:48 AM, Alexander Grayver
agrayver at gfz-potsdam.de wrote:

 lib/petsc-dev hg pull -u
 pulling from http://petsc.cs.iit.edu/petsc/petsc-dev
 searching for changes
 no changes found

 Does it take some time till it's there?


You have to remake Fortran stubs: make allfortranstubs

and then make.

   Matt


 On 09.02.2012 09:44, Jed Brown wrote:

 This is fixed now.

 On Thu, Feb 9, 2012 at 11:25, Alexander Grayver agrayver at 
  gfz-potsdam.de wrote:

  Jed,

 Pulled petsc-dev this morning:

 [ 63%] Building C object
 CMakeFiles/petsc.dir/src/dm/impls/redundant/ftn-auto/dmredundantf.c.o
 /home/lib/petsc-dev/src/dm/impls/complex/ftn-auto/complexf.c(125): error:
 declaration is incompatible with PetscErrorCode={int}
 DMComplexSetConeOrientation(DM, PetscInt={int}, const PetscInt={int} *)
 (declared at line 29 of /home/lib/petsc-dev/include/petscdmcomplex.h)
   void PETSC_STDCALL DMComplexSetConeOrientation dmcomplexsetcone_(DM
 dm,PetscInt *p, PetscInt coneOrientation[], int *__ierr ){
  ^

 /home/lib/petsc-dev/src/dm/impls/complex/ftn-auto/complexf.c(125): error:
 incomplete type is not allowed
   void PETSC_STDCALL DMComplexSetConeOrientation dmcomplexsetcone_(DM
 dm,PetscInt *p, PetscInt coneOrientation[], int *__ierr ){
  ^

 /home/lib/petsc-dev/src/dm/impls/complex/ftn-auto/complexf.c(125): error:
 expected a ;
   void PETSC_STDCALL DMComplexSetConeOrientation dmcomplexsetcone_(DM
 dm,PetscInt *p, PetscInt coneOrientation[], int *__ierr ){
  ^

 /home/lib/petsc-dev/src/dm/impls/complex/ftn-auto/complexf.c(147):
 warning #12: parsing restarts here after previous syntax error

 [ 63%] compilation aborted for
 /home/lib/petsc-dev/src/dm/impls/complex/ftn-auto/complexf.c (code 2)
 make[4]: ***
 [CMakeFiles/petsc.dir/src/dm/impls/complex/ftn-auto/complexf.c.o] Error 2
 make[4]: *** Waiting for unfinished jobs
 Building C object CMakeFiles/petsc.dir/src/dm/impls/da/ftn-custom/zda.c.o
 make[3]: *** [CMakeFiles/petsc.dir/all] Error 2
 make[2]: *** [all] Error 2

 Yesterday's petsc compiled fine.

 On 08.02.2012 22:21, Jed Brown wrote:

  On Wed, Feb 8, 2012 at 23:39, Alexander Grayver agrayver at gfz-potsdam.de
  wrote:

 It happens within the CG solver with the system matrix which is created
 like this:
 call
 MatCreateShell(MPI_COMM_WORLD,mlocal,nlocal,N,N,PETSC_NULL_INTEGER,H,ierr);CHKERRQ(ierr)


 This should be fixed in petsc-dev now, can you pull and try again?



   --
 Regards,
 Alexander




 --
 Regards,
 Alexander




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] Someone still cannot use the annotate link

2012-02-09 Thread Matthew Knepley
Clearly, karpeev was the only user who altered those lines:

http://petsc.cs.iit.edu/petsc/BuildSystem/rev/42cbb1d6192f
http://petsc.cs.iit.edu/petsc/BuildSystem/annotate/d295489bd56e/config/packages/MOAB.py

  Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] petsc-dev on bitbucket

2012-02-09 Thread Matthew Knepley
On Thu, Feb 9, 2012 at 8:26 PM, Satish Balay balay at mcs.anl.gov wrote:

 On Wed, 8 Feb 2012, Sean Farley wrote:

  
   Hell, if you *really* want to, just create the account:
 petsc-release(s)
   then the URL would be
  
   http://bitbucket.org/petsc-release/petsc-3.1
  
 
  Actually, it's even easier than that:
 
  https://bitbucket.org/petsc/petsc-dev/downloads
 
  which provides downloads for all tagged changesets.

 Tags are no good. We implement branches in different clones.


I am not sure what you mean by this. Let me be explicit.

This organization is semantic, and completely outside the version control
structure. I want something to tell me this repo is about simulating rockets,
like KITT, the voice in Michael's car. Tags are fine for this. So is a
hierarchy.
So is silly XML metadata.

   Matt


 You could argue that we should throw away branches-in-clones and have all
 branches in a single clone - and change our workflow.

 But I think this will be too confusing to most of us [yeah you could
 change your bash prompt to always indicate which branch you are on -
 which is equivalent to 'cd different clone' - but not all of us are
 that sophisticated]

 Satish




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] MatMatMult gives different results

2012-02-09 Thread Matthew Knepley
On Thu, Feb 9, 2012 at 9:10 PM, Satish Balay balay at mcs.anl.gov wrote:

 On Thu, 9 Feb 2012, Jed Brown wrote:

  It's all workable,
  but I'm not seeing a clear advantage to keeping the primary at
  petsc.cs.iit.edu,

 currently the 2 reasons that are offered against petsc.cs.iit.edu are:
 - ssh breaks - need https [to level the comparison - we can probably set
 this up for folks who want this]

 - reliability concerns.

  (provided Sean volunteers to set up mirroring in that direction).

 Personally - I'd prefer not to deal with that. If it's decided that
 bitbucket is the thing to use - we should disable petsc.cs.iit.edu,

 As you've noticed - mirroring has already caused confusion. And I
 suspect this will keep coming up - much more often than the frequency
 of a downed machine.


The master should be on Bitbucket. We could keep a repo at IIT that
is updated every time as a backup only if we need it, but for nothing else.

   Matt



 Satish


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] DMGetMatrix -- DMGetMatrices?

2012-02-10 Thread Matthew Knepley
On Fri, Feb 10, 2012 at 12:05 AM, Dmitry Karpeev karpeev at mcs.anl.govwrote:



 On Fri, Feb 10, 2012 at 12:01 AM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Thu, Feb 9, 2012 at 23:55, Dmitry Karpeev karpeev at mcs.anl.gov wrote:

 In a somewhat related matter, it appears that I cannot duplicate a
 preallocated MATXXXAIJ until it has been assembled:
 if my DM implementation keeps a preallocated MATSEQAIJ, which it wants
 to duplicate on every call to DMGetMatrix,
 it would have to put in fake entries before any duplication is possible.


 Doesn't it already do this (inserting 0)?

 I don't think so.  Preallocating doesn't set any values and seems to leave
 the matrix marked !assembled.
 MatDuplicate for such a matrix will fail.  Assemblying it before setting
 values (just to force an assembled flag)
 will squeeze out the extra values, won't it?  I think it would just be
 reasonable to allow to duplicate unassembled
 matrices, or, better yet, have a matrix be assembled by default until
 MatSetValues has been called.
 But I'm not sure whether either solution will break something else.


Actually, preallocating does do this now. I have to change the unstructured
code to do it.
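
Concretely, the pattern at issue, using only existing calls (a sketch;
whether the empty assembly squeezes out the preallocation is exactly the
question raised above):

  Mat            A,B;
  PetscErrorCode ierr;

  ierr = MatCreate(PETSC_COMM_SELF,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,10,10,10,10);CHKERRQ(ierr);
  ierr = MatSetType(A,MATSEQAIJ);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A,3,PETSC_NULL);CHKERRQ(ierr);
  /* empty assembly just to flip the assembled flag so MatDuplicate() works */
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatDuplicate(A,MAT_DO_NOT_COPY_VALUES,&B);CHKERRQ(ierr);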

   Matt


 Dmitry.




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] petsc-dev on bitbucket

2012-02-10 Thread Matthew Knepley
The thread has become too deep for me to read, hence the top posting.

Barry's question is the right one: What do we gain by changing?

  1) Reliability and Availability

   Barry, you should know that this crap about petsc.cs being backed up is
farcical. We
   would have the same situation we had with the first 10 years of PETSc
history again.
   BB is definitely more reliable in terms of backups, uptime, and
connectivity (SSH issues).

   2) Better management support

   The infrastructure for supporting user permissions is better on BB. We
don't edit a file,
calling a script someone hacked together. We have accounts, and when
accounts are
shut down they go away. A user can manage his SSH key independently of
us.

Those for me make it a slam dunk. However, I will ask the question in
reverse: What do we
give up? I think the only thing we give up is the security blanket of being
able to log in
ourselves and mess with a machine directly.

Matt

On Fri, Feb 10, 2012 at 8:26 AM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Feb 9, 2012, at 11:15 PM, Sean Farley wrote:

 
  Even if you were right about this specific issue (which you are not) it
 doesn't matter. All you've done is removed the need for a releases
 subdirectory. What about tutorials subdirectory, externalpackages
 subdirectory, anothercoolthingwethinkofnextweek subdirectory.
 
  Why does the *server* have to have the subdirectory?

Because I want to have a bunch of repositories organized in a
 hierarchical manner. Your response seems to be:

 1)   no you don't want that   or

 2)  you should put them all in one giant repository   or

 3) have them in different bitbucket accounts (like a petsc account and a
 externalpackages account) that have nothing to do with each other.

   Just admit that not supporting a directory structure at bitbucket is
 lame and stop coming up with lame reasons why it is ok. Then get bitbucket
 to add this elementary support and we'll be all set.

   Barry




 
  $ hg clone bb://petsc/anothercoolthing
 subdirectory-that-can-suck-eggs/anothercoolthing
 
  Please explain to me the real reasons bitbucket is better than petsc.cs.
  and stop rationalizing around bitbuckets weaknesses. Every choice has some
 tradeoffs and I haven't heard much about bitbuckets advantages so I am
 confused why you guys are so in love with it. (Well I understand Sean's
 reasons, being pretty lazy myself :-)).
 
  I'll let Jed explain about forks and have the reverse look-up (how many
 people have forked petsc). For me, it's drop-dead simple management.




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] petsc-dev on bitbucket

2012-02-10 Thread Matthew Knepley
On Fri, Feb 10, 2012 at 9:23 AM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Feb 10, 2012, at 9:13 AM, Matthew Knepley wrote:

  The thread has become too deep for me to read, hence the top posting.
 
  Barry's question is the right one: What do we gain by changing?
 
1) Reliability and Availability
 
 Barry, you should know that this crap about petsc.cs being backed up
 is farcical. We
 would have the same situation we had with the first 10 years of PETSc
 history again.
 BB is definitely more reliable in terms of backups, uptime, and
 connectivity (SSH issues).
 
 2) Better management support
 
 The infrastructure for supporting user permissions is better on BB.
 We don't edit a file,
  calling a script someone hacked together. We have accounts, and when
 accounts are
  shut down they go away. A user can manage his SSH key independently
 of us.
 
  Those for me make it a slam dunk. However, I will ask the question in
 reverse: What do we
  give up?

    A decent way of hierarchically organizing our repositories. Tell me how
 to do this on bitbucket and you have your slam dunk.


Mailing BB.

   Matt



   Barry


  I think the only thing we give up is the security blanket of being able
 to log in
  ourselves and mess with a machine directly.
 
  Matt
 
  On Fri, Feb 10, 2012 at 8:26 AM, Barry Smith bsmith at mcs.anl.gov wrote:
 
  On Feb 9, 2012, at 11:15 PM, Sean Farley wrote:
 
  
   Even if you were right about this specific issue (which you are not)
 it doesn't matter. All you've done is removed the need for a releases
 subdirectory. What about tutorials subdirectory, externalpackages
 subdirectory, anothercoolthingwethinkofnextweek subdirectory.
  
   Why does the *server* have to have the subdirectory?
 
Because I want to have a bunch of repositories organized in a
 hierarchical manner. Your response seems to be:
 
  1)   no you don't want that   or
 
  2)  you should put them all in one giant repository   or
 
  3) have them in different bitbucket accounts (like a petsc account and a
 externalpackages account) that have nothing to do with each other.
 
Just admit that not supporting a directory structure at bitbucket is
 lame and stop coming up with lame reasons why it is ok. Then get bitbucket
 to add this elementary support and we'll be all set.
 
Barry
 
 
 
 
  
   $ hg clone bb://petsc/anothercoolthing
 subdirectory-that-can-suck-eggs/anothercoolthing
  
   Please explain to me the real reasons bitbucket is better than
 petsc.cs.  and stop rationalizing around bitbuckets weaknesses. Every
 choice has some tradeoffs and I haven't heard much about bitbuckets
 advantages so I am confused why you guys are so in love with it. (Well I
 understand Sean's reasons, being pretty lazy myself :-)).
  
   I'll let Jed explain about forks and have the reverse look-up (how
 many people have forked petsc). For me, it's drop-dead simple management.
 
 
 
 
  --
  What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
  -- Norbert Wiener




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] petsc-dev on bitbucket

2012-02-10 Thread Matthew Knepley
On Fri, Feb 10, 2012 at 9:35 AM, Satish Balay balay at mcs.anl.gov wrote:

 On Fri, 10 Feb 2012, Barry Smith wrote:

 
  On Feb 10, 2012, at 9:13 AM, Matthew Knepley wrote:
 
   The thread has become too deep for me to read, hence the top posting.
  
   Barry's question is the right one: What do we gain by changing?
  
 1) Reliability and Availability
  
  Barry, you should know that this crap about petsc.cs being backed
 up is farcical. We
  would have the same situation we had with the first 10 years of
 PETSc history again.
  BB is definitely more reliable in terms of backups, uptime, and
 connectivity (SSH issues).
  
  2) Better management support
  
  The infrastructure for supporting user permissions is better on BB.
 We don't edit a file,
   calling a script someone hacked together. We have accounts, and
 when accounts are
   shut down they go away. A user can manage his SSH key
 independently of us.
  
   Those for me make it a slam dunk. However, I will ask the question in
 reverse: What do we
   give up?
 
 A decent way of hierarchically organizing our repositories. Tell me
 how to do this on bitbucket and you have your slam dunk.
 

 Also some discussion on private repos.

 I guess none of you have objections to hosting private repos [for all kinds
 of collaboration work] at bitbucket.  [I don't claim 'iit' is better -
 for some work 'mcs' hosting was preferred]. Also a lot of this work
 collaboration stuff is hosted at google docs - so I guess this isn't an
 issue.

 As I understand it - for more than 5 folks to be able to access a
 private repo - one needs to be on a paid plan with bitbucket. [not a
 big deal - but want to put that up front]


As Sean points out, not for us. This is already turned on in my account.


   I think the only thing we give up is the security blanket of being
 able to log in
   ourselves and mess with a machine directly.

 For some things it was easier to do it ourselves than wait for
 someone else [aka admin to do things.]  I think there was a
 significant benefit with this [for a lot of issues that came up in the
 past few years.]


And this exposes the best part. There is no wait for an admin to do it
here. That is not
in the model, as it is with iit or other roll-your-own systems. This is
completely self-administered,
which is why it is great.

   Matt



 Satish




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-dev] petsc-dev on bitbucket

2012-02-10 Thread Matthew Knepley
On Fri, Feb 10, 2012 at 9:46 AM, Satish Balay balay at mcs.anl.gov wrote:

 Matt,

 until we move to bitbucket - can you continue to push to petsc.cs.iit
 - and not bitbucket? [for now - pushes to petsc.cs.iit are
 automatically pushed to bitbucket]


I thought we were switched already :) I cannot understand why we would not
give up this maintenance burden (and directories is not a good enough
reason).

   Matt


 thanks,
 Satish

 

 asterix:/home/balay/spetsc> hg in
 running ssh petsc at petsc.cs.iit.edu 'hg -R /hg/petsc/petsc-dev serve
 --stdio'
 comparing with ssh://petsc at petsc.cs.iit.edu//hg/petsc/petsc-dev
 searching for changes
 no changes found
 asterix:/home/balay/spetsc> hg in https://bitbucket.org/petsc/petsc-dev
 comparing with https://bitbucket.org/petsc/petsc-dev
 searching for changes
 all local heads known remotely
 changeset:   22123:41a8404903d4
 user:        Matthew G Knepley knepley at gmail.com
 date:        Thu Feb 09 18:11:40 2012 -0600
 files:   src/dm/impls/mesh/meshexodus.c
 description:
 Fixed allocation define


 changeset:   22124:e3267d7effd7
 user:        Matthew G Knepley knepley at gmail.com
 date:        Thu Feb 09 18:13:59 2012 -0600
 files:   config/builder.py src/dm/impls/complex/complex.c
 src/snes/examples/tutorials/ex62.c
 description:
 Reorganizing ex62 tests and fixed global section
 - Things are somewhat broken now


 changeset:   22125:5302f0df1089
 tag: tip
 user:        Matthew G Knepley knepley at gmail.com
 date:        Thu Feb 09 19:12:03 2012 -0600
 files:   include/petscdmmesh.hh
 description:
 Small fix to old preallocation





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

