request for comments/advice about mapping PetscCookie -> Python type
In petsc4py, I've just implemented support for compose()/query() in my base Object class. However, I wanted to implement query() in such a way that it 'dynamic_cast's (in the C++ sense, i.e., downcasts) the composed PETSc object and returns to Python code an instance of the appropriate Python type. Then, if you compose() a Mat, query() will return a Mat. I manage all this inside a private Python dictionary mapping PetscCookie -> Python type. Of course, this required making sure that ALL the PetscXXXInitializePackage() routines have been called before adding stuff to my dict.

All is working fine; moreover, this machinery is also being used in slepc4py and tao4py (yes, I have those...), so after importing slepc4py or tao4py the dictionary is populated as appropriate.

Then, the questions are: Does this approach sound good enough? Is it too much hackery? Can I rely on this approach continuing to work in the long run?

BTW, now I have a clear use case for initializing all the XXX_COOKIE's to 0 in core PETSc. This would really help me to spot package initialization problems. In fact, I had to manage some of those problems in petsc-2.3.2, petsc-2.3.3, and tao-1.9.

PS: The renaming DA_COOKIE -> DM_COOKIE does not play well with all this, but I do not care about it right now.

-- 
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
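The machinery described above might be sketched roughly as follows. This is an illustrative Python sketch, not petsc4py's actual implementation; all names here (register_type, the cookie value, the method signatures) are hypothetical:

```python
# Illustrative sketch of a cookie -> Python type registry used to
# downcast composed objects; NOT petsc4py's real API.

_cookie_to_type = {}  # filled once all PetscXXXInitializePackage() have run

def register_type(cookie, py_type):
    """Map a PetscCookie value to the Python wrapper class."""
    _cookie_to_type[cookie] = py_type

class Object:
    """Base wrapper; compose()/query() mimic PetscObjectCompose/Query."""
    def __init__(self, cookie=0):
        self.cookie = cookie
        self._composed = {}

    def compose(self, name, obj):
        self._composed[name] = obj

    def query(self, name):
        obj = self._composed.get(name)
        if obj is None:
            return None
        # The 'dynamic_cast': re-type the wrapper according to its cookie.
        obj.__class__ = _cookie_to_type.get(obj.cookie, Object)
        return obj

class Mat(Object):
    pass

MAT_COOKIE = 1211216  # hypothetical value; real cookies come from PETSc
register_type(MAT_COOKIE, Mat)
```

With this, composing a Mat and then querying it back yields a Mat instance rather than a bare Object, which is the downcast behavior described above.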
request for comments/advice about mapping PetscCookie -> Python type
That approach sounds fine. I think that initializing the COOKIEs was not liked by some compilers. Ugh.

Matt

On Fri, Oct 10, 2008 at 1:31 PM, Lisandro Dalcin <dalcinl at gmail.com> wrote:
> [...]

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
request for comments/advice about mapping PetscCookie -> Python type
Lisandro,

In the current model of initializations there are two submodels:

1) No dynamic libraries (example: Mat; others should be the same). MatCreate() ALWAYS calls MatInitializePackage(). Inside MatInitializePackage() is a static variable indicating if initialization has taken place. The user is not allowed to call any other Mat... methods BEFORE MatCreate(); if they do, bad things could happen. For example, you cannot call MatNorm() if you have not yet created a Mat.

2) Dynamic libraries. When the libpetscmat.so dynamic library is opened by PETSc, MatInitializePackage() is automatically called. PetscInitialize() automatically opens all the standard dynamic libraries like libpetscmat.so. If the user calls any Mat routine before PetscInitialize() it won't work. If the user calls MatNorm() before a MatCreate() it will not work either, but not for the same reason as in 1), only because they are not passing a matrix in.

With Python you care about the dynamic case: so long as your Python initialization occurs after PetscInitialize(), which it must and will, ALL packages have been registered (unless we have a bug). So I think your code should be fine. You say that you can use a PETSC_COOKIE of zero to find such a bug; that's a pretty weak argument.

I don't like the idea of you using MAT_COOKIE directly at all for your cool Python dictionary building. Reason: it is not extensible. Someone adds a new ZAP_COOKIE, you don't know about it, and when you find out you have to go change your code. Correct?

Here is an alternative model that I think is much better. Every call to PetscCookieRegister() actually registers the cookie together with its name (currently the cookie is not recorded in any way; it is only registered in the logging); it could go into something as ugly as a global array or linked list. Your Python caster (each time it needs to cast) could then look in this global array or linked list and do the translation.
This way your caster will always work; if later in the run someone registers new cookies, your caster now has access to them. And your Python code doesn't ever need to see, touch, or taste a XXX_COOKIE variable directly; I view the global XXX_COOKIE variables as private to that class implementation and really don't want any other code outside that class touching them directly.

Sound workable?

Barry

I am assuming that your Python caster only needs to know the string name of the class (like Mat) to manage the cast. If it actually needs specific code written for each class, then I would like that specific code to live inside that class and not be managed in some central location (that is, it would go somewhere in the src/mat/ directory tree).

On Oct 10, 2008, at 1:31 PM, Lisandro Dalcin wrote:
> [...]
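Barry's registry-based alternative might look something like this on the Python side. This is a sketch under the assumption that PetscCookieRegister() records (cookie, name) pairs somewhere queryable (which, per the message above, it does not yet do); every identifier here is hypothetical:

```python
# Sketch of the proposed scheme: cookies are translated to class names
# at cast time, so the Python side never touches XXX_COOKIE variables
# directly. All names are hypothetical stand-ins, not PETSc/petsc4py API.

_registry = []       # stand-in for the global array/linked list in PETSc
_py_classes = {}     # class name (e.g. "Mat") -> Python wrapper class

def cookie_register(name):
    """Stand-in for PetscCookieRegister(): record (cookie, name)."""
    cookie = 1211211 + len(_registry)  # arbitrary illustrative values
    _registry.append((cookie, name))
    return cookie

def cast(obj):
    """Re-type obj by translating cookie -> name -> Python class."""
    for cookie, name in _registry:   # consulted afresh on every cast
        if cookie == obj.cookie and name in _py_classes:
            obj.__class__ = _py_classes[name]
            break
    return obj

class Object:
    def __init__(self, cookie):
        self.cookie = cookie

class Mat(Object):
    pass

_py_classes["Mat"] = Mat
MAT_COOKIE = cookie_register("Mat")  # may happen at any point in the run
```

Because cast() walks the registry each time, cookies registered after petsc4py's import are picked up automatically, which is exactly the extensibility being argued for.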
FW: [PETSC #18391] PETSc crash with memory allocation in ILU preconditioning
Hi,

I am seeing problems when trying to build petsc-dev code. My configure line is below, the same one I successfully used for 2.3.2-p10. I tried with MKL 9 and MKL 10; same errors. There are references to undefined symbols. Please share with me if you have any experience with the issue or suggestions to resolve it.

Thanks,
Ying

./config/configure.py --with-batch=1 --with-clanguage=C++
  --with-vendor-compilers=intel
  '--CXXFLAGS=-g -gcc-name=/usr/intel/pkgs/gcc/4.2.2/bin/g++ -gcc-version=420 '
  '--LDFLAGS=-L/usr/lib64 -L/usr/intel/pkgs/gcc/4.2.2/lib -ldl -lpthread -Qlocation,ld,/usr/intel/pkgs/gcc/4.2.2/x86_64-suse-linux/bin -L/usr/intel/pkgs/icc/10.1.008e/lib -lirc'
  --with-cxx=$ICCDIR/bin/icpc --with-fc=$IFCDIR/bin/ifort
  --with-mpi-compilers=0 --with-mpi-shared=0 --with-debugging=yes
  --with-mpi=yes --with-mpi-include=$MPIDIR/include
  --with-mpi-lib=\[$MPIDIR/lib64/libmpi.a,$MPIDIR/lib64/libmpiif.a,$MPIDIR/lib64/libmpigi.a\]
  --with-blas-lapack-lib=\[$MKLLIBDIR/libguide.so,$MKLLIBDIR/libmkl_lapack.so,$MKLLIBDIR/libmkl_solver.a,$MKLLIBDIR/libmkl.so\]
  --with-scalapack=yes --with-scalapack-include=$MKLDIR/include
  --with-scalapack-lib=$MKLLIBDIR/libmkl_scalapack.a
  --with-blacs=yes --with-blacs-include=$MKLDIR/include
  --with-blacs-lib=$MKLLIBDIR/libmkl_blacs_intelmpi_lp64.a
  --with-umfpack=1 --with-umfpack-lib=\[$UMFPACKDIR/UMFPACK/Lib/libumfpack.a,$UMFPACKDIR/AMD/Lib/libamd.a\]
  --with-umfpack-include=$UMFPACKDIR/UMFPACK/Include
  --with-parmetis=1 --with-parmetis-dir=$PARMETISDIR
  --with-mumps=1 --download-mumps=$PETSC_DIR/externalpackages/MUMPS_4.6.3.tar.gz
  --with-superlu_dist=1 --download-superlu_dist=$PETSC_DIR/externalpackages/superlu_dist_2.0.tar.gz

Error output:

/nfs/pdx/proj/dt/pdx_sde02/x86-64_linux26/petsc/petsc-dev/conftest.c:7: undefined reference to `f2cblaslapack311_id_'
/p/dt/sde/tools/x86-64_linux26/mkl/10.0.2.018/lib/em64t/libguide.so: undefined reference to `pthread_atfork'
--------------------------------------------------------------
You set a value for --with-blas-lapack-lib=<lib>, but
['/p/dt/sde/tools/x86-64_linux26/mkl/10.0.2.018/lib/em64t/libguide.so', '/p/dt/sde/tools/x86-64_linux26/mkl/10.0.2.018/lib/em64t/libmkl_lapack.so', '/p/dt/sde/tools/x86-64_linux26/mkl/10.0.2.018/lib/em64t/libmkl_solver.a', '/p/dt/sde/tools/x86-64_linux26/mkl/10.0.2.018/lib/em64t/libmkl.so'] cannot be used

-----Original Message-----
From: Barry Smith [mailto:bsm...@mcs.anl.gov]
Sent: Thursday, October 09, 2008 12:39 PM
To: Rhew, Jung-hoon
Cc: PETSc-Maint Smith; Linton, Tom; Cea, Stephen M; Stettler, Mark
Subject: Re: [PETSC #18391] PETSc crash with memory allocation in ILU preconditioning

We don't have all the code just right to use those packages with 64-bit integers. I will try to get them all working by Monday and will let you know my progress. To use them you will need to be using petsc-dev http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html so you can switch to that now, if you are not yet using it, in preparation for my updates.

Barry

On Oct 9, 2008, at 12:52 PM, Rhew, Jung-hoon wrote:

Hi,

I found that the root cause of the malloc error was that our PETSc library had been compiled without the 64-bit flag on. Thus, PetscInt was defined as int instead of long long, and for large problems the memory allocation requires memory beyond the maximum of int and causes integer overflow. But when I tried to build using the 64-bit flag (--with-64-bit-indices=1), all files associated with the external libraries (such as UMFPACK and MUMPS) built with PETSc started failing in compilation, mainly due to the incompatibility between int in those libraries and long long in PETSc. I wonder if you can let us know how to resolve this conflict when building PETSc with 64 bit. The brute-force way is to change the source code of those libraries where the conflicts occur, but I wonder if there is a neater way of doing this. Thanks.
jr

Example:

libfast in: /nfs/ltdn/disks/td_disk49/usr.cdmg/jrhew/work/mds_work/PETSC/mypetsc-2.3.2-p10/src/mat/impls/aij/seq/umfpack
umfpack.c(154): error: a value of type "PetscInt={long long} *" cannot be used to initialize an entity of type "int *"
  int m=A->rmap.n,n=A->cmap.n,*ai=mat->i,*aj=mat->j,status,*ra,idx;

-----Original Message-----
From: Barry Smith [mailto:bsmith at mcs.anl.gov]
Sent: Tuesday, October 07, 2008 6:15 PM
To: Rhew, Jung-hoon
Cc: petsc-maint at mcs.anl.gov; Linton, Tom; Cea, Stephen M; Stettler, Mark
Subject: Re: [PETSC #18391] PETSc crash with memory allocation in ILU preconditioning

During the symbolic phase of ILU(N) there is no way to know in advance how many new nonzeros are needed in the factored version over the original matrix (this is true for LU too). We handle this by starting with a certain amount of memory, and then if that is not enough for the symbolic factor we double the
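The memory-doubling scheme Barry starts to describe (the message is truncated in the archive) is the common grow-and-retry pattern; a generic sketch, not PETSc's actual factorization code, with a hypothetical try_symbolic_factor callback:

```python
def factor_with_growing_workspace(try_symbolic_factor, initial_capacity):
    """Retry the symbolic factorization, doubling workspace until it fits.

    try_symbolic_factor(capacity) is a hypothetical callback that returns
    the factor, or None when `capacity` nonzeros were not enough room.
    Illustrative sketch only.
    """
    capacity = initial_capacity
    while True:
        factor = try_symbolic_factor(capacity)
        if factor is not None:
            return factor, capacity
        capacity *= 2  # ran out of room: double and start over
```

For example, a factorization needing room for 100 nonzeros, started with a capacity of 16, succeeds after three doublings at capacity 128.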
FW: [PETSC #18391] PETSc crash with memory allocation in ILU preconditioning
On Fri, Oct 10, 2008 at 6:29 PM, Deng, Ying <ying.deng at intel.com> wrote:
> Hi, I am seeing problems when trying to build petsc-dev code. [...]

1) Please always send configure.log. The screen output does not tell us enough to debug problems.

2) Specifying libraries directly is not usually a good idea, since some packages, like MKL, tend to depend on other libraries (like libguide, libpthread). I would use --with-blas-lapack-dir=$MKLDIR.

3) Mail about install problems should go to petsc-maint at mcs.anl.gov. petsc-dev is for discussion of development.

Thanks,

Matt