On Sat, Jul 9, 2011 at 6:13 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> On Sat, Jul 9, 2011 at 18:00, Dmitry Karpeev <karpeev at mcs.anl.gov> wrote:
>>
>> I frequently get this question: I have a code "blah", which has its
>> own build system (e.g., wmake), and I want to use PETSc solvers in it.
>> How can I build the two together? The usual answer I give is that they
>> either have to (a) build their application using our makefiles, or
>> (b) include $PETSC_DIR/conf/variables, etc.
>
> These are both makefile-based solutions,

That's right. And that's my point: what we offer now for incorporating PETSc into other codes is somewhat limited.

> and "make getlinklibs" is a better
> way to snarf the variables.

But only marginally.

> We could easily write a pkgconfig file for
> PETSc. Some people would like that, but pkgconfig doesn't really care about
> supporting multiple installs (different PETSC_ARCH). As I've suggested
> before, I think we should have a pure Python script that provides the
> information in a machine-readable way (listing library paths without
> "-Wl,-rpath," flags that may need to be adjusted if the user is not linking
> exactly the same way that PETSc was, correctly handling shared versus static
> linking, etc).
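As a rough illustration of the machine-readable idea, something like the sketch below could parse a makefile-style variables file and hand back link information as plain lists, with the "-Wl,-rpath," flags stripped so the caller can pick its own link style. The file layout and the PETSC_WITH_EXTERNAL_LIB variable name here are only assumptions for the sake of the example, not the real conf/variables contents:

```python
def parse_variables(text):
    """Parse simple 'NAME = value' makefile assignments into a dict."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        name, _, value = line.partition('=')
        result[name.strip()] = value.strip()
    return result

def link_info(variables):
    """Split link flags into library dirs and library names, silently
    dropping -Wl,-rpath and other linker-specific flags."""
    dirs, libs = [], []
    for tok in variables.get('PETSC_WITH_EXTERNAL_LIB', '').split():
        if tok.startswith('-L'):
            dirs.append(tok[2:])
        elif tok.startswith('-l'):
            libs.append(tok[2:])
    return dirs, libs

# Hypothetical contents; a real script would read the installed file.
sample = """
PETSC_WITH_EXTERNAL_LIB = -L/opt/petsc/arch-linux-debug/lib \
-Wl,-rpath,/opt/petsc/arch-linux-debug/lib -lpetsc -lmpich -lm
"""
dirs, libs = link_info(parse_variables(sample))
print(dirs)  # ['/opt/petsc/arch-linux-debug/lib']
print(libs)  # ['petsc', 'mpich', 'm']
```

A real version would also have to distinguish shared versus static linking and multiple PETSC_ARCH installs, as Jed notes.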
I think another way would be to simplify the writing of "configure modules" for libraries built using some set of standard tools (e.g., GNU packages built with configure/make). We already configure such packages internally (e.g., mpich). If a user could write a simple foo.py that configured/built their favorite package with PETSc as a dependency, they could then proceed to modify the FOO code with PETSc hooked into it. At the same time they wouldn't have to parse the output of make getlinklibs, etc.

Dmitry.