>We have found that for a moderately large project (say over 10,000
>lines of code) an outline (or even a set of outlines) is not enough.
>We rely heavily on a LaTeX table of contents (our best analog to a Leo
>outline), but we also rely on diagrams and overview chapters.
>This is all very expensive in time and effort, and I would say that in
>my research group we seem to be able to afford this effort only
>roughly every 5 to 10 years.  Of course we are a very small shop; your
>mileage may vary.

The Axiom sources will eventually be available as a series of books.
There are 10 volumes planned so far. Volume 1, the tutorial, exists.
Volume 5, the interpreter, is being written now and is likely to be
the next volume available.

I don't expect that the books will actually be printed. The last time
I printed all of the Axiom sources (double-sided), the naked source
code alone filled approximately 6 feet of linear shelf space.
The literate version will likely double that, at minimum. The algebra
volume alone will take up many linear feet if we succeed in the goal
of joining the source code with the research papers.

Of course, these books are the actual source code of the system and
will dynamically change over time. We will need overview volumes
and a volume that is nothing but an index and an annotated bibliography.
I suspect other 'meta-volumes' will come into play.

In the grand scheme of things, I see this set of volumes as the
kernel of a computational mathematics literature, joining the research
work with the actual algorithms and organized by subject matter.
Ordinary textbooks like Bronstein's Symbolic Integration will be
able to reference or include sections of the real algorithms that
are already documented and working.

At the moment the literate programs are just a collection of .dvi
files, organized in parallel with the source tree and created during
the system build. We still need to invent the analysis, markup,
indexing, and organization machinery to create a useful, searchable,
browsable library.
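
As a trivial first step in that direction, here is a sketch (mine,
in Python, not existing machinery; the build-tree path is
hypothetical) that walks the build tree and emits a browsable HTML
index of the .dvi files:

  from pathlib import Path

  def build_index(root, out='index.html'):
      # Walk the build tree, collect the .dvi files, and write a
      # simple HTML page linking to each one.
      root = Path(root)
      files = sorted(root.rglob('*.dvi'))
      with open(out, 'w') as f:
          f.write('<html><body><h1>Literate program index</h1>\n<ul>\n')
          for p in files:
              rel = p.relative_to(root)
              f.write('<li><a href="%s">%s</a></li>\n' % (rel, rel))
          f.write('</ul></body></html>\n')

  build_index('/usr/local/axiom')  # hypothetical build-tree location

Real indexing would have to reach inside the documents, of course;
this only organizes what the build already produces.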

I've given a little thought to the subject and experimented with
various methods. One approach you might find interesting is the
"booklet" idea. It is a natural extension of the noweb machinery:
all we do is give a syntax and semantics to the chunk names. If a
chunk name parses as a valid URL, we fetch the resource and include
it in place:

<<any chunk name>>=      standard noweb chunk replacement
<<file:///tmp/foo.nw>>=  the file /tmp/foo.nw is included in place
<<ftp://a.b/tmp.nw>>=    the file is fetched by ftp and included

and so on. We have a "booklet" program as part of the source tree;
I've used it to build a working example, but I have not had the
time to exploit the direction.
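
To make the idea concrete, here is a rough sketch in Python of the
expansion step. This is my illustration, not the actual booklet
program; the chunk syntax follows the examples above, and the set
of recognized URL schemes is an assumption:

  import re
  import urllib.parse
  import urllib.request

  CHUNK_REF = re.compile(r'<<([^>]+)>>')

  def is_url(name):
      # A chunk name counts as a URL when it carries a scheme we
      # know how to fetch (an assumed set of schemes).
      return urllib.parse.urlparse(name).scheme in (
          'file', 'ftp', 'http', 'https')

  def fetch(url):
      # Fetch the named resource and return its text.
      with urllib.request.urlopen(url) as resp:
          return resp.read().decode('utf-8')

  def expand(text, chunks):
      # Replace every <<name>> reference, recursively: names that
      # parse as URLs are fetched and included in place; ordinary
      # names get the standard noweb-style chunk replacement.
      def replace(match):
          name = match.group(1)
          body = fetch(name) if is_url(name) else chunks[name]
          return expand(body, chunks)
      return CHUNK_REF.sub(replace, text)

For example, expand('<<file:///tmp/foo.nw>>', {}) splices the
contents of /tmp/foo.nw into the document, while an ordinary chunk
name falls back to the usual chunk table.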

Books may or may not be the best form for the source code.
Bill Page has experimented with putting the literate sources
up as wiki-editable web pages, obviating the need for CVS.
This enables technologies like hyperlinking, animated Flash,
"tutorial movies", online lectures, and dynamic source graphs.

The Doyen effort (daly.axiom-developer.org/doyen) will allow
research papers containing source code to be dragged and dropped
onto a running Axiom system and dynamically added.

All of this is a research experiment as far as I'm concerned.
I do not have a clear idea of how to organize a huge pile of
software to make it into a living document and a coherent whole.
Like all research tasks, the shape changes and improves as we
struggle with the issues that arise.

Tim


