--- Ralf Hemmecke <[EMAIL PROTECTED]> wrote:

> But what I think is important is, not just to write HUGE files that 
> nobody can manage anymore, but to add a bit of structure to it.
> Nowadays we could write a whole book into just one latex file. But 
> I bet, most people structure that via several files to keep things
> manageable (in the head not because of the file system).

Bingo.  When I started trying to write the Maximabook way back when, one
of the first things I did was learn how to combine individual files using
a "master" control document.  Even if the editor can handle tens of
thousands of lines (and syntax-highlight them!), I can't parse that much
information without help.
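
For anyone who hasn't used the technique, a minimal sketch of such a
master file looks something like this (the file and chapter names are
just placeholders):

  % book.tex - hypothetical master control document
  \documentclass{book}
  \begin{document}
  % each chapter lives in its own file and can be edited on its own
  \include{introduction}
  \include{algebra}
  \include{plotting}
  \end{document}

The nice part is that an \includeonly{algebra} in the preamble lets you
rebuild just the chapter you are working on, which is what keeps a very
large document manageable.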

I prefer the idea of "one file per concept", which happens to map
fairly well to the journal/academic paper model if we want an Axiom
journal.  I envision the algebra structure as being composed of "core"
files which implement the basic foundations the system is built on and
its most basic functionality (e.g. the current pamphlets), with the
system then expanding gradually in two ways: 1) individual papers
(pamphlets) on specific subjects, contributed through the journal or by
individual authors; 2) as the quality of individual contributions is
proven and they become more central to the functionality being
implemented, folding those papers into the main file for that subject
(so within a core file, "chapters" or maybe "sections" would carry
their own subject categorization).  Alternatively, using the MSC, we
could have one file for each top-level category that implements Axiom's
standard functionality for that mathematical domain, with papers in
sub-topics addressing more specific areas.

This could also address a concern I have with the wiki and "open"
nature of Axiom - namely, that Axiom is not really a "peer-reviewed"
work.  Normal open source is "potentially" peer-reviewed, meaning
anybody can review it but it is not guaranteed that it has been
reviewed.  The wiki content, by design, is even less guaranteed to be
"correct" in any academically verified sense.  This could pose a
problem with being accepted for use in academic work - e.g. the lack of
authority behind the conceptual correctness of the work in question. 
The question arises: "was this functionality I'm using written by
someone who knows what he or she is doing, and has it been checked and
verified by someone who can recognize when it's working incorrectly?"
The "core" files in each subject will be Axiom's "branded"
functionality - the functionality that we as a project stand behind as
being high quality.  There are a few other ideas that need exploring
here, like systematic peer review of functionality, but I think it is
an important question that will bear on the acceptance of Axiom into
the academic community in the future.  Wikipedia faces the same issue:
how do you know that what you're reading was written by someone who
knows what they are doing, and doesn't just represent "common wisdom"?

> We should also note that we are not yet at a point where we can
> really start to work on the mathematics that we all love to do. We
> are struggling with the building process. So my 
> suggestion/convention is just to keep the build simple. If you come
> up with a better one, welcome.

I think we are moving in that direction.  Axiom is a complex system, so
there are limits to how simple the build can be, but I am very glad to
see a configure/make/make install routine arrive - I think that is a
very important first step.

> I think any convention is better than the mess we currently have.
> What I have seen in the Axiom build process is one-pamphlet-per-
> Makefile. That is very much JAVA-like. One would have ONE document
> that describes the whole build process, that is clear. But that
> document is a generated one (dvi/pdf/...) but not necessarily just
> ONE pamphlet.

As I understand it, the autotools will eventually get us close to that
goal.  I think we won't be able to reduce it beyond an autotools build
process triggering a Lisp-based build process, but that IMHO is the
best possible situation.  Maxima was brought to that point some time
ago and the system has been remarkably successful - I think it shows
"the way forward" for large free lisp programs in general.  Autotools
interfaces with the "normal" software development world (as well as
handling a score of practical points we don't want to worry about), and
lets us keep the "lisp and beyond" parts in Lisp, which has a number
of advantages from a development standpoint.  Autotools +
defsystem/asdf is a very powerful and flexible setup.
 
> One rule we already have: Everything should be a pamphlet.
> Next rule should be that there has to be a certain structure of the 
> pamphlet so that a number of the pamphlets can be used to produce one
> document (a book if you like) that describes the build, another set
> of pamphlets (that need not be disjoint from the first one) that
> describes the interpreter etc. (But the "certain" above is still
> unclear, at least I want that there is no \documentclass, \begin
> {document}, \end{document} anymore.--The \usepackage is a problem,
> I know and have not yet a good solution.)

I actually experimented with this some time ago: the actual content
lived in a "content-only" LaTeX file, and there were two types of
"top-level" files - one that pulled several content-only files into a
book, and another (a standard file shared by all pamphlets, which could
be altered on a per-file basis) that let the user print an individual
"paper".  IIRC there wasn't much interest in this at the time and I
abandoned the experiment.
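
Schematically, the idea was something like the following (these are not
the actual files, just a sketch of the shape):

  % units-content.tex - content only, no preamble
  \section{Dimensional analysis}
  Prose and code chunks go here ...

  % paper.tex - standard wrapper to print one pamphlet as a "paper"
  \documentclass{article}
  \usepackage{hyperref}  % plus whatever packages the content needs
  \begin{document}
  \input{units-content}
  \end{document}

  % book.tex - wrapper pulling several content-only files into a volume
  \documentclass{book}
  \usepackage{hyperref}
  \begin{document}
  \input{units-content}
  \input{dimensions-content}
  \end{document}

(The book wrapper having to know the union of packages used by the
content files it pulls in is exactly the \usepackage problem you
mention.)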

> If I use a category like "Ring" I am surely NOT going to "include" a 
> definition of "Ring" into my code. It is completely enough for me if
> the "Ring" that appears in my code+description is a hyperlink and
> leads me to the right place in the generated DVI/PDF/HTML.
> Especially if you think of the web as a big document, why would you
> want to include and reproduce all existing things again and again?

One example of presenting the same information in different ways is the
dimensions work.  In the units paper, I need to explain several
programming concepts, such as "polymorphism", in order to make sense of
how to express dimensional ideas in a computer algebra system.  Now, if
we make all of Axiom fully literate there will undoubtedly be a
"proper" explanation of polymorphism somewhere in the compiler part of
the software, but I'll want to explain it with respect to the units
code, because someone reading up on units is probably doing so for
scientific work in the physical sciences.  An explanation of
polymorphism aimed at a programmer might be less useful to a physical
scientist than one targeted at that particular audience.

However, I agree that in general a link to an explanation is preferable
- we should avoid duplicating information unless doing so provides a
real benefit to the end user.
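
In LaTeX terms that is cheap to do; a pamphlet that mentions Ring could
write something like the following (the label and URL are made-up
placeholders):

  % in the preamble
  \usepackage{hyperref}

  % within one combined document: jump to a label in the Ring pamphlet
  ... for any \hyperref[cat:Ring]{Ring} R ...

  % across documents, or to a web version: an ordinary URL also works
  ... for any \href{http://example.org/axiom/Ring}{Ring} R ...

The pamphlet that defines Ring would just carry a \label{cat:Ring} at
the definition, so the reader is one click away from it without the
definition being reproduced in every pamphlet that uses it.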
 
Cheers,
CY
