> I guess the literate idea even says that it does not matter what a file
> is called. It is most important that you write a paper from which you
> can generate all the code (even different files from one pamphlet
> source). That sounds nice, but in some sense I find that very difficult
> to maintain. For ALLPROSE I set the convention that every file is a
> .nw file.
> I suggest we adopt that convention for Axiom (replace .nw by .pamphlet).

Ralf,

In fact the system direction is the exact opposite, at least in the parts I touch. C (and, even more egregiously, Java) adopts the file-per-function kind of idea. If you look back in history (or lived it, as I have), you'll see that the tiny-file idea is based on computer restrictions. Memory was small, the compiler couldn't handle large files, and the loader couldn't load them. So everyone used small files (or, in my case, short paper tapes).

You see this legacy everywhere. It is still possible to write memory overlays in linkers so that when one function completes it can be replaced in memory by another link section, which allows a large program to run in small physical memory. This "feature" is still supported by the Intel chip with segmented memory (which is rarely used but still causes pain). Limits of storage, compiler technology, and imagination led to the idea of splitting out header files as a way to expose signatures; now we live in header-file hell. "Libraries" were a way to collect all the little pieces so the linker could connect them, and now we have DLL hell and lib* hell. Indeed, we are forced to use IDEs because the tiny files have overwhelmed our ability to cope. The #line directive is a hack that lets a runtime figure out which grain-of-sand file contains a particular function. It, too, is a legacy of history.

I am building a program for work that lives in one literate file, has 30,000 lines of Lisp code, 2000+ pages of final PDF documentation (so far), and 7000 test cases. It is never ambiguous which file contains the failing function. Computer programs have nothing to do with their file storage, but we have linked these ideas and suffered for it.

Suppose we follow the past and make Axiom "include" files that split the signature information of a domain or category out into separate files, and then add an "include" statement. This way lies madness. Consider what happens if you scale Axiom by a factor of 100 over the next 30 years onto a petamachine: you end up with 110,000 domains and categories and roughly 1,100,000 functions. I don't want 110K files. I want 110 books.

I have a library that must include 20 books on group theory. They each have a section on permutation groups. It would be more "efficient" if there were a standardized "permutation group" booklet that everyone referenced as "included" in their text; that way there would be no duplication. But that misses the point of literate works. The key point is not that I talk about permutation groups, but that I talk about them in a certain way, in a certain place, with a certain style, for a certain audience. The fact that the information is available elsewhere means nothing to the human. And we're trying to write for humans, not for machines.

I want to send you one book that you can just sit and read. Not read by a computer, but sit and read for educational purposes. And, as a side effect, you can also compile and run it. The running program illustrates the ideas in a way that lets you build on them. You can modify them, extend them, write about them, distribute them, and solve problems with them. You can build on other people's work.

We need to lift our eyes back to the humans, away from the technology. Communicate, don't program.

Tim
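
For readers unfamiliar with the pamphlet mechanics Ralf refers to, here is a
minimal sketch of how one noweb-style source can generate several files. The
file names, chunk names, and Lisp contents are hypothetical, invented for
illustration rather than taken from Axiom's actual sources.

----- example.pamphlet : one literate source, two generated files -----
\documentclass{article}
\begin{document}
\section{Greeting}
The greeting lives in its own chunk, named after the file it generates.
<<hello.lisp>>=
(defun hello ()
  "Print a greeting; this code is tangled out of the pamphlet."
  (format t "hello from the pamphlet~%"))
@

\section{Driver}
The driver loads the greeting and calls it.
<<driver.lisp>>=
(load "hello.lisp")
(hello)
@
\end{document}
----- end of example.pamphlet -----

Tangling the code and weaving the paper:

notangle -Rhello.lisp  example.pamphlet > hello.lisp
notangle -Rdriver.lisp example.pamphlet > driver.lisp
noweave -delay example.pamphlet > example.tex

One source, one document to read; the runnable files are a side effect. (For C
code, notangle's -L option emits #line directives that point back at the
pamphlet source, so even that legacy hack can be made to serve the literate
document.)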