On June 20, 2005 7:35 PM Tim Daly wrote:

> ...
> some sort of browser-like capabilities are assumed.
Yes, most certainly. As usual, I find I agree with many things you say below, but disagree strongly on some specific points.

> the limitations we have now seem to be things like:
>
> * the syntax of the web page does not have a semantic model

I think this is not accurate. I might agree if you said this differently. For example, I do think that the conventional document semantics of HTML is too limited for our purposes. Fortunately there is a lot of very active research going on right now concerning how to extend HTML into a much richer language, or rather a set of application-specific related languages such as MathML, SVG, RDF and many more.

> we're trying to build a research science platform, not a
> display GUI object. we'd like the GUI piece of the system to
> have a clean, programmable semantics so we can reason about
> user actions. we want to reflect user actions (say handwriting,
> gestures, clicks, eye-gazes) and system state (say branch cut
> crossings, solid model data structure stresses, hill climbing
> search progress) as subtle changes to the screen.

I don't think any of these requirements are significantly different from those of any other sophisticated web-based application being built today. In fact I think all of them have already been used to a greater or lesser degree. The problem here, I think, is just that none of these "virtual reality" types of user interfaces have ever approached the degree of standardization that would make web franchising feasible. I mean: not enough people and projects have bought in to any one such approach.

> ...
> * the DOM model is hierarchical
>
> the DOM is a Document Object Model. it's basically a hierarchical
> data structure and it suffers from the same problems that databases
> used to suffer, that is, they are hierarchical.

A hierarchical data structure is obviously appropriate if the object you wish to model is hierarchical. Right?
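To make the point concrete, here is a minimal sketch in Python (purely for illustration; the markup fragment and helper function are my own invention, not anything from Axiom) of why the DOM is "basically a hierarchical data structure": parsing even a tiny fragment of markup yields a tree that must be walked recursively.

```python
# Illustration only: the DOM for any markup fragment is a tree,
# so extracting information from it is a recursive tree walk.
from xml.dom.minidom import parseString

doc = parseString(
    "<page><section><title>Axiom</title>"
    "<para>hierarchical by nature</para></section></page>"
)

def element_names(node):
    """Collect element tag names by a depth-first walk of the DOM tree."""
    names = []
    if node.nodeType == node.ELEMENT_NODE:
        names.append(node.tagName)
    for child in node.childNodes:
        names.extend(element_names(child))
    return names

print(element_names(doc))  # ['page', 'section', 'title', 'para']
```

This is exactly the right shape when the object being modelled (a document) really is hierarchical, which is the point of the question above.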
> Hierarchical databases ruled the day until new theory came around
> and the world went relational (I know most of you don't remember
> it but there was a HUGE fight about this. I attended a committee
> meeting about this at a database conference and the major objection
> was there would never be enough horsepower or memory to handle
> relational searches... beware the future).

Actually, I was probably at that same meeting. :) Funny that you should mention this example, because I think that, in reverse, it actually demonstrates your point and also lets me make a point I want to make.

The relational data model actually had *less* semantics than many of the database systems that were in common use at that time - both hierarchical and network. In this respect it was rather like XML today compared to early HTML and SGML. The compromise was less semantics for greater generality and mathematical rigour. At the time no one knew how to deal in a general and rigorous way with data structures more complex than relations. The lack of semantics is what made people worry that implementing such general operations would require too much horsepower, and for a while this was true. It is still very hard for a purely relational system to match the performance of a hierarchical database on something that is naturally hierarchical, for example a book or document.

But here we have examples of two strange principles at work: 1) less is more, i.e. a lack of semantics is a good thing if what you want to do is generalize and formalize something; and 2) worse is better: the fact that the relational model of data is now very widely accepted is a good thing in spite of the fact that it is worse at representing many important data objects. When we are thinking about a browser for the future Axiom, I think we need to keep these principles in mind.

> ...
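A small sketch of that trade-off (Python with its bundled sqlite3 module; the table, column names, and book structure are invented for illustration): the relational model represents a naturally hierarchical object such as a book only indirectly, through parent links, so recovering the hierarchy takes a recursive query that a hierarchical store gets for free.

```python
# Illustration only: a book's sections flattened into relational rows.
# The tree structure is implicit in the parent column and must be
# reassembled with a recursive query.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent INTEGER, title TEXT)")
db.executemany(
    "INSERT INTO node VALUES (?, ?, ?)",
    [
        (1, None, "Book"),
        (2, 1, "Chapter 1"),
        (3, 1, "Chapter 2"),
        (4, 2, "Section 1.1"),
    ],
)

# Reassemble the hierarchy: each node's depth is computed by
# recursively following parent links from the root.
rows = db.execute(
    """
    WITH RECURSIVE tree(id, title, depth) AS (
        SELECT id, title, 0 FROM node WHERE parent IS NULL
        UNION ALL
        SELECT n.id, n.title, t.depth + 1
        FROM node n JOIN tree t ON n.parent = t.id
    )
    SELECT title, depth FROM tree
    """
).fetchall()

for title, depth in sorted(rows, key=lambda r: r[1]):
    print("  " * depth + title)
```

The generality is the win: the same machinery handles any shape of data, at the cost of doing extra work for shapes (like documents) that a hierarchical system stores natively.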
> * the browser cannot interact with the filesystem
> * the browser pages cannot be drawn upon
> * the browser pages are "paper-like"
>
> the browser is a dumb tool at the moment. we need to
> break out of the mold, pick a particular browser, and make
> our own version of it. our version can be modified to do
> read/write of the file system, handle socket connections,
> present tabbed pages or sub-areas as an active canvas so
> Axiom can write graphics or text to them in real time,
> present a section of the screen as command-line I/O, show
> axiom state in special tabs, allow the browser to start
> axiom, let it speak lisp, etc.

Here is one place where I disagree strongly... :)

If there is one lesson that is really clear to me about open source and the Internet in general, it is that doing really big things requires collaboration, not co-optation (in the sense of the verb "to co-opt"). By that I mean we have to learn to depend on and take advantage of the work of others (other projects) rather than attempt to acquire and control them. I think this is a really hard lesson for a traditional programmer to learn. I always used to hate learning to use someone else's subroutine library, since other people never seemed to "think the way I did". And often there were good reasons related to efficiency, correctness, and trust that justified this view. But for me, open source, the Internet, and rapidly increasing and affordable computer power have completely changed my views about this.

> in short, we need to stop struggling with the limits of
> current browser technology, take a standard browser and
> extend it to our purposes. in fact, i expect we could do
> what we do with GCL: package our own version from a tgz file
> and add patches to do what axiom needs.

No, no, no! Other people much smarter and more dedicated than us are already doing almost everything of this sort that we need. What we need to do is exactly the opposite.
We need to learn to incorporate and utilize the features and extensions of current and future generations of browser technology. Axiom's requirements (along with those of many other sophisticated browser-based applications) can help to influence the evolution of browser technology, but Axiom should *not* attempt to co-opt and control it. This is a matter of design philosophy. Standards are a very important part of that philosophy. This is one of the reasons that worse (or at least less than ideal) is often better in the long run.

> in the 30 year horizon we need something that is useful,
> impressive, and reasonably modern. today's tools just hint
> at what will be common. we need to listen to the hints,
> anticipate the needs, and get out in front.

Again, I find myself wanting to label this as "inappropriate behaviour" for the Internet and open source. What you are describing fits quite well with the old corporate model that was largely defined by IBM and then taken to the extreme by Microsoft, but not with where IBM is today and not with the open source movement.

> we need to think about the researcher's problem space in
> a much deeper form than current systems do (including Axiom).
> in the long term we want to be at the center of the tool
> set that researchers use to solve problems.

To blend another metaphor (anyone remember Carlos Castaneda?), I think a much better attitude, more or less in keeping with today's "extreme programming", is the "warrior's attitude". By design we should try to make Axiom flexible and adaptable, able to take advantage of new browser technology as it becomes available, even though today we don't know exactly what that might be. Again, *standards* are one of the best ways to achieve that. If we can do this, I think we have a good chance of being at least one of the players at the center of the tool set. I think proprietary systems will have a much harder time getting into and staying in this position.
> research is long-term, detail-tedious, and takes a lot
> of work to build up a big picture. we want to be able to
> capture problem state, suggest relevant papers, perform
> proofs in the background, do speculative computations for
> possible suggestions, pre-generate literate pamphlets with
> references and code, etc. we want to draw a wide range of
> tools together (math language, graphics, 3D models
> (organic, engineering, etc), full-text searches,
> collaborative tools, etc).

Yes, about this I agree completely. And from my point of view this is almost exactly the same goal as much of the leading-edge research and development on the web today.

> related to the current suggestions i think we are limiting
> ourselves too much and creating too tight a straitjacket
> by trying to work within the limits of current browsers and
> MMA-like worksheets.

I agree that we have to push those limits, but I think we are in a very good position to do that from within the environment of open source and advanced web application development in general.

> choose a browser, get the source, add it to axiom, and
> extend it in various ways so we can experiment with ideas.

Again: no, no, no. :)

> just making it possible for the browser to read/write the
> filesystem and present a "canvas" area to axiom puts us
> far ahead of the world. it's not ideal but it works.

But here: yes, yes, yes! "It's not ideal but it works" is exactly how I want to characterize the choice of standard web browsers on which to base the new (and future) Axiom user interface.

-------

Thanks for your message and thanks for opening this debate, Tim. Although we often both agree and disagree on many serious points, I am glad that we have been able to continue to work together on Axiom, and I fully expect this arrangement to continue. I know you don't think these points of disagreement are a bad thing, and I don't want others to think so either. I hope other people will feel free to share their opinions and reasoning.
Cheers,
Bill Page.

_______________________________________________
Axiom-developer mailing list
Axiom-developer@nongnu.org
http://lists.nongnu.org/mailman/listinfo/axiom-developer