* Stefano Mazzocchi <[EMAIL PROTECTED]> [2003-10-12 16:04]:

> On Sunday, Oct 12, 2003, at 16:13 Europe/Rome, Alan Gutierrez wrote:
> 
> >The trouble with Wiki and docs is that new users, such as myself,
> >are going to look for a documentation outline. A good TOC and index
> >make all the difference in the world when searching documentation.
> 
> eheh, right on.
> 
> >Has anyone discussed how to impose an outline on a Wiki?
> 
> yes. there are some proposals on the table, ranging from simple to 
> futuristic.

Did you just bang this out? Thanks for responding to my question so
thoroughly.

>                                 - o -
> 
> the simple one is a manually created single-dereferencing linkmap.
> 
> Imagine that you have a repository with the following learning objects:
> 
>  /1
>  /2
>  /3
>  /4
> 
> which are edited and created individually. Then you have a linkmap that 
> basically says
> 
>  Trail "A"
>    /1
>   Section "Whatever"
>     /3
>     /4
>   Section "Somethign else"
>     /2
> 
>  Trail "B"
>    /4
>    /1
> 
> Trails are like "books", and they might share LOs. Trails might be 
> published as a single PDF file for easier offline review.
> 
> Trails can be used as "tabs" in the forrest view, while the rest is the 
> navbar on the side.
> 
> the LO identifier (http://cocoon.apache.org/LO/4) can be translated to 
> a real locator (http://cocoon.apache.org/cocoon/2.1/A/introduction) and 
> all the links rewritten accordingly.
> 
> This link translation is a mechanical lookup, based on the linkmap 
> information.
> 
> [note: this is getting closer to what topic maps are about! see 
> topicmaps.org, even if, IMO, it wouldn't make sense to use topic maps 
> for this because they are much more complex]

An outline is a classic tool for organizing a document. This is the
aspect of content management that I find interesting: creating a
mechanism for organization by an information architect. Searching is
well and good, but I've learned more from the "Cocoon Developer's
Handbook" in two hours than I've learned from the Cocoon Wiki in a
week.

Make this tedious task of link map creation simpler and you will
have an exciting solution for myriad business problems.

Creating the link map would be easier if the learning object were a
prescribed object, rather than a found object, the way things tend
to be in the semantic web. Gathering up all the micro topics of a
Wiki has got to be akin to herding cats. You might connect to a
two-sentence learning object on the topic map, only to have it
change meaning as people contribute to the object.

To my mind it is best to have a go-between. A documentation Wiki
really ought to be documentation tasks that are parceled out to the
community. There is a ready context for a contributor's submission:
a topic map, or outline, that specifies topics. Sections are given
over to a technical writer, who acts as a maintainer. Wiki
contributions are attached to a topic in the outline; the maintainer
pulls relevant discussion up into the topic, editing it for
consistent style.

This is how good open source documentation is created anyway.


>                                     - o -
> 
> The approach above works but it requires two operations:
> 
>  1) creation of the LO
>  2) connection of the LO in the linkmap

Or those operations could be inverted and iterative.

>                                     - o -

> In the more futuristic approach, each learning object specifies two 
> things:
> 
>  1) what it expects the reader to know
>  2) what it expects to give to the reader
> 
> Suppose that you have a learning object library of
> 
>                   /1 -(teaches)-> [a]
>                   /2 -(teaches)-> [a]
>  [a] <-(expects)- /3 -(teaches)-> [b]
>  [a] <-(expects)- /4 -(teaches)-> [c]

This to me is a parallel course to an outline. In many of the
cookbook-style software texts I read, there is a "see also" section
in each recipe. The above would not replace an outline, but the
absence of the above shows a neglect of the medium of hypertext.
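
If the expects/teaches declarations were in place, I imagine a trail
could even be assembled mechanically, prerequisites first. A toy
sketch (the graph mirrors the diagram above; the ordering code is
mine and purely illustrative):

  # Order learning objects so that whatever a page expects has
  # already been taught before the page appears in the trail.
  teaches = {"/1": {"a"}, "/2": {"a"}, "/3": {"b"}, "/4": {"c"}}
  expects = {"/1": set(), "/2": set(), "/3": {"a"}, "/4": {"a"}}

  def build_trail(goal_topics):
      trail, known = [], set()
      def teach(topic):
          if topic in known:
              return
          # pick the first page that teaches the topic (a real system
          # would let a human choose among candidates)
          page = next(p for p, t in teaches.items() if topic in t)
          for prereq in expects[page]:
              teach(prereq)
          trail.append(page)
          known.update(teaches[page])
      for topic in goal_topics:
          teach(topic)
      return trail

  print(build_trail(["b", "c"]))   # ['/1', '/3', '/4'] for this library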

>                               - o -
> 
> If each editor comes up with his own identifiers, we might have two 
> issues:
> 
>  1) identifiers are so precise that there is no match in between 
> documents written by different people
> 
>  2) identifiers are so broad in spectrum that concepts overlap and 
> dependencies blur
> 
> So we must find a way to come up with a system that helps us in the 
> discovery, creation and correctness maintenance of a taxonomy, but 
> without removing or overlapping human judgment.

Discovery, creation and correctness maintenance. I'm repeating this
because they sound like the overarching goals of content management.
I like this because it implies that a content management system aids
decision making, rather than making decision making obsolete.

>                               - o -
> 
> Here is a scenario of use (let's focus on text so, LO = page)
> 
>  a) you edit a page.
> 
>  b) you highlight the words in the page that you believe are important 
> to identify the knowledge distilled in the page that is transmitted to 
> the reader.
> 
>  c) you also highlight the words in the page that you believe identify 
> the cognitive context that is required by the user to understand this 
> page
> 
>  [this can be done very easily in the WYSIWYG editor, using different 
> colors]
> 
>  d) when you are done, you submit the page into the system.
> 
>  e) the page gets processed against the existing repository.
> 
>  f) the system suggests the topics that might be closest to your 
> page, and you select the ones you think fit best, or you introduce 
> a new one in case nothing fits.
> 
> point e) seems critical to the overall smartness of the system, but I 
> don't think it is, since semantic estimation is done by humans at point 
> f); point e) just has to be smart enough to remove all the pages that 
> have nothing at all to do with the current learning object.
> 
> Possible implementations are:
> 
>  - document vector space euclidean distance (used by most search 
> engines, including google and lucene)
>  - latent semantic distance (this one is patented, but the patent will 
> expire in a few years; used for spam filtering by Mail.app in 
> MacOSX, and by the Microsoft Office help system and the assistant).

Democracy is another useful algorithm. (Although, I'm giddy thinking
about the use of Google/Lucene not only to extract but to inject
content into a Web.) Moderation is one approach.

Prior to a) there is the contributor's interaction with the system
that prompted the submission. The submission is not entirely
without context, so perhaps it is enough to inject the submission
at the point where the author got on board, just as I am injecting
my content into this thread here, and let the community move the
content around. That is, provide community tools for moderation
and classification. Heck, invite the community to highlight what
they feel are keywords as they read the document, and feed that to
Lucene.
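
For what it is worth, here is the kind of crude filter I picture for
the step where the page gets processed against the existing
repository, using a simple bag-of-words overlap as a stand-in for
the vector-space distance mentioned above. The sample repository and
the threshold are made up:

  import math
  from collections import Counter

  repository = {
      "/1": "sitemap pipeline matcher generator",
      "/2": "pipeline transformer serializer",
      "/3": "flowscript continuation sendPage",
  }

  def cosine(a, b):
      dot = sum(a[t] * b[t] for t in a)
      norm = (math.sqrt(sum(v * v for v in a.values()))
              * math.sqrt(sum(v * v for v in b.values())))
      return dot / norm if norm else 0.0

  def suggest(submission, threshold=0.1):
      """Rank pages by vocabulary overlap; a human still picks in the
      last step, this only weeds out the clearly unrelated pages."""
      words = Counter(submission.lower().split())
      scored = [(cosine(words, Counter(text.split())), page)
                for page, text in repository.items()]
      return sorted((s, p) for s, p in scored if s >= threshold)[::-1]

  print(suggest("a generator feeds the pipeline"))   # /1 first, then /2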

>                                - o -

> The above model paints a double-dereference hypertext, a sort of 
> "polymorphic" hypertext where links are made to "abstract concepts" 
> that are dereferenced against resources implicitly.

Help. Double-dereferenced? Polymorphic? Maybe I don't understand the
meaning of the word dereference in this context.

> In fact, navigation between learning objects can be:
> 
>  1) hand written
>  2) inferred from the contextual dependencies
>  3) inferred from the usage patterns

For 3) keep in mind that one can trace a visitor's path through the
documentation. These proximities can be fed to a search engine,
perhaps with feedback from the visitor. A link that says "I found
it!" and/or a "Did you find this useful?" menu. (This is all being
done somewhere, I am sure.)
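
A sketch of what I mean by feeding the trail back in: count which
page visitors open next and surface the most common successors as
suggested links. The sample sessions are invented:

  from collections import Counter, defaultdict

  sessions = [
      ["/1", "/3", "/4"],
      ["/1", "/4"],
      ["/2", "/1", "/3"],
  ]

  # successors[page] counts which page was viewed immediately after it
  successors = defaultdict(Counter)
  for pages in sessions:
      for here, there in zip(pages, pages[1:]):
          successors[here][there] += 1

  def suggested_links(page, n=2):
      """Most common next pages: a cheap, usage-driven 'see also'."""
      return [p for p, _ in successors[page].most_common(n)]

  print(suggested_links("/1"))   # ['/3', '/4'] for these sample sessions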

> I believe that a system that is able to implement all of the above 
> would prove to be a new kind of hypertext, much closer to the original 
> Xanadu vision that Ted Nelson outlined in the '60s when he coined the 
> term hypertext.

Lofty!

> I don't think we have to jump all the way up here, there are smaller 
> steps that we can do to improve what we have, but this is where I want 
> to go.

I am very interested in creating the tools to assist in the creation
of the outline. I'd like to look at the problem from the perspective
of a librarian (which I am not) or an editor.

I am interested in managing the content of a single project, rather
than managing the content of an enormous organization, or the
entire web. I want a document management system that gears itself
towards the publication of a document, with TOC, index, and chapters.
Collective Word Processing?

-- 
Alan Gutierrez - [EMAIL PROTECTED] - C:504.301.8807 - O:504.948.9237
