Re: [Wikitech-l] Simple Page Object model using #lst

2011-01-25 Thread Brion Vibber
On Tue, Jan 25, 2011 at 10:27 AM, Platonides  wrote:

> Had LST used <section name=x>content</section> to mark sections,
> instead of <section begin=x/> content <section end=x/>, it would be
> as easy as traversing the preprocessor output, which would already
> have the sections split.
>

It was done this way in order to allow overlapping sections: LST was
created so that arbitrary parts of a document on Wikisource can be quoted
while retaining a direct link to the original document as it continues to
be edited.

Basically, the section markers are permanent markers for the source of a
copy-and-paste operation. One person might be copying from paragraph 1 to
paragraph 4; another might copy from paragraph 3 to paragraph 5; your page
structure looks like this:

  [page]
[section-open 1/]
[para 1/] 
[para 2/] 
[section-open 2/]
[para 3/] 
[para 4/] 
[section-close 1/]
[para 5/] 
[section-close 2/]
  [/page]
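
For reference, in LST's wikitext syntax (standalone begin/end markers;
the section names here are illustrative) that structure would look
roughly like:

  <section begin=1 />
  Paragraph one.
  Paragraph two.
  <section begin=2 />
  Paragraph three.
  Paragraph four.
  <section end=1 />
  Paragraph five.
  <section end=2 />

and a quoting page would transclude section 1 with {{#lst:Page name|1}}.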

Since the LST sections overlap, they don't really fit well in the
hierarchical structures that the preprocessor deals in except as standalone
start/end markers.

*BUT* ... it's probably possible to redo things to use the above
structure in a sensible way, instead of doing text regexes:

  iterate through the node tree:
    if found desired section start node:
      start saving our spot
    if found desired section end node:
      if start node was at same level:
        grab everything in between
        RETURN that to upstream parser
      else:
        find the closest common parent node of start and end
        build a node tree that has the parts of the start's parent
          before the start trimmed, and the parts of the end's parent
          after the end trimmed
        RETURN that to upstream parser
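
As a rough Python sketch of that walk (hypothetical node classes and
helpers, not the real MediaWiki preprocessor API; assumes well-formed
begin/end pairs with the begin marker appearing first):

  # Hypothetical node classes standing in for the preprocessor's
  # parse tree; the real preprocessor objects are different.
  class Node:
      def __init__(self, children=None):
          self.children = list(children or [])

  class SectionStart(Node):
      def __init__(self, name):
          super().__init__()
          self.name = name

  class SectionEnd(Node):
      def __init__(self, name):
          super().__init__()
          self.name = name

  def find_path(node, pred):
      # Depth-first search; returns the root-to-match node list, or None.
      if pred(node):
          return [node]
      for child in node.children:
          sub = find_path(child, pred)
          if sub:
              return [node] + sub
      return None

  def extract_section(root, name):
      start = find_path(root, lambda n: isinstance(n, SectionStart)
                        and n.name == name)
      end = find_path(root, lambda n: isinstance(n, SectionEnd)
                      and n.name == name)
      if not (start and end):
          return None
      if start[-2] is end[-2]:
          # Same level: grab everything between the two markers.
          sibs = start[-2].children
          return Node(sibs[sibs.index(start[-1]) + 1:sibs.index(end[-1])])
      # Different levels: find the closest common parent, then keep the
      # trimmed start branch, the middle siblings, and the trimmed end
      # branch.
      d = 0
      while start[d + 1] is end[d + 1]:
          d += 1
      common = start[d]
      kids = common.children
      i, j = kids.index(start[d + 1]), kids.index(end[d + 1])
      return Node([keep_after(start, d + 1)] + kids[i + 1:j]
                  + [keep_before(end, d + 1)])

  def keep_after(path, d):
      # Copy path[d], dropping everything before the marker at path[-1].
      node = path[d]
      if d == len(path) - 1:
          return Node()  # the marker node itself contributes nothing
      idx = node.children.index(path[d + 1])
      return Node([keep_after(path, d + 1)] + node.children[idx + 1:])

  def keep_before(path, d):
      # Copy path[d], dropping everything after the marker at path[-1].
      node = path[d]
      if d == len(path) - 1:
          return Node()
      idx = node.children.index(path[d + 1])
      return Node(node.children[:idx] + [keep_before(path, d + 1)])

Flattening the returned subtree back to wikitext would then be the
upstream parser's job.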

One could also pull the markers out of the original text and store them as
separate metadata in some way, which seems to be part of the suggestions
earlier in the thread. The main problem here is that we could easily end up
losing track of the markers during editing; we have no persistent identity
for pieces of text, so if there's not a visible node in there for editors
to move and copy along with their alterations, the markers may not persist
automatically.

-- brion


Re: [Wikitech-l] Simple Page Object model using #lst

2011-01-25 Thread Platonides
Had LST used <section name=x>content</section> to mark sections, instead
of <section begin=x/> content <section end=x/>, it would be as easy as
traversing the preprocessor output, which would already have the sections
split.

Alex Brollo wrote:
> [...]
> Time to do 1,000 cycles: more or less 3 seconds, on a far-from-powerful
> PC. :-)
> Fast, in my opinion!
>
> So it can be done, and it runs efficiently too, doesn't it?
>
> Alex

It can obviously be done. But you should compare it against the original
implementation; 3 seconds by itself isn't meaningful.
Another thing to test would be using stripos() instead of those regexes,
in case it is faster.




Re: [Wikitech-l] Simple Page Object model using #lst

2011-01-25 Thread Alex Brollo
2011/1/25 Alex Brollo 

Just to test the effectiveness of such a strange idea, I added some formal
section tags into a 6 KB text file, section.txt; then I wrote a simple
script to create a "data area". This is the result (a Python dictionary
inside an HTML comment) appended to the section.txt file:

  <!-- {"1": [(152, 990), (1282, 2406), (4078, 4478)], ...} -->

Then I ran these lines from Python IDLE:

>>> for i in range(1000):
    f = open("section.txt").read()
    indici = eval(find_stringa(f, "<!--", "-->"))
    t = ""
    for j in indici["1"]:
        t += f[j[0]:j[1]]

As you can see, for each of the 1,000 iterations the code:
- opens the file and loads it;
- selects the "data area" (find_stringa is a personal string-search
  helper to extract strings; a sketch of it follows below) and converts
  it into a dictionary;
- retrieves all the text inside the multiple sections named "1" (the
  worst case in the list: section 1 has three instances: [(152, 990),
  (1282, 2406), (4078, 4478)]).
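
(find_stringa itself isn't shown in the thread; a minimal sketch of such
a helper, assuming it simply returns the text between two delimiter
strings, could be:)

  def find_stringa(text, start_delim, end_delim):
      # Assumed behaviour: return the substring between the first
      # occurrence of start_delim and the next end_delim, else "".
      i = text.find(start_delim)
      if i == -1:
          return ""
      i += len(start_delim)
      j = text.find(end_delim, i)
      if j == -1:
          return ""
      return text[i:j]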

Time to do 1,000 cycles: more or less 3 seconds, on a far-from-powerful
PC. :-)
Fast, in my opinion!

So it can be done, and it runs efficiently too, doesn't it?

Alex


Re: [Wikitech-l] Simple Page Object model using #lst

2011-01-25 Thread Alex Brollo
2011/1/25 Jesse (Pathoschild) 

> On Tue, Jan 25, 2011 at 8:14 AM, Alex Brollo wrote:
> > If this were to happen, I imagine that the original page could be
> > considered an "object", i.e. a collection of "attributes" (fragments
> > of text) and "methods" (template chunks).
>
> Labeled Section Transclusion can be used this way, but it's not very
> efficient for this. Internally it uses generated regular expressions
> to extract sections; you can peek at its source code at
> <http://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/LabeledSectionTransclusion/lst.php?view=markup>.
>

Thanks, but I'm far from understanding such PHP code, nor do I have any
idea about the "whole exotic thing" of wiki-code parsing and HTML
generation. But if I were to write something like #lst, I'd index the
text using the section tags simply as delimiters, building something
hidden like this into the wiki code or into another field of the
database:

  <!-- s1:offset,length s2:offset,length s3:offset,length -->

where s1, s2, s3 are the section names, each followed by the offset and
length of the text between its section tags within the wiki page
"string"; or something similar to this, built to be extremely simple and
fast to parse and to give back substrings of the page in the most
efficient way. Such data would only need to be recalculated when the
page content changes. I guess the efficiency of sections would increase
a lot, encouraging a larger use of #lst.

If such parsing of section text were the first step of page parsing,
even segments of text delimited by noinclude tags could be retrieved.
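
A rough Python sketch of that scheme (names and marker handling are
illustrative, not an actual MediaWiki patch): scan the wikitext once at
save time to build a name -> offsets index, then serve section requests
by plain string slicing.

  import re

  # Matches standalone LST markers like <section begin=s1 /> and
  # <section end=s1 /> (quoted names are accepted too).
  MARKER = re.compile(r'<section (begin|end)="?([^"\s/>]+)"?\s*/>')

  def build_index(wikitext):
      # Run once per edit: map each section name to its (start, end)
      # offset pairs.
      index, open_at = {}, {}
      for m in MARKER.finditer(wikitext):
          kind, name = m.group(1), m.group(2)
          if kind == "begin":
              open_at[name] = m.end()
          elif name in open_at:
              index.setdefault(name, []).append(
                  (open_at.pop(name), m.start()))
      return index

  def get_section(wikitext, index, name):
      # Run on every read: pure string slicing, no regex work.
      return "".join(wikitext[a:b] for a, b in index.get(name, []))

The index would live alongside the page (the hidden comment, or a
separate database field, as suggested) and be rebuilt only on save.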

Alex


Re: [Wikitech-l] Simple Page Object model using #lst

2011-01-25 Thread Jesse (Pathoschild)
On Tue, Jan 25, 2011 at 8:14 AM, Alex Brollo wrote:
> If this were to happen, I imagine that the original page could be
> considered an "object", i.e. a collection of "attributes" (fragments of
> text) and "methods" (template chunks).

Labeled Section Transclusion can be used this way, but it's not very
efficient for this. Internally it uses generated regular expressions to
extract sections; you can peek at its source code at
<http://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/LabeledSectionTransclusion/lst.php?view=markup>.
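
For a feel of the approach, regex-based extraction looks roughly like
this in Python (an illustration of the technique, not the extension's
exact pattern):

  import re

  def get_section_by_regex(wikitext, name):
      # Build a pattern for the requested section and scan the whole
      # page text with it on every call.
      pat = re.compile(r'<section begin="?%s"?\s*/>(.*?)'
                       r'<section end="?%s"?\s*/>'
                       % (re.escape(name), re.escape(name)), re.S)
      return "".join(m.group(1) for m in pat.finditer(wikitext))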

--
Yours cordially,
Jesse (Pathoschild)
