Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Alex Brollo
2011/1/4 Brion Vibber :

>
>
> Indeed, Google Docs has an optimized editing UI for Android and iOS
> that focuses precisely on making it easy to make a quick change to a
> paragraph in a document or a cell in a spreadsheet (with concurrent
> editing).
>
>
> http://www.intomobile.com/2010/11/17/mobile-edit-google-docs-android-iphone-ipad/
>


A little bit of OT: try the new vector image editor in Google Docs; it
exports images to SVG format, and I found it excellent for building such
images and uploading them to Commons.


Now a "free-roaming thought" about templates, just to share an exotic idea.
The main issue with template syntax is the casual, free, unpredictable
mixture of attributes and content in template parameters. It's necessary,
IMHO, to convert them into somewhat "well-formed structures" so that content
can be pulled out of the template code. This abstract structure could be the
following:
{{template name begin|param1|param2|...}}
  {{optional content 1 begin}}
   text 1
  {{optional content 1 end}}
  {{optional content 2 begin}}
   text 2
  {{optional content 2 end}}
   ...
{{template name end}}
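A hedged sketch of what such a structure would buy: because every content block is bracketed by explicit begin/end markers, a tool could pull the text out with a simple scanner, without evaluating the template itself. The marker names below are taken from the example above and are illustrative, not existing MediaWiki syntax.

```python
import re

# Matches "{{optional content N begin}} ... {{optional content N end}}"
# pairs, as in the abstract structure sketched above.
BLOCK = re.compile(
    r"\{\{optional content (\d+) begin\}\}(.*?)\{\{optional content \1 end\}\}",
    re.DOTALL,
)

def extract_content(wikitext):
    """Return {block_number: text} for every begin/end delimited block."""
    return {int(n): body.strip() for n, body in BLOCK.findall(wikitext)}

sample = """{{template name begin|param1|param2}}
  {{optional content 1 begin}}
   text 1
  {{optional content 1 end}}
  {{optional content 2 begin}}
   text 2
  {{optional content 2 end}}
{{template name end}}"""

print(extract_content(sample))
```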

Alex
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Dmitriy Sintsov
* Brion Vibber  [Tue, 4 Jan 2011 13:39:28 -0800]:
> On Tue, Jan 4, 2011 at 1:27 PM, Dirk Riehle  wrote:
> Wikis started out as *very* lightly formatted plaintext. The point was
> to be fast and easy -- in the context of web browsers which only
> offered plaintext editing, lightweight markup for bold/italics and a
> standard convention for link naming was about as close as you could get
> to WYSIWYG / WYSIWYM.
>
It is still faster to type a link address in square brackets than to click
an "add link" icon and then type the link name or select it from a
drop-down list. Even '' is a bit faster than Ctrl+I (italics via the mouse
would be even slower than that).

> As browsers have modernised and now offer pretty decent rich-text
> editing in native HTML, web apps can actually make use of that to
> provide formatting & embedding of images and other structural elements.
> In this context, why should we spend more than 10 seconds thinking
> about how to devise a syntax for links or tables? We already have a
> perfectly good language for this stuff, which is machine-parseable:
> HTML. (Serialize it as XML to make it even more machine-friendly!)
>
> If the web browsers of 1995 had had native HTML editing, I rather
> suspect there would never have been series-of-single-quotes to
> represent italics and bold...
>
Native HTML is usually a horrible bloat of tags, attributes and CSS
styles -- not really a well-readable or easily processable thing. Even XML
processed via XSLT would be much more compact and more readable. HTML is
poor at separating semantics from presentation. HTML also invites the page
editor to abuse all of these features, while wikitext encourages the editor
to concentrate on the quality of the content.
Let's hope that wikitext won't be completely abandoned in MediaWiki 2.0.
Dmitriy



Re: [Wikitech-l] SpecialPages and Related users and titles

2011-01-04 Thread Tim Starling
On 04/01/11 12:12, Ilmari Karonen wrote:
> On 01/03/2011 11:09 PM, Aryeh Gregor wrote:
>> On Mon, Jan 3, 2011 at 12:23 AM, MZMcBride  wrote:
>>> It looks like a nice usability fix. :-)  (Now to get Special:MovePage turned
>>> into ?action=move)
>>
>> I'd do the opposite -- stop using actions other than view, and move
>> everything to special pages.  (Of course we'd still support the
>> old-style URLs forever for compat, just not generate them anywhere.)
>> The set of things that are done by actions is small, fixed, and
>> incoherent: edit, history, delete, protect, watch; but not move,
>> undelete, export, logs, related changes.  The distinction is
>> historical -- I assume most or all of the action-based ones came about
>> before we had such a thing as special pages.  It would be cleaner if
>> we only had special pages for doing non-view actions.
> 
> I think we've had both actions and special pages from the beginning 
> (well, since r403 at least).  I suspect it's just that adding new 
> special pages was easier than adding new actions, so people tended to 
> pick the path of least resistance.

Special pages are nice because they have a base class with lots of
features. They are flexible and easy to add.

The main problem with actions is that most of them are implemented in
that horror that is Article.php.

We could have a class hierarchy for actions similar to the one we have
for special pages, with a useful base class and some generic
internationalisation features. Each action would get its own file, in
includes/actions. This would involve breaking up and refactoring
Article.php, which I think everyone agrees is necessary anyway.

The reason actions exist, distinct from special pages, is that it was
imagined that it would be useful to have a class hierarchy for
Article, along the lines of its child class ImagePage. Actions were
originally implemented with code like:

if ( $namespace == NS_IMAGE ) {
    $wgArticle = new ImagePage( $wgTitle );
} else {
    $wgArticle = new Article( $wgTitle );
}
$wgArticle->$action();

CategoryPage and the ArticleFromTitle hook were later added, extending
this abstraction. An object-oriented breakup of action UIs would need
code along the lines of:

$wgArticle = MediaWiki::articleFromTitle( $wgTitle );
$actionObject = $wgArticle->getAction( $action );
$actionObject->execute();

That is, a factory function in the Article subclass would create a UI
object. Each action could have its own base class, say
ImagePageViewAction inheriting from ViewAction. It's possible to have
the same level of abstraction with special pages:

class SpecialEdit {
    function execute( $subpage ) {
        $article = MediaWiki::articleFromTitle( $subpage );
        $ui = $article->getEditUI();
        $ui->edit();
    }
}

So it could be architecturally similar either way, plus or minus a bit
of boilerplate. But special pages wouldn't automatically be
specialised by article type, so code common to all article types may
end up in the special page class. This could be a loss of flexibility,
especially for extensions that use the ArticleFromTitle hook.

I agree that special page subpages are a nice way to implement
actions, at least from the user's point of view. The URLs are pretty
and can be internationalised. Drupal has a similar URL scheme, and it
works for them.

However, in MediaWiki, the use of special subpages makes the handling
of long titles somewhat awkward. Many database fields for titles have
a maximum length of 255 bytes, and this limit is exposed to the user.
To allow titles approaching 255 bytes to be moved etc., there is a
hack in Title.php which lifts the length limit for NS_SPECIAL only.
This means that the names of special subpages cannot, in general, be
stored in title fields in the database. This has rarely been a problem
so far, but if we move to using special subpages exclusively, we may
see a few bugs filed along these lines.

Of course, an action URL can't be stored as a title either, so it's
not much of a point in their favour.
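The length problem described above can be sketched as follows, assuming the usual 255-byte title field; the function and names are illustrative, not actual MediaWiki code:

```python
# page_title columns are typically VARCHAR(255), measured in bytes, so
# wrapping an already-long title into a special subpage can overflow it.
MAX_TITLE_BYTES = 255

def fits_in_title_field(title):
    """Check whether a title fits in a 255-byte database field."""
    return len(title.encode("utf-8")) <= MAX_TITLE_BYTES

article = "x" * 250                # a legal article title near the limit
subpage = "MovePage/" + article    # the same title as a special subpage

print(fits_in_title_field(article))  # True: 250 bytes
print(fits_in_title_field(subpage))  # False: 259 bytes
```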

Just some thoughts for discussion.

(I know Aryeh makes up his mind about things like this rather faster
than I do; I look forward to his reply which will no doubt tell me all
the reasons why he's not changing his position.)

-- Tim Starling




Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Erik Moeller
2011/1/4 Brion Vibber :
> A good document structure would allow useful editing for both simple
> paragraphs and complex features like tables and templates even on such
> primitive devices, by giving a dedicated editing interface the information
> it needs to address individual paragraphs, template parameters, table cells,
> etc.

Indeed, Google Docs has an optimized editing UI for Android and iOS
that focuses precisely on making it easy to make a quick change to a
paragraph in a document or a cell in a spreadsheet (with concurrent
editing).

http://www.intomobile.com/2010/11/17/mobile-edit-google-docs-android-iphone-ipad/

-- 
Erik Möller
Deputy Director, Wikimedia Foundation

Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate



Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Rob Lanphier
On Tue, Jan 4, 2011 at 1:39 PM, Brion Vibber  wrote:
> Exactly my point -- spending time tinkering with
> sortof-human-readable-but-not-powerful-enough syntax distracts from thinking
> about what needs to be *described* in the data... which is the important
> thing needed when devising an actual storage or interchange format.

Below is an outline, which I've also posted to mediawiki.org[1] for
further iteration.  There are a lot of different moving parts, and I
think one thing that's been difficult about this conversation is that
different people are interested in different parts.  I know a lot of
people on this list are already overwhelmed or just sick of this
conversation, so maybe if some of us break off in an on-wiki
discussion, we might actually be able to make some progress without
driving everyone else nuts.  Optimistically, we might make some
progress, but the worst case scenario is that we'll at least have
documented many of the issues so that we don't have to start from zero
the next time the topic comes up.

Here are the pieces of the conversation that I'm seeing:
1.  Goals:  what are we trying to achieve?
*  Tool interoperability
**  Alternative parsers
**  GUIs
**  Real-time editing (ala Etherpad)
*  Ease of editing raw text
*  Ease of structuring the data
*  Template language with fewer squirrelly brackets
*  Performance
*  Security
*  What else?

2.  Abstract format:  regardless of syntax, what are we trying to express?
*  Currently, we don't have an abstract format; markup just maps to a
subset of HTML (so perhaps the HTML DOM is our abstract format)
*  What subset of HTML do we use?
*  What subset of HTML do we need?
*  What parts of HTML do we *not* want to allow in any form?
*  What parts of HTML do we only want to allow in limited form (e.g.
only safely generated from some abstract format)?
*  Is the HTML DOM sufficiently abstract, or do we want/need some
intermediate conceptual format?
*  Is browser support for XML sufficiently useful to try to rely on that?
*  Will it be helpful to expose the abstract format in any way?

3.  Syntax:  what syntax should we store (and expose to users)?
*  Should we store some serialization of the abstract format instead of markup?
*  Is hand editing of markup a viable long term strategy?
*  How important is having something expressible with BNF?
*  Is XML viable as an editing format?  JSON?  YAML?
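To make question 3 concrete, here is a hedged sketch of one possible serialization of an abstract format: the document "Hello [[World]]" as a JSON tree. The node shape is invented for illustration and is not a proposed standard.

```python
import json

# A tiny abstract document tree: one paragraph containing plain text
# followed by a wiki link. Field names are illustrative only.
doc = {
    "type": "paragraph",
    "children": [
        {"type": "text", "value": "Hello "},
        {"type": "wikilink", "target": "World"},
    ],
}

serialized = json.dumps(doc)
# The serialization round-trips losslessly, unlike hand-edited markup.
assert json.loads(serialized) == doc
print(serialized)
```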

4.  Tools (e.g. WYSIWYG)
*  Do our tool options get better if we fix up the abstract format and syntax?
*  Tools:
**  Wikia WYSIWYG editor
**  Magnus Manske's new thing
**  Line-by-line editing
list goes on...

5.  Infrastructure: how would one support mucking around with the data?
*  Support for per-wiki data formats?
*  Support for per-page data formats?
*  Support for per-revision data formats?
*  Evolve existing syntax with no infrastructure changes?

[1] http://www.mediawiki.org/wiki/User:RobLa-WMF/2011-01_format_discussion



Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Tim Starling
On 05/01/11 00:37, Roan Kattouw wrote:
> 2011/1/3 Brion Vibber :
>> My SVGEdit wrapper code is currently using the ApiSVGProxy extension to read
>> SVG files via the local MediaWiki API. This seems to work fine locally, but
>> it's not enabled on Wikimedia sites, and likely won't be generally around;
>> it looks like Roan threw it together as a test, and I'm not sure if
>> anybody's got plans on keeping it up or merging to core.
>>
> I threw it together real quick about a year ago, because of a request
> from Brad Neuberg from Google, who needed it so he could use SVGWeb (a
> Flash thingy that provides SVG support for IE versions that don't
> support SVG natively). Tim was supposed to review it but I don't
> remember whether he ever did. 

I reviewed the JavaScript side, and asked for two changes:

* Make it possible to disable client-side scripting in configuration
* Fix the interface between JS and Flash, which was using
__SVG__DELIMIT as a delimiter without checking for that string in the
input. User input containing this string could thus pass arbitrary
parameters to flash, with possible security consequences.
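A minimal illustration of the delimiter problem described above: joining parameters with a sentinel string is only safe if the sentinel can never appear in the data. Only the delimiter name comes from the review; the surrounding code is a hypothetical sketch, not the actual SVGWeb source.

```python
# Sketch of a JS<->Flash style bridge that packs parameters into one
# string using a sentinel delimiter, without escaping user input.
DELIM = "__SVG__DELIMIT"

def pack(params):
    return DELIM.join(params)

def unpack(message):
    return message.split(DELIM)

# Benign case: two parameters survive the round trip.
assert unpack(pack(["draw", "circle"])) == ["draw", "circle"]

# Malicious case: user input containing the delimiter smuggles in an
# extra parameter the receiver cannot distinguish from a real one.
evil = "circle" + DELIM + "loadExternalResource"
print(unpack(pack(["draw", evil])))
```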

Three weeks after my review, Brad opened a ticket:

http://code.google.com/p/svgweb/issues/detail?id=446

I haven't heard anything back from them since, and I see the ticket is
still open. I haven't reviewed the Flash side.

-- Tim Starling




Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Brion Vibber
On Tue, Jan 4, 2011 at 8:53 PM, Jay Ashworth  wrote:

> - Original Message -
> > From: "Brion Vibber" 
>
> > Requiring people to do all their document creation at this level is
> > like asking people to punch binary ASCII codes into cards by hand --
> > it's low-level grunt work that computers can handle for us. We have
> > keyboards and monitors to replace punchcards; not only has this let
> > most people stop worrying about memorizing ASCII code points, it's
> > let us go beyond fixed-width ASCII text (a monitor emulating a
> > teletype, which was really a friendlier version of punch cards) to
> > have things like _graphics_. Text can be in different sizes,
> > different styles, and different languages. We can see pictures; we
> > can draw pictures; we can use colors and shapes to create a far
> > richer, more creative experience for the user.
>
> None of which will be visible on phones from my Blackberry on down, which,
> IIRC, make up more than 50% of the Internet access points on the planet.
>
> Minimalism is your friend; I can presently *edit* wikipedia on that BB,
> with no CSS, JS, or images.  That's A Good Thing.
>

A good document structure would allow useful editing for both simple
paragraphs and complex features like tables and templates even on such
primitive devices, by giving a dedicated editing interface the information
it needs to address individual paragraphs, template parameters, table cells,
etc.

I would go so far as to say that this sort of fallback interface would in
fact be far superior to editing a big blob of wikitext on a small cell phone
screen -- finding the bit you want to edit in a huge paragraph full of
references and image thumbnails is pretty dreadful at the best of times.

-- brion


Re: [Wikitech-l] What do we want to accomplish? (was Re: WikiCreole)

2011-01-04 Thread Jay Ashworth
- Original Message -
> From: "Mark A. Hershberger" 

> The problem naturally falls back on the parser: As I understand it,
> the only reliable way of creating XHTML from MW markup is the parser that
> is built into MediaWiki and is fairly hard to separate (something I
> learned when I tried to put the parser tests into a PHPUnit test harness.)
> 
> I think the first step for creating a reliable, independent parser for
> MW markup would be to write some sort of specification
> (http://www.mediawiki.org/wiki/Markup_spec) and then to make sure our
> parser tests have good coverage.

The last time I spent any appreciable time on wikitech (which was 4 or 5
years ago), *someone* had a grammar and parser about 85-90% working.  I
don't have that email archive due to a crash, so I can't pin a name to
it or comment on whether it's someone in this thread...

or, alas, comment on what happened later.  But he seemed pretty excited 
and happy, as I recall.

Cheers,
-- jra



Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Jay Ashworth
- Original Message -
> From: "Brion Vibber" 

> Requiring people to do all their document creation at this level is
> like asking people to punch binary ASCII codes into cards by hand -- it's
> low-level grunt work that computers can handle for us. We have
> keyboards and monitors to replace punchcards; not only has this let most 
> people stop
> worrying about memorizing ASCII code points, it's let us go beyond
> fixed-width ASCII text (a monitor emulating a teletype, which was
> really a friendlier version of punch cards) to have things like _graphics_.
> Text can be in different sizes, different styles, and different languages. We
> can see pictures; we can draw pictures; we can use colors and shapes to create
> a far richer, more creative experience for the user.

None of which will be visible on phones from my Blackberry on down, which,
IIRC, make up more than 50% of the Internet access points on the planet.

Minimalism is your friend; I can presently *edit* wikipedia on that BB,
with no CSS, JS, or images.  That's A Good Thing.

Cheers,
-- jra



Re: [Wikitech-l] What would be a perfect wiki syntax? (Re: WYSIWYG)

2011-01-04 Thread Jay Ashworth
- Original Message -
> From: "Alex Brollo" 

> Just a brief comment: there's no need of searching for "a perfect wiki
> syntax", since it exists: it's the present model of well-formed
> markup, i.e. XML.

I believe the snap reaction here is "you haven't tried to diff XML, have you?"

My personal snap reaction is that the increase in cycles necessary to process
XML in both directions, *multiplied by the number of machines in the WMF data
center*, will make XML impractical, but I'm not a WMF engineer.

Cheers,
-- jra



Re: [Wikitech-l] Need some input

2011-01-04 Thread Benjamin Lees
On Tue, Jan 4, 2011 at 7:37 PM, Chad  wrote:
> Ninja vs. Pirate.
>
> Discuss.

Brion appears to have begun inserting a pro-ninja bias into our
documentation years ago:
http://meta.wikimedia.org/w/index.php?title=Help:User_rights&diff=next&oldid=148898
I, on the other hand, took care to treat both sides fairly:
http://www.mediawiki.org/w/index.php?title=Manual:$wgNamespaceProtection&diff=250604&oldid=210043



Re: [Wikitech-l] Need some input

2011-01-04 Thread Chad
On Tue, Jan 4, 2011 at 7:49 PM, David Gerard  wrote:
> On 5 January 2011 00:45, Chad  wrote:
>
>> I know you love talking about the issue, but please try to stay
>> OT and not derail this thread.
>
>
> You're just saying that because pirates stole all the well-formed XML.
>

Real pirates use serialized PHP objects.

-Chad



Re: [Wikitech-l] Need some input

2011-01-04 Thread David Gerard
On 5 January 2011 00:45, Chad  wrote:

> I know you love talking about the issue, but please try to stay
> OT and not derail this thread.


You're just saying that because pirates stole all the well-formed XML.


- d.



Re: [Wikitech-l] Need some input

2011-01-04 Thread Chad
On Tue, Jan 4, 2011 at 7:39 PM, David Gerard  wrote:
> On 5 January 2011 00:37, Chad  wrote:
>
>> Ninja vs. Pirate.
>> Discuss.
>
>
> Send a couple more stealth developers to WYSIFTW and it'll be ready to
> sneakily deploy as an official gadget in three weeks or less.
>

Pirates aren't stealthy, and I don't think either of them really
cares about wikitext or WYSIWYG.

I know you love talking about the issue, but please try to stay
OT and not derail this thread.

-Chad



Re: [Wikitech-l] Need some input

2011-01-04 Thread David Gerard
On 5 January 2011 00:37, Chad  wrote:

> Ninja vs. Pirate.
> Discuss.


Send a couple more stealth developers to WYSIFTW and it'll be ready to
sneakily deploy as an official gadget in three weeks or less.


- d.



[Wikitech-l] Need some input

2011-01-04 Thread Chad
Ninja vs. Pirate.

Discuss.

-Chad



Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Daniel Kinzler
On 04.01.2011 22:39, Brion Vibber wrote:
>> In order to have a visual editor or three, combined with a plain text
>> editor, combined with some fancy other editor we have yet to invent, you
>> will still need that specification that tells you what a valid wiki instance
>> is. This is the core data; only if you have a clear spec of that can you
>> have tool and UI innovation on top of that.
> 
> 
> Exactly my point -- spending time tinkering with
> sortof-human-readable-but-not-powerful-enough syntax distracts from thinking
> about what needs to be *described* in the data... which is the important
> thing needed when devising an actual storage or interchange format.

Perhaps we should stop thinking about "formats" and start thinking about the
document model. Spec an (extensible) WikiDOM, let people knock themselves out
with different syntaxes to describe/create it. The "native" format could be
serialized php objects for all I care.

-- daniel



Re: [Wikitech-l] Secure login and interwiki link failures and best help pages?

2011-01-04 Thread Platonides
Billinghurst wrote:
> Following on from a recent discussion here, I have been trying to  
> watch the WMF world from a secure login.
> 
> First statement is that it is problematic as so many links fail in the  
> interwiki space.  I cannot work out why some links to other wikis work  
> fine and always take me to a secure server, whereas on other occasions  
> I will only be offered a link to a normal http protocol within WMF.
> 
> Interwikis in a standard form are very problematic, well at least in  
> some places, eg.
> https://secure.wikimedia.org/wikisource/en/wiki/Author:Alfred_Tennyson
> both the direct link

That's a known bug.

> and the <=> links from [Extension:DoubleWiki Extension]

That's a bug in BilingualLink() function in [[MediaWiki:Common.js]].
Talk with Pathoschild.

> The sister links on that page are similarly problematic

Same issue as normal interwikis.


> Yet if I pop over to Commons, and go to pages like
> https://secure.wikimedia.org/wikipedia/commons/wiki/Category:Alfred_Lord_Tennyson
> https://secure.wikimedia.org/wikipedia/commons/wiki/Alfred_Tennyson
> there do not seem to be similar problems.  Is it due to Commons being on
> the same path?

There's one javascript in Commons fixing those links. They are equally
broken if visiting with javascript disabled. See
[[commons:MediaWiki:Common.js/secure.js]]


> [Note that I haven't done a forensic analysis, this is all through wanderings]
> 
> Adding to the issue, so many of the local servers don't have a
> link to the secure servers, as it is not a default configuration.
> 
> Also something that creates difficulties is that the favicon is the  
> same for all secure.wikimedia.org pages, and when one is working on  
> four or five properties  at a time and all the tabs show "[W] My  
> Watchlist" it does get very confusing. If there was some  
> differentiation that would be helpful.

Yep. Would be nice to get it fixed.


> To add to that I am unable to find the information to try and  
> understand more about the background to the issue.  Any  
> thought/guidance/pointers?




Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Brion Vibber
On Jan 4, 2011 1:54 PM, "David Gerard"  wrote:
>
> On 4 January 2011 21:39, Brion Vibber  wrote:
>
> > If the web browsers of 1995 had had native HTML editing, I rather
suspect
> > there would never have been series-of-single-quotes to represent italics
and
> > bold...
>
>
> ... They did. Netscape Gold was the version *most* people used, and it
> even had a WYSIWYG HTML editor built in.

As a separate tool to edit standalone HTML files yes. As a widget integrated
into web pages and controllable via scripting, no.

-- brion
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Secure login and interwiki link failures and best help pages?

2011-01-04 Thread Billinghurst
Following on from a recent discussion here, I have been trying to  
watch the WMF world from a secure login.

First statement is that it is problematic as so many links fail in the  
interwiki space.  I cannot work out why some links to other wikis work  
fine and always take me to a secure server, whereas on other occasions  
I will only be offered a link to a normal http protocol within WMF.

Interwikis in a standard form are very problematic, well at least in  
some places, eg.
https://secure.wikimedia.org/wikisource/en/wiki/Author:Alfred_Tennyson
both the direct link and the <=> links from [Extension:DoubleWiki Extension]

The sister links on that page are similarly problematic

Yet if I pop over to Commons, and go to pages like
https://secure.wikimedia.org/wikipedia/commons/wiki/Category:Alfred_Lord_Tennyson
https://secure.wikimedia.org/wikipedia/commons/wiki/Alfred_Tennyson
there do not seem to be similar problems.  Is it due to Commons being on
the same path?

[Note that I haven't done a forensic analysis, this is all through wanderings]

Adding to the issue, so many of the local servers don't have a
link to the secure servers, as it is not a default configuration.

Also something that creates difficulties is that the favicon is the  
same for all secure.wikimedia.org pages, and when one is working on  
four or five properties  at a time and all the tabs show "[W] My  
Watchlist" it does get very confusing. If there was some  
differentiation that would be helpful.

To add to that I am unable to find the information to try and  
understand more about the background to the issue.  Any  
thought/guidance/pointers?


This message was sent using iSage/AuNix webmail
http://www.isage.net.au/






[Wikitech-l] What do we want to accomplish? (was Re: WikiCreole)

2011-01-04 Thread Mark A. Hershberger

Brion Vibber  writes:

> Forget about syntax -- what do we want to *accomplish*?

One thing *I* would like to accomplish is a fruitful *end* to parser
discussions.  A way to make any further discussion a moot point.

From the current discussion, it looks like people want to make it easier
to work with WikiText (e.g. Enable tool creation like WYSIWYG editors)
while still supporting the old Wiki markup (aka, the Wikipedia installed
base).

The problem naturally falls back on the parser: As I understand it, the
only reliable way of creating XHTML from MW markup is the parser that is
built into MediaWiki and is fairly hard to separate (something I learned
when I tried to put the parser tests into a PHPUnit test harness.)

I think the first step for creating a reliable, independent parser for
MW markup would be to write some sort of specification
(http://www.mediawiki.org/wiki/Markup_spec) and then to make sure our
parser tests have good coverage.

Then, we could begin to move from 7-bit ASCII to 16-bit Unicode because
we would have a standard so that independent programmers could verify
that their parser or wikitext generator was working acceptably and
reliably.  Once you have the ability to create inter-operable WikiText
parser/generators, it seems easy (to me) to build more tools on top of
that.
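As a sketch of what such a test suite enables: each case pairs wikitext input with its expected XHTML, so any independent parser can be checked against the same table. The two cases and the toy parser below are illustrative only, not actual MediaWiki parser tests.

```python
import re

# Illustrative test table: (wikitext input, expected XHTML output).
CASES = [
    ("''italic''", "<p><i>italic</i></p>"),
    ("'''bold'''", "<p><b>bold</b></p>"),
]

def toy_parse(wikitext):
    """A deliberately tiny parser handling only the two constructs above."""
    html = re.sub(r"'''(.+?)'''", r"<b>\1</b>", wikitext)  # bold first
    html = re.sub(r"''(.+?)''", r"<i>\1</i>", html)        # then italics
    return "<p>%s</p>" % html

# Any conforming implementation must pass the same table.
for source, expected in CASES:
    assert toy_parse(source) == expected, (source, toy_parse(source))
print("all cases pass")
```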

Mark.

-- 
http://hexmode.com/

War begins by calling for the annihilation of the Other,
but ends ultimately in self-annihilation.




Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread David Gerard
On 4 January 2011 21:39, Brion Vibber  wrote:

> If the web browsers of 1995 had had native HTML editing, I rather suspect
> there would never have been series-of-single-quotes to represent italics and
> bold...


... They did. Netscape Gold was the version *most* people used, and it
even had a WYSIWYG HTML editor built in.


- d.



Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Brion Vibber
On Tue, Jan 4, 2011 at 1:27 PM, Dirk Riehle  wrote:

>
>> As long as we're hung up on details of the markup syntax, it's going to be
>> very very hard to make useful forward motion on things that are actually
>> going to enhance the capabilities of the system and put creative power in
>> the hands of the users.
>>
>> Forget about syntax -- what do we want to *accomplish*?
>>
>
> I think you got this sideways. The concrete syntax doesn't matter, but the
> abstract syntax does. Without a clear specification no competing parsers, no
> interoperability, no decoupling APIs, no independently evolving components.
>
> (Abstract syntax here means "XML representation" or structured
> representation or DOM tree i.e. an abstract syntax tree. But for that you
> need a language i.e. Wikitext specification and an implementation of a
> parser as of today doesn't do the job.)

[snip]

> In order to have a visual editor or three, combined with a plain text
> editor, combined with some fancy other editor we have yet to invent, you
> will still need that specification that tells you what a valid wiki instance
> is. This is the core data; only if you have a clear spec of that can you
> have tool and UI innovation on top of that.


Exactly my point -- spending time tinkering with
sortof-human-readable-but-not-powerful-enough syntax distracts from thinking
about what needs to be *described* in the data... which is the important
thing needed when devising an actual storage or interchange format.

Wikis started out as *very* lightly formatted plaintext. The point was to be
fast and easy -- in the context of web browsers which only offered plaintext
editing, lightweight markup for bold/italics and a standard convention for
link naming was about as close as you could get to WYSIWYG / WYSIWYM.


As browsers have modernised and now offer pretty decent rich-text editing in
native HTML, web apps can actually make use of that to provide formatting &
embedding of images and other structural elements. In this context, why
should we spend more than 10 seconds thinking about how to devise a syntax
for links or tables? We already have a perfectly good language for this
stuff, which is machine-parseable: HTML. (Serialize it as XML to make it
even more machine-friendly!)

If the web browsers of 1995 had had native HTML editing, I rather suspect
there would never have been series-of-single-quotes to represent italics and
bold...

-- brion
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Dirk Riehle

> As long as we're hung up on details of the markup syntax, it's going to be
> very very hard to make useful forward motion on things that are actually
> going to enhance the capabilities of the system and put creative power in
> the hands of the users.
>
> Forget about syntax -- what do we want to *accomplish*?

I think you got this sideways. The concrete syntax doesn't matter, but the 
abstract syntax does. Without a clear specification there can be no competing parsers, 
no interoperability, no decoupled APIs, no independently evolving components.

(Abstract syntax here means "XML representation" or structured representation 
or DOM tree, i.e. an abstract syntax tree. But for that you need a language, 
i.e. a wikitext specification, and a parser implementation as of today 
doesn't do the job.)

> worrying about memorizing ASCII code points, it's let us go beyond
> fixed-width ASCII text (a monitor emulating a teletype, which was really a
> friendlier version of punch cards) to have things like _graphics_. Text can
> be in different sizes, different styles, and different languages. We can see
> pictures; we can draw pictures; we can use colors and shapes to create a far
> richer, more creative experience for the user.
>
> GUIs didn't come about from a better, more universal way of encoding text --
> Unicode came years after GUI conventions were largely standardized in
> practice.

In order to have a visual editor or three, combined with a plain text editor, 
combined with some fancy other editor we have yet to invent, you will still 
need that specification that tells you what a valid wiki instance is. This is 
the core data; only if you have a clear spec of that can you have tool and UI 
innovation on top of that.

Cheers,
Dirk

-- 
Website: http://dirkriehle.com - Twitter: @dirkriehle
Ph (DE): +49-157-8153-4150 - Ph (US): +1-650-450-8550


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Platonides
On 1/4/11 12:21 PM Neil Kandalgaonkar wrote:
> On 1/4/11 12:51 PM, Platonides wrote:
>> I don't see how FS authentication is useful there.
>> All authentication would be performed by mediawiki, with a master
>> credential such as $wgDBpassword. MediaWiki shouldn't need to send the
>> media server a user password!
> 
> Nobody said we'd be sending user passwords over to a media server. Most 
> of the time, even regular MediaWiki servers don't need to see passwords. 
> They just need some means to authenticate the session cookie.
> 
> But like I said we don't have very firm plans about how we would do 
> authentication.

This was just a counter-point to the statement "authentication is really
a nice-to-have for Commons or Wikipedia right now".


>> (NB sysops should be able to remove goatses from forum avatars...)
> 
> Yeah. Avatars can be tricky. Also to be pedantically correct you want to 
> have some guard against impersonation (using the same icon, and maybe adding 
> Unicode space characters or other trivial changes to the username).

The last one _should_ already be handled by AntiSpoof.
(Although there is, for instance, an open bug about ZWJ, any takers?)


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Neil Kandalgaonkar
On 1/4/11 12:51 PM, Platonides wrote:
> I don't see how FS authentication is useful there.
> All authentication would be performed by mediawiki, with a master
> credential such as $wgDBpassword. MediaWiki shouldn't need to send the
> media server a user password!

Nobody said we'd be sending user passwords over to a media server. Most 
of the time, even regular MediaWiki servers don't need to see passwords. 
They just need some means to authenticate the session cookie.

But like I said we don't have very firm plans about how we would do 
authentication.


> (NB sysops should be able to remove goatses from forum avatars...)

Yeah. Avatars can be tricky. Also to be pedantically correct you want to 
have some guard against impersonation (using the same icon, and maybe adding 
Unicode space characters or other trivial changes to the username).

-- 
Neil Kandalgaonkar 

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Dirk Riehle
>> (Note that I think any conversation about parser changes should consider
>> the GoodPractices page from http://www.wikicreole.org/wiki/GoodPractices.)
>>
>> If nothing else, perhaps there would be some use for the EBNF grammar
>> that was developed for WikiCreole.
>> http://dirkriehle.com/2008/01/09/an-ebnf-grammar-for-wiki-creole-10/
>
> WikiCreole used to not be parsable by a grammar, either. And it has
> inconsistencies like "italic is // unless it appears in a url".
> Good to see they improved.

WikiCreole only had a prose specification, hence it was ambiguous. Our syntax 
definition improved that so that in theory (and practice) you could now have 
multiple competing parser implementations. The issue with WikiCreole now is 
that it is simply too small: there is lots of stuff that it can't do but that any wiki 
engine will want.

The real reason to care about a precise specification (one that is not, as in 
MediaWiki's case, simply the implementation) is the option to evolve 
faster. The real paper for this is 
http://dirkriehle.com/2008/07/19/a-grammar-for-standardized-wiki-markup/ - 
wouldn't it be nice if we could be innovating on a wiki platform?

Cheers,
Dirk


-- 
Website: http://dirkriehle.com - Twitter: @dirkriehle
Ph (DE): +49-157-8153-4150 - Ph (US): +1-650-450-8550


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikimedia Storage System ( was JavaScript access to uploaded...)

2011-01-04 Thread Platonides
Michael Dale wrote:
>> I think thumbnail and transformation servers (they should also do
>> stuff like rotating things on demand) are separate from how we store
>> things, and will just be acting on behalf of the user anyway. So they
>> don't introduce new requirements to image storage. Anybody see
>> anything problematic about that?
> 
> I think managing storage of procedural derivative assets differently
> than original files is pretty important. Probably one of the core
> features of a Wikimedia Storage system.

Yes, I think we should treat them as different "image clusters",
optionally sharing servers (unless there's a better equivalent available
in the dfs).


> Assuming finite storage it would be nice to specify we don't care as
> much if we lose thumbnails vs losing original assets. For example when
> doing 3rd-party backups or "dumps" we don't need all the derivatives to
> be included.
> 
> We don't need to keep random-resolution derivatives of old
> revisions of assets around forever; likewise, improvements to SVG
> rasterization or improvements to transcoding software would mean
> "expiring" derivatives.

A good point.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Platonides
Alex Brollo wrote:
> Perhaps #lst is inefficient but I'd like to compare #lst and template
> efficiency. Sometimes I see complex, very complex templates used for such
> banal aims ;-)
> 
> Alex

That's an excellent case! It's better to use a single "inefficient" tool
than a hundred templates which (standalone) are considered "efficient".


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Alex Brollo
2011/1/4 Platonides 

>
> Do not try to be over-paranoid about not using the features you have
> available. You can ask for advice if something you have done is sane or
> not, of course.
> An interesting point is that labelled section transclusion is enabled on
> French wikipedia. It's strange that someone got tricked into enabling it
> on that 'big' project. I wonder how is it being used there.
>

French Wikisource is working hard to convert "naked texts" into "texts
with scans", which means that they use #lst very heavily. Their work is
excellent. There are many contributors. I guess that many of them work on
Wikipedia too; perhaps they will import their useful tool there.

See http://toolserver.org/~thomasv/graphs/Wikisource_-_texts_fr.png to see
how heavily they use proofreading.

Perhaps #lst is inefficient but I'd like to compare #lst and template
efficiency. Sometimes I see complex, very complex templates used for such
banal aims ;-)

Alex
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Platonides
Neil Kandalgaonkar wrote:
> We've narrowed it down to two systems that are being tested right now, 
> MogileFS and OpenStack. OpenStack has more built-in stuff to support 
> authentication. MogileFS is used in many systems that have an 
> authentication layer, but it seems you have to build more of it from 
> scratch.
> 
> Authentication is really a nice-to-have for Commons or Wikipedia right 
> now. I anticipate it being useful for a handful of cases, which are both 
> more anticipated than actual right now:
>- images uploaded but not published (a la UploadWizard)
>- forum avatars (which can be viewed by anyone, but can only be edited 
> by the user they belong to)

I don't see how FS authentication is useful there.
All authentication would be performed by mediawiki, with a master
credential such as $wgDBpassword. MediaWiki shouldn't need to send the
media server a user password!
(NB sysops should be able to remove goatses from forum avatars...)
Authentication as understood by OpenStack is of little use for us now.
Things like adding a uid column in MySQL would be more useful than
native authentication for accessing the resource.




> As for things like SVG translation, I'm going to say that's out of scope 
> and probably impractical. Our experience with the Upload Wizard 
> Licensing Tutorial shows that it's pretty rare to be able to simply plug 
> in new strings into an SVG and have an acceptable translation. It 
> usually needs some layout adjustment, and for RTL languages it needs 
> pretty radical changes.

You can provide the same SVG, changing just the legend box.



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Brion Vibber
On Tue, Jan 4, 2011 at 12:03 PM, Mark A. Hershberger wrote:

> Perhaps this is where we can cooperate more with other Wiki writers to
> develop a common Wiki markup.  From my brief perusal of efforts, it
> looks like there is a community of developers involved in
>  but MediaWiki involvement is lacking
> (http://bit.ly/hYoki3 — for an email from 2007(!!) quoting Tim Starling).
>

We poked a bit at the early days of the WikiCreole project, but never really
saw it as something that would solve any of the problems that MediaWiki had.
I was at the meeting at WikiSym 2006 in Denmark where some of the creole
syntax bits got hammered out, and if anything that helped convince me that
it wasn't going to do us good to continue on that path.

As long as we're hung up on details of the markup syntax, it's going to be
very very hard to make useful forward motion on things that are actually
going to enhance the capabilities of the system and put creative power in
the hands of the users.

Forget about syntax -- what do we want to *accomplish*?

Requiring people to do all their document creation at this level is like
asking people to punch binary ASCII codes into cards by hand -- it's
low-level grunt work that computers can handle for us. We have keyboards and
monitors to replace punchcards; not only has this let most people stop
worrying about memorizing ASCII code points, it's let us go beyond
fixed-width ASCII text (a monitor emulating a teletype, which was really a
friendlier version of punch cards) to have things like _graphics_. Text can
be in different sizes, different styles, and different languages. We can see
pictures; we can draw pictures; we can use colors and shapes to create a far
richer, more creative experience for the user.

GUIs didn't come about from a better, more universal way of encoding text --
Unicode came years after GUI conventions were largely standardized in
practice.

-- brion
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Platonides
Alex Brollo wrote:
> Thanks Roan, your statement sounds very alarming to me; I'll open a specific
> thread about it on wikisource-l quoting this talk. I'm making every effort to
> avoid server/history overload, since I know that I am using a free service
> (I just fixed {{loop}} template to optimize it into it.source, at my
> best...) and if you are right, I'll have to change my approach to #lst deeply.
> 
> :-(
> 
> Alex

The reason that labelled section transclusion is only enabled on
wikisources, some wiktionaries, etc. is that it is inefficient. Thus your
proposal to enable it everywhere would be a bad idea. However, I am just
remembering things said in the past. I haven't reviewed it myself.

Do not try to be over-paranoid about not using the features you have
available. You can ask for advice if something you have done is sane or
not, of course.
An interesting point is that labelled section transclusion is enabled on
French wikipedia. It's strange that someone got tricked into enabling it
on that 'big' project. I wonder how is it being used there.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Platonides
Tei wrote:
> The last time I tried to search for something specific about PHP (how to
> force a garbage collection in old versions of PHP) there were very
> few hits on Google, or none.

Maybe that was because PHP has only had garbage collection since 5.3 :)

For reference: http://php.net/manual/features.gc.php


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikimedia Storage System ( was JavaScript access to uploaded...)

2011-01-04 Thread Michael Dale
On 01/04/2011 01:12 PM, Neil Kandalgaonkar wrote:
> We've narrowed it down to two systems that are being tested right now,
> MogileFS and OpenStack. OpenStack has more built-in stuff to support
> authentication. MogileFS is used in many systems that have an
> authentication layer, but it seems you have to build more of it from
> scratch.
>
>
> Authentication is really a nice-to-have for Commons or Wikipedia right
> now. I anticipate it being useful for a handful of cases, which are
> both more anticipated than actual right now:
>   - images uploaded but not published (a la UploadWizard)
>   - forum avatars (which can be viewed by anyone, but can only be edited
> by the user they belong to)


hmm. I think it would (obviously?) be best to handle media
authentication at the MediaWiki level, with just a simple private /
public classification for the backend storage system. Things
that are "private" have to go through the MediaWiki API, where you can
leverage all the existing extensible credential management.

It is also important to keep things simple for 3rd parties that are not
using a clustered filesystem stack; it is easier to map a web-accessible
directory vs. not than to manage authentication within the storage system.

Image 'editing' / uploading already includes basic authentication, i.e.:
http://www.mediawiki.org/wiki/Manual:Configuring_file_uploads#Upload_permissions
User avatars would be a special case of

>
> I think thumbnail and transformation servers (they should also do
> stuff like rotating things on demand) are separate from how we store
> things, and will just be acting on behalf of the user anyway. So they
> don't introduce new requirements to image storage. Anybody see
> anything problematic about that?

I think managing storage of procedural derivative assets differently
than original files is pretty important. Probably one of the core
features of a Wikimedia Storage system.

Assuming finite storage it would be nice to specify we don't care as
much if we lose thumbnails vs losing original assets. For example when
doing 3rd-party backups or "dumps" we don't need all the derivatives to
be included.

We don't need to keep random-resolution derivatives of old
revisions of assets around forever; likewise, improvements to SVG
rasterization or improvements to transcoding software would mean
"expiring" derivatives.

When MediaWiki is dealing with file maintenance it should have to
authenticate differently when removing, moving, or overwriting originals
vs. derivatives, i.e. independent of DB revision numbers or what MediaWiki
*thinks* it should be doing.

For example only upload ingestion nodes or "modes" should have write
access to the archive store. Transcoding or thumbnailing or maintenance
nodes or "modes" should only have read-only access to archive originals
and write access to derivatives.
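
The read/write split described above could be sketched as a small capability table. The mode names and permission strings below are illustrative, not an actual Wikimedia configuration:

```javascript
// Illustrative access matrix for the node "modes" described above:
// only ingestion nodes may write originals; transcoding/thumbnailing
// nodes read originals and write derivatives; web-serving nodes read only.
const ACCESS = {
  ingestion:  { originals: 'rw', derivatives: 'rw' },
  transcoder: { originals: 'r',  derivatives: 'rw' },
  web:        { originals: 'r',  derivatives: 'r'  }
};

function canWrite(mode, store) {
  // Look up the permission string for this mode/store pair; missing
  // entries grant nothing.
  const perms = (ACCESS[mode] || {})[store] || '';
  return perms.indexOf('w') !== -1;
}

console.log(canWrite('transcoder', 'originals'));  // false
console.log(canWrite('ingestion', 'originals'));   // true
```

Enforcing this below MediaWiki, at the storage layer, is what guards the archive even when MediaWiki *thinks* it should be overwriting an original.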


>
> As for things like SVG translation, I'm going to say that's out of
> scope and probably impractical. Our experience with the Upload Wizard
> Licensing Tutorial shows that it's pretty rare to be able to simply
> plug in new strings into an SVG and have an acceptable translation. It
> usually needs some layout adjustment, and for RTL languages it needs
> pretty radical changes.
>
> That said, it's an interesting frontier and it would be awesome to
> have a tool which made it easier to create translated SVGs or indicate
> that translations were related to each other. One thing at a time though.

I don't think it's that impractical ;) SVG includes some conventions for
layout. With some procedural sugar this could be improved, i.e. container
sizes dictating relative character size. It may not be perfectly beautiful,
but certainly not everyone translating content should have to know how to
edit SVG files; likewise, software can let a separate SVG layout
expert come in later and improve on the automated derivative.

But you're correct, it's not really part of storage considerations. It
is part of thinking about the future of access to media streams via the
API.

Maybe the base thing for the storage platform to consider in this thread
is: access to media streams via the API, or whether it is going to try to
manage a separate entry point outside of MediaWiki. I think public
assets going over the existing squid -> HTTP file server path and
non-public assets going through an API entry point would make sense.

--michael


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Platonides
Mark A. Hershberger wrote:
> Perhaps this is where we can cooperate more with other Wiki writers to
> develop a common Wiki markup.  From my brief perusal of efforts, it
> looks like there is a community of developers involved in
>  but MediaWiki involvement is lacking
> (http://bit.ly/hYoki3 — for an email from 2007(!!) quoting Tim Starling).
> 
> (Note that I think any conversation about parser changes should consider
> the GoodPractices page from http://www.wikicreole.org/wiki/GoodPractices.)
> 
> If nothing else, perhaps there would be some use for the EBNF grammar
> that was developed for WikiCreole.
> http://dirkriehle.com/2008/01/09/an-ebnf-grammar-for-wiki-creole-10/

WikiCreole used to not be parsable by a grammar, either. And it has
inconsistencies like "italic is // unless it appears in a url".
Good to see they improved.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] WikiCreole (was Re: What would be a perfect wiki syntax? (Re: WYSIWYG))

2011-01-04 Thread Mark A. Hershberger
Alex Brollo  writes:

> Just a brief comment: there's no need to search for "a perfect wiki
> syntax", since it already exists: it's the present model of well-formed
> markup, i.e. XML.

And, from your answer, we can see that you mean “perfectly
understandable to parsers”, but that sacrifices human usability.  XML is
notoriously difficult to produce by hand.

Suppose there was some mythical “perfect” markup.

We wouldn't want to sacrifice the usability of simple Wiki markup — it
would need to be something that could be picked up quickly (wiki-ly) by
people.  After all, if your perfect markup started barfing up XML parser
errors whenever someone created not-so-well-formed XML, well, that
wouldn't feel very “wiki”, would it?

From what I've seen of this iteration of this conversation, it looks
like people are most concerned with markup that is easy and unambiguous
to parse.

While I understand the importance of unambiguous markup or syntax for
machines, I think human-centered attributes such as “learn-ability” are
paramount.

Perhaps this is where we can cooperate more with other Wiki writers to
develop a common Wiki markup.  From my brief perusal of efforts, it
looks like there is a community of developers involved in
 but MediaWiki involvement is lacking
(http://bit.ly/hYoki3 — for an email from 2007(!!) quoting Tim Starling).

(Note that I think any conversation about parser changes should consider
the GoodPractices page from http://www.wikicreole.org/wiki/GoodPractices.)

If nothing else, perhaps there would be some use for the EBNF grammar
that was developed for WikiCreole.
http://dirkriehle.com/2008/01/09/an-ebnf-grammar-for-wiki-creole-10/



-- 
http://hexmode.com/

War begins by calling for the annihilation of the Other,
but ends ultimately in self-annihilation.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Neil Kandalgaonkar
On 1/4/11 9:24 AM, Michael Dale wrote:

> So ... it would be good to think about moving things like img_auth.php
> and thumb.php over to a general-purpose API media-serving module, no?

It's related, but we're just laying the foundations now. I think we 
haven't really talked about this on wikitech, this might be a good time 
to mention it...

We're just evaluating systems to store things at scale. Or rather Russ 
Nelson (__nelson on IRC) is primarily doing that -- he is a contractor 
whom some of you met at the DC meetup. The rest of us (me, Ariel Glenn, 
Mark Bergsma, and the new ops manager CT Woo) are helping now and then 
or trying to evolve the requirements as new info comes in.

Most of the info is here:

 
http://wikitech.wikimedia.org/view/Media_server/Distributed_File_Storage_choices

We've narrowed it down to two systems that are being tested right now, 
MogileFS and OpenStack. OpenStack has more built-in stuff to support 
authentication. MogileFS is used in many systems that have an 
authentication layer, but it seems you have to build more of it from 
scratch.

Authentication is really a nice-to-have for Commons or Wikipedia right 
now. I anticipate it being useful for a handful of cases, which are both 
more anticipated than actual right now:
   - images uploaded but not published (a la UploadWizard)
   - forum avatars (which can be viewed by anyone, but can only be edited 
by the user they belong to)

I think thumbnail and transformation servers (they should also do stuff 
like rotating things on demand) are separate from how we store things, 
and will just be acting on behalf of the user anyway. So they don't 
introduce new requirements to image storage. Anybody see anything 
problematic about that?

As for things like SVG translation, I'm going to say that's out of scope 
and probably impractical. Our experience with the Upload Wizard 
Licensing Tutorial shows that it's pretty rare to be able to simply plug 
in new strings into an SVG and have an acceptable translation. It 
usually needs some layout adjustment, and for RTL languages it needs 
pretty radical changes.

That said, it's an interesting frontier and it would be awesome to have 
a tool which made it easier to create translated SVGs or indicate that 
translations were related to each other. One thing at a time though.


-- 
Neil Kandalgaonkar 

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Ilmari Karonen
On 01/04/2011 05:57 PM, Roan Kattouw wrote:
> 2011/1/4 Michael Dale:
>
>> It may hurt caching to serve everything over jsonp since we can't set
>> smaxage with callback=randomString urls. If it's just for editing it's
>> not a big deal, until some IE SVG viewer hack starts getting all SVG
>> over JSONP ;) ... It would be best if we could access this data without
>> varying urls.
>>
> Yes, JSONP is bad for caching.

Well, if the response is informative enough, you can often use constant 
callback names.  A lot of my scripts which use the API do that.  Of 
course, it may mean a bit more work if you're using a framework like 
jQuery which defaults to random callback names, but it's not that much.

A couple of examples off the top of my head:
http://commons.wikimedia.org/wiki/MediaWiki:MainPages.js
http://commons.wikimedia.org/wiki/MediaWiki:Gadget-PrettyLog.js
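
As a rough illustration of the idea (the function and callback names below are invented, not taken from those gadgets): with a constant callback name the request URL is constant too, so Squid and browser caches can reuse the response instead of treating every `callback=randomString` URL as unique.

```javascript
// Sketch: a fixed JSONP callback name keeps the API URL stable and cacheable,
// unlike jQuery's default randomly generated jsonpCallback.

function buildApiUrl(base, params, callbackName) {
  // Sort the keys so the same query always produces the same URL string.
  const query = Object.keys(params).sort()
    .map(k => encodeURIComponent(k) + '=' + encodeURIComponent(params[k]))
    .join('&');
  return base + '?' + query + '&format=json&callback=' + callbackName;
}

// One global handler, reused by every request (illustrative name).
let lastResult = null;
function handleSiteinfo(data) {
  lastResult = data;
}

const url1 = buildApiUrl('https://commons.wikimedia.org/w/api.php',
  { action: 'query', meta: 'siteinfo' }, 'handleSiteinfo');
const url2 = buildApiUrl('https://commons.wikimedia.org/w/api.php',
  { meta: 'siteinfo', action: 'query' }, 'handleSiteinfo');

// Identical URLs -> caches can serve the same entry for both requests.
console.log(url1 === url2); // true

// The server's JSONP response would be a script like: handleSiteinfo({...});
// Simulate that payload arriving:
handleSiteinfo({ query: { general: { sitename: 'Wikimedia Commons' } } });
console.log(lastResult.query.general.sitename);
```

The trade-off is that responses must be distinguishable from the payload itself (e.g. by echoing the page title), since all requests share one handler.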

-- 
Ilmari Karonen

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Brion Vibber
On Tue, Jan 4, 2011 at 5:37 AM, Roan Kattouw  wrote:

> > Alternately, we could look at using HTTP access control headers on
> > upload.wikimedia.org, to allow XMLHTTPRequest in newer browsers to make
> > unauthenticated requests to upload.wikimedia.org and return data
> directly:
> >
> This should be enabled either way. You could then try the cross-domain
> request, and use the proxy if it fails.
>

Sensible, yes.


> But which browsers need the proxy anyway? Just IE8 and below? Do any
> of the proxy-needing browsers support CORS?
>

I think for straight viewing only the Flash compat widget needs cross-domain
permissions (browsers with native support use  for viewing), and a
Flash cross-domain settings file would probably take care of that.

For editing, or other tools that need to directly access the file data,
either a proxy or CORS should do the job. I _think_ current versions of all
major browsers support CORS for XHR fetches, but I haven't done compat tests
yet. (IE8 requires using an alternate XDR class instead of XHR but since it
doesn't do native SVG I don't care too much; I haven't checked IE9 yet, but
since the editor works in it I want to make sure we can load the files!)
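
The try-CORS-then-fall-back-to-proxy strategy discussed above might look roughly like this; the proxy endpoint parameters are hypothetical, and the feature test is the usual `withCredentials` check (IE8's XDomainRequest lacks it and would need separate handling):

```javascript
// Sketch: pick a direct cross-domain URL when the XHR object is
// CORS-capable, otherwise fall back to a same-domain proxy.

function supportsCors(xhrLike) {
  // CORS-capable XHR implementations expose the withCredentials property.
  return Boolean(xhrLike) && 'withCredentials' in xhrLike;
}

function svgFetchUrl(path, xhrLike) {
  if (supportsCors(xhrLike)) {
    // Direct fetch from upload.wikimedia.org; this relies on the server
    // sending Access-Control-Allow-Origin headers.
    return 'https://upload.wikimedia.org/wikipedia/commons/' + path;
  }
  // Same-domain proxy fallback (ApiSVGProxy-style; exact params assumed).
  return '/w/api.php?action=svgproxy&title=' + encodeURIComponent(path);
}

// Modern browser: XHR object carrying withCredentials -> direct URL.
console.log(svgFetchUrl('3/3c/Example.svg', { withCredentials: false }));
// Legacy browser: object without the property -> proxy URL.
console.log(svgFetchUrl('3/3c/Example.svg', {}));
```

In a real gadget the fallback could also be triggered at runtime, by retrying through the proxy when the cross-domain request errors out.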

-- brion
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Michael Dale
On 01/04/2011 09:57 AM, Roan Kattouw wrote:
> The separate img_auth.php entry point is needed on wikis where reading
> is restricted (private wiis), and img_auth.php will check for read
> permissions before it outputs the file. The difference between the
> proxy I wrote and img_auth.php is that img_auth.php just streams the
> file from the filesystem (which, on WMF, will hit NFS every time,
> which is bad) whereas ApiSVGProxy uses an HTTP request (which will hit
> the image Squids, which is good).
>
So ... it would be good to think about moving things like img_auth.php
and thumb.php over to a general-purpose API media-serving module, no?

This would help standardise how media serving is "extended", reduce
extra entry points and, as you point out above, let us more uniformly
proxy our back-end data access over HTTP to hit the Squids instead of
NFS where possible.

And, as a shout-out to Trevor's MediaWiki 2.0 vision, eventually enable
more REST-like interfaces within MediaWiki media handling.

--michael

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Tei
On 4 January 2011 16:00, Alex Brollo  wrote:
> 2011/1/4 Roan Kattouw 
...
>
> What a "creative" use of #lst allows, if it is really an efficient, light
> routine, is to build named variables and arrays of named variables into one
> page; I can't imagine what a good programmer could do with such a powerful
> tool. I'm, as you can imagine, far from a good programmer; nevertheless I
> easily built routines with unbelievable results. Perhaps, coming back to the
> topic: would a good programmer disrupt Wikipedia using #lst? :-)
>

Don't use the words "good programmers"; it sounds like mythic creatures
that never add bugs and can work 24 hours without getting tired.
Haha...

What it seems you may need is a special type of person, maybe in
academia, or a student, or someone already working on something that
demands a lot of performance: one interested in the intricate details of
optimizing.
The last time I tried to search for something specific about PHP (how to
force a garbage collection in old versions of PHP) there were very
few hits on Google, or none.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Alex Brollo
2011/1/4 Roan Kattouw 

> > What a "creative" use of #lst allows, if it is really an efficient, light
> > routine, is to build named variables and arrays of named variables into
> one
> > page; I can't imagine what a good programmer could do with such a
> powerful
> > tool. I'm, as you can imagine, far from a good programmer; nevertheless I
> > easily built routines with unbelievable results. Perhaps, coming back to
> > the topic: would a good programmer disrupt Wikipedia using #lst? :-)
> >
> Using #lst to implement variables in wikitext sounds like a terrible
> hack, similar to how using {{padleft:}} to implement string functions
> in wikitext is a terrible hack.


Thanks Roan, your statement sound very alarming for me; I'll open a specific
thread about into wikisource-l quoting this talk. I'm doing any efford to
avoid server/history overload, since I know that I am using a free service
(I just fixed {{loop}} template to optimize it into it.source, at my
best...) and if you are right, I've to change deeply my approach to #lst.

:-(

Alex


Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Roan Kattouw
2011/1/4 Michael Dale :
> hmm... Is img_auth widely used? Can we just disable SVG API data access
> if $wgUploadPath includes img_auth ... or add a configuration variable
> that states whether img_auth is an active entry point? Why don't we think
> about the problem differently and support serving images through the API
> instead of maintaining a separate img_auth entry point?
>
The separate img_auth.php entry point is needed on wikis where reading
is restricted (private wikis), and img_auth.php will check for read
permissions before it outputs the file. The difference between the
proxy I wrote and img_auth.php is that img_auth.php just streams the
file from the filesystem (which, on WMF, will hit NFS every time,
which is bad) whereas ApiSVGProxy uses an HTTP request (which will hit
the image Squids, which is good).

> Is the idea that our asset scrubbing for malicious scripts or embedded
> image HTML tags, to protect against IE's lovely 'auto mime' content type,
> is buggy?
No, IEContentAnalyzer will reject anything that would "confuse" IE.

> I think the majority of MediaWiki installations are serving
> assets on the same domain as the content, so we would do well to address
> that security concern as our own (afaik we already address this
> pretty well). Furthermore, we don't want people to have to re-scrub once
> they do access that SVG data on the local domain...
>
MW was written with this same-domain setup in mind, and WMF is one of
the very few setups out there that uses a separate domain for files.
So I'm fairly sure we don't rely on files being on a different or
cookieless domain for security.

> It would be nice to serve up different content types of "data" over the API
> in a number of use cases. For example we could have a more structured
> thumb.php entry point, or serve up video thumbnails at requested times
> and resolutions. This could also clean up Neil's upload wizard's per-user
> temporary image store by requesting these assets through the API instead
> of relying on obfuscation of the url. Likewise, the add media wizard
> presently does two requests once it opens the larger version of the image.
>
> Eventually it would be nice to make more services available, like SVG
> localisation / variable substitution and rasterization (i.e. give me
> engine_figure2.svg in Spanish at 600px wide as a PNG).
>
You should talk to Russ Nelson, Ariel Glenn and the other people
currently involved in redesigning WMF's file storage architecture.

> It may hurt caching to serve everything over JSONP since we can't set
> s-maxage with callback=randomString urls. If it's just for editing it's
> not a big deal, until some IE SVG viewer hack starts getting all SVG
> over JSONP ;) ... It would be best if we could access this data without
> varying urls.
>
Yes, JSONP is bad for caching.
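A tiny sketch of the caching problem Michael describes (hypothetical JavaScript; the URL and function names are made up): a random `callback=` value makes every request URL unique, so a shared cache can never get a hit, while a fixed callback name keeps the URL stable.

```javascript
// Hypothetical sketch: how the JSONP callback name affects cacheability.
function jsonpUrl(api, params, callbackName) {
  const query = Object.entries({ ...params, callback: callbackName })
    .map(([k, v]) => encodeURIComponent(k) + '=' + encodeURIComponent(v))
    .join('&');
  return api + '?' + query;
}

// Random callback: two requests for the same data get different URLs,
// so a caching proxy such as Squid can never reuse a response.
const rand1 = jsonpUrl('/w/api.php', { action: 'query' },
                       'cb' + Math.random().toString(36).slice(2));
const rand2 = jsonpUrl('/w/api.php', { action: 'query' },
                       'cb' + Math.random().toString(36).slice(2));

// Fixed callback: identical URLs, so an s-maxage'd response can be shared.
const fixed1 = jsonpUrl('/w/api.php', { action: 'query' }, 'mwCallback');
const fixed2 = jsonpUrl('/w/api.php', { action: 'query' }, 'mwCallback');
```

A fixed callback name trades per-caller flexibility for cache hits, which is roughly the workaround hinted at here.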

Roan Kattouw (Catrope)



Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Roan Kattouw
2011/1/4 Alex Brollo :
> Excellent, I'm a passionate user of the #lst extension, and I like that its
> code can be optimized (so I feel comfortable using it more and more). I
> can't read PHP, and I take this opportunity to ask you:
>
I haven't read the code in detail, and I can't really answer these
questions until I have. I'll look at them later today; I have some
other things to do first.

> 1. is the #lsth option compatible with default #lst use?
No idea what #lsth even is or does, nor what you mean by 'compatible'
in this case.

> 2. I can imagine that #lst simply runs as a "substring finder", and I
> imagine that substring search is a really efficient, fast and
> resource-sparing server routine. Am I right?
It does seem to load the entire page text (wikitext I think, not sure)
and look for the section somehow, but I haven't looked at how it does
this in detail.

> 3. when I ask for a section of a page, is the same page saved into a
> cache, so that subsequent calls for other sections of the same page are
> fast and resource-sparing?
>
I'm not sure whether LST is caching as much as it should. I can tell
you though that the "fetch the wikitext of revision Y of page Z"
operation is already cached in MW core. Whether the "fetch the
wikitext of section X of revision Y of page Z" operation is cached
(and whether it makes sense to do so), I don't know.
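The two cache layers Roan distinguishes can be sketched like this (hypothetical JavaScript, not the actual extension code; `loadRevision` stands in for MW core's already-cached revision fetch):

```javascript
// Hypothetical sketch: memoizing "section X of revision Y of page Z"
// on top of a revision-text fetch that is assumed to be cached already.
const sectionCache = new Map();

function fetchSection(loadRevision, page, rev, section) {
  const key = page + '@' + rev + '#' + section;
  if (sectionCache.has(key)) return sectionCache.get(key);

  const text = loadRevision(page, rev); // core-cached in real MediaWiki
  const m = text.match(new RegExp(
    '<section begin=' + section + ' */>([\\s\\S]*?)' +
    '<section end=' + section + ' */>'));
  const result = m ? m[1].trim() : null;

  sectionCache.set(key, result);        // repeat calls skip the extraction
  return result;
}
```

Keying on the revision ID keeps such a cache trivially correct: a new edit gets a new revision, hence a new key.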

> What a "creative" use of #lst allows, if it is really an efficient, light
> routine, is to build named variables and arrays of named variables into one
> page; I can't imagine what a good programmer could do with such a powerful
> tool. I'm, as you can imagine, far from a good programmer, nevertheless I
> built easily routines for unbeliavable results. Perhaps, coming back to the
> topic.  a good programmer would disrupt wikipedia using #lst? :-)
>
Using #lst to implement variables in wikitext sounds like a terrible
hack, similar to how using {{padleft:}} to implement string functions
in wikitext is a terrible hack.

Roan Kattouw (Catrope)



Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Michael Dale
On 01/03/2011 02:22 PM, Brion Vibber wrote:
> Since ApiSVGProxy serves SVG files directly out on the local domain as their
> regular content type, it potentially has some of the same safety concerns as
> img_auth.php and local hosting of upload files. If that's a concern
> preventing rollout, would alternatives such as wrapping the file data &
> metadata into a JSON structure be acceptable?

hmm... Is img_auth widely used? Can we just disable SVG API data access
if $wgUploadPath includes img_auth ... or add a configuration variable
that states whether img_auth is an active entry point? Why don't we think
about the problem differently and support serving images through the API
instead of maintaining a separate img_auth entry point?

Is the idea that our asset scrubbing for malicious scripts or embedded
image HTML tags, to protect against IE's lovely 'auto mime' content type,
is buggy? I think the majority of MediaWiki installations are serving
assets on the same domain as the content, so we would do well to address
that security concern as our own (afaik we already address this
pretty well). Furthermore, we don't want people to have to re-scrub once
they do access that SVG data on the local domain...

It would be nice to serve up different content types of "data" over the API
in a number of use cases. For example we could have a more structured
thumb.php entry point, or serve up video thumbnails at requested times
and resolutions. This could also clean up Neil's upload wizard's per-user
temporary image store by requesting these assets through the API instead
of relying on obfuscation of the url. Likewise, the add media wizard
presently does two requests once it opens the larger version of the image.

Eventually it would be nice to make more services available, like SVG
localisation / variable substitution and rasterization (i.e. give me
engine_figure2.svg in Spanish at 600px wide as a PNG).

It may hurt caching to serve everything over JSONP since we can't set
s-maxage with callback=randomString urls. If it's just for editing it's
not a big deal, until some IE SVG viewer hack starts getting all SVG
over JSONP ;) ... It would be best if we could access this data without
varying urls.

> Alternately, we could look at using HTTP access control headers on
> upload.wikimedia.org, to allow XMLHTTPRequest in newer browsers to make
> unauthenticated requests to upload.wikimedia.org and return data directly:
>
> https://developer.mozilla.org/En/HTTP_Access_Control

I vote yes! ... This would also untaint the video canvas data that I am
making more and more use of in the sequencer ... Likewise we could add a
crossdomain.xml file so IE Flash SVG viewers can access the data.

> In the meantime I'll probably work around it with an SVG-to-JSONP proxy on
> toolserver for the gadget, which should get things working while we sort it
> out.
Sounds reasonable :)

We should be able to "upload" the result via the API on the same domain
as the editor, so it would be very fun to enable this for quick SVG edits :)

peace,
--michael




Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Alex Brollo
2011/1/4 Roan Kattouw 

> Just from looking at the LST code, I can tell that it has at least one
> performance problem: it initializes the parser on every request. This
> is easy to fix, so I'll fix it today. I can also imagine that there
> would be other performance concerns with LST preventing its deployment
> to large wikis, but I'm not sure of that.
>

Excellent, I'm a passionate user of the #lst extension, and I like that its
code can be optimized (so I feel comfortable using it more and more). I
can't read PHP, and I take this opportunity to ask you:

1. is the #lsth option compatible with default #lst use?
2. I can imagine that #lst simply runs as a "substring finder", and I
imagine that substring search is a really efficient, fast and
resource-sparing server routine. Am I right?
3. when I ask for a section of a page, is the same page saved into a
cache, so that subsequent calls for other sections of the same page are
fast and resource-sparing?

What a "creative" use of #lst allows, if it is really an efficient, light
routine, is to build named variables and arrays of named variables into one
page; I can't imagine what a good programmer could do with such a powerful
tool. I'm, as you can imagine, far from a good programmer, nevertheless I
built easily routines for unbeliavable results. Perhaps, coming back to the
topic.  a good programmer would disrupt wikipedia using #lst? :-)
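The "named variables in one page" idea can be sketched as follows (hypothetical JavaScript; LST's marker syntax is `<section begin=... />` ... `<section end=... />`, and the page content here is invented):

```javascript
// Hypothetical sketch: reading one wiki page as a set of named values,
// each value delimited by LST-style section markers.
function readVariables(wikitext) {
  const vars = {};
  const re = /<section begin=([\w-]+) *\/>([\s\S]*?)<section end=\1 *\/>/g;
  let m;
  while ((m = re.exec(wikitext)) !== null) {
    vars[m[1]] = m[2].trim();   // section label -> section body
  }
  return vars;
}

const page =
  '<section begin=title />Divina Commedia<section end=title />\n' +
  '<section begin=author />Dante Alighieri<section end=author />';
const vars = readVariables(page);
```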

Alex


Re: [Wikitech-l] Does anybody have the 20080726 dump version?

2011-01-04 Thread Anthony
On Sat, Jan 1, 2011 at 11:46 AM, Ariel T. Glenn  wrote:
> On 01-01-2011, Sat, at 16:42 +, David Gerard
> wrote:
>> On 31 December 2010 17:09, Ariel T. Glenn  wrote:
>>
>> > I'd like all the dumps from all the projects to be on line.  Being
>> > realistic I think we would wind up keeping offline copies of all of it,
>> > and copies from every 6 months online, with the last several months of
>> > consecutive runs = around 20 or 30 of them also online.
>>
>>
>> Has anyone found anyone at the Internet Archive who answers their
>> email and would be interested in making these available to the world?
>> Sounds just their thing. Unless there's some reason it isn't.
>
> Yes, we know some people at the Archive; I am not sure what they would
> need to arrange, however. It's just a matter of having someone upload the
> dumps up there, as someone has done for a few of them in the past...
> unless you are talking about having them grab the dumps every couple of
> weeks and put them someplace organized.

Yes, I've asked them before about something similar for another
project, and they told me to just upload it and only contact them once
it was over 100 files or something (I forget the number).  As I recall
they're a pain to upload to, though, unless there was some rsync
access that I was missing or something.


Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Roan Kattouw
2011/1/4 Alex Brollo :
> Simply install Labeled Section Transclusion into a large pedia project;
Just from looking at the LST code, I can tell that it has at least one
performance problem: it initializes the parser on every request. This
is easy to fix, so I'll fix it today. I can also imagine that there
would be other performance concerns with LST preventing its deployment
to large wikis, but I'm not sure of that.
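The fix Roan describes is the classic lazy-initialization pattern; a sketch in JavaScript (hypothetical names, not the actual PHP extension code):

```javascript
// Hypothetical sketch: build the expensive parser object once and reuse
// it across requests, instead of re-initializing it on every call.
function makeSectionService(createParser) {
  let parser = null;                  // shared across all requests
  return function handleRequest(text) {
    if (parser === null) {
      parser = createParser();        // expensive step, done only once
    }
    return parser.parse(text);
  };
}
```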

Roan Kattouw (Catrope)



Re: [Wikitech-l] JavaScript access to uploaded file contents: SVGEdit gadget needs ApiSVGProxy or CORS

2011-01-04 Thread Roan Kattouw
2011/1/3 Brion Vibber :
> My SVGEdit wrapper code is currently using the ApiSVGProxy extension to read
> SVG files via the local MediaWiki API. This seems to work fine locally, but
> it's not enabled on Wikimedia sites, and likely won't be generally around;
> it looks like Roan threw it together as a test, and I'm not sure if
> anybody's got plans on keeping it up or merging to core.
>
I threw it together real quick about a year ago, because of a request
from Brad Neuberg from Google, who needed it so he could use SVGWeb (a
Flash thingy that provides SVG support for IE versions that don't
support SVG natively). Tim was supposed to review it but I don't
remember whether he ever did. Also, Mark had some concerns (he looked
into rewriting URLs in Squid first, but I think his conclusion was it
was tricky and an API proxy would be easier), and there were concerns
about caching, both from Mark who didn't seem to want these SVGs to
end up in the text Squids, and from Aryeh who *did* want them to be
cached. I told Aryeh I'd implement Squid support in ApiSVGProxy, but I
don't think I ever did that.

For more background, see the conversation that took place in
#mediawiki on Dec 29, 2009 starting around 00:30 UTC.

> Since ApiSVGProxy serves SVG files directly out on the local domain as their
> regular content type, it potentially has some of the same safety concerns as
> img_auth.php and local hosting of upload files. If that's a concern
> preventing rollout, would alternatives such as wrapping the file data &
> metadata into a JSON structure be acceptable?
>
I think we should ask Mark and Tim to revisit this whole thing and
have them work out what the best way would be to make SVGs available
on the same domain. There's too many things I don't know, so I can't
even guess what would be best.

> Alternately, we could look at using HTTP access control headers on
> upload.wikimedia.org, to allow XMLHTTPRequest in newer browsers to make
> unauthenticated requests to upload.wikimedia.org and return data directly:
>
> https://developer.mozilla.org/En/HTTP_Access_Control
>
> That would allow the front-end code to just pull the destination URLs from
> imageinfo and fetch the image data directly. It also has the advantage that
> it would work for non-SVG files; advanced HTML5 image editing tools using
> canvas could benefit from being able to load and save PNG and JPEG images as
> well.
>
> https://bugzilla.wikimedia.org/show_bug.cgi?id=25886 requests this for
> bits.wikimedia.org (which carries the stylesheets and such).
>
This should be enabled either way. You could then try the cross-domain
request, and use the proxy if it fails.

But which browsers need the proxy anyway? Just IE8 and below? Do any
of the proxy-needing browsers support CORS?
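Roan's "try cross-domain, fall back to the proxy" idea might look like this in a gadget (a hypothetical sketch: checking for `withCredentials` on XMLHttpRequest is the usual CORS capability test, `XDomainRequest` is IE8/9's limited alternative, and `env` stands in for the browser's global object so the logic is testable):

```javascript
// Hypothetical sketch: pick a fetch strategy based on what the browser
// supports, falling back to a same-domain proxy (e.g. ApiSVGProxy).
function chooseStrategy(env) {
  if (env.XMLHttpRequest &&
      'withCredentials' in new env.XMLHttpRequest()) {
    return 'cors';   // full CORS: request upload.wikimedia.org directly
  }
  if (env.XDomainRequest) {
    return 'xdr';    // IE8/9 cross-domain, no cookies or custom headers
  }
  return 'proxy';    // no cross-domain support: use a same-domain proxy
}
```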

Roan Kattouw (Catrope)



Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Alex Brollo
Can I suggest a really simple trick to inject something new into
"stagnating" Wikipedia?

Simply install Labeled Section Transclusion into a large pedia project;
don't ask, simply install it. If you asked, typical pedian boldness would
raise a comment "Thanks, we don't need such a thing" for sure. They need
it... but they don't know it, nor can they admit that a small sister
project like Wikisource currently uses something very useful.

Let them discover the surprising power of #lst.

Alex


Re: [Wikitech-l] What would be a perfect wiki syntax? (Re: WYSIWYG)

2011-01-04 Thread Alex Brollo
I apologize, I sent an empty reply. :-(

Just a brief comment: there's no need to search for "a perfect wiki
syntax", since it already exists: it's the present model of well-formed
markup, i.e. XML.

While digging into subtler troubles with wiki syntax, i.e. difficulties in
parsing it by scripts or understanding the fuzzy behavior of the code, I
always find a trouble coming from the simple fact that wiki markup isn't
intrinsically well formed - it doesn't respect the simple, basic rules of a
well-formed syntax: strict and evident rules about the beginning and ending
of a modifier, and no mixing of attributes and content inside its "tags",
i.e. templates.

In part, wiki markup can be hacked to take a step forward; I'm using more
and more "well formed templates", split into two parts, a "starting
template" and an "ending template". Just a banal example: it.source users
are encouraged to use the {{Centrato|l=20em}} text ... syntax, where the
text - as you see - is outside the template, while the usual
syntax {{Centrato| text ... |l=20em}} mixes tags and contents (Centrato
is the Italian name of "center", and the l attribute states the width of
the centered div). I find such a trick extremely useful when parsing text,
since - as follows from the use of a well-formed markup - I can retrieve
the whole text simply by removing any template code and any HTML tag; an
impossible task using the common "not well formed" syntax, where nothing
tells you about the nature of parameters: they can only be classified by
"human understanding" of the template code or by the whole body of the
wiki parser.
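Alex's claim - that with well-formed start/end templates the plain text can be recovered by deleting markup - can be sketched like this (hypothetical JavaScript; the template names are invented and nesting is ignored):

```javascript
// Hypothetical sketch: if no content is ever trapped inside {{...}},
// stripping template calls and HTML tags leaves exactly the text.
function extractText(wikitext) {
  return wikitext
    .replace(/\{\{[^{}]*\}\}/g, '')  // drop non-nested template calls
    .replace(/<[^>]+>/g, '')         // drop HTML tags
    .replace(/\s+/g, ' ')            // collapse leftover whitespace
    .trim();
}

const wellFormed =
  '{{Centrato begin|l=20em}}Nel mezzo del cammin{{Centrato end}}';
```

With the usual `{{Centrato| text ... |l=20em}}` form, this simple strip would also delete the text, which is exactly the problem described above.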

Alex