Another option that relies on nothing but MediaWiki core is to use the MediaWiki preprocessor. You call
$wgParser->getPreprocessor()->preprocessToDom( 'my wikitext' ); and you get a DOM
document that contains the template call as an XML tree. It will
look like:
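A minimal sketch of that call, assuming an old-style setup where the global $wgParser is available (the wikitext {{foo|bar}} is just an illustrative input, and the serialized tree shown in the comment is what the classic Preprocessor_DOM backend roughly produces):

```php
<?php
// Parse some wikitext into the preprocessor's DOM tree.
// getPreprocessor() returns the Preprocessor bound to the parser;
// preprocessToDom() builds the XML-shaped tree without expanding templates.
$dom = $wgParser->getPreprocessor()->preprocessToDom( '{{foo|bar}}' );

// For a template call like {{foo|bar}} the tree serializes roughly as:
// <root><template><title>foo</title>
//   <part><name index="1"/><value>bar</value></part>
// </template></root>
// You can then walk the nodes to find the template title and its parts.
```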
Another maybe relevant example: ProofreadPage has a content model for
proofreading pages with multiple areas of wikitext, using some for
transclusion and others for rendering:
* https://github.com/wikimedia/mediawiki-extensions-ProofreadPage/blob/master/includes/page/PageContent.php
Thank you very much for this idea!
+1 for a gadget or maybe, in the future, a part of the ProofreadPage extension.
An issue that we will probably need to tackle when designing this tool: what about
non-existent words that could be typos of two different words? If we replace
all instances of the
Hi!
Your proposal looks very good. I'll be very happy to help you as a co-mentor on
this project if there is no GSoC project for Proofread Page.
Some comments:
1. You should keep in mind that the extension will be used by the Wikibooks and
Wikisource communities, so these communities must be involved in the
Hi!
For rendering X3D files there is x3dom, the official library of the
Web3D Consortium. It provides two backends: one using WebGL and a fallback
using Flash. http://www.x3dom.org . This library looks pretty easy to integrate
with MediaWiki.
Tpt
Brian Wolff <bawo...@gmail.com> wrote:
+1 with Amir. Unicode support is very important.
About the list of first wikis: why isn't fr.wikisource in the list? We
requested some months ago to be one of the first wikis to test the extension,
and the answer seemed positive.
https://bugzilla.wikimedia.org/show_bug.cgi?id=39744
Thomas