Call for participation in OpenSym 2015!
Aug 19-20, 2015, San Francisco, http://opensym.org
FOUR FANTASTIC KEYNOTES
Richard Gabriel (IBM) on Using Machines to Manage Public Sentiment on Social Media
Peter Norvig (Google) on Applying Machine Learning to Programs
Robert Glushko (UC Berkeley)
for regular submissions: May 31, 2013
* Camera-ready for both rounds: June 9, 2013
As long as it is May 17 somewhere on earth, your submission will be accepted.
COMMUNITY TRACK PROGRAM COMMITTEE
Chairs
Regis Barondeau (Université du Québec à Montréal)
Dirk Riehle (Friedrich-Alexander-Universität Erlangen-Nürnberg)
Try the Sweble parser for extracting structured data from Wikitext
http://sweble.org
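For readers who want to see what that looks like in practice, here is a minimal sketch of parsing a snippet of wikitext with the Sweble engine and printing the resulting AST; the class names (DefaultConfigEnWp, WtEngineImpl, EngProcessedPage) follow the project's published example code from memory and should be checked against the Sweble version you actually use, and the page title and wikitext are made-up sample data.

import org.sweble.wikitext.engine.PageId;
import org.sweble.wikitext.engine.PageTitle;
import org.sweble.wikitext.engine.WtEngineImpl;
import org.sweble.wikitext.engine.config.WikiConfig;
import org.sweble.wikitext.engine.nodes.EngProcessedPage;
import org.sweble.wikitext.engine.utils.DefaultConfigEnWp;

public class ParseExample
{
    public static void main(String[] args) throws Exception
    {
        // Wiki configuration for the English Wikipedia (shipped with Sweble).
        WikiConfig config = DefaultConfigEnWp.generate();
        WtEngineImpl engine = new WtEngineImpl(config);

        // A page title and id are needed so links and templates resolve sensibly.
        PageTitle title = PageTitle.make(config, "List of sovereign states");
        PageId pageId = new PageId(title, -1);

        // Sample wikitext; in practice this would come from a dump or the API.
        String wikitext = "* [[France]]\n* [[Germany]]\n* [[Japan]]\n";

        // Parse and post-process the wikitext into a typed AST.
        EngProcessedPage page = engine.postprocess(pageId, wikitext, null);

        // The AST behind page.getPage() can be traversed with a visitor to
        // pull out list items, links, templates, and other structured data.
        System.out.println(page.getPage());
    }
}

From there, extracting list entries is a matter of walking the AST and collecting the list-item nodes instead of printing the whole tree.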
http://dirkriehle.com, +49 157 8153 4150, +1 650 450 8550
On Nov 22, 2011 9:35 PM, Fred Zimmerman <zimzaz@gmail.com> wrote:
hi,
I want to programmatically extract lists from list pages on Wikipedia. That
Hello everyone!
Wikihadoop sounds like a great project!
I wanted to point out that you can make it even more powerful for many
research applications by combining it with the Sweble Wikitext parser.
Doing so, you could enable Wikipedia dump processing not only on the rough XML
dump level, but
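To make the combination concrete, here is a rough sketch (independent of Wikihadoop itself) of streaming an uncompressed pages-articles XML dump with the standard Java StAX API and handing each page's wikitext to a parser; the parsePage() method at the end is a hypothetical placeholder for whatever Sweble entry point you wire in.

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class DumpReader
{
    public static void main(String[] args) throws Exception
    {
        // Stream the dump instead of loading it into memory; full dumps are huge.
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader =
            factory.createXMLStreamReader(new FileInputStream(args[0]));

        String title = null;
        while (reader.hasNext())
        {
            if (reader.next() == XMLStreamConstants.START_ELEMENT)
            {
                String name = reader.getLocalName();
                if (name.equals("title"))
                    title = reader.getElementText();
                else if (name.equals("text"))
                {
                    String wikitext = reader.getElementText();
                    // Hypothetical handoff: replace with your Sweble call.
                    parsePage(title, wikitext);
                }
            }
        }
        reader.close();
    }

    static void parsePage(String title, String wikitext)
    {
        // Placeholder; a real implementation would build and traverse the AST.
        System.out.println(title + ": " + wikitext.length() + " characters");
    }
}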
On 05/03/2011 08:28 PM, Neil Harris wrote:
On 03/05/11 19:44, MZMcBride wrote:
...
The point is that the wikitext and its parsing should be completely separate
from MediaWiki/PHP/HipHop/Zend.
I think some of the bigger picture is getting lost here. Wikimedia produces
XML dumps that contain
You should identify whether you mean MediaWikitext, or some other
dialect -- MediaWiki Is Not The Only Wiki...
and you should post to wikitext-l as well. The real parser maniacs hang
out over there, even though traffic is low.
It is MediaWiki's Wikitext; elsewhere it is usually called wiki