[Wikitech-l] [Language Engineering] Reminder: Bug triage on Wednesday, April 24 2013 at 1700 UTC/1000 PDT

2013-04-24 Thread Runa Bhattacharjee
Hello,

This is a reminder that the Language Engineering team will be hosting
a bug triage session today, i.e. 24th of April 2013 at 1700 UTC/1000
PDT on #mediawiki-i18n (Freenode). The bug list is at
http://etherpad.wikimedia.org/BugTriage-i18n-2013-04 . Event details
can be found in the section below.


Thanks
Runa

What: Translation User Interface bug triage
Date: April 24 2013
Time: 1700-1800 UTC, 1000-1100 PDT (Timezone conversion: http://hexm.de/r0)
Channel: #mediawiki-i18n (Freenode)
Etherpad: http://etherpad.wikimedia.org/BugTriage-i18n-2013-04
Questions can be sent to: runa at wikimedia dot org


-- Forwarded message --
From: Runa Bhattacharjee rbhattachar...@wikimedia.org
Date: Sat, Apr 20, 2013 at 1:02 AM
Subject: [Language Engineering] Bug triage on Wednesday, April 24 2013
at 1700 UTC/1000 PDT
To: mediawiki-i...@lists.wikimedia.org, Wikimedia Mailing List
wikimedi...@lists.wikimedia.org, wikitech-l@lists.wikimedia.org


What: Translation User Interface bug triage
Date: April 24 2013
Time: 1700-1800 UTC, 1000-1100 PDT (Timezone conversion: http://hexm.de/r0)
Channel: #mediawiki-i18n (Freenode)
Etherpad: http://etherpad.wikimedia.org/BugTriage-i18n-2013-04
Questions can be sent to: runa at wikimedia dot org

Hello,

The Language Engineering team would like to invite everyone to the
upcoming bug triage session on Wednesday, April 24 2013 at 1700 UTC
(1000 PDT). During this one-hour session we will be using the etherpad
listed above to collaborate. We have already listed some bugs, but
please feel free to add more bugs, comments, and any other related
issues that you'd like to see addressed during the session. You can
send questions directly to me by email or IRC (nick: arrbee). Please
see above for event details.

Thank you.

regards
Runa

--
Language Engineering - Outreach and QA Coordinator
Wikimedia Foundation


-- 
Language Engineering - Outreach and QA Coordinator
Wikimedia Foundation

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Google Summer of Code '13] Project Idea - Inspire Me Button

2013-04-24 Thread Gaurav Chawla
On 23 April 2013 11:00, Steven Walling steven.wall...@gmail.com wrote:


 Tracking reader activity on Wikipedia is a _very_
 touchy subject for a whole host of legitimate reasons, and to be totally
 honest I don't think you're going to be able to implement
 any recommender system based on people's reading habits.



Couldn't he use *broad* categories for tracking? Instead of registering
an activity against the article or the immediate category in which it
falls, we could traverse up the category tree until we reach a category
containing at least 1,000 articles. This would ensure that the user is
not tracked explicitly; more or less, he might not even notice that he
is being tracked. (Note: tracking like this does not make us evil :P)

For example, if a reader visits the page [[Burmese rupee]], we can
traverse up the categories: Burmese rupee -> Rupee -> Currency
denominations -> Currency -> International finance -> International
economics -> Economics, and then save the visit as an activity in the
category International economics or Economics rather than in Rupee or
Currency.

Though this will make the suggestions less precise, it will surely
generate insights into the active interests of the user (and that's the
purpose). Also, there can always be an option for the user to register
an article manually (a kind of bookmark/watchlist), and if he does so,
the suggestions for that user will improve.
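
As a rough illustration, the generalization step could be coded like the
sketch below; countArticlesIn() and getParentCategory() are hypothetical
helpers, not existing MediaWiki functions.

function generalizeCategory( $category, $minArticles = 1000 ) {
	// Walk up the category tree until the category is broad enough that
	// logging a visit to it no longer pinpoints the reader's interest.
	// For simplicity this follows a single parent; real category trees
	// can have several parents per category.
	while ( countArticlesIn( $category ) < $minArticles ) {
		$parent = getParentCategory( $category );
		if ( $parent === null ) {
			break; // reached a top-level category such as Economics
		}
		$category = $parent;
	}
	return $category;
}

// A visit to [[Burmese rupee]] would then be logged under something like
// generalizeCategory( 'Rupee' ), e.g. 'International economics'.
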
Regards
Gaurav
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Support for multiple content languages in MW core

2013-04-24 Thread Bartosz Dziewoński
I think ContentHandler already theoretically has the ability to store
per-page language info, it's just not being used. (And of course it'd
have to be actually deployed somewhere other than Wikidata.) Unless I'm
missing something, this mostly needs an interface (which is not a
small undertaking by any means, either).

-- 
-- Matma Rex

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Support for multiple content languages in MW core

2013-04-24 Thread Denny Vrandečić
Just to add, ContentHandler is deployed on all Wikimedia projects.


2013/4/24 Bartosz Dziewoński matma@gmail.com

 I think ContentHandler already theoretically has the ability to store
 per-page language info, it's just not being used. (And of course it'd
 have to be actually deployed somewhere other than Wikidata.) Unless I'm
 missing something, this mostly needs an interface (which is not a
 small undertaking by any means, either).

 --
 -- Matma Rex

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
(Society for the Promotion of Free Knowledge). Registered in the register
of associations of the Amtsgericht Berlin-Charlottenburg under number
23855 B. Recognized as charitable by the Finanzamt für Körperschaften I
Berlin, tax number 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC on possible interproject link interfaces

2013-04-24 Thread Mathieu Stumpf

On 2013-04-24 03:53, David Cuenca wrote:

Dear all,

I have started a new RFC with some proposals for the interproject links
and you can add more if you want.

https://meta.wikimedia.org/wiki/Requests_for_comment/Interproject_links_interface

It has been a long-standing issue and one of the most voted enhancements
in Bugzilla:
https://bugzilla.wikimedia.org/show_bug.cgi?id=708


Very interesting. I'm working on a template[1] that I want to propose to 
the French Wikipedia community to address the following issue: some 
articles take for granted that the reader will know certain non-trivial 
concepts. My idea would be to give the reader an opportunity to quickly 
check whether they lack such prerequisite knowledge, through a box which 
would list each concept with the relevant Wikipedia article, but also 
the relevant Wikiversity course if the reader would like to follow 
structured material on the topic.


For now I have to finish the template so I can go to the community with 
a working example and let them decide on something better than a mere 
textual mockup. I know not everybody will be enthusiastic about this 
idea, especially given that (at least in the French chapter) Wikiversity 
courses are not equal in quality, but to my mind this is a vicious 
circle: poor quality -> low visibility -> few contributors -> poor 
quality.


[1] https://fr.wikipedia.org/wiki/Mod%C3%A8le:Pr%C3%A9alable



Having the sister project templates at the bottom of the page is also
one of the reasons why sister projects have been so hidden from the
eyes of the general public, and now with Wikidata the issue of
maintainability can be addressed as well (a similar problem as with
interlanguage links).

Micru
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


--
Association Culture-Libre
http://www.culture-libre.org/

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Support for multiple content languages in MW core

2013-04-24 Thread Bartosz Dziewoński

On Wed, 24 Apr 2013 12:53:50 +0200, Denny Vrandečić 
denny.vrande...@wikimedia.de wrote:


Just to add, ContentHandler is deployed on all Wikimedia projects.


But with $wgContentHandlerUseDB = false, so not really.

--
Matma Rex

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Support for multiple content languages in MW core

2013-04-24 Thread Paul Selitskas
I've already tried both using page properties to store page content
language and modifying ContentHandler::getPageLanguage()[1]. In both
cases parser worked in a different language scope and didn't process
magic words written in a default wiki language (e.g. Russian
[[Категория:Test]] wouldn't work on a German page; English had to be
used in both pages). It's OK for a wiki with the English language as
default, but if such multi-lingual wiki worked for years with German
on board, and then you implement the above said, all pages in other
languages wouldn't be parsed properly.

I couldn't achieve page content manipulations at the time of parsing
(by means of magic words). It may be either me being one-eyed or the
current parser design.

P.S. With the page properties approach, I had to set the properties
through the command line. You would have to make an Action^WSpecial Page
for that. Also, it will need some sort of restriction policy to prevent
vandalism.

--
[1] By determining the postfix (/en, /ru, /zh, etc.)
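
For illustration, the getPageLanguage() approach from [1] could look
roughly like this sketch (the subclass and the suffix-matching logic are
invented for the example, not code that exists anywhere):

class SubpageLanguageContentHandler extends WikitextContentHandler {
	public function getPageLanguage( Title $title, Content $content = null ) {
		// Look for a trailing language code, e.g. "Main Page/ru".
		if ( preg_match( '!/([a-z-]+)$!', $title->getText(), $matches )
			&& Language::isValidCode( $matches[1] )
		) {
			return Language::factory( $matches[1] );
		}
		// Otherwise fall back to the wiki's default content language.
		return parent::getPageLanguage( $title, $content );
	}
}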

On Wed, Apr 24, 2013 at 8:00 AM, MZMcBride z...@mzmcbride.com wrote:
 Erik Moeller wrote:
I'd like to start a broader conversation about language support in MW
core [...]

 Mailing lists are good for conversation, but a lot of your e-mail was
 insightful notes that I want to make sure don't get lost. I hope you'll
 eventually put together an RFC (https://www.mediawiki.org/wiki/RFC) or
 equivalent.

 [...]

I'll stop there - I'm sure you can think of other issues with the
current approach. For third party users, the effort of replicating
something like the semi-acceptable Commons or Meta user experience is
pretty significant, as well, due to the large number of templates and
local hacks employed.

 Well, for Commons, clearly the answer is for everyone to write in glyphs.
 Wingdings, Webdings, that fancy new color Unicode that Apple has.
 Meta-Wiki, on the other hand, now that's a real problem. ;-)

Would it make sense to add a language property to pages, so it can be
used to solve a lot of the above issues, and provide appropriate and
consistent user experience built on them? (Keeping in mind that some
pages would be multilingual and would need to be identified as such.)
If so, this seems like a major architectural undertaking that should
only be taken on as a partnership between domain experts (site and
platform architecture, language engineering, Visual Editor/Parsoid,
etc.).

 I'm not sure I'd call what you're proposing a major architectural
 undertaking, though perhaps I'm defining a much narrower problem scope.
 Below is my take on where we are currently and where we should head with
 regard to page properties.

 We need better page properties (metadata) support. A few years ago, a
 page_props table was added to MediaWiki:

 * https://www.mediawiki.org/wiki/Manual:Page_props_table

 Within the past year, MediaWiki core has seen the info action resuscitated
 and Special:PagesWithProp implemented:

 * https://www.mediawiki.org/w/index.php?title=MediaWiki&action=info
 * https://www.mediawiki.org/wiki/Special:PagesWithProp

 That is, a lot of the infrastructure needed to support a basic language
 property field already exists, in my mind.
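
 For illustration, reading such a property could look roughly like the
 sketch below; the 'pagelanguage' property name is an assumption, only
 the table and column names are real:

 $dbr = wfGetDB( DB_SLAVE );
 // Fetch the hypothetical 'pagelanguage' entry from page_props.
 $langCode = $dbr->selectField(
     'page_props',
     'pp_value',
     array(
         'pp_page' => $title->getArticleID(),
         'pp_propname' => 'pagelanguage',
     ),
     __METHOD__
 );
 if ( $langCode === false ) {
     // No per-page property set: fall back to the site-wide language.
     $langCode = $wgLanguageCode;
 }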

 However, where we currently fall short is providing a reasonable interface
 for adding or modifying page properties. Currently, we use the page text
 to set nearly any property, via magic words (e.g., __NEWSECTIONLINK__ or
 {{DISPLAYTITLE:}}). The obvious advantage to doing this is the
 accountability, transparency, and reversibility of using the same system
 that edits rely on (text table, revision table). The obvious disadvantage
 is that the input system is a giant textarea.

 If we could design a sane interface for modifying page properties (such as
 display title and a default category sort key) that included logging and
 accountability and reversibility, adding page content language as an
 additional page property would be pretty trivial. (MediaWiki could even do
 neat tricks like take a hint from either the user interface language of
 the page creator or examine the page contents themselves to make an
 educated guess about the page content language.) And as a fallback, I
 believe every site already defines a site-wide content language (even
 Meta-Wiki and Commons). The info action can then report this information
 on a per-page basis and Special:PagesWithProp can allow lookups by page
 property (i.e., by page content language).

 MZMcBride



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Regards,
Павел Селіцкас/Pavel Selitskas
Wizardist @ Wikimedia projects

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Support for multiple content languages in MW core

2013-04-24 Thread Amir E. Aharoni
2013/4/24 Paul Selitskas p.selits...@gmail.com:
 I've already tried both using page properties to store page content
 language and modifying ContentHandler::getPageLanguage()[1]. In both
 cases parser worked in a different language scope and didn't process
 magic words written in a default wiki language (e.g. Russian
 [[Категория:Test]] wouldn't work on a German page; English had to be
 used in both pages). It's OK for a wiki with the English language as
 default, but if such multi-lingual wiki worked for years with German
 on board, and then you implement the above said, all pages in other
 languages wouldn't be parsed properly.

If I understand correctly, the Visual Editor should gradually
eliminate the need for users to use magic words directly, as well as
for stuff like [[Category:]] and #REDIRECT. It should all be done
using a GUI eventually. So the need for localized magic words should
disappear, too.

--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
‪“We're living in pieces,
I want to live in peace.” – T. Moore‬

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Support for multiple content languages in MW core

2013-04-24 Thread James Forrester
On 24 April 2013 06:28, Amir E. Aharoni amir.ahar...@mail.huji.ac.il wrote:
 2013/4/24 Paul Selitskas p.selits...@gmail.com:
 I've already tried both using page properties to store page content
 language and modifying ContentHandler::getPageLanguage()[1]. In both
 cases parser worked in a different language scope and didn't process
 magic words written in a default wiki language (e.g. Russian
 [[Категория:Test]] wouldn't work on a German page; English had to be
 used in both pages). It's OK for a wiki with the English language as
 default, but if such multi-lingual wiki worked for years with German
 on board, and then you implement the above said, all pages in other
 languages wouldn't be parsed properly.

 If I understand correctly, the Visual Editor should gradually
 eliminate the need for users to use magic words directly, as well as
 for stuff like [[Category:]] and #REDIRECT. It should all be done
 using a GUI eventually. So the need for localized magic words should
 disappear, too.

This is correct; a "Page Settings" (meta-data) dialog is coming soon
to a VisualEditor near you, initially with just categories, but
longer-term all behavioural magic words, language links and any other
meta-data people can think of will be there. This will mean that users
will not be surprised to find a mysterious __NOGALLERY__ and wonder
what it does; there will be a place to describe what it does in their
user-display language. The need for multilingual magic words in the
same context will thus fade (though as we're planning for side-by-side
wikitext and VisualEditor editing, there may still be some demand).

Of course, this only solves the problem for Wikimedia and other people
happy to run a Parsoid service alongside MediaWiki. We have a general
plan to build out a "no wikitext ever, just store HTML+RDFa" MediaWiki
option, so only legacy sites would need Parsoid (and if you were
willing to convert your storage from wikitext to HTML, not even that),
but this is a lower priority than getting everything working. :-)

J.
--
James D. Forrester
Product Manager, VisualEditor
Wikimedia Foundation, Inc.

jforres...@wikimedia.org | @jdforrester

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikitech-l Digest, Vol 117, Issue 98

2013-04-24 Thread Tuszynski, Jaroslaw W.
I agree with Erik that multi-language support on multi-language projects like 
Commons is very messy, complicated and inconsistent. The system has morphed 
into a web of JavaScript and templates designed and maintained by many users 
who each know only a small section of the whole system. For example, I do a 
lot of template maintenance and internationalization (i18n), yet have very 
little understanding of the poorly documented JavaScript[3] used (which 
interacts with some templates) or of the interactions between translatewiki 
(where many translations are made) and Commons (see [2]).

One of the challenges is that many issues experienced by some users in one 
language are not experienced by others. For example, since many templates on 
Commons are very close to the template expansion limit, the limit is often 
crossed in one language but not in another. (Hopefully that will be solved by 
rewriting some of the templates in Lua.) Also, functionality differs greatly 
between logged-in and logged-out users. For example, the language links at 
the bottom of some templates, like [[Template:Delete]] [4], work for 
logged-out users but do nothing if you are logged in.

Another huge challenge for current and future systems is one that Erik 
already pointed out: many translations are not 1:1. People often add 
corrections to text in languages they know, so the different language 
versions slowly drift apart. For example, I lately noticed that some 
significant changes to template:PD-Polish [5] did not make it to any of the 
other versions, so different people see different license templates. The only 
solution for this I can think of is some sort of marking of the text to 
highlight out-of-date translations and to also provide the up-to-date version 
in another language.

Whatever system we use should allow two forms of i18n: macro (where whole 
pages or large sections are translated as a whole) and micro (where 
individual words, phrases or sentences are translated). Also, since Commons 
mostly deals with images, a lot of translated content is image metadata, like 
the technique used to create an artwork or the century in which the creator 
of the artwork lived. This type of metadata can be handled by 
language-independent properties like the ones used at Wikidata (see [6]).

I see that there will be a scheduled talk about Extension:Translate and Commons 
at Wikimania 2013 [1]. 

[1] 
http://wikimania2013.wikimedia.org/wiki/Submissions/Multilingual_Wikimedia_Commons_-_What_can_we_do_about_it
 
[2] 
https://commons.wikimedia.org/wiki/User:Multichill/Template_i18n_at_Translatewiki
 
[3] 
https://commons.wikimedia.org/wiki/MediaWiki_talk:Multilingual_description.js 
[4] https://commons.wikimedia.org/wiki/Template:Delete 
[5] 
https://commons.wikimedia.org/w/index.php?title=Template%3APD-Polish%2Fen&diff=90390931&oldid=74914315
 
[6] 
http://www.wikidata.org/wiki/Wikidata:Project_chat/Archive/2013/03#Wikidata_and_Commons
 

Jarek T.
User:jarekt
 
Date: Tue, 23 Apr 2013 20:29:49 -0700
From: Erik Moeller e...@wikimedia.org
To: Wikimedia developers wikitech-l@lists.wikimedia.org
Subject: [Wikitech-l] Support for multiple content languages in MW
core
Message-ID:
CAEg6ZHmcSU=M8w2314EbsLZ2ZWTYgyzSLpwBqWbyw9nauo=w...@mail.gmail.com
Content-Type: text/plain; charset=ISO-8859-1

Hi folks,

I'd like to start a broader conversation about language support in MW
core, and the potential need to re-think some pretty fundamental
design decisions in MediaWiki if we want to move past the point of
diminishing returns in some language-related improvements.

In a nutshell, is it time to make MW aware of multiple content
languages in a single wiki? If so, how would we go about it?

Hypothesis: Because support for multiple languages existing in a
single wiki is mostly handled through JS hacks, templates, and manual
markup added to the content (such as divs indicating language
direction), we are providing an opaque, confusing and often
inconsistent user experience in our multilingual wikis, which is a
major impediment for growth of non-English content in those wikis, and
participation by contributors who are not English speakers.

Categories have long been called out as one of the biggest factors,
and they certainly are (since Commons categories are largely in
English, they are by definition excluding folks who don't speak the
language), but I'd like to focus on the non-category parts of the
problem for the purposes of this conversation.

Support for the hypothesis (please correct misconceptions or errors):

1) There's no consistent method by which multiple language editions of
the same page are surfaced for selection by the user. Different wikis
use different templates (often multiple variants and layouts in a
single wiki), different positioning, different rules, etc., leading to
inconsistent user experience. Consistency is offered by language
headers generated by the Translate extension, but these are used for

Re: [Wikitech-l] GSoC and OPW Participation

2013-04-24 Thread Jiabao Wu
Hello,

Thanks for your friendly replies.

I am interested in the awesome VisualEditor and would like to take the
plugins project idea. I would like to focus on adding support for editing
equations and also on insertion of images.

These are common and useful functions; what are your opinions on the value
of this proposal to MediaWiki?

Google Docs handles these two quite nicely with its editor. Though we
can't see the language they represent it with, I would like to accomplish
something similar for VisualEditor.

Cheers,
Jiabao
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikisource-l] Fwd: Bug 189: Add a music wikimodule resolved/fixed

2013-04-24 Thread David Cuenca
I have started a page in Meta to discuss the options of either creating a
Wikisource version dedicated to musical scores or an independent project
https://meta.wikimedia.org/wiki/Requests_for_comment/Musical_score_transcription_project_proposal

Micru

On Wed, Apr 24, 2013 at 4:34 AM, John Vandenberg jay...@gmail.com wrote:

 Yay!!

 John Vandenberg.
 sent from Galaxy Note
 -- Forwarded message --
 From: MZMcBride z...@mzmcbride.com
 Date: Apr 23, 2013 12:28 PM
 Subject: [Wikitech-l] Bug 189: Add a music wikimodule resolved/fixed
 To: Wikimedia developers wikitech-l@lists.wikimedia.org
 Cc:

 Hi.

 https://bugzilla.wikimedia.org/show_bug.cgi?id=189

 Congrats to all involved in getting bug 189 resolved! :-)

 Bug 189 was one of the oldest unresolved and one of the better known bugs
 in Bugzilla involving a request to add a music module to Wikimedia wikis.
 Quick stats about the bug:

 * Opened: 2004-08-22
 * Votes: 48
 * Comments: 123

 The bug filer is still around and left a nice note on the bug
 (https://bugzilla.wikimedia.org/show_bug.cgi?id=189#c123):

 ---
 Congratulations to all !

 It makes my dream comes true today !

 Thanks million times!
 ---

 https://en.wikipedia.org/wiki/Note seemed like an easy target for
 demoing the newly deployed Score extension
 (https://www.mediawiki.org/wiki/Extension:Score) on a production site,
 if anyone's interested. I tried looking around for a point-and-click
 LilyPond or ABC code generation tool (preferably Web-based), but a lot of
 these tools quickly went over my head.

 MZMcBride



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

 ___
 Wikisource-l mailing list
 wikisourc...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikisource-l




-- 
Etiamsi omnes, ego non
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: [WikimediaMobile] Rethinking MobileFormatter / Skins

2013-04-24 Thread Jon Robson
The things that are not the most trivial are the most challenging and fun.
I'd hate to wait for Parsoid - if anything this would help Parsoid by
defining the needs of mediawiki's parser.

I would suggest starting with just extracting table of contents and
section data into individual components...
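
To make that concrete, a component-structured parser output might look
roughly like the sketch below (all names are invented; this is not an
existing MediaWiki API):

interface PageComponent {
	// Render this component as HTML; a skin may restyle, move or drop it.
	public function getHTML();
}

class ParserDocument {
	/** @var PageComponent[] keyed by name, e.g. 'toc', 'sections' */
	private $components = array();

	public function setComponent( $name, PageComponent $component ) {
		$this->components[$name] = $component;
	}

	// A skin (e.g. mobile) could pull the TOC or the cleanup notices out
	// and place them wherever it wants, instead of hacking the HTML.
	public function getComponent( $name ) {
		return isset( $this->components[$name] )
			? $this->components[$name]
			: null;
	}
}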

On Tue, Apr 23, 2013 at 12:27 PM, Daniel Friesen
dan...@nadir-seen-fire.com wrote:
 It's an interesting idea, though it won't be trivial. We could start to
 express the parser output of pages as high-level components nested in each
 other: a document with a TOC nested in it, as well as high-level
 headers (or maybe sections), embedded image frames, etc., where things like
 the headers and image frames are components with their own rendering that
 the skin could create components to replace the rendering for. And even
 strip a component out of the page and put it somewhere else.

 Though it can't be a simple list of components, since many of these things
 have specific locations inside the content to go in.

 Then again, ideas like this may just make VisualEditor/Parsoid harder. We
 might want to come back to this once Parsoid is done with its DOM, etc.


 On Tue, 23 Apr 2013 12:01:00 -0700, Jon Robson jdlrob...@gmail.com wrote:

 Daniel
 Great to hear I'm not alone in my thoughts. Things like the edit
 section link shouldn't have to be hacked in at the parser level - if
 the parser returned the components of a page it would be trivial for
 the skin to add these itself.

 The fact that you are resorting to hacks to do things which should be
 trivial (the infoboxes are a great example) suggests that this is a
 smell that would be good to fix.

 I can't help but think of Wikipedia redefined [1] which had some
 interesting ideas that alas would be very hard to try out in the
 current setup.

 Another example I can give from mobile is that we render cleanup templates
 as an overlay, as these can fill an entire mobile screen on certain
 articles, making the content unreadable (very much against a world in
 which every single human being can freely share in the sum of all
 knowledge :)). We have had to add some JavaScript which collapses
 them, but it would be great to have a better solution for
 non-JavaScript users (maybe hide the cleanup templates, or put them at
 the bottom of the page and create a link to that at the top for those
 who need them), as currently this is a world in which every single human
 being ... with JavaScript ... can freely share in the sum of all
 knowledge. :)

 [1] http://www.wikipediaredefined.com/


 --
 ~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Jon Robson
http://jonrobson.me.uk
@rakugojon

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Support for multiple content languages in MW core

2013-04-24 Thread Marc A. Pelletier
On 04/23/2013 11:29 PM, Erik Moeller wrote:
 (Keeping in mind that some
 pages would be multilingual and would need to be identified as such.)
 If so, this seems like a major architectural undertaking that should
 only be taken on as a partnership between domain experts (site and
 platform architecture, language engineering, Visual Editor/Parsoid,
 etc.).

My two currency subunits:

A Wikidata-like approach seems like the only sensible approach to the
problem IMO; that is, the concept of a 'page (read: data item)' should
be language-neutral and branch off into a set of real pages with their
own title and language information.

Metapage X would have an enumeration of representations in different
languages, each with their own localized title(s) and contents. This
way, given any such page, the actual information needed to switch
between languages and handle language-specific presentation is
immediately available. Categories would need no magical handling; that
category Y is named "Images of dogs" in English and "Imágenes de perros"
in Spanish is just part of the normal structure.

Add to this a simple user preference of language ordering for when
their language is unavailable, and you have a good framework.
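
A rough sketch of that structure, with all field names invented (not a
real schema):

$metapage = array(
	'item' => 'X',
	'pages' => array(
		'en' => array( 'title' => 'Images of dogs', 'content' => '...' ),
		'es' => array( 'title' => 'Imágenes de perros', 'content' => '...' ),
	),
);

// Pick the rendering for a reader, honouring their preferred language order.
function pickPage( array $metapage, array $preferredLangs ) {
	foreach ( $preferredLangs as $code ) {
		if ( isset( $metapage['pages'][$code] ) ) {
			return $metapage['pages'][$code];
		}
	}
	return reset( $metapage['pages'] ); // last resort: any available language
}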

All that'd be left is...  UI.  :-)

-- Marc


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Zuul (gerrit/jenkins) upgraded

2013-04-24 Thread Antoine Musso
Hello,

This morning was very quiet so I have been bold and upgraded our Zuul
installation.

The new version comes with:

 - faster reporting (we were hit by slow reporting a few weeks ago)
 - various performance improvements
 - configuration validation (I have set up the Jenkins job already)
 - support for Gerrit events introduced in 2.5/2.6
 - statistics reporting (though that needs statsd)
 - customization of success and failure messages

Overall, there is no user facing change.

Next steps would be customizing the messages a bit and complete the
Debian package to make upgrades easier.

Thanks to Faidon for all his help while packaging the python modules
dependencies.

cheers,

-- 
Antoine hashar Musso


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] GSoC query regarding jQuery.ime project

2013-04-24 Thread Praveen Singh
Hi,

I am Praveen Singh, a final-year Computer Science graduate student at JIIT,
India. I wish to apply for GSoC 2013 and I am thinking about jQuery.ime
extensions for Firefox and Chrome as a project.
What I understood about the jQuery.ime project after going through its
GitHub repository https://github.com/wikimedia/jquery.ime is:

   - It provides multilingual input support for the editable fields on a
   page.
   - Rules and keyboard mappings for different languages are defined in
   separate js files for the corresponding languages.

Does the project simply aim at packaging the source files into an
extension?

If so, doesn't that sound like a pretty small project for the complete GSoC
timeline? (Your thoughts?)

Or is it the fact that we need to develop two different extensions (for
Firefox and Chrome) that makes it a good enough project for GSoC?

Enlighten me if I am not clear about the objectives of the project.

Thanks,
Praveen Singh
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Design] Making inter-language links shorter

2013-04-24 Thread Pau Giner

 Also, did you think of the accessibility issues in your solution?


First, I want to clarify that the prototype was made just to communicate
the idea in terms of interaction. The implementation is just a quick
hack to simulate this interaction.

For a production implementation I can imagine the whole list of languages
being sent to the client, and then the list being shortened by JavaScript.
For those users without JavaScript (from screen readers to search engine
crawlers), the same list of links they receive now will be available for
them.

In any case, developers could provide even better strategies to solve that.
As an interaction designer I just wanted to share the idea to collect
possible concerns with the proposed interaction, to be fixed in the next
iterations of the designs before development effort is made.




On Sat, Apr 20, 2013 at 2:04 AM, Mathieu Stumpf 
psychosl...@culture-libre.org wrote:

 Also, did you think of the accessibility issues in your solution? Here I
 especially think of people with vision disabilities, for whom JS often
 means no way to get the content, while a long list of hyperlinks is
 manageable.

 On Friday, 19 April 2013 at 20:19 +0200, Pau Giner wrote:
  Thanks for all the feedback!
 
 
  I'll try to respond below to some of the issues raised:
 
 
  Which is the problem?
 
 
  As it has been mentioned, one of the most effective ways of hiding
  something is to surround it in the middle of a long list. This
  produces two problems:
 * Lack of discoverability. Users may not be aware that the
  content is available in their language (which goes against our
  goal of providing access to knowledge). Speakers of small
  languages who access the English Wikipedia because it has
  more content are forced to make an effort each time to check
  whether each article is also available in their language.
 * Problems for multi-lingual exploration. It is hard to switch
  between multiple language versions since the user has to look
  for his languages each time in the whole list.
 
 
  The fact that some Wikipedias adjust the order of the languages, and the
  existence of user scripts and an Opera extension to solve the issue,
  are indicators of the existence of such a problem.
 
 
 
  We support lots of languages (300+) but users are normally interested
  in a small (1-8) subset of those. We need to make this subset easily
  discoverable for our users, and providing it in the middle of a list
  with 200 items is not the best way to do it, in my opinion.
 
 
  Possible cultural and value problems
 
 
  As was commented, the multilingual nature of Wikipedia is a strongly
  held value. However, currently it is hard to know in how many
  languages an article is available, since you need to count the links.
  With the proposed approach we provide a number which helps to
  communicate that. So I think we are not going against that value.
 
 
  I think that concerns about the imposition of languages per region are
  not a big issue when the previous user choices and the browser accept
  language are considered with more priority than Geo-IP. Users just
  need to select their language once and it will appear in the
  short list the next times. These concerns are more relevant to
  the current situation, where some wikis put some languages on top
  regardless of user choices (for some users they work 100% of the time,
  for others they fail 100% of the time).
 
 
 
  I also don't think that we should prioritise the need to hide
  languages that users somehow dislike over making it easy to access
  the languages that the user wants. In any case, the former is also not
  supported with the current approach.
 
 
  Why to hide?
 
 
  I understand the problems observed when language links were initially
  hidden in Vector, since users were required to make an additional step
  to get to the same long list of links we currently have. With the
  proposed approach, the extra step is only taken in exceptional cases
  (e.g., a user in a foreign country accessing from a public PC), and
  this is done only once (not for each language change), and aids such
  as search are provided to make it really quick.
 
 
  The reordering alternative has some problems compared with the
  proposed approach. For example, when a language does not appear on
  top, it is hard to determine whether the current article is not
  provided in that language or it is in the middle of the list. In
  addition, with reordering, you cannot rely on alphabetical order
  (while you can present the short list alphabetically).
 
 
 
 
  Considering size and quality of the article
 
 
  It can be a factor to consider, since communicating that an article has
  good versions in other languages is a good thing. But I think it is a
  low-priority one, since I find it hard to imagine a user selecting a
  language which she does not understand (otherwise it will already be in
  the 

Re: [Wikitech-l] [Design] Making inter-language links shorter

2013-04-24 Thread Brion Vibber
Please note that screen readers and readers without JavaScript aren't
in any way the same thing -- modern screen readers hook into real
browsers like IE, Safari, Chrome, and Firefox and JavaScript runs just
great in them.

Serious accessibility concerns should really be referred to someone who's
an expert... do we have anybody on staff or volunteer who has real, direct,
current experience with vision-impaired users and their needs?

-- brion


On Wed, Apr 24, 2013 at 11:51 AM, Pau Giner pgi...@wikimedia.org wrote:

 
  Also, did you think of the accessibility issues in your solution?


 First, I want to clarify that the prototype was made just to communicate
 the idea in terms of interaction. The implementation is just a quick
 hack to simulate this interaction.

 For a production implementation I can imagine the whole list of languages
 being sent to the client, and then the list being shortened by JavaScript.
 For those users without JavaScript (from screen readers to search engine
 crawlers), the same list of links they receive now will be available for
 them.

 In any case, developers could provide even better strategies to solve that.
 As an interaction designer I just wanted to share the idea to collect
 possible concerns with the proposed interaction, to be fixed in the next
 iterations of the designs before development effort is made.




 On Sat, Apr 20, 2013 at 2:04 AM, Mathieu Stumpf 
 psychosl...@culture-libre.org wrote:

  Also, did you think of the accessibility issues in your solution? Here I
  especially think of people with vision disabilities, for whom JS often
  means no way to get the content, while a long list of hyperlinks is
  manageable.
 

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] how long would it take to update user options for all en.wiki users?

2013-04-24 Thread Ryan Kaldari
Anyone have an estimate for how long it would take this maintenance 
script to run for en.wiki?

https://gerrit.wikimedia.org/r/#/c/60689

Ryan Kaldari

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Ideating on GSoC project : Mobilize Wikidata

2013-04-24 Thread Lydia Pintscher
On Wed, Apr 24, 2013 at 3:00 AM, Pragun Bhutani pragu...@gmail.com wrote:
 So I've been trying to implement the suggestions and I think I'm one step
 away from seeing results. I've got Wikibase and all its dependencies set
 up, and I've got MobileFrontend installed (but commented out for the moment).

Sweet!

 I can't figure out how to get some Wikidata-style data onto my local
 installation to see how it looks on a mobile, though!

 If somebody could point me in the right direction, I'll do that and will
 set up a local tunnel to share the results (if any!).

 That should give me some information and I should be able to draft a rough
 proposal with the project needs!

I take it you talked to Denny on IRC and this is all solved now. Please
let me know if not.
Looking forward to reading your proposal.


Cheers
Lydia

--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Technical Projects

Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
(Society for the Promotion of Free Knowledge).

Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as charitable by
the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Ideating on GSoC project : Mobilize Wikidata

2013-04-24 Thread Pragun Bhutani
It's all solved now, yes, and I have a working model based on those
suggestions already. I will share a rough proposal in a couple of hours! :)

On Thu, Apr 25, 2013 at 1:26 AM, Lydia Pintscher 
lydia.pintsc...@wikimedia.de wrote:

 On Wed, Apr 24, 2013 at 3:00 AM, Pragun Bhutani pragu...@gmail.com
 wrote:
  So I've been trying to implement the suggestions and I think I'm one step
  away from seeing results. I've got Wikibase and all its dependencies set
  up, and I've got MobileFrontend installed (but commented out for the
 moment).

 Sweet!

  I can't figure out how to get some Wikidata style data on to my local
  installation to see how it looks on a mobile though!
 
  If somebody could point me in the right direction, I'll do that and will
  set up a local tunnel to share the results (if any!).
 
  That should give me some information and I should be able to draft a
 rough
  proposal with the project needs!

 I take it you talked to Denny on IRC and this is all solved now. Let
 me know if not please.
 Looking forward to reading your proposal.


 Cheers
 Lydia

 --
 Lydia Pintscher - http://about.me/lydia.pintscher
 Community Communications for Technical Projects

 Wikimedia Deutschland e.V.
 Obentrautstr. 72
 10963 Berlin
 www.wikimedia.de

 Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
 (Society for the Promotion of Free Knowledge).

 Registered in the register of associations of the Amtsgericht
 Berlin-Charlottenburg under number 23855 Nz. Recognized as charitable by
 the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Pragun Bhutani
http://pragunbhutani.in
Skype : pragun.bhutani
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Design] Making inter-language links shorter

2013-04-24 Thread Sumana Harihareswara
https://www.mediawiki.org/wiki/Accessibility has a list of "People and
organizations working on MediaWiki accessibility", including Sami
Mubarak, who works on the accessibility of MediaWiki for the blind (in
Arabic). I've cc'd Sami.

Hope this helps!

-- 
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation

On 04/24/2013 03:17 PM, Brion Vibber wrote:
 Please note that screen readers and readers without JavaScript aren't
 in any way the same thing -- modern screen readers hook into real
 browsers like IE, Safari, Chrome, and Firefox and JavaScript runs just
 great in them.
 
 Serious accessibility concerns should really be referred to someone who's
 an expert... do we have anybody on staff or volunteer who has real, direct,
 current experience with vision-impaired users and their needs?
 
 -- brion
 
 
 On Wed, Apr 24, 2013 at 11:51 AM, Pau Giner pgi...@wikimedia.org wrote:
 

 Also, did you think of the accessibility issues in your solution?


  First, I want to clarify that the prototype was made just to communicate
  the idea in terms of interaction. The implementation is just a quick
  hack to simulate this interaction.

  For a production implementation I can imagine the whole list of languages
  being sent to the client, and then the list being shortened by JavaScript.
  For those users without JavaScript (from screen readers to search engine
  crawlers), the same list of links they receive now will be available for
  them.

  In any case, developers could provide even better strategies to solve that.
  As an interaction designer I just wanted to share the idea to collect
  possible concerns with the proposed interaction, to be fixed in the next
  iterations of the designs before development effort is made.




 On Sat, Apr 20, 2013 at 2:04 AM, Mathieu Stumpf 
 psychosl...@culture-libre.org wrote:

  Also, did you think of the accessibility issues in your solution? Here I
  especially think of people with vision disabilities, for whom JS often
  means no way to get the content, while a long list of hyperlinks is
  manageable.


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Design] Making inter-language links shorter

2013-04-24 Thread Federico Leva (Nemo)
I think this is one of the most amazing achievements the ULS will allow
us. Thanks, Pau.


Pau Giner, 18/04/2013 18:50:

As part of the future plans for the Universal Language Selector, we were
considering the following:

  * Show only a short list of the relevant languages for the user, based
on geo-IP, previous choices and browser settings of the current
user. The language the users are looking for will be there most of
the time.
  * Include a "more" option to access the rest of the languages for
which the content exists, with an indicator of the number of languages.
  * Provide a list of the rest of the languages that users can easily
scan (grouped by script and region so that alphabetical ordering is
possible), and search (allowing users to search for a language name in
another language, using ISO codes, or even making typos).


The interface is IMHO fine: if done well, this supersedes any problem of
link sorting and (hopefully) also any temptation to just collapse the
list of languages (a bad thing in any and all cases).


The problem is the selection of languages. Will the languages shown by
default be those that ULS shows under "most common languages"?

There's a fundamental difference here compared to interface language
choice: the user doesn't know in advance what language to look for,
because the article may or may not exist. Search, in this case, is often
useless, unless you're in the wrong language and you want to go to the
right language, which is an important use case but not the only one.

If I'm presented with the list of all the common languages, I will
occasionally be bothered by a long list of annoyingly useless dialects
of Italy, but that is so rare and person-dependent a problem that it's
indeed not worth looking at. The problem is rather with the languages
that I will most likely not know but which are always of some value,
like Latin, Spanish, Portuguese, French. I don't want to look for those
languages one by one with the search, nor to skim for them in multiple
lists; grouping by script is completely useless for the Latin script,
since so many languages use it (even Ancient Greek is more commonly
studied in Italy than most Latin-script languages).

In short, I think the list of languages shown by default should
probably be more generous than the ULS standard, and all the (main)
languages of the same *family* should be shown in it.


Nemo

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] File Licensing Guidelines

2013-04-24 Thread Matthew Walker
At the risk of starting another huge bikeshed like [1], I feel like we need
some good guidance on just how in the heck we are required to license
extensions/images/source code files. With the help of Marktraceur we now
have
http://www.mediawiki.org/wiki/Manual:Coding_conventions#Source_File_Headers,
which is somewhat specific to PHP but could be generalized to JS, CSS, and
SQL.

[1] http://lists.wikimedia.org/pipermail/wikitech-l/2013-March/067217.html

But I have some additional questions... breaking this up into bits; my
current thought matrix is that:

== Extensions ==
* Must have a LICENSE file in the root with the full text of the license
for the extension, and appended any additional licenses for
libraries/resources they've pulled in
** How do we specify what license goes to what included component?

== PHP Files ==
* For generic files, include a statement like
http://www.mediawiki.org/wiki/Manual:Coding_conventions#Source_File_Headers
* If it's the Extension.php file, the $wgExtensionCredits array should have
the following items (see the sketch below):
** author
** version
** url
** license?
** If we include additional libraries, do we add another entry to the
$wgExtensionCredits array?
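
For instance (sketch only: the 'license' key is what is being proposed
here, not an existing field; the other keys already exist):

$wgExtensionCredits['other'][] = array(
	'path' => __FILE__,
	'name' => 'ExampleExtension',
	'author' => 'Jane Doe',
	'version' => '1.0.0',
	'url' => 'https://www.mediawiki.org/wiki/Extension:ExampleExtension',
	'license' => 'GPL-2.0+', // proposed new field
);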

== JS/CSS Files ==
This gets a bit confusing because apparently we're supposed to have a
license in every bit of content pushed to the user; did we ever settle that
huge thread in any meaningful way? E.g. how do we push minimized but
licensed files?

== Image Files ==
These really shouldn't be licensed under GPLv2, but right now they
implicitly are. Is there a way to explicitly identify image/binary content
as being CC-licensed? Do we just add a line to the license file about this?

== And... go! ==

 __
 moo. 
 --
\   ^__^
 \  ($$)\___
(__)\   )\/\
  U ||w |
|| ||


~Matt Walker
Wikimedia Foundation
Fundraising Technology Team
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] File Licensing Guidelines

2013-04-24 Thread Brian Wolff
On 2013-04-24 6:46 PM, Matthew Walker mwal...@wikimedia.org wrote:

 At the risk of starting another huge bikeshed like [1] I feel like we need
 some good guidance on just how in the heck we are required to license
 extensions/images/source code files. With the help of Marktraceur we now
 have

http://www.mediawiki.org/wiki/Manual:Coding_conventions#Source_File_Headers
 which
 is somewhat specific to PHP but could be generalized to JS, CSS, and SQL.

 [1] http://lists.wikimedia.org/pipermail/wikitech-l/2013-March/067217.html

 But I have some additional questions... breaking this up into bits; my
 current thought matrix is that:

 == Extensions ==
 * Must have a LICENSE file in the root with the full text of the license
 for the extension, and appended any additional licenses for
 libraries/resources they've pulled in
 ** How do we specify what license goes to what included component?

By saying in the license file component x is under license y.


 == PHP Files ==
 * For generic files, include a statement like

http://www.mediawiki.org/wiki/Manual:Coding_conventions#Source_File_Headers
 * If it's the Extension.php file $wgExtensionCredits array should have the
 following items
 ** author
 ** version
 ** url
 ** license?
 ** If we include additional libraries, so we add another entry to the
 wgExtensionCredits array?

Adding license info to the extension credits is an interesting idea. I have
no idea how we would display it, though.

 == JS/CSS Files ==
 This gets a bit confusing because apparently we're supposed to have a
 license in every bit of content pushed to the user; did we ever settle
that
 huge thread in any meaninful way? E.g. how to push minimized but licensed
 files?

Says who? I do not believe this is a requirement. It perhaps would be nice,
if done sanely, but not a requirement.

 == Image Files ==
 Really shouldn't be licensed under GPLv2; but right now they implicitly
 are. Is there a way to explicitly identify image/binary content as being
 CC licensed? Do we just add a line to the license file about this?

Yes, they should be licensed under GPLv2, as they are part of the software.
It would be nice if they were dual-licensed under something CC-BY-SA-like
as well. A line in the license file sounds fine.

I think you are slightly overthinking things. We just need to adequately
communicate the license to potential reusers. It doesn't much matter how,
provided people are informed.

-bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] File Licensing Guidelines

2013-04-24 Thread Tyler Romeo
On Wed, Apr 24, 2013 at 6:27 PM, Brian Wolff bawo...@gmail.com wrote:

 Says who? I do not believe this is a requirement. It perhaps would be nice,
 if done sanely, but not a requirement.


Says the GPL. To be specific:

From section 0
 To “convey” a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through a
computer network, with no transfer of a copy, is not conveying.

From section 5
 You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the terms of
section 4, provided that you also meet all of these conditions:
 ...
 b) The work must carry prominent notices stating that it is released
under this License and any conditions added under section 7.

--
Tyler Romeo
Stevens Institute of Technology, Class of 2015
Major in Computer Science
www.whizkidztech.com | tylerro...@gmail.com
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l