Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-14 Thread Gabriel Wicke
On 05/13/2014 05:37 PM, Daniel Kinzler wrote:
 Hi all!
 
 During the hackathon, I worked on a patch that would make it possible for
 non-textual content to be included on wikitext pages using the template 
 syntax.
 The idea is that if we have a content handler that e.g. generates awesome
 diagrams from JSON data, like the extension Dan Andreescu wrote, we want to be
 able to use that output on a wiki page. But until now, that would have 
 required
 the content handler to generate wikitext for the transclusion - not easily 
 done.


It sounds like this won't work well with current Parsoid. We are using
action=expandtemplates for the preprocessing of transclusions, and then
parse the contents using Parsoid. The content is finally
passed through the sanitizer to keep XSS at bay.
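For reference, that preprocessing step is a plain API call along these lines
(the template name is just an illustration):

  api.php?action=expandtemplates&text={{SomeTemplate}}&format=json

The wikitext it returns is what then goes through the Parsoid parse and
sanitizer steps described above.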

This means that HTML returned from the preprocessor needs to be valid in
wikitext to avoid being stripped out by the sanitizer. Maybe that's actually
possible, but my impression is that you are shooting for something that's
closer to the behavior of a tag extension. Those already bypass the
sanitizer, so would be less troublesome in the short term. We currently also
can't process transclusions independently into HTML, as we still have to
support unbalanced templates. We are moving in that direction though,
which should also make it easier to support non-wikitext transclusion content.

In the longer term, Parsoid will request pre-sanitized and balanced HTML
from the content API [1,2] for everything but unbalanced wikitext content
[3]. The content API will treat it like any other request and ask the
storage service for the HTML. If that is found, it is returned directly
and no rendering happens. This is going to be the typical and fast case. If
there is no HTML in storage for that revision, however, the content API will
call the renderer service, save the HTML back, and return it to clients
such as Parsoid.
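As a rough sketch of that storage-or-render logic (illustrative pseudocode
only, not the actual RESTFace code; storage and renderer are assumed to be
service clients):

  // Return stored HTML for a revision if available; otherwise render,
  // save the result back to storage, and return it.
  function getHtml(title, revision) {
      return storage.getHtml(title, revision).then(function (html) {
          if (html !== null) {
              return html; // fast path: pre-rendered HTML found in storage
          }
          // slow path: call the renderer service, then save the new HTML back
          return renderer.render(title, revision).then(function (newHtml) {
              return storage.saveHtml(title, revision, newHtml).then(function () {
                  return newHtml;
              });
          });
      });
  }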

So it is important to think of renderers as services, so that they are
usable from the content API and Parsoid. For existing PHP code this could
even be action=parse, but for new renderers without a need or desire to tie
themselves to MediaWiki internals I'd recommend thinking of them as their
own service. This can also make them more attractive to third party
contributors from outside the MediaWiki world, as has for example recently
happened with Mathoid.

Gabriel

[1]: https://www.mediawiki.org/wiki/Requests_for_comment/Content_API
[2]: https://github.com/gwicke/restface
[3]: We are currently mentoring a GSoC project to collect statistics on
issues like unbalanced templates, which should allow us to systematically
mark those transclusions by wrapping them in a <domparse> tag in wikitext.
All transclusions outside of <domparse> will then be expected to yield
stand-alone HTML.
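For illustration, such a wrapped transclusion would look something like this
in wikitext (the template name is hypothetical):

  <domparse>{{SomeUnbalancedTemplate}}</domparse>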

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Output from Zurich Hackathon - yet another maps extension!

2014-05-14 Thread Derk-Jan Hartman
PS: I'm building an instance that is running this extension.

On Wed, May 14, 2014 at 12:34 AM, Jon Robson jrob...@wikimedia.org wrote:
 During the Zurich hackathon, DJ Hartman, Aude and I knocked up a
 generic maps prototype extension [1]. We have noticed that many maps
 like extensions keep popping up and believed it was time we
 standardised on one that all these extensions could use so we share
 data better.

 We took a look at all the existing use cases and tried to imagine what
 such an extension would look like that wouldn't be too tied into a
 specific use case.

 The extension we came up with was a map extension that introduces a
 Map namespace where data for the map is stored in raw GeoJSON and can
 be edited via a JavaScript map editor interface. It also allows the
 inclusion of maps in wiki articles via a map template.

 Dan Andreescu also created a similar visualisation namespace which may
 want to be folded into this as a map could be seen as a visualisation.
 I invite Dan to comment on this with further details :-)!

 I'd be interested in people's thoughts around this extension. In
 particular I'd be interested in the answer to the question "For my
 use case A, what would the WikiMaps extension have to support for me to
 use it?"

 Thanks for your involvement in this discussion. Let's finally get a
 maps extension up on a wikimedia box!
 Jon

 [1] https://github.com/jdlrobson/WikiMaps

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Question about the ContentHandler and Extension:Score

2014-05-14 Thread David Cuenca
As a continuation to the community wish expressed last year to start a
musical score transcription project [1], I'm investigating what is needed
to make it happen.

The biggest hurdle seems to be how to separate content from layout. On the
one hand, users should be able to define which part to work on (a staff, a
voice, an instrument, lyrics...); on the other hand, they should be able to
select which elements to render for different representations. For instance,
a single page has all parts but just for one page, a complete score has all
the parts for all pages, a piano part has only the piano staves for all pages,
etc.

Lilypond can handle different layout representations, but of course the
content selection has to be prepared before launching the desired layout
rendering.
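To illustrate the kind of separation I have in mind, here is a rough
Lilypond-style sketch (variable names and notes are only illustrative): the
content is defined once as named music expressions, and each representation
is a separate \score block that picks the parts it needs.

  % content: music expressions defined once, independent of layout
  violin = \relative c'' { c4 d e f }
  piano  = \relative c'  { c4 e g c }

  % layout: a complete score combining both parts
  \score {
    <<
      \new Staff \violin
      \new Staff \piano
    >>
  }

  % layout: a piano-only part is simply another \score block
  \score {
    \new Staff \piano
  }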

Any ideas about how to deal with this situation?

Cheers,
Micru



[1]
https://meta.wikimedia.org/wiki/Requests_for_comment/Musical_score_transcription_project_proposal
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Affiliation in username

2014-05-14 Thread Gabriel Wicke
On 05/07/2014 10:47 PM, Tyler Romeo wrote:
 One interesting idea might be what Reddit does:
 
 For a moderator of a subreddit, whenever they make a post it just appears
 normally. However, after posting they can choose to "officiate" it. All
 that does is highlight their username in a different color and indicate that
 they are acting in their position as moderator rather than as a regular user.
 
 This idea could be applied to edits in core, and maybe posts in Flow. WMF
 employees in a special user group could make an edit, and then press a
 button on the history page to highlight that as an official edit.

A similar proposal for per-edit affiliation selection came up recently in
Zürich. It does sound more usable than having to log in using different
accounts.

In our context it might make more sense to let the user select the
affiliation at save time though, rather than making it an extra step after save.

Gabriel

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Recommended viewing: talk on Cognitive Science and Design

2014-05-14 Thread Derk-Jan Hartman
I just love this Google I/O 2013 talk on human perception and
cognition and its implications for interactive and visual design. It
is accessible, yet packed with information, and I think it applies very
well to us.

I'm sure that many designers know all about this and some have
probably seen the clip before, but it is also very good for
developers, because many of these things we know subconsciously,
yet they're not really part of our vocabulary.

https://www.youtube.com/watch?v=z2exxj4COhU

DJ

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-14 Thread Daniel Kinzler
Thanks all for the input!

On 14.05.2014 10:17, Gabriel Wicke wrote:
 On 05/13/2014 05:37 PM, Daniel Kinzler wrote:
 It sounds like this won't work well with current Parsoid. We are using
 action=expandtemplates for the preprocessing of transclusions, and then
 parse the contents using Parsoid. The content is finally
 passed through the sanitizer to keep XSS at bay.

 This means that HTML returned from the preprocessor needs to be valid in
 wikitext to avoid being stripped out by the sanitizer. Maybe that's actually
 possible, but my impression is that you are shooting for something that's
 closer to the behavior of a tag extension. Those already bypass the
 sanitizer, so would be less troublesome in the short term.

Yes. Just treat <html>...</html> like a tag extension, and it should work fine.
Do you see any problems with that?

 So it is important to think of renderers as services, so that they are
 usable from the content API and Parsoid. For existing PHP code this could
 even be action=parse, but for new renderers without a need or desire to tie
 themselves to MediaWiki internals I'd recommend thinking of them as their
 own service. This can also make them more attractive to third party
 contributors from outside the MediaWiki world, as has for example recently
 happened with Mathoid.

True, but that has little to do with my patch. It just means that 3rd party
Content objects should preferably implement getHtml() by calling out to a
service object.

On 13.05.2014 21:38, Brad Jorsch (Anomie) wrote:
 To avoid the wikitext mangling, you could wrap it in some tag that works
 like <html> if $wgRawHtml is set and like <pre> otherwise.

But <pre> will result in *escaped* HTML. That's just another kind of mangling.
It is, after all, the normal result of parsing.

Basically, the <html> mode is for expandtemplates only, and not intended to be
followed up by actual parsing.

On 13.05.2014 21:38, Brad Jorsch (Anomie) wrote:
 Or one step further, maybe a tag <foo wikitext="{{P}}">html goes here</foo>
 that parses just as {{P}} does (and ignores "html goes here" entirely),
 which preserves the property that the output of expandtemplates will mostly
 work when passed back to the parser.

Hm... that's an interesting idea, I'll think about it!

Btw, just so this is mentioned somewhere: it would be very easy to simply not
expand such templates at all in expandtemplates mode, keeping them as {{T}} or
[[T]].

On 14.05.2014 00:11, Matthew Flaschen wrote:
 From working with Dan on this, the main issue is the ResourceLoader module 
 that the diagrams require (it uses a JavaScript library called Vega, plus a
 couple supporting libraries, and simple MW setup code).
 
 The container element that it needs can be as simple as:
 
 <div data-something="..."></div>
 
 which is actually valid wikitext.

So, there is no server side rendering at all? It's all done using JS on the
client? Ok then, HTML transclusion isn't the solution.

 Can you outline how RL modules would be handled in the transclusion
 scenario?

The current patch does not really address that problem, I'm afraid. I can think
of two solutions:

* Create a SyntheticHtmlContent class that would hold meta info about modules
etc., just like ParserOutput - perhaps it would just contain a ParserOutput
object.  And an equivalent SyntheticWikitextContent class, perhaps. That would
allow us to pass such meta-info around as needed.

* Move the entire logic for HTML based transclusion into the wikitext parser,
where it can just call getParserOutput() on the respective Content object. We
would then no longer need the generic infrastructure for HTML transclusion.
Maybe that would be a better solution in the end.

Hm... yes, I should make an alternative patch using that approach, so we can
compare.


Thanks for your input!
-- daniel


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Wikimedia engineering report, April 2014

2014-05-14 Thread Guillaume Paumier
Hi,

The report covering Wikimedia engineering activities in April 2014 is now
available.

Wiki version:
https://www.mediawiki.org/wiki/Wikimedia_engineering_report/2014/April
Blog version:
https://blog.wikimedia.org/2014/05/14/engineering-report-april-2014/

We're also proposing a shorter, simpler and translatable version of this
report that does not assume specialized technical knowledge:
https://www.mediawiki.org/wiki/Wikimedia_engineering_report/2014/April/summary

Below is the HTML text of the report.

As always, feedback is appreciated on the usefulness of the report and its
summary, and on how to improve them.

--

Major news in April include:

   - the change of format of MediaWiki localization files from PHP to JSON
     (https://blog.wikimedia.org/2014/04/10/mediawiki-localization-file-format-changed-from-php-to-json/),
     and the associated modernization of the LocalisationUpdate extension
     (https://blog.wikimedia.org/2014/04/03/modernising-mediawikis-localisation-update/);
   - the move of Wikimedia Labs to a new data center
     (https://blog.wikimedia.org/2014/04/04/migrating-wikimedia-labs-to-a-new-data-center/);
   - the “Heartbleed” security vulnerability and how the Wikimedia Foundation’s
     team responded to it
     (https://blog.wikimedia.org/2014/04/10/wikimedias-response-to-the-heartbleed-security-vulnerability/);
   - an explanation of how the Mobile team uses Trello to plan their
     development sprints
     (https://blog.wikimedia.org/2014/04/15/agile-and-trello-the-planning-cycle/);
   - a project report on a grant to create “gadgets” for VisualEditor
     (https://blog.wikimedia.org/2014/04/22/visualeditor-gadgets/).

*Note: We’re also providing a shorter, simpler and translatable version of
this report
(https://www.mediawiki.org/wiki/Wikimedia_engineering_report/2014/April/summary)
that does not assume specialized technical knowledge.*

Engineering metrics in April:

   - 158 unique committers contributed patchsets of code to MediaWiki.
   - The total number of unresolved commits
     (https://gerrit.wikimedia.org/r/#q,status:open+project:%255Emediawiki.*,n,z)
     went from around 1315 to about 1305.
   - About 30 shell requests (https://www.mediawiki.org/wiki/Shell_requests)
     were processed.

Contents

   - 1 Personnel
      - 1.1 Work with us
      - 1.2 Announcements
   - 2 Technical Operations
   - 3 Features Engineering
      - 3.1 Editor retention: Editing tools
      - 3.2 Core Features
      - 3.3 Growth
      - 3.4 Support
   - 4 Mobile
   - 5 Language Engineering
   - 6 Platform Engineering
      - 6.1 MediaWiki Core
      - 6.2 Quality assurance
      - 6.3 Multimedia
      - 6.4 Engineering Community Team
   - 7 Analytics
   - 8 Kiwix
   - 9 Wikidata
   - 10 Future

Personnel

Work with us (https://wikimediafoundation.org/wiki/Work_with_us)

Are you looking to work for Wikimedia? We have a lot of hiring coming up,
and we really love talking to active community members about these roles.

   - VP of Engineering:
     http://hire.jobvite.com/CompanyJobs/Careers.aspx?c=qSa9VfwQcs=9UL9Vfwtpage=Job%20Descriptionj=ods8Xfwu
   - ScrumMaster:
     http://hire.jobvite.com/CompanyJobs/Careers.aspx?c=qSa9VfwQcs=9UL9Vfwtpage=Job%20Descriptionj=oSrSYfwT
   - Software Engineer – VisualEditor

Re: [Wikitech-l] Output from Zurich Hackathon - yet another maps extension!

2014-05-14 Thread Antoine Musso
On 14/05/2014 00:34, Jon Robson wrote:
 During the Zurich hackathon, DJ Hartman, Aude and I knocked up a
 generic maps prototype extension [1]. We have noticed that many maps
 like extensions keep popping up and believed it was time we
 standardised on one that all these extensions could use so we share
 data better.
 [1] https://github.com/jdlrobson/WikiMaps

snip
 Dan Andreescu also created a similar visualisation namespace which may
 want to be folded into this as a map could be seen as a visualisation.
 I invite Dan to comment on this with further details :-)!

Hello Jon,

In short, I have been very impressed by the lightning presentation of
WikiMaps and Dan's Visualization extension.


- WikiMaps seems to be a subset of Dan's hack: it uses GeoJSON and renders
it on top of an OpenStreetMap layer.

- Dan's Viz extension goes a step further, since it uses any JSON-based
format (e.g. GeoJSON) and then lets you pick a renderer (e.g. a map) and
finely tweak the resulting output.
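For readers who have not seen GeoJSON, the data stored in a Map page would
look roughly like this minimal, made-up feature:

  {
    "type": "FeatureCollection",
    "features": [ {
      "type": "Feature",
      "geometry": { "type": "Point", "coordinates": [8.55, 47.37] },
      "properties": { "name": "Zürich" }
    } ]
  }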

Both will make it way easier to render data sets in a meaningful way for
our users, and I am inviting you to *merge both efforts* to build the
next-generation data visualization utility.

A typical use case for me would be:

- get the population of countries over time (from Wikidata?)
- collaboratively work on a data visualization
- have the resulting render parameters saved with an id
- insert it in an article



Such a system was created previously but went down in 2010: Swivel.
You can still find articles about it though, e.g.:

 
http://datavisualization.ch/tools/swivel-review-–-a-guest-post-on-information-aesthetics/
 
http://infosthetics.com/archives/2010/04/social_visualization_software_review_swivel.html

It was essentially Wikidata geared toward datasets, with a very nice
graphical interface to build and share data visualizations.  People could
vote for the best rendering, and you could comment on and share them easily.


(((random I am a naive person mumbling)))


The last fifteen years have seen information and knowledge spreading all
around the planet; big data is the next revolution.  The challenge comes
in apprehending it, and your visualization tools are definitely a step
forward.

If anyone has doubts about data visualization, you should have a look at
a 20-minute tech talk which nicely highlights how there are no more "third
world" countries any more (among other debunking):
 http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen


Heck, if I had the opportunity I would reach out to the community, make
data visualization part of the Wikimedia strategic plan and raise a few
million dollars to make it a project of its own.  It has so much
leverage to better understand the world we are living in.  If such a
project looked for a data visualization evangelist, I would be on the
front line.


-- 
Antoine hashar Musso


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Upcoming jQuery upgrade (breaking change)

2014-05-14 Thread Krinkle
I don't think it is possible or worth the effort to scan for these in an 
automated fashion within Jenkins.

Static analysis is virtually impossible due to the method names being too 
simple; a lot also depends on the details of how the methods are called.

For example, $something.error( .. ) was deprecated. Doing a static search for 
.error would yield too many false positives. And trying to parse the scripts 
and figure out what is and isn't a jQuery object seems like a task outside the 
scope here (especially considering this is a one-time migration).

Doing it via the method the migrate instrumentation uses is more reliable 
(though it still doesn't cover everything), but that requires execution. You'd 
have to somehow execute and trigger all code paths.

It would imho give a false sense of security. I'm afraid this comes down to 
requiring active maintenance and knowledge of the code base. While migration is 
simple and should not require knowledge of a module's workings, identifying 
them in the first place can't be done statically. Take time in the next few 
weeks to play with the various teams and projects you're a part of and take a 
few minutes to ensure there are no deprecation notices being fired when using 
them.

In addition (to see if maybe you missed any), you could perform a few grep/ack 
searches if you want to be extra sure (manually, so that you can mentally 
exclude false positives). 
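For example, something along these lines (paths are illustrative, and the
patterns will of course also match false positives):

  grep -rn '\.andSelf(' resources/ extensions/
  grep -rn '\$\.browser' resources/ extensions/
  grep -rn '\.live(' resources/ extensions/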

— Krinkle

On 7 May 2014, at 19:27, Siebrand Mazeland siebr...@kitano.nl wrote:

 Is there any way we can have a Jenkins job check for the use of deprecated 
 functionality and report it, or have a scan of Gerrit repos done and reports 
 made available somewhere?
 
 Cheers!
 
 --
 Siebrand
 
 On 7 May 2014 at 18:29, Krinkle krinklem...@gmail.com wrote:
 
 Hey all,
 
 TL;DR: jQuery will soon be upgraded from v1.8.3 to v1.11.x (the latest). This
 major release removes deprecated functionality. Please migrate away from this
 deprecated functionality as soon as possible.
 
 It's been a long time coming but we're now finally upgrading the jQuery 
 package
 that ships with MediaWiki.
 
 We used to regularly upgrade jQuery in the past, but got stuck at v1.8 a 
 couple
 of years ago due to lack of time and concern about disruption. Because of 
 this,
 many developers have needed to work around bugs that were already fixed in 
 later
 versions of jQuery. Thankfully, jQuery v1.9 (and its v2 counterpart) has been
 the first release in jQuery history that needed an upgrade guide[1][2]. It's 
 a
 major release that cleans up deprecated and dubious functionality.
 
 Migration of existing code in extensions, gadgets, and user & site scripts
 should be trivial (swapping one method for another, maybe with a slight 
 change
 to the parameters passed). This is all documented in the upgrade guide[1][2].
 The upgrade guide may look scary (as it lists many of your favourite 
 methods),
 but they are mostly just addressing edge cases.
 
 == Call to action ==
 
 This is a call for you, to:
 
 1) Get familiar with http://jquery.com/upgrade-guide/1.9/.
 
 2) Start migrating your code.
 
 jQuery v1.9 is about removing deprecated functionality. The new 
 functionality is
 already present in jQuery 1.8 or, in some cases, earlier.
 
 3) Look out for deprecation warnings.
 
 Once instrumentation has begun, using ?debug=true will log jQuery 
 deprecation
 warnings to the console. Look for ones marked JQMIGRATE [7]. You might also
 find deprecation notices from mediawiki.js, for more about those see the mail
 from last October [8].
 
 == Plan ==
 
 1) Instrumentation and logging
 
 The first phase is to instrument jQuery to work out all the areas which will
 need work. I have started work on loading jQuery Migrate alongside the 
 current
 version of jQuery. I expect that to land in master this week [6], and roll 
 out on
 Wikimedia wikis the week after. This will enable you to detect usage of most
 deprecated functionality through your browser console. Don't forget the 
 upgrade
 guide[1], as Migrate cannot detect everything.
 
 2) Upgrade and Migrate
 
 After this, the actual upgrade will take place, whilst Migrate stays. This
 should not break anything since Migrate covers almost all functionality that
 will be removed. The instrumentation and logging will remain during this 
 phase;
 the only effective change at this point is whatever jQuery didn't think was
 worth covering in Migrate or were just one of many bug fixes.
 
 3) Finalise upgrade
 
 Finally, we will remove the migration plugin (both the Migrate compatibility
 layer and its instrumentation). This will bring us to a clean version of 
 latest
 jQuery v1.x without compatibility hacks.
 
 
 A rough timeline:
 
 * 12 May 2014 (1.24wmf4 [9]): Phase 1 – Instrumentation and logging starts. 
 This
 will run for 4 weeks (until June 9).
 
 * 19 May 2014 (1.24wmf5): Phase 2 – Upgrade and Migrate. This will run for 
 3
 weeks (upto June 9). The instrumentation continues 

Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-14 Thread Gabriel Wicke
On 05/14/2014 01:40 PM, Daniel Kinzler wrote:
 This means that HTML returned from the preprocessor needs to be valid in
 wikitext to avoid being stripped out by the sanitizer. Maybe that's actually
 possible, but my impression is that you are shooting for something that's
 closer to the behavior of a tag extension. Those already bypass the
 sanitizer, so would be less troublesome in the short term.
 
 Yes. Just treat <html>...</html> like a tag extension, and it should work 
 fine.
 Do you see any problems with that?

First of all you'll have to make sure that users cannot inject <html> tags,
as that would enable arbitrary XSS. I might have missed it, but I believe
that this is not yet done in your current patch.

In contrast to normal tag extensions, <html> would also contain fully
rendered HTML, and should not be piped through action=parse as is done in
Parsoid for tag extensions (in absence of a direct tag extension expansion
API end point). We and other users of the expandtemplates API will have to
add special-case handling for this pseudo tag extension.

In HTML, the <html> tag is also not meant to be used inside the body of a
page. I'd suggest using a different tag name to avoid issues with HTML
parsers and potential name conflicts with existing tag extensions.

Overall it does not feel like a very clean way to do this. My preference
would be to let the consumer directly ask for pre-expanded wikitext *or*
HTML, without overloading action=expandtemplates. Even indicating the
content type explicitly in the API response (rather than inline with an HTML
tag) would be a better stop-gap as it would avoid some of the security and
compatibility issues described above.

 So it is important to think of renderers as services, so that they are
 usable from the content API and Parsoid. For existing PHP code this could
 even be action=parse, but for new renderers without a need or desire to tie
 themselves to MediaWiki internals I'd recommend thinking of them as their
 own service. This can also make them more attractive to third party
 contributors from outside the MediaWiki world, as has for example recently
 happened with Mathoid.
 
 True, but that has little to do with my patch. It just means that 3rd party
 Content objects should preferably implement getHtml() by calling out to a
 service object.

You are right that it is not an immediate issue with your patch. The point
is about the *longer-term* role of the ContentHandler vs. the content API.
The ContentHandler could either try to be the central piece of our new
content API, or could become an integration point that normally calls out to
the content API and other services to retrieve HTML.

To me the latter is preferable as it enables us to optimize the content API
for high request rates by concentrating on doing one job well, and lets us
leverage this API from the server-side MediaWiki front-end through
ContentHandler.

Gabriel

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Output from Zurich Hackathon - yet another maps extension!

2014-05-14 Thread Jon Robson
Yes - Dan and I talked about this during the hackathon. Dan was
thinking bigger and even more generic than me. The two could
potentially play hand in hand - essentially the Map namespace would be
used to make and curate maps and provide basic embedding, and the
Visualisation* namespace could be used to point to a Map page as the
data source to do more complex things with the data set.

It would be good to play with this idea some more. The best next step
in my opinion would be to find a more complicated example of what the
community is currently doing with maps and use the Visualisation
namespace to recreate that visualisation using a map that exists in
the map namespace.

PS. Do we own wikimaps.org? I can imagine a map wiki would be an
extremely useful project for us.

* or suggest a better name :-)


On Wed, May 14, 2014 at 2:14 PM, Antoine Musso hashar+...@free.fr wrote:
 On 14/05/2014 00:34, Jon Robson wrote:
 During the Zurich hackathon, DJ Hartman, Aude and I knocked up a
 generic maps prototype extension [1]. We have noticed that many maps
 like extensions keep popping up and believed it was time we
 standardised on one that all these extensions could use so we share
 data better.
 [1] https://github.com/jdlrobson/WikiMaps

 snip
 Dan Andreescu also created a similar visualisation namespace which may
 want to be folded into this as a map could be seen as a visualisation.
 I invite Dan to comment on this with further details :-)!

 Hello Jon,

 In short, I have been very impressed by the lightning presentation of
 WikiMaps and Dan Vizualization extension.


 - WikiMaps seems to be a subset of Dan hack, it uses GeoJSON and renders
 them on top of an OpenStreet map layer.

 - Dan Viz extensions goes a step further since it uses any JSON based
 format (ie GeoJSON) and then let you pick a renderer (ie: map) and
 finely tweak the resulting output.

 Both will make it way easier to render data set in a meaningful way to
 our users and I am inviting you to *merge both efforts* to build the
 next generation data visualization utility.

 A typical use case for me would be:

 - get the the population of countries over time (from wikidata?)
 - collaboratively work on a data visualization
 - have the resulting render parameters saved up with an id
 - insert it in article



 Such a system has been created previously but went down in 2010: Swivel.
 You can still find articles about it though, ie:

  
 http://datavisualization.ch/tools/swivel-review-–-a-guest-post-on-information-aesthetics/
  
 http://infosthetics.com/archives/2010/04/social_visualization_software_review_swivel.html

 It was essentially wikidata geared toward datasets with a very nice
 graphical interface to build and share data visualization.  People could
 vote for the best rendering and you could comment and share them easily.


 (((random I am a naive person mumbling)))


 The last fifteen years have seen information and knowledge spreading all
 around the planet, big data is the next revolution.  The challenge comes
 in apprehending them and your visualization tools are definitely a step
 forward.

 If anyone has doubt about data visualization, you should have a look at
 a 20 minutes tech talk which nicely highlight how there is no more third
 world countries any more (among other debunking):
  http://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen


 Heck, if I had the opportunity I will reach out to the community, make
 data visualization part of the Wikimedia strategic plan and raise a few
 millions dollars to make it a project of its own.  It has so much
 leverage to better understand the world we are living in.  If such
 project looked for a data visualization evangelist, I would be on the
 front line.


 --
 Antoine hashar Musso


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Jon Robson
* http://jonrobson.me.uk
* https://www.facebook.com/jonrobson
* @rakugojon

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-14 Thread Daniel Kinzler
On 14.05.2014 15:11, Gabriel Wicke wrote:
 On 05/14/2014 01:40 PM, Daniel Kinzler wrote:
 This means that HTML returned from the preprocessor needs to be valid in
 wikitext to avoid being stripped out by the sanitizer. Maybe that's actually
 possible, but my impression is that you are shooting for something that's
 closer to the behavior of a tag extension. Those already bypass the
 sanitizer, so would be less troublesome in the short term.

 Yes. Just treat <html>...</html> like a tag extension, and it should work 
 fine.
 Do you see any problems with that?
 
 First of all you'll have to make sure that users cannot inject <html> tags
 as that would enable arbitrary XSS. I might have missed it, but I believe
 that this is not yet done in your current patch.

My patch doesn't change the handling of <html>...</html> by the parser. As
before, the parser will pass HTML code in <html>...</html> through only if
$wgRawHtml is enabled, and will mangle/sanitize it otherwise.

My patch does mean however that the text returned by expandtemplates may not
render as expected when processed by the parser. Perhaps anomie's approach of
preserving the original template call would work, something like:

  <html template="{{T}}">...</html>

Then, the parser could apply the normal expansion when encountering the tag,
ignoring the pre-rendered HTML.

 In contrast to normal tag extensions <html> would also contain fully
 rendered HTML, and should not be piped through action=parse as is done in
 Parsoid for tag extensions (in absence of a direct tag extension expansion
 API end point). We and other users of the expandtemplates API will have to
 add special-case handling for this pseudo tag extension.

Handling for the <html> tag should already be in place, since it's part of the
core spec. The issue is only to know when to allow/trust such <html> tags, and
when to treat them as plain text (or like a <pre> tag).

 In HTML, the <html> tag is also not meant to be used inside the body of a
 page. I'd suggest using a different tag name to avoid issues with HTML
 parsers and potential name conflicts with existing tag extensions.

As above: <html> is part of the core syntax, to support $wgRawHtml. It's just
disabled by default.

 Overall it does not feel like a very clean way to do this. My preference
 would be to let the consumer directly ask for pre-expanded wikitext *or*
 HTML, without overloading action=expandtemplates. 

The question is how to represent non-wikitext transclusions in the output of
expandtemplates. We'll need an answer to this question in any case.

For the main purpose of my patch, expandtemplates is irrelevant. I added the
special mode that generates <html> specifically to have a consistent wikitext
representation for use by expandtemplates. I could just as well simply disable
it, so no expansion would apply for such templates when calling
expandtemplates (as is done for special page inclusion).

 Even indicating the
 content type explicitly in the API response (rather than inline with an HTML
 tag) would be a better stop-gap as it would avoid some of the security and
 compatibility issues described above.

The content type did not change. It's wikitext.

-- daniel

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Output from Zurich Hackathon - yet another maps extension!

2014-05-14 Thread Antoine Musso
On 14/05/2014 15:16, Jon Robson wrote:
 PS. Do we own wikimaps.org - I can imagine a map wiki would be
 extremely useful project to us.

wikimaps.org is registered to the Wikimedia Foundation.  It was created
back in 2004!


Tip: under Linux/Mac OS you can query domain name registrars from the
command line.  Fire up a terminal and:

 $ whois wikimaps.org
 Domain Name:WIKIMAPS.ORG
 Domain ID: D104738187-LROR
 Creation Date: 2004-08-09T18:53:11Z
 Updated Date: 2012-05-05T00:25:54Z
 Registry Expiry Date: 2019-08-09T18:53:11Z
 Sponsoring Registrar:MarkMonitor Inc. (R37-LROR)
 ...


-- 
Antoine hashar Musso


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Maps-l] Output from Zurich Hackathon - yet another maps extension!

2014-05-14 Thread Jon Robson
Tim, I completely agree. This is something we need to set up.
Patches very much welcome! :-)



On Wed, May 14, 2014 at 7:51 AM, Tim Alder t...@alder-digital.de wrote:
 I think the most important feature is to create, on the server side, a
 thumbnail for each map by using something like http://phantomjs.org/
 These thumbnails should then be in the big WMF caches. The map would
 become interactive only when a user clicks on it.
 This would reduce the number of requests for loading a page and the JS
 overhead, and it would increase the stability of the system.
 Without this feature I'm afraid we'll never see the extension live in Wikipedia.

 Other nice features you can see at umap.openstreetmap.fr:
 *Choosing different backgrounds
 *POIs with interactive descriptions
 *Geometry import from OSM (WIWOSM)
 *different layers
 *...

 Greeting Tim alias Kolossos


 On 14.05.2014 00:34, Jon Robson wrote:
 During the Zurich hackathon, DJ Hartman, Aude and I knocked up a
 generic maps prototype extension [1]. We have noticed that many maps
 like extensions keep popping up and believed it was time we
 standardised on one that all these extensions could use so we share
 data better.

 We took a look at all the existing use cases and tried to imagine what
 such an extension would look like that wouldn't be too tied into a
 specific use case.

 The extension we came up with was a map extension that introduces a
 Map namespace where data for the map is stored in raw GeoJSON and can
 be edited via a JavaScript map editor interface. It also allows the
 inclusion of maps in wiki articles via a map template.

 Dan Andreescu also created a similar visualisation namespace which may
 want to be folded into this as a map could be seen as a visualisation.
 I invite Dan to comment on this with further details :-)!

  I'd be interested in people's thoughts around this extension. In
  particular I'd be interested in the answer to the question "For my
  use case A, what would the WikiMaps extension have to support for me to
  use it?"

 Thanks for your involvement in this discussion. Let's finally get a
 maps extension up on a wikimedia box!
 Jon

 [1] https://github.com/jdlrobson/WikiMaps

 ___
 Maps-l mailing list
 map...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/maps-l



 ___
 Maps-l mailing list
 map...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/maps-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-14 Thread Gabriel Wicke
On 05/14/2014 03:22 PM, Daniel Kinzler wrote:
 My patch doesn't change the handling of <html>...</html> by the parser. As
 before, the parser will pass HTML code in <html>...</html> through only if
 $wgRawHtml is enabled, and will mangle/sanitize it otherwise.


Oh, I thought that you wanted to support normal wikis with $wgRawHtml disabled.

 The content type did not change. It's wikitext.

Anything is wikitext ;)

Gabriel

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] performance guidelines discussion today

2014-05-14 Thread Sumana Harihareswara
Sorry for the short notice. Today at 2100 UTC, instead of the regular
RfC discussion, we'll talk about the performance guidelines draft in
#wikimedia-office . We discussed this some in Zurich but I'd love a
chance to ask some followup questions to firm everything up. I'd also
welcome the chance to explain the two similar documents I'm working on:
architecture and security guidelines.

https://www.mediawiki.org/wiki/Architecture_meetings/Performance_guidelines_discussion_2014-05-14

Time:
http://www.worldtimebuddy.com/?qm=1lid=2950159,5128581,2147714,100h=5128581date=2014-5-14sln=17-18

11pm Berlin
5pm NYC
2pm San Francisco
7am Sydney

Next week it'll be an RfC chat. :) (I welcome volunteers!)
-- 
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-14 Thread Dan Andreescu

  Can you outline how RL modules would be handled in the transclusion
  scenario?

 The current patch does not really address that problem, I'm afraid. I can
 think
 of two solutions:

 * Create an SyntheticHtmlContent class that would hold meta info about
 modules
 etc, just like ParserOutput - perhaps it would just contain a ParserOutput
 object.  And an equvalent SyntheticWikitextContent class, perhaps. That
 would
 allow us to pass such meta-info around as needed.

 * Move the entire logic for HTML based transclusion into the wikitext
 parser,
 where it can just call getParserOutput() on the respective Content object.
 We
 would then no longer need the generic infrastructure for HTML transclusion.
 Maybe that would be a better solution in the end.

 Hm... yes, I should make an alternative patch using that approach, so we
 can
 compare.


Thanks a lot Daniel, I'm happy to help test / try out any solutions you
want to experiment with.  I've moved my work to gerrit:
https://gerrit.wikimedia.org/r/#/admin/projects/mediawiki/extensions/Limn
and the last commit (with a lot of help from Matt F.) may be ready for you
to use as a use case.  Let me know if it'd be helpful to install this
somewhere in labs.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Maps-l] Output from Zurich Hackathon - yet another maps extension!

2014-05-14 Thread Dan Andreescu
Thanks for starting this Jon, the end result is going to be awesome.  So
here's how I see things, it's roughly along the lines of what you've been
saying:

Server-side rendering and scaling are important.  This is one of the main
reasons I picked Vega [1] for my hack.  The same visualization grammar can
be used to generate PNG or SVG [2].  I see the approach to visualization as
being similar to Parsoid:

* user creates a visualization (with a visual editor) and saves
* Vega parses that server side and generates an image, then refreshes the
caches accordingly
* a transcluded visualization renders the image from cache as a link to the
interactive version
* when the link is clicked, something like MediaViewer shows the
interactive visualization
* alternatively, we can allow editors to show the interactive version in
the transclusion itself, but that has performance implications until
browser caches are filled.
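For the server-side rendering step above, a very rough sketch of what
headless mode looks like (based on the headless-mode docs [2]; treat the exact
API and result fields as assumptions):

  var vg = require('vega');

  // 'spec' is a parsed Vega specification (JSON); render it to SVG
  // without a browser, so the result can be cached and served as an image.
  vg.headless.render({ spec: spec, renderer: 'svg' }, function (error, result) {
      if (error) { throw error; }
      console.log(result.svg);
  });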

Now a little bit about where I see the Map namespace.  A visualization in
the world of Vega has three parts:

* Data (one or more sets of data, can be geojson, topojson, tsv, csv,
layers from OSM [3], OHM [4], etc.)
* Transformations on that data (scales, normalization, etc.)
* Marks (visual representations)

Transformations and Marks can be written by hand or by a visual editor that
introspects the specification to show what's possible.  Data to me is the
tricky part.  We may need to restrict Vega to only consume open data that's
hosted and curated by us, and that could be done in a few different ways:

* Namespaces like the Maps namespace that enable awesome collaborative
editing of GeoJSON
* Datasets in Wikidata using an alternative data model
* The File namespace serving raw data from Commons (where people are familiar
with take-down notices and have the infrastructure to deal with that)

But yes, I do see the Maps namespace as one of the sources of data that we
could visualize with Vega.  And recent developments in Vega make me feel
that it's really a solid choice for generic visualization.  We have
interactivity, headless mode, a seemingly clear path to a visual editor via
introspection of the grammar specification, and pretty much everything I
can think of needing from such a tool.

For the short term, I think further exploration of the Map namespace is
great, but I think generic visualization work could go into the
Visualization namespace.  My suggestion for a name for this namespace may
seem a bit obscure.  It's a word that means to illuminate: Limn [5].
 There's an old project by that name of which I'm not very fond (despite
writing some of it myself), but I've always thought the word was beautiful
and fit.  To what Antoine was saying earlier, we should illuminate the
world's knowledge with beautiful visualizations.


[1] https://github.com/trifacta/vega
[2] https://github.com/trifacta/vega/wiki/Headless-Mode
[3] OSM - Open Street Maps http://wiki.openstreetmap.org/wiki/Main_Page
[4] OHM - Open Historical Maps
http://wiki.openstreetmap.org/wiki/Open_Historical_Map
[5] Limn - depict or describe in painting or words:
https://github.com/wikimedia/mediawiki-extensions-Limn


On Wed, May 14, 2014 at 9:43 AM, Jon Robson jrob...@wikimedia.org wrote:

 Tim I completely agree. This is something we need to setup.
 Patches very much welcomed! :-)



 On Wed, May 14, 2014 at 7:51 AM, Tim Alder t...@alder-digital.de wrote:
  I think the most important feature is to create, on the server side, a
  thumbnail for each map by using something like http://phantomjs.org/
  These thumbnails should then be in the big WMF caches. The map would
  become interactive only when a user clicks on it.
  This would reduce the number of requests for loading a page and the JS
  overhead, and it would increase the stability of the system.
  Without this feature I'm afraid we'll never see the extension live in
 Wikipedia.
 
  Other nice features you can see at umap.openstreetmap.fr:
  *Choosing different backgrounds
  *POIs with interactive descriptions
  *Geometry import from OSM (WIWOSM)
  *different layers
  *...
 
  Greeting Tim alias Kolossos
 
 
  On 14.05.2014 00:34, Jon Robson wrote:
  During the Zurich hackathon, DJ Hartman, Aude and I knocked up a
  generic maps prototype extension [1]. We have noticed that many maps
  like extensions keep popping up and believed it was time we
  standardised on one that all these extensions could use so we share
  data better.
 
  We took a look at all the existing use cases and tried to imagine what
  such an extension would look like that wouldn't be too tied into a
  specific use case.
 
  The extension we came up with was a map extension that introduces a
  Map namespace where data for the map is stored in raw GeoJSON and can
  be edited via a JavaScript map editor interface. It also allows the
  inclusion of maps in wiki articles via a map template.
 
  Dan Andreescu also created a similar visualisation namespace which may
  want to be folded into this as a map could be seen as a 

Re: [Wikitech-l] is this how our thumbnail caching works?

2014-05-14 Thread Bryan Davis
On Tue, May 13, 2014 at 4:13 PM, Sumana Harihareswara
suma...@wikimedia.org wrote:
 I am trying to figure out how thumbnail retrieval & caching works right
 now - with Swift, and the frontline & secondary (frontend and
 backend) Varnishes.
 performance guidelines, and want to understand and help push forward on
 https://www.mediawiki.org/wiki/Requests_for_comment/Simplify_thumbnail_cache
 .) I looked for docs but didn't find anything that had been updated this
 year.

I was supposed to document this stuff when I first started with the
Foundation. Unfortunately I never really got it done. I've got some
notes and possibly most helpfully a diagram that I redrew in
OmniGraffle based on a diagram that Faidon drew on the wall at the
office for me one day last fall. I've had this sitting around on my
local hard drive for months without uploading it anywhere, so I just
threw it up on mw.o [0].

The diagram shows the major components that you described in your
summary. Traffic from the internet for
http://upload.wikimedia.org/.../some_thumb_url.png hits a front end
LVS which routes to a frontend Varnish server. If the URL is not
cached locally by that Varnish instance, it will compute a hash of the
URL to select the backend Varnish instance that may have the content.
If the backend Varnish doesn't have the content it will request the
thumbnail from the Swift cluster. This request passes through an LVS
that selects a frontend Swift server. The frontend Swift server will
handle the request by asking the backend Swift cluster for the desired
image. If the image isn't found in the backend cluster, the frontend
Swift server will make a request to an image scaler server to have it
created. The image scalers run thumb.php from mediawiki/core.git to
fetch the original image from Swift (which goes back through the same LVS
-> Swift frontend -> Swift backend path as the thumb request came
down). Once the original image is on the image scaler it will run it
through the mime type appropriate scaling software to produce a
thumbnail image. I don't remember if at this point the image is stored
in Swift by the image scaler via thumb.php's internal logic or if that
is handled by the frontend Swift server when it gets the response. In
either case, the newly created thumbnail ends up stored in the Swift
cluster and is returned as the image scaler's http response to the
frontend Swift server handling the original request. The frontend
Swift server in turn returns the thumbnail image to the backend
Varnish server which will cache it locally and then return the image
to the frontend Varnish. Finally the frontend Varnish will cache the
image response in local memory and return the image to the original
requestor.
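To summarize, the cache-miss path sketched above is roughly:

  client -> LVS -> frontend Varnish -> backend Varnish
         -> LVS -> frontend Swift -> backend Swift
         -> (miss) image scaler (thumb.php) -> thumbnail stored back in Swift
         -> returned back up the same chain, cached by both Varnish layers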

The next time this exact thumbnail is requested, it may be found in
the frontend Varnish if the LVS routes to the same Varnish and it
hasn't been evicted from the in memory cache by time or the need to
store something newer. The image will stay in the backend Varnish
cache until it ages out based on the response headers or it is evicted
to make room for newer content. In the worst case the thumbnail will
be found in the Swift cluster where 3 copies of the thumbnail file are
stored indefinitely. The only way that the thumbnail will be removed
from Swift is when a new version of the source image is uploaded or
deleted and a purge request is sent out from the wiki.


[0]: https://www.mediawiki.org/wiki/File:Thumbnail-stack.svg
[1]: 
https://wikitech.wikimedia.org/wiki/Swift/Dev_Notes#Removing_NFS_from_the_scalers

Bryan
-- 
Bryan Davis  Wikimedia Foundationbd...@wikimedia.org
[[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
irc: bd808v:415.839.6885 x6855

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Upcoming jQuery upgrade (breaking change)

2014-05-14 Thread Siebrand Mazeland

 On 14 May 2014 at 14:58, Krinkle krinklem...@gmail.com wrote:
 
 I don't think it is possible or worth the effort to scan for these in an 
 automated fashion within Jenkins.
 
 Static analysis is virtually impossible due to the method names being too 
 simple, and lots of it relies on details of how methods are called, as well.

At translatewiki.net, we log client side issues using a script[1]. Might 
something like that be of any benefit?

[1] 
http://git.wikimedia.org/blob/translatewiki.git/HEAD/webfiles%2Ftwn.jserrorlog.js

--
Siebrand
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Upcoming jQuery upgrade (breaking change)

2014-05-14 Thread Krinkle
That's an entirely different thing from scanning or catching things in 
development. That's harvesting from clients in production. That is certainly 
possible, and Wikimedia does that, too.

Deprecated properties [1] and features [2] use mw.track [3] to emit an event.

The WikimediaEvents extension [4] forwards these to EventLogging (at a sampled 
rate, of course). The events are then available privately in the analytics 
database, and made available anonymised in Graphite [5].
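For example, the deprecation accessors emit something along these lines (the
key name here is only illustrative):

  mw.track( 'mw.deprecate', 'example.property' );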

You can set up similar logging for JQMIGRATE. Note though that jQuery Migrate 
doesn't have nice keys; you'll have to make do with the full descriptive sentence 
of the warning (but you're doing that already at TWN).

You could try something like this:

if ( jQuery.migrateWarnings ) {
    // Override push so that each warning jQuery Migrate records is also
    // forwarded to the TWN client-side error log, with a stack trace.
    jQuery.migrateWarnings.push = function ( msg ) {
        mw.twn.log( '/webfiles/jswarning', {
            msg: '[jquery.migrate]' + msg,
            stack: new Error().stack
        } );
    };
}

I might set up some tracking for it at Wikimedia as well, but I'm not sure if 
that'll work properly.


— Krinkle

[1] 
https://github.com/wikimedia/mediawiki-core/blob/wmf/1.24wmf4/resources/src/mediawiki/mediawiki.js#L567
[2] 
https://github.com/wikimedia/mediawiki-core/blob/wmf/1.24wmf4/resources/src/mediawiki.api/mediawiki.api.js#L189
[3] 
https://github.com/wikimedia/mediawiki-core/blob/wmf/1.24wmf4/resources/src/mediawiki/mediawiki.js#L410-L427
[4] 
https://github.com/wikimedia/mediawiki-extensions-WikimediaEvents/blob/master/modules/ext.wikimediaEvents.deprecate.js#L14-L16
[5] http://codepen.io/Krinkle/full/zyodJ/

On 14 May 2014, at 19:07, Siebrand Mazeland siebr...@kitano.nl wrote:

 
  On 14 May 2014 at 14:58, Krinkle krinklem...@gmail.com wrote:
 
 I don't think it is possible or worth the effort to scan for these in an 
 automated fashion within Jenkins.
 
 Static analysis is virtually impossible due to the method names being too 
 simple, and lots of it relies on details of how methods are called, as well.
 
 At translatewiki.net, we log client side issues using a script[1]. Might 
 something like that be of any benefit?
 
 [1] 
 http://git.wikimedia.org/blob/translatewiki.git/HEAD/webfiles%2Ftwn.jserrorlog.js
 
 --
 Siebrand
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] 1.23.0: second release candidate ready for download

2014-05-14 Thread Mark A . Hershberger
The second release candidate for 1.23.0 (1.23.0-rc.1) is now available
for download.

Note that we're making two changes in the release process:

 1. We'll be making this and all future releases on Wednesday instead of
Thursday or Friday.  This will allow people to avoid late Friday
nights.

 2. We're switching to Semantic Versioning (http://semver.org) for the
release candidates and other releases.

The changes since 1.23.0rc0 are as follows:

 * Added pp_sortkey column to page_props table, so pages can be efficiently
   queried and sorted by property value (bug 58032).
 * Introduced $wgPagePropsHaveSortkey as a backwards-compatibility switch,
   for using the old schema of the page_props table, in case the respective
   schema update was not applied.
 * (bug 13250) Restored method for clearing a watchlist in web UI
   so that users with large watchlists don't have to perform
   contortions to clear them.
 * (bug 63269) Email notifications were not correctly handling the
   [[MediaWiki:Helppage]] message being set to a full URL (the default).
   If you customized [[MediaWiki:Enotif body]] (the text of email 
notifications),
   you'll need to edit it locally to include the URL via the new variable
   $HELPPAGE instead of the parser functions fullurl and canonicalurl; otherwise
   you don't have to do anything.
 * $wgProfileToDatabase was removed. Set $wgProfiler to ProfilerSimpleDB
   in StartProfiler.php instead of using this.
 * The fix for bug 14323 was pushed to MediaWiki 1.24 since the
   changes cause problems for Semantic MediaWiki users.
 * The modules intended for use by custom skins were renamed.

Full release notes:
https://git.wikimedia.org/blob/mediawiki%2Fcore.git/1.23.0-rc.1/RELEASE-NOTES-1.23
https://www.mediawiki.org/wiki/Release_notes/1.23

**

Download:
http://download.wikimedia.org/mediawiki/1.23/mediawiki-core-1.23.0-rc.1.tar.gz
http://download.wikimedia.org/mediawiki/1.23/mediawiki-1.23.0-rc.1.tar.gz

GPG signatures:
http://download.wikimedia.org/mediawiki/1.23/mediawiki-core-1.23.0-rc.1.tar.gz.sig
http://download.wikimedia.org/mediawiki/1.23/mediawiki-1.23.0-rc.1.tar.gz.sig

Public keys:
https://www.mediawiki.org/keys/keys.html

Mark A. Hershberger
(Release Management Team)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Too much open bugs (Wikimedia Commons, Media storage)

2014-05-14 Thread Steinsplitter Wiki
Hi,
I wanted to bring to your attention a few critical issues that are negatively
affecting the work of the Commons community and the wider Wikimedia projects.
First, there are about 36 unresolved bugs [0] in the Media storage component.
Swift is a vital component of the projects' ability to show images and other
media, and it having so many open bugs causes serious ongoing issues, not only
on Commons, but everywhere.
Some of these bugs are high priority, and have been open since 2012. The
community of admins are straining under the load, and it's not getting any
better now that additional paths into the upload pipeline are becoming available.
UploadWizard, too, is fundamentally broken. There are a *lot* of open bugs [1]
[2] and the interface is in need of design love.
It would be awesome to see these issues addressed in the coming months. In
particular, because Wiki Loves Monuments will start in September, it would be
great to have these issues fixed on or before August 13th, giving Commons about
two weeks to test the fixes and changes before the contest starts, and plenty of
room for backporting urgent fixes.
Regards,
Steinsplitter
https://commons.wikimedia.org/wiki/User:Steinsplitter

[0] https://bugzilla.wikimedia.org/buglist.cgi?component=Media%20storage&product=Wikimedia&resolution=---
[1] https://commons.wikimedia.org/wiki/Special:PrefixIndex/Commons:Upload_help/Archive
[2] https://bugzilla.wikimedia.org/buglist.cgi?component=UploadWizard&product=MediaWiki%20extensions&resolution=---
  
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Multimedia] Too much open bugs (Wikimedia Commons, Media storage)

2014-05-14 Thread Fabrice Florin
Thanks, Steinsplitter!

We really appreciate your call to action to address some of the multimedia 
issues you list below.

You will be glad to hear that our multimedia team is now shifting its focus to
more of this technical debt and is taking on an upgrade of the Upload Wizard as
our next big project for the coming months.

We will keep you posted on major developments, but if you are interested in 
being more involved in this multimedia work, we invite you to join this mailing 
list, where we actively discuss current tasks, in partnership with community 
members:
https://lists.wikimedia.org/mailman/listinfo/multimedia

You can also track our work on this planning board for our current development 
cycle:
http://ur1.ca/h7w5s

We look forward to working with you and other community members to address 
these issues over time. :)

Be well,


Fabrice

___

Fabrice Florin
Product Manager
Wikimedia Foundation

http://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)

On May 14, 2014, at 12:40 PM, Greg Grossmeier g...@wikimedia.org wrote:

 FYI.
 
 - Forwarded message from Steinsplitter Wiki steinsplitter-w...@live.com 
 -
 
 Date: Wed, 14 May 2014 21:01:38 +0200
 From: Steinsplitter Wiki steinsplitter-w...@live.com
 To: wikitech-l@lists.wikimedia.org wikitech-l@lists.wikimedia.org
 Subject: [Wikitech-l] Too much open bugs (Wikimedia Commons, Media storage)
 Reply-To: Wikimedia developers wikitech-l@lists.wikimedia.org
 
 Hi,
 I wanted to bring to your attention a few critical issues that are
 negatively affecting the work of the Commons community and the wider
 Wikimedia projects.
 First, there are about 36 unresolved bugs [0] in the Media storage
 component. Swift is a vital component of the projects' ability to show images
 and other media, and it having so many open bugs causes serious ongoing
 issues, not only on Commons, but everywhere.
 Some of these bugs are high priority, and have been open since 2012. The
 community of admins are straining under the load, and it's not getting any
 better now that additional paths into the upload pipeline are becoming
 available.
 UploadWizard, too, is fundamentally broken. There are a *lot* of open bugs
 [1] [2] and the interface is in need of design love.
 It would be awesome to see these issues addressed in the coming months. In
 particular, because Wiki Loves Monuments will start in September, it would be
 great to have these issues fixed on or before August 13th, giving Commons
 about two weeks to test the fixes and changes before the contest starts, and
 plenty of room for backporting urgent fixes.
 Regards,
 Steinsplitter
 https://commons.wikimedia.org/wiki/User:Steinsplitter
 
 [0] https://bugzilla.wikimedia.org/buglist.cgi?component=Media%20storage&product=Wikimedia&resolution=---
 [1] https://commons.wikimedia.org/wiki/Special:PrefixIndex/Commons:Upload_help/Archive
 [2] https://bugzilla.wikimedia.org/buglist.cgi?component=UploadWizard&product=MediaWiki%20extensions&resolution=---

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 - End forwarded message -
 
 -- 
 | Greg Grossmeier    GPG: B2FA 27B1 F7EB D327 6B8E |
 | identi.ca: @greg        A18D 1138 8E47 FAC8 1C7D |
 
 ___
 Multimedia mailing list
 multime...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/multimedia





___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Announcement: Alex Monk joins Wikimedia as Features Contractor

2014-05-14 Thread Terry Chay
Hello everyone,

It's a testament either to how awesome our people are… or to just how notorious
I am for not announcing things promptly, that I noticed this sitting in my
Google Docs the other day with a note from the VisualEditor Team: "terry, we and
Alex think this is now good for you to post, if you're OK with it."

I'm including the letter as is. Please note the date at the end.

…

Hello all,

It is with great pleasure that I’m announcing that Alex Monk[0] has joined the 
Wikimedia Foundation as a contractor in Features Engineering.

Alex has been a great, wide-ranging and hugely supportive volunteer developer
for two years on MediaWiki core and several extensions, including LiquidThreads[1]
and Echo[2], helping us get rid of outdated extensions once and for all, and
fixing general site issues.

Alex will be working with the VisualEditor[3] team to help better integrate the
editor into MediaWiki and to create and improve the editing tools, as well as
providing fixes and support for MediaWiki core and other extensions as needed.

Alex is currently a student in England, and will be working with us part-time 
whilst he continues his studies. In his spare time, Alex was recently part of a 
team at his college competing in this year’s Student Robotics[4] competition, 
and also enjoys playing online computer games.

We’re delighted that he’s agreed to join us, with his first official day 
Monday, 10 March. 

Please join me in an only-slightly-belated welcome of Alex to the Wikimedia 
Foundation.

Take care,
Terry

[0]: [[mw:User:Krenair]]
[1]: [[mw:Extension:LiquidThreads]]
[2]: [[mw:Extension:Echo]]
[3]: [[mw:VisualEditor]]
[4]: [[w:en:Student Robotics]]

…

Even if the staff tries to cover up for me, they now know that I'll somehow 
conspire to forget to do so until two months later. :-D

Please join us in a very-belated welcome of Alex to the Wikimedia Foundation!

Take care,

terry


terry chay  최태리
Director of Features Engineering
Wikimedia Foundation
“Imagine a world in which every single human being can freely share in the sum 
of all knowledge. That's our commitment.”

p: +1 (415) 839-6885 x6832
m: +1 (408) 480-8902
e: tc...@wikimedia.org
i: http://terrychay.com/
w: http://meta.wikimedia.org/wiki/User:Tychay
aim: terrychay

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Security announcement: XSS in MobileFrontend

2014-05-14 Thread Max Semenik
During internal review, an XSS (cross-site scripting) vulnerability was
discovered in MobileFrontend extension.
Due to an unneeded unescaping of already-sanitized section titles, HTML
inserted as plain text into them was injected into the DOM.
While on ordinary page views only users who have intentionally enabled
MobileFrontend's beta mode are in danger, it is possible to construct URLs
that enable beta mode for every user who follows them. Another requirement for
this vulnerability is a screen width of at least 768 pixels.
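
As a generic sketch of this class of bug (not MobileFrontend's actual code
path): once the sanitizer has escaped a title, decoding the entities again
turns any attacker-supplied markup back into live HTML.

    <?php
    // Illustration only; any resemblance to the real code path is an assumption.
    $title = '<script>alert(1)</script>';   // attacker-controlled section title
    $safe  = htmlspecialchars( $title );    // sanitized: &lt;script&gt;alert(1)&lt;/script&gt;
    $oops  = html_entity_decode( $safe );   // the unneeded unescaping: markup is live again
    echo $oops;                             // echoing this into the page is the XSS
    echo $safe;                             // the fix is to keep emitting the sanitized form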

Affected versions include MobileFrontend for MediaWiki 1.23 (branch
REL1_23, still in release candidate phase) and 1.24 (master). If you are
running a 1.24 WMF branch earlier than wmf/1.24wmf3, please update to a
later branch.

-- 
Best regards,
Max Semenik ([[User:MaxSem]])
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Security announcement: XSS in MobileFrontend

2014-05-14 Thread Max Semenik
Almost forgot: this is https://bugzilla.wikimedia.org/show_bug.cgi?id=65042


On Wed, May 14, 2014 at 4:28 PM, Max Semenik maxsem.w...@gmail.com wrote:

 During internal review, an XSS (cross-site scripting) vulnerability was
 discovered in MobileFrontend extension.
 Due to an unneeded unescaping of already sanitized section titles, HTML
 inserted as plaintext into them was injected into DOM.
 While on ordinary page views only users who have intentionally enabled
 MobileFrontend's beta mode are in danger, it is possible to construct URLs
 that enable beta for every user following them. Another requirement for
 this vulnerability is a screen width of at least 768 pixels.

 Affected versions include MobileFrontend for MediaWiki 1.23 (branch
 REL1_23, still in release candidate phase) and 1.24 (master). If you are
 running a 1.24 WMF branch earlier than wmf/1.24wmf3, please update to a
 later branch.

 --
 Best regards,
 Max Semenik ([[User:MaxSem]])




-- 
Best regards,
Max Semenik ([[User:MaxSem]])
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Security announcement: XSS in MobileFrontend

2014-05-14 Thread Daniel Friesen
On 2014-05-14, 4:28 PM, Max Semenik wrote:
 Another requirement for this vulnerability is screen width which must be at 
 least 768 pixels.
LOL, some part of me just loves vulnerability requirements like this and
finds them awesomely amusing.

I wonder if there's an XKCD entry like this.

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] performance guidelines discussion today

2014-05-14 Thread Alolita Sharma
That's 2:30 am in India. I wish these meetings were held a bit earlier (9 am PDT
:-)

Best,
Alolita

Alolita Sharma
आलोलिता शर्मा
Director of Engineering
Internationalization & Localization
Wikimedia Foundation


On Wed, May 14, 2014 at 5:47 PM, Sumana Harihareswara suma...@wikimedia.org
 wrote:

 Sorry for the short notice. Today at 2100 UTC, instead of the regular
 RfC discussion, we'll talk about the performance guidelines draft in
 #wikimedia-office . We discussed this some in Zurich but I'd love a
 chance to ask some followup questions to firm everything up. I'd also
 welcome the chance to explain the two similar documents I'm working on:
 architecture and security guidelines.


 https://www.mediawiki.org/wiki/Architecture_meetings/Performance_guidelines_discussion_2014-05-14

 Time:

 http://www.worldtimebuddy.com/?qm=1&lid=2950159,5128581,2147714,100&h=5128581&date=2014-5-14&sln=17-18

 11pm Berlin
 5pm NYC
 2pm San Francisco
 7am Sydney

 Next week it'll be an RfC chat. :) (I welcome volunteers!)
 --
 Sumana Harihareswara
 Senior Technical Writer
 Wikimedia Foundation

 ___
 Engineering mailing list
 engineer...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/engineering

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l