Re: [Wikitech-l] Image scaling proposal: server-side mip-mapping

2014-05-13 Thread Gilles Dubuc
In a surprising turn of events, the latest survey
https://www.surveymonkey.com/s/F6CGPDJ shows that people consistently
prefer the chained thumbnails (each thumbnail generated from the next
bigger thumbnail) both to the ones we currently generate from the original
and to thumbnails all generated from the largest thumbnail. This holds both
for sharpness (not surprising, since chaining adds more passes of
sharpening) and for overall quality; I suspect the extra sharpening is
simply more noticeable visually.

JHeald on Commons village pump also brought up the fact that the resizing
we currently do with ImageMagick's -thumbnail introduces artifacts on some
images, which I've verified:
* using -thumbnail
https://dl.dropboxusercontent.com/u/109867/imagickchaining/sharpening/435-sharpened.jpg
* using -resize
https://dl.dropboxusercontent.com/u/109867/imagickchaining/sharpening/435-sharpened-resize.jpg
(I found out about that after the survey, for which all images had been
generated using the status quo -thumbnail option.)

I'm pretty sure that we are using -thumbnail because it advertises itself
as being faster for large images. However, based on the testing I've done on
large images, it seems that if we chained thumbnail generation, the
performance gains would be large enough that we could afford to use -resize
and avoid those artifacts, while still generating thumbnails much faster than
we currently do.

In conclusion, it seems safe to implement chaining, where we maintain a set
of reference thumbnails, each generated from the next bigger one. According
to the survey, image quality isn't negatively impacted by doing that, and we
would be able to use -resize, which would save us from artifacts and improve
image quality. Unless anyone objects, the Multimedia team can start working
on that change. I consider the idea of generating those reference thumbnails
at upload time, before the file is considered uploaded, to be a separate
task, which we're also exploring at the moment.
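
To illustrate the idea (not the actual production code), here's a minimal
sketch of chained generation; the bucket widths, file names and sharpening
parameters below are made-up values:

<?php
// Minimal sketch of chained thumbnail generation using ImageMagick's -resize.
// Bucket widths, file names and the sharpening amount are illustrative
// assumptions, not our actual configuration.
$buckets = array( 2048, 1280, 1024, 800, 640, 320 ); // hypothetical reference widths
$source  = 'original.jpg';

foreach ( $buckets as $width ) {
    $thumb = "reference-{$width}px.jpg";
    // Each reference thumbnail is generated from the previous (larger) one,
    // so every step after the first works on a much smaller input image.
    $cmd = 'convert ' . escapeshellarg( $source ) .
        ' -resize ' . escapeshellarg( $width . 'x' ) .   // -resize instead of -thumbnail
        ' -sharpen 0x0.8 ' .                             // one sharpening pass per step
        escapeshellarg( $thumb );
    exec( $cmd, $output, $status );
    if ( $status !== 0 ) {
        break; // in practice we would fall back to scaling from the original
    }
    $source = $thumb; // chain: the next bucket is derived from this thumbnail
}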


On Fri, May 9, 2014 at 10:59 AM, Gilles Dubuc gil...@wikimedia.org wrote:

 After taking a closer look at what commands run exactly in production, it
 turns out that I probably applied the IM parameters in the wrong order when
 I put together the survey (order matters, particularly for sharpening).
 I'll regenerate the images and make another (hopefully better) survey that
 will compare the status quo, chained thumbnails and single thumbnail
 reference.


 On Mon, May 5, 2014 at 11:04 AM, Gilles Dubuc gil...@wikimedia.org wrote:

 Buttons are in French: "Suiv." - Make them English


 That's a bug in SurveyMonkey: the buttons are in French because I was
 using the French version of the site at the time the survey was created,
 and now the text on those buttons can't be changed. I'll make sure to
 switch SurveyMonkey to English before creating the next one.

 No swap or overlay function for being able to compare


 SurveyMonkey is quite limited; it can't do that, unfortunately. The
 alternative would be to build my own survey from scratch, but that would
 require a lot of resources for little benefit. This is really a one-off
 need.


 I wonder if the mip-mapping approach could somehow be combined with
 tiles?
 If we want proper zooming for large images, we will have to split them up
 into tiles of various sizes, and serve only the tiles for the visible
 portion when the user zooms on a small section of the image. Splitting up
 an image is a fast operation, so maybe it could be done on the fly (with
 caching for a small subset based on traffic), in which case having a
 chain
 of scaled versions of the image would take care of the zooming use case
 as
 well.


 Yes, we could definitely have the reference thumbnails split up on the fly
 to generate tiles, once we get around to implementing proper zooming. It's
 as simple as making Varnish cache the tiles and having the PHP backend
 generate them on the fly by splitting the reference thumbnails.
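
 To sketch what that could look like (purely illustrative: the tile size,
 naming and use of ImageMagick's -crop are assumptions, nothing is
 implemented yet):

 <?php
 // Cut a single tile out of a cached reference thumbnail, on demand.
 $refThumb = 'reference-2048px.jpg'; // reference thumbnail nearest the zoom level
 $tileSize = 512;
 $col = 3; // requested tile column
 $row = 1; // requested tile row

 $cmd = sprintf(
     'convert %s -crop %dx%d+%d+%d +repage %s',
     escapeshellarg( $refThumb ),
     $tileSize, $tileSize,
     $col * $tileSize, $row * $tileSize,
     escapeshellarg( "tile-{$col}-{$row}.jpg" )
 );
 exec( $cmd );
 // Varnish would then cache the resulting tile under its own URL.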

 Regarding the survey I ran on wikitech-l, there are 26 respondents so far.
 It seems that on images with a lot of edges (the test images provided by
 Rob), at least 30% of people can tell the difference in terms of
 quality/sharpness; on regular images people can't really tell. Thus, I
 wouldn't venture to do the full chaining, as a third of visitors would be
 able to tell that there's a quality degradation. I'll run another survey
 later in the week where, instead of full chaining, all the thumbs are
 generated from the biggest thumb.




 On Sat, May 3, 2014 at 1:25 AM, Gergo Tisza gti...@wikimedia.org wrote:

 On Thu, May 1, 2014 at 7:02 AM, Gilles Dubuc gil...@wikimedia.org
 wrote:

  Another point about picking the one true bucket list: currently Media
  Viewer's buckets have been picked based on the most common screen
  resolutions, because Media Viewer tries to always use the entire width
 of
  the screen to display the image, so trying to achieve a 1-to-1 pixel
  correspondence 

[Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-13 Thread Daniel Kinzler
Hi all!

During the hackathon, I worked on a patch that would make it possible for
non-textual content to be included on wikitext pages using the template syntax.
The idea is that if we have a content handler that e.g. generates awesome
diagrams from JSON data, like the extension Dan Andreescu wrote, we want to be
able to use that output on a wiki page. But until now, that would have required
the content handler to generate wikitext for the transclusion - not easily done.

So, I came up with a way for ContentHandler to wrap the HTML generated by
another ContentHandler so it can be used for transclusion.

Have a look at the patch at https://gerrit.wikimedia.org/r/#/c/132710/. Note
that I have completely rewritten it since my first version at the hackathon.

It would be great to get some feedback on this, and have it merged soon, so we
can start using non-textual content to its full potential.

Here is a quick overview of the information flow. Let's assume we have a
template page T that is supposed to be transcluded on a target page P; the
template page uses the non-text content model X, while the target page is
wikitext. So:

* When Parser parses P, it encounters {{T}}
* Parser loads the Content object for T (an XContent object, for model X), and
calls getTextForTransclusion() on it, with CONTENT_MODEL_WIKITEXT as the target
format.
* getTextForTransclusion() calls getContentForTransclusion()
* getContentForTransclusion() calls convert( CONTENT_MODEL_WIKITEXT ) which
fails (because content model X doesn't provide a wikitext representation).
* getContentForTransclusion() then calls convertContentViaHtml()
* convertContentViaHtml() calls getTextForTransclusion( CONTENT_MODEL_HTML ) to
get the HTML representation.
* getTextForTransclusion() again calls getContentForTransclusion(), which calls
convert(); this time the conversion succeeds, because convert() handles the
conversion to HTML by calling getHtml() directly.
* convertContentViaHtml() takes the HTML and calls makeContentFromHtml() on the
ContentHandler for wikitext.
* makeContentFromHtml() replaces the actual HTML with a parser strip mark, and
returns a WikitextContent containing this strip mark.
* The strip mark is eventually returned to the original Parser instance, and
used to replace {{T}} on the original page.
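
In code, the core of that fallback chain looks roughly like this (a condensed
sketch: the method names match the patch, but the signatures, error handling
and class layout are simplified):

<?php
// Condensed sketch of the fallback chain described above.
abstract class SketchContent {

    public function getTextForTransclusion( $format = CONTENT_MODEL_WIKITEXT ) {
        $content = $this->getContentForTransclusion( $format );
        return $content->getNativeData(); // e.g. wikitext containing the strip mark
    }

    public function getContentForTransclusion( $format ) {
        // Try a direct conversion first (e.g. a model that can emit wikitext).
        $converted = $this->convert( $format );
        if ( $converted === false ) {
            // No direct conversion available: go through HTML instead.
            $converted = $this->convertContentViaHtml( $format );
        }
        return $converted;
    }

    protected function convertContentViaHtml( $format ) {
        // Render this content as HTML...
        $html = $this->getTextForTransclusion( CONTENT_MODEL_HTML );
        // ...and let the target model's handler wrap it; for wikitext,
        // makeContentFromHtml() swaps the HTML for a parser strip mark.
        $handler = ContentHandler::getForModelID( $format );
        return $handler->makeContentFromHtml( $html );
    }
}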

This essentially means that any content can be converted to HTML, and can be
transcluded into any content that provides an implementation of
makeContentFromHtml(). This actually changes how transclusion of JS and CSS
pages into wikitext pages works. You can try this out by transcluding a JS page
like MediaWiki:Test.js as a template on a wikitext page.


The old getWikitextForTransclusion() is now a shorthand for
getTextForTransclusion( CONTENT_MODEL_WIKITEXT ).


As Brion pointed out in a comment on my original version, there is another
caveat: what should the expandtemplates module do when expanding non-wikitext
templates? I decided to just wrap the HTML in <html>...</html> tags instead of
using a strip mark in this case. The resulting wikitext is, however, only
correct if $wgRawHtml is enabled; otherwise the HTML will get mangled/escaped
by wikitext parsing. This seems acceptable to me, but please let me know if you
have a better idea.


So, let me know what you think!
Daniel

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wmfall] Welcome Bernd Sitzmann as Software Developer to the Mobile App Team

2014-05-13 Thread Monte Hurd
Awesome! Welcome! :)


On Mon, May 12, 2014 at 4:52 PM, Brion Vibber bvib...@wikimedia.org wrote:

 Herzlich willkommen, Bernd!

 -- brion
 On May 12, 2014 10:09 AM, Tomasz Finc tf...@wikimedia.org wrote:

 Bernd will be working remotely from Fort Collins, CO, where he has
 lived ever since emigrating from Germany many years ago.

 He joins the Wikimedia Foundation after developing software at HP in
 Germany and the US for many years. He's worked on both front-end and
 back-end components, developing applications for enterprise management
 software as well as consumer software (HP MediaSmart Server, WebOS
 related work, and a couple of Android apps).

 Bernd is very passionate about user experience and Android. He is
 excited to contribute to open source projects. When not developing
 Android apps, he also enjoys learning about some of the latest web
 technologies, currently favoring Meteor.js. AFK, he enjoys playing
 volleyball, ultimate frisbee and soccer.

 Bernd will join the Apps team working closely with Yuvi and Dmitry on
 the rebooted native Android Wikipedia app.

 Please Welcome Bernd

 --tomasz

 ___
 Wmfall mailing list
 wmf...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wmfall


 ___
 Wmfall mailing list
 wmf...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wmfall


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wmfall] Welcome Bernd Sitzmann as Software Developer to the Mobile App Team

2014-05-13 Thread Arthur Richards
Welcome - looking forward to working with you :)


On Tue, May 13, 2014 at 7:11 PM, Monte Hurd mh...@wikimedia.org wrote:

 Awesome! Welcome! :)


 On Mon, May 12, 2014 at 4:52 PM, Brion Vibber bvib...@wikimedia.org wrote:

 Herzlich willkommen, Bernd!

 -- brion
 On May 12, 2014 10:09 AM, Tomasz Finc tf...@wikimedia.org wrote:

 Bernd will be working remotely from Fort Collins, CO, where he has
 lived ever since emigrating from Germany many years ago.

 He joins the Wikimedia Foundation after developing software at HP in
 Germany and the US for many years. He's worked on both front-end and
 back-end components, developing applications for enterprise management
 software as well as consumer software (HP MediaSmart Server, WebOS
 related work, and a couple of Android apps).

 Bernd is very passionate about user experience and Android. He is
 excited to contribute to open source projects. When not developing
 Android apps, he also enjoys learning about some of the latest web
 technologies, currently favoring Meteor.js. AFK, he enjoys playing
 volleyball, ultimate frisbee and soccer.

 Bernd will join the Apps team working closely with Yuvi and Dmitry on
 the rebooted native Android Wikipedia app.

 Please Welcome Bernd

 --tomasz

 ___
 Wmfall mailing list
 wmf...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wmfall


 ___
 Wmfall mailing list
 wmf...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wmfall



 ___
 Wmfall mailing list
 wmf...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wmfall




-- 
Arthur Richards
Software Engineer, Mobile
[[User:Awjrichards]]
IRC: awjr
+1-415-839-6885 x6687
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Advice needed for MediaWiki-Vagrant problem

2014-05-13 Thread Ori Livneh
On Sat, May 10, 2014 at 11:48 AM, Ori Livneh o...@wikimedia.org wrote:

 Vagrant 1.6 changed the order of steps Vagrant performs on initialization:
 it now evaluates the project's Vagrantfile after loading plugins and
 parsing command-line arguments. This means that the various subcommands we
 provide for role management no longer work, since the relevant plugins are
 loaded from the top of the Vagrantfile, which is now too late a stage to be
 loading plugins.


We're not the only ones whose Vagrant setup broke as a result of this
change. An hour ago, Vagrant's lead developer said "I'll think about this
and see if we can bring this back"
(https://github.com/mitchellh/vagrant/issues/3775#issuecomment-42980896). So
I'll wait a bit longer before trying out alternative approaches. In the
meantime, please stick to 1.5.x.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-13 Thread Brad Jorsch (Anomie)
On Tue, May 13, 2014 at 11:37 AM, Daniel Kinzler dan...@brightbyte.de wrote:

 As Brion pointed out in a comment to my original, there is another caveat:
 what
 should the expandtemplates module do when expanding non-wikitext
 templates? I
 decided to just wrap the HTML in <html>...</html> tags instead of using a
 strip
 mark in this case. The resulting wikitext is however only correct if
 $wgRawHtml is enabled, otherwise, the HTML will get mangled/escaped by
 wikitext
 parsing. This seems acceptable to me, but please let me know if you have a
 better idea.


Just brainstorming:

To avoid the wikitext mangling, you could wrap it in some tag that works
like <html> if $wgRawHtml is set and like <pre> otherwise.

Or one step further, maybe a tag <foo wikitext="{{P}}">html goes here</foo>
that parses just as {{P}} does (and ignores "html goes here" entirely),
which preserves the property that the output of expandtemplates will mostly
work when passed back to the parser.


-- 
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HTML templating systems MediaWiki - is this summary right?

2014-05-13 Thread Sumana Harihareswara
On 04/02/2014 10:19 PM, S Page wrote:
 On Wed, Apr 2, 2014 at 11:09 AM, Sumana Harihareswara suma...@wikimedia.org
 wrote:
 
 TL;DR: who's testing out the non-Knockout approaches?

 
 Besides those listed at [1]:
 The Flow discussion system needs to render templates on both the client and
 server[2]. The Flow team is going to use handlebars.js and its lightncandy
 PHP implementation; we wanted to try KnockOff/TAssembly but the timing
 isn't right. We will be ripping off :) MobileFrontend's integration of
 Hogan.js client-side templates.
 
 (Gabriel Wicke wrote "I know that for example handlebars is used in a few
 teams right now." -- who else?)
 
 
 oojs - https://www.mediawiki.org/wiki/OOjs_UI -- could use this
 toolkit with one of the other template approaches, or maybe this is
 enough by itself!

 
 As I understand it, OOjs UI is more of a rich widget library than a
 templating system. You would compose a page out of widgets that render what
 you want, and yes you could use OOjs UI with a templating engine (it
 operates on jQuery elements).
 
 
 Currently used inside VisualEditor and I am not sure
 whether any other MediaWiki extensions or teams are using it?

 
 The Multimedia team is using OOjs UI for the "About this file" dialog in
 the Media Viewer[3] (currently a beta feature). They haven't styled it to
 use Agora controls.
 
 Mobile is using VisualEditor with the beginnings of an Agora theme.
 
 Hope this helps, corrections welcome.
 
 [1]
 https://www.mediawiki.org/wiki/Requests_for_comment/HTML_templating_library#Existing_implementations_in_MediaWiki_extensions
 [2] https://www.mediawiki.org/wiki/Flow/Epic_Front-End#Templating
 [3] https://www.mediawiki.org/wiki/Multimedia/About_Media_Viewer

And Ryan Kaldari wrote on April 1:

 The mobile web team will be evaluating Gabriel's KnockoutJS template
 implementation sometime between April 14 and April 28. The things we will
 be looking at include: how well it will work for mobile's current
 templating needs, how appropriate it is for mobile delivery, and how much
 effort would be involved in migrating our existing templates to it. We'll
 update the list with our findings then.

Do the Flow or Mobile teams have any updates on how well their
experiments worked? Thanks!

-- 
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Change the installer to make Project: the default option for meta namespace name

2014-05-13 Thread Krinkle
Note that "Project" and "Project talk" already work on any wiki. They are the
canonical names for the namespace in question.

Namespaces have:
* a localised/standard name for the local wiki (based on configuration, like
"Wikipedia", and/or localisation, like "Usuario").
* a canonical name that is language- and wiki-independent (except for extra
custom namespaces, where the localised name becomes the canonical one; but
Project is not an extra custom namespace).
* aliases (alternate translations or legacy names, such as "Image").

https://es.wikipedia.org/wiki/Project_talk:X
->
https://es.wikipedia.org/wiki/Wikipedia_discusión:X
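
As an illustration (a made-up LocalSettings.php fragment, not a proposed
change), the three kinds of names roughly map to configuration like this:

<?php
// Illustrative LocalSettings.php fragment for a Spanish-language wiki.
// The localised/standard name of the project namespace comes from configuration:
$wgMetaNamespace     = 'Wikipedia';
$wgMetaNamespaceTalk = 'Wikipedia_discusión';

// An alias (an alternate or legacy name), comparable to "Image" for "File":
$wgNamespaceAliases['WP'] = NS_PROJECT;

// "Project:X" and "Project_talk:X" keep working regardless of the values
// above, because they are the canonical, language-independent names for
// NS_PROJECT and NS_PROJECT_TALK.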

— Krinkle

On 11 May 2014, at 16:15, Matthew Flaschen mflasc...@wikimedia.org wrote:

 On 05/04/2014 05:08 AM, Max Semenik wrote:
 This proposal makes no sense: all these namespaces _are_ dependent on wiki
 language, so even if you force Project down the throats of non-English
 users, project talk would still be e.g. Project_ахцәажәара for Abkhazian
 wikis and so on.
 
 I think you may be misunderstanding the proposal.  I think it's proposing to 
 use standard namespace names for the project namespace, rather than 
 wiki-specific ones like the Wikipedia namespace.
 
 That's not the same as not localizing the standard namespaces.  For instance,
 Spanish Wikipedia uses the standard namespaces for e.g. User (Usuario) and
 Category (Categoría) (that means e.g. Usuario:Foo and User:Foo both work).
 
 However, it uses Wikipedia for the meta namespace. Under this proposal, it
 seems a new Spanish wiki would by default use Proyecto, with Project also
 working and referring to the same pages.
 
 Matt Flaschen
 
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HTML templating systems MediaWiki - is this summary right?

2014-05-13 Thread Ryan Kaldari
On Tue, May 13, 2014 at 1:42 PM, Sumana Harihareswara suma...@wikimedia.org
 wrote:

 And Ryan Kaldari wrote on April 1:

  The mobile web team will be evaluating Gabriel's KnockoutJS template
  implementation sometime between April 14 and April 28. The things we will
  be looking at include: how well it will work for mobile's current
  templating needs, how appropriate it is for mobile delivery, and how much
  effort would be involved in migrating our existing templates to it. We'll
  update the list with our findings then.

 Do the Flow or Mobile teams have any updates on how well their
 experiments worked? Thanks!


Unfortunately, that card got moved to the backlog due to time constraints
and higher priorities, so we have not yet evaluated the KnockoutJS template
implementation. We are continuing to use hogan/handlebars in the meantime.

Ryan Kaldari
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-13 Thread Matthew Flaschen

On 05/13/2014 11:37 AM, Daniel Kinzler wrote:

Hi all!

During the hackathon, I worked on a patch that would make it possible for
non-textual content to be included on wikitext pages using the template syntax.
The idea is that if we have a content handler that e.g. generates awesome
diagrams from JSON data, like the extension Dan Andreescu wrote, we want to be
able to use that output on a wiki page. But until now, that would have required
the content handler to generate wikitext for the transclusion - not easily done.


From working with Dan on this, the main issue is the ResourceLoader
module that the diagrams require (it uses a JavaScript library called
Vega, plus a couple of supporting libraries, and simple MW setup code).


The container element that it needs can be as simple as:

<div data-something="..."></div>

which is actually valid wikitext.

Can you outline how RL modules would be handled in the transclusion 
scenario?


Matt Flaschen

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] is this how our thumbnail caching works?

2014-05-13 Thread Sumana Harihareswara
I am trying to figure out how thumbnail retrieval & caching works right
now - with Swift, and the frontline & secondary (frontend and
backend) Varnishes. (I am working on the caching-related bit of the
performance guidelines, and want to understand and help push forward on
https://www.mediawiki.org/wiki/Requests_for_comment/Simplify_thumbnail_cache
.) I looked for docs but didn't find anything that had been updated this
year.

Here's how I think it works, assuming you are a MediaWiki developer
who's written, e.g., a page that includes a thumbnail of an image:

First, your code must get the metadata about the image, which might come
from the local database, or memcached, or Commons. Then, you need to get
a thumbnail of the image at the dimensions your page requires. Rather
than create the thumbnail immediately on demand via parsing the filename
and dimensions, Wikimedia's MediaWiki is configured to use the 404
handler. (see [[Manual:Thumb_handler.php]]) Your page first receives a
URL indicating the eventual location of the thumbnail, then the browser
asks for that URL. If it hasn't been created yet, the web server
initially gets an internal 404 error; the 404 handler then kicks off the
thumbnailer to create the thumbnail, and the response gets sent to the
client.
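
If I've understood it right, the 404 handler boils down to something like
this (a heavily simplified sketch: the URL parsing and the exact MediaWiki
calls are approximations, not the production thumb_handler.php):

<?php
// The web server routes requests for not-yet-existing /thumb/ files here.
$requestedPath = $_SERVER['REQUEST_URI'];

// e.g. /wikipedia/commons/thumb/a/ab/Example.jpg/320px-Example.jpg
if ( preg_match( '!/thumb/[0-9a-f]/[0-9a-f]{2}/([^/]+)/(\d+)px-!', $requestedPath, $m ) ) {
    $file  = wfLocalFile( $m[1] );                              // metadata lookup
    $thumb = $file->transform( array( 'width' => (int)$m[2] ),  // render the thumbnail;
        File::RENDER_NOW );                                     // the result is stored in Swift
    $thumb->streamFile( array() );                              // send it back to the client
} else {
    header( 'HTTP/1.1 404 Not Found' );
}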

As it is sent to the client, each thumbnail is stored in a Swift store
and in our frontline and secondary Varnish caches.

(The Varnish caches cache entire HTTP responses, including thumbnails of
images, frequently-requested pages, ResourceLoader modules, and similar
items that can be retrieved by URL. The frontline Varnishes keep these
in memory. (A weighted-random load balancer (LVS) distributes web
requests to the front-end Varnishes.) But if a frontline Varnish doesn't
have a response cached, it passes the request to the secondary Varnishes
via hash-based load balancing (on the hash of the URL). The secondary
Varnishes hold more responses, storing them on disk. Every URL is on at
most one secondary Varnish.)

So, at the end of this whole process, any given thumbnail is in:
* the Swift thumbnail store (and will persist until the canonical image
changes, or is deleted, or we run out of space and flush Swift)
* the frontline and secondary Varnishes (and will persist until the
canonical image changes, or is deleted, or we restart the frontline
Varnishes or we evict data from the hard disks of the secondary Varnishes)

Is this right?

-- 
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Output from Zurich Hackathon - yet another maps extension!

2014-05-13 Thread Jon Robson
During the Zurich hackathon, DJ Hartman, Aude and I knocked up a
generic maps prototype extension [1]. We have noticed that many maps-like
extensions keep popping up and believed it was time we standardised on one
that all of them could use, so that we can share data better.

We took a look at all the existing use cases and tried to imagine what
such an extension would look like without being too tied to any specific
use case.

The extension we came up with was a map extension that introduces a
Map namespace where data for the map is stored in raw GeoJSON and can
be edited via a JavaScript map editor interface. It also allows the
inclusion of maps in wiki articles via a map template.

Dan Andreescu also created a similar visualisation namespace which may
want to be folded into this as a map could be seen as a visualisation.
I invite Dan to comment on this with further details :-)!

I'd be interested in people's thoughts on this extension. In particular,
I'd be interested in the answer to the question "For my use case A, what
would the WikiMaps extension have to support for me to use it?"

Thanks for your involvement in this discussion. Let's finally get a
maps extension up on a wikimedia box!
Jon

[1] https://github.com/jdlrobson/WikiMaps

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] is this how our thumbnail caching works?

2014-05-13 Thread Brian Wolff
On May 13, 2014 7:13 PM, Sumana Harihareswara suma...@wikimedia.org
wrote:

 I am trying to figure out how thumbnail retrieval & caching works right
 now - with Swift, and the frontline & secondary (frontend and
 backend) Varnishes. (I am working on the caching-related bit of the
 performance guidelines, and want to understand and help push forward on

https://www.mediawiki.org/wiki/Requests_for_comment/Simplify_thumbnail_cache
 .) I looked for docs but didn't find anything that had been updated this
 year.

 Here's how I think it works, assuming you are a MediaWiki developer
 who's written, e.g., a page that includes a thumbnail of an image:

 First, your code must get the metadata about the image, which might come
 from the local database, or memcached, or Commons. Then, you need to get
 a thumbnail of the image at the dimensions your page requires. Rather
 than create the thumbnail immediately on demand via parsing the filename
 and dimensions, Wikimedia's MediaWiki is configured to use the 404
 handler. (see [[Manual:Thumb_handler.php]]) Your page first receives a
 URL indicating the eventual location of the thumbnail, then the browser
 asks for that URL. If it hasn't been created yet, the web server
 initially gets an internal 404 error; the 404 handler then kicks off the
 thumbnailer to create the thumbnail, and the response gets sent to the
 client.

 As it is sent to the client, each thumbnail is stored in a Swift store
 and stored in our frontline and secondary Varnish caches.

 (The Varnish caches cache entire HTTP responses, including thumbnails of
 images, frequently-requested pages, ResourceLoader modules, and similar
 items that can be retrieved by URL. The frontline Varnishes keep these
 in memory. (A weighted-random load balancer (LVS) distributes web
 requests to the front-end Varnishes.) But if a frontline Varnish doesn't
 have a response cached, it passes the request to the secondary Varnishes
 via hash-based load balancing (on the hash of the URL). The secondary
 Varnishes hold more responses, storing them ondisk. Every URL is on at
 most one secondary Varnish.)

 So, at the end of this whole process, any given thumbnail is in:
 * the Swift thumbnail store (and will persist until the canonical image
 changes, or is deleted, or we run out of space and flush Swift)
 * the frontline and secondary Varnishes (and will persist until the
 canonical image changes, or is deleted, or we restart the frontline
 Varnishes or we evict data from the hard disks of the secondary Varnishes)

 Is this right?

 --
 Sumana Harihareswara
 Senior Technical Writer
 Wikimedia Foundation

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

That is mostly correct AFAIK. The Varnish setup also includes different
caches in different locations (so during invalidation failures you can have
correct data in, say, the USA but not Europe, which confuses bug reporters
considerably).

Removal of a thumb from storage can also happen by doing ?action=purge on the
image description page. I believe the Varnish caches only store objects for a
maximum of 30 days (not 100% sure on that). Swift stores them forever.

I'm not sure if it's in scope of what you're trying to document, but HTCP
purging is also an important aspect of how our Varnish cache works, and a
part that has historically exploded several times.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] is this how our thumbnail caching works?

2014-05-13 Thread Matthew Flaschen

On 05/13/2014 06:13 PM, Sumana Harihareswara wrote:

I am trying to figure out how thumbnail retrieval & caching works right
now - with Swift, and the frontline & secondary (frontend and
backend) Varnishes. (I am working on the caching-related bit of the
performance guidelines, and want to understand and help push forward on
https://www.mediawiki.org/wiki/Requests_for_comment/Simplify_thumbnail_cache
.) I looked for docs but didn't find anything that had been updated this
year.

Here's how I think it works, assuming you are a MediaWiki developer
who's written, e.g., a page that includes a thumbnail of an image:


My understanding is that the image scaling/storage infrastructure is 
basically only used for user images (upload.wikimedia.org).  Code images 
generally go on http://bits.wikimedia.org/ and don't use Swift (though 
sometimes code may refer to a user image).


Matt Flaschen

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] workflow to add multiple patches to gerrit.wikimedia.org:29418/operations/puppet.git

2014-05-13 Thread Matthew Flaschen

On 11/20/2013 08:55 AM, Petr Bena wrote:

Aha, I see now: I should have branched before patch1. I didn't. What am I
supposed to do now?


I think:

git checkout production
git checkout -b patch1

(that was just to back up patch1, in case you didn't have it anywhere 
else)


git checkout production
git reset --hard origin/production

would have done what you want.  That would set your production branch to 
the last version of production fetched from Gerrit (so not including any 
unmerged changes).


Note, --hard erases any uncommitted changes.

Matt Flaschen

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] En.wikipedia redirect loops on everything (??!!)

2014-05-13 Thread George Herbert
I am getting redirect loops on every page I try right now on en.wikipedia


-- 
-george william herbert
george.herb...@gmail.com
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] En.wikipedia redirect loops on everything (??!!)

2014-05-13 Thread Jeremy Baron
On May 13, 2014 11:00 PM, George Herbert george.herb...@gmail.com wrote:
 I am getting redirect loops on every page I try right now on en.wikipedia

More details?

Tested logged in and out (clean cookie jar); didn't notice anything weird.

-Jeremy
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] En.wikipedia redirect loops on everything (??!!)

2014-05-13 Thread George Herbert
Seems to have stopped now.  2-3 min worth?

When it was happening it was affecting HTTPS traffic on Chrome / Windows /
landline (SF Bay Area) and Chrome / iPhone / AT&T (SF Bay Area). I was trying
to test more combos but it started working again before I got data.


On Tue, May 13, 2014 at 8:00 PM, George Herbert george.herb...@gmail.com wrote:


 I am getting redirect loops on every page I try right now on en.wikipedia


 --
 -george william herbert
 george.herb...@gmail.com




-- 
-george william herbert
george.herb...@gmail.com
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] En.wikipedia redirect loops on everything (??!!)

2014-05-13 Thread George Herbert
I only got about nine page checks in before it started working again, eight
on PC and one on mobile, all on Chrome.

It started perhaps a minute or two before 8:00 and ended almost immediately
after.

Can't rule out some local effect, but it did affect both the iPhone and the
desktop...



On Tue, May 13, 2014 at 8:08 PM, Jeremy Baron jer...@tuxmachine.com wrote:

 On May 13, 2014 11:00 PM, George Herbert george.herb...@gmail.com
 wrote:
  I am getting redirect loops on every page I try right now on en.wikipedia

 More details?

 Tested logged in and out. (clean cookie jar) didn't notice anything weird.

 -Jeremy
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
-george william herbert
george.herb...@gmail.com
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] En.wikipedia redirect loops on everything (??!!)

2014-05-13 Thread MZMcBride
George Herbert wrote:
Seems to have stopped now.  2-3 min worth?

When it was happening it was affecting https traffic on chrome / windows /
landline (SF Bay Area), chrome / iphone / AT&T (SF Bay Area).  Was trying
to test more combos but it started working again before I got data.

Hi.

== May 14 ==
* 03:01 Tim: reverting apache change
* 02:53 Tim: deploying apache configuration change
https://gerrit.wikimedia.org/r/106109

https://wikitech.wikimedia.org/wiki/Server_admin_log can be a decent
reference for issues such as this. Hope that helps.

MZMcBride



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] En.wikipedia redirect loops on everything (??!!)

2014-05-13 Thread Tim Starling
On 14/05/14 13:00, George Herbert wrote:
 I am getting redirect loops on every page I try right now on en.wikipedia

A change to the Apache configuration accidentally overwrote the
HTTP_X_FORWARDED_PROTO server environment variable and caused
MediaWiki to think that all requests were plain HTTP rather than
HTTPS. This caused infinite redirects in the MediaWiki features that
were intended to force HTTPS for logged-in users. I reverted the
change after 3 minutes or so, following reports on IRC.
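
To make the failure mode concrete, the logic involved is roughly this (an
illustrative sketch, not the actual MediaWiki code; $userPrefersHttps stands
in for the real logged-in/preference checks):

<?php
// TLS is terminated in front of Apache, so PHP only sees plain HTTP on the
// socket and relies on the X-Forwarded-Proto header to know the client
// actually used HTTPS.
$isHttps = isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] )
    && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https';

if ( $userPrefersHttps && !$isHttps ) {
    // If an Apache rule clobbers HTTP_X_FORWARDED_PROTO, $isHttps is always
    // false, so every HTTPS request gets "redirected to HTTPS" again: a loop.
    header( 'Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 302 );
    exit;
}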

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l