Re: [Wikitech-l] Requests with data URIs appended to proper URLs

2014-06-05 Thread S Page
On Thu, Jun 5, 2014 at 3:44 PM, Matthew Flaschen 
wrote:

>
> I did a quick check, and Jigsaw (the W3C's validator) does complain about
> our data URLs on that page:
>
> http://jigsaw.w3.org/css-validator/validator?uri=https%3A%2F%2Fbits.wikimedia.org%2Fes.wikipedia.org%2Fload.php%3Fdebug%3Dfalse%26lang%3Des%26modules%3Dext.gadget.a-commons-directo%252Cimagenesinfobox%252CrefToolbar%7Cext.uls.nojs%7Cext.visualEditor.viewPageTarget.noscript%7Cext.wikihiero%7Cmediawiki.legacy.commonPrint%252Cshared%7Cmediawiki.skinning.interface%7Cmediawiki.ui.button%7Cskins.vector.styles%26only%3Dstyles%26skin%3Dvector%26*&profile=css3&usermedium=all&warning=1&vextwarning=&lang=en
>
> -
>  .filehistory a img, #file img:hover
>
> Value Error : background url(data:image/png;base64,iVBO[blahblah]) is an
> incorrect URL url(data:image/png;base64,iVBO[blahblah]) repeat
>

The Jigsaw CSS validator complains about any data URL inside url() unless
it's in quotes.  The snippet
.filehistory a img,#file img:hover{
  background:white url('data:image/png;base64,iVBORw0KGgoNSUhEUgAAABAQCAA6mKC9GElEQVQYV2N4DwX/oYBhgARgDJjEAAkAAEC99wFuu0VFAElFTkSuQmCC') repeat;
  background:white url(//bits.wikimedia.org/static-1.24wmf7/skins/common/images/Checker-16x16.png?2014-05-29T15:05:00Z) repeat!ie
}
passes with the added quotes.

Stack Overflow [1] thinks this is a bug in Jigsaw, but regardless, why would
the CSS generate bogus requests in a cross-section of browsers?
"some less forgiving browsers" doesn't normally include 60% Firefox 29 and
31% Chrome 35.

If it's only es and pt, I wonder if it's something else in the
bits.wikimedia response that makes the browser try to interpret the charset
in the data URI as something other than charset=US-ASCII. I don't know of a
charset that would not interpret these ASCII characters as ASCII.

Besides the other theories advanced, might it be sporadic ResourceLoader
mis-minification?

What's the HTTP Referer for these requests? Can we tell if they're coming
from an external <link> to the CSS, or from a mw.loader.implement() piece of
JavaScript inserting the CSS?

[1] <
http://stackoverflow.com/questions/15481088/are-unquoted-data-uris-valid-in-css
>

-- 
=S Page  Features engineer
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Hardening WP/WM against traffic analysis (take two)

2014-06-05 Thread Tyler Romeo
On Thu, Jun 5, 2014 at 4:50 PM, David Gerard  wrote:

> Or, indeed, MediaWiki tarball version itself.


MediaWiki is a web application. As amazing as it would be for Wikipedia to
be secure against traffic analysis, we are not going to introduce
presentation-layer logic into an application-layer product.

--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LiquidThreads - how do we kill it?

2014-06-05 Thread Juergen Fenn
2014-06-06 0:16 GMT+02:00 Danny Horn :
> The Flow team is going to work in a few weeks on automatically archiving
> talk pages, so that we can enable Flow on pages where there are already
> existing conversations. Basically, this means moving the old discussions on
> an archive page, and leaving a link for "See archived talk page" visible on
> the new Flow board.
>
> That means there'll be a minute where a currently active discussion would
> get interrupted, and have to be restarted on the new Flow board. That will
> be a pain, but it would only be a one-time inconvenience during that
> transition moment.

Interesting point. Thanks for keeping us up to date. You might like to
know, though, that on German Wikipedia most discussions about Flow
seem to focus on how to turn it off or how to keep it out of the
project altogether. Switching to Flow would require a community
consensus anyway. So could you please consider a global switch for
communities that would rather disable these new features completely?

Regards,
Jürgen.

PS. We never enabled the LiquidThreads extension either. And the
new Beta link at the top right did not stay there for long.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Requests with data URIs appended to proper URLs

2014-06-05 Thread Matthew Flaschen

On 06/05/2014 05:23 PM, Nuria Ruiz wrote:

I can see those images in the CSS file that results from this call, as
background images on the default skin of es.wikipedia. They look correct in
the CSS:

http://bits.wikimedia.org/es.wikipedia.org/load.php?debug=false&lang=es&modules=ext.gadget.a-commons-directo%2Cimagenesinfobox%2CrefToolbar%7Cext.uls.nojs%7Cext.visualEditor.viewPageTarget.noscript%7Cext.wikihiero%7Cmediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.skinning.interface%7Cmediawiki.ui.button%7Cskins.vector.styles&only=styles&skin=vector&*


The likely explanation could be that there is some syntax issue (that only
appears on Windows?) in the data image path we specify in the CSS. Some
less forgiving browsers then do not interpret the image in the CSS as a
"data" URL but rather as a regular one, so the URL is composed as if it
were a relative URL (using the current domain) and fetched (or the fetch
is attempted).


I did a quick check, and Jigsaw (the W3C's validator) does complain 
about our data URLs on that page:


http://jigsaw.w3.org/css-validator/validator?uri=https%3A%2F%2Fbits.wikimedia.org%2Fes.wikipedia.org%2Fload.php%3Fdebug%3Dfalse%26lang%3Des%26modules%3Dext.gadget.a-commons-directo%252Cimagenesinfobox%252CrefToolbar%7Cext.uls.nojs%7Cext.visualEditor.viewPageTarget.noscript%7Cext.wikihiero%7Cmediawiki.legacy.commonPrint%252Cshared%7Cmediawiki.skinning.interface%7Cmediawiki.ui.button%7Cskins.vector.styles%26only%3Dstyles%26skin%3Dvector%26*&profile=css3&usermedium=all&warning=1&vextwarning=&lang=en

-
 .filehistory a img, #file img:hover

Value Error : background 
url(data:image/png;base64,iVBORw0KGgoNSUhEUgAAABAQCAA6mKC9GElEQVQYV2N4DwX/oYBhgARgDJjEAAkAAEC99wFuu0VFAElFTkSuQmCC) 
is an incorrect URL 
url(data:image/png;base64,iVBORw0KGgoNSUhEUgAAABAQCAA6mKC9GElEQVQYV2N4DwX/oYBhgARgDJjEAAkAAEC99wFuu0VFAElFTkSuQmCC) 
repeat

-

I haven't done any background research (is this error new, do browsers
care about this, is it just the data: protocol or something else, etc.).

Matt Flaschen

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] LiquidThreads - how do we kill it?

2014-06-05 Thread Danny Horn
The Flow team is going to work in a few weeks on automatically archiving
talk pages, so that we can enable Flow on pages where there are already
existing conversations. Basically, this means moving the old discussions onto
an archive page, and leaving a link for "See archived talk page" visible on
the new Flow board.

That means there'll be a minute where a currently active discussion would
get interrupted, and have to be restarted on the new Flow board. That will
be a pain, but it would only be a one-time inconvenience during that
transition moment.

The team's goal for LiquidThreads transition is essentially the same --
turning the existing conversations into a form that we can archive, and
preserve for the future. If we tried to turn ongoing LQT discussions into
ongoing Flow discussions, we'd actually be spending more development time
on archiving the deprecated feature than we would spend on archiving wiki
talk pages.

There's a few more big features for Flow that we're working on this summer;
we'll have some new things to show off pretty soon.

Danny
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Hovercards - Request for feedback and volunteer-wikis

2014-06-05 Thread Nick Wilson (Quiddity)
The Hovercards Beta Feature [1] is inspired by the Navigation Popups 
gadget. The goal of Hovercards is to make the reading experience better 
for our readers. When readers hover over links to other articles, they 
are shown a short summary and an image. They can then decide whether they 
need to visit that article in full before continuing with the current one.


We are looking for local project wikis that are interested in turning on 
the Hovercards feature for all readers, ahead of the future Beta Feature 
graduation. Please let us know if your wiki might be interested in this.


We're also looking for additional feedback on all aspects of the 
extension.[2]


-

[1] https://www.mediawiki.org/wiki/Beta_Features/Hovercards
[2] https://www.mediawiki.org/wiki/Talk:Beta_Features/Hovercards

Sidenotes:
* The recent flickering bug (primarily in Firefox) was fixed today.
* There is a Gerrit patch examining the conflict between Navigation 
Popups and Hovercards. https://gerrit.wikimedia.org/r/#/c/120188/
* We also plan to suggest some potential updates to the styles in the 
original Navpopups gadget, so that it has better usability and is 
visually consistent with the Hovercards and Reference-tooltip styles.


--

Quiddity
Community Liaison

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LiquidThreads - how do we kill it?

2014-06-05 Thread Chad
On Thu, Jun 5, 2014 at 2:38 PM, Jon Robson  wrote:

> So from what I can see Flow pretty much does everything LiquidThreads
> does. Usually better (permalinks with LiquidThreads are one thing that
> completely bugs me - they don't always take me to the correct place)
>
> As I understand it there is a migration script that turns LiquidThread
> pages to Flow boards.
>
> It seems silly maintaining 2 pieces of software that do the same thing
> on Wikimedia clusters.
>
> So I want to know:
> * What are the blockers for doing this?
> * Are there any use cases / killer features in LiquidThreads that are
> not in Flow that need to be ported over?
>
>
I did my part:

https://www.mediawiki.org/w/index.php?title=User_talk:%5Edemon&diff=prev&oldid=1023166

-Chad
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LiquidThreads - how do we kill it?

2014-06-05 Thread Federico Leva (Nemo)

+1 to converting all talk pages past and future to standard wikitext.

Jon, that happens "only" when someone else has replied to the thread in 
the meantime: get faster. ;) 
https://bugzilla.wikimedia.org/show_bug.cgi?id=34247#c2


Nemo

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LiquidThreads - how do we kill it?

2014-06-05 Thread Jon Robson
It's usually in e-mail notifications,
e.g. the one I got today, which takes me to the wrong thread (it throws me
at the top of the page):
<<<
Hi Jdlrobson,

this is a notification from MediaWiki that a new thread on Extension
talk:MobileFrontend, 'Not able to save changes to existing pages in
MobileFront end',
was created on 5 June 2014 at 17:13 by Deepak343

You can see it at


The text is:
Not able to save changes to existing pages in MobileFront end by an
anonymous user. When he saves it says "Error - edit not saved". I have
set the directive of anonymous editing to YES in mobilefrontend
settings.
<<<

On Thu, Jun 5, 2014 at 2:13 PM, Helder .  wrote:
> On Thu, Jun 5, 2014 at 5:38 PM, Jon Robson  wrote:
>> permalinks with LiquidThreads are one thing that
>> completely bugs me - they don't always take me to the correct place
> They work fine for me. Do you have any specific examples where it fails?
>
> Helder
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Jon Robson
* http://jonrobson.me.uk
* https://www.facebook.com/jonrobson
* @rakugojon

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Requests with data URIs appended to proper URLs

2014-06-05 Thread Nuria Ruiz
I can see those images in the CSS file that results from this call, as
background images on the default skin of es.wikipedia. They look correct in
the CSS:

http://bits.wikimedia.org/es.wikipedia.org/load.php?debug=false&lang=es&modules=ext.gadget.a-commons-directo%2Cimagenesinfobox%2CrefToolbar%7Cext.uls.nojs%7Cext.visualEditor.viewPageTarget.noscript%7Cext.wikihiero%7Cmediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.skinning.interface%7Cmediawiki.ui.button%7Cskins.vector.styles&only=styles&skin=vector&*


The likely explanation could be that there is some syntax issue (that only
appears on Windows?) in the data image path we specify in the CSS. Some
less forgiving browsers then do not interpret the image in the CSS as a
"data" URL but rather as a regular one, so the URL is composed as if it
were a relative URL (using the current domain) and fetched (or the fetch
is attempted).

That is, some browsers on Windows are interpreting data URLs as if they
were like this:
background-image: url('/some/relative/path')
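
To make the hypothesis concrete, here is a small Python sketch (an
illustration of the hypothesized browser bug, not of any actual browser
code): a conforming URL resolver recognizes the data: scheme, while one
that failed to would append the whole value to the page URL, which is
exactly the request shape Christian reported.

    from urllib.parse import urljoin

    base = 'http://es.wikipedia.org/wiki/Some_article'  # hypothetical page
    ref = 'data:image/png;base64,iVBORw0K...'

    # A conforming resolver sees the data: scheme and returns the
    # reference unchanged -- no HTTP request is made:
    print(urljoin(base, ref))  # data:image/png;base64,iVBORw0K...

    # A resolver that failed to recognize the scheme would treat the
    # value as a relative path and resolve it against the page URL:
    broken = base.rsplit('/', 1)[0] + '/' + ref
    print(broken)
    # http://es.wikipedia.org/wiki/data:image/png;base64,iVBORw0K...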

In any case that fetch should return a 404, so the thing we should probably
fix going forward is not counting 404 URLs in pageviews.

On Thu, Jun 5, 2014 at 6:12 PM, Bartosz Dziewoński 
wrote:

> On Thu, 05 Jun 2014 15:36:07 +0200, Christian Aistleitner <
> christ...@quelltextlich.at> wrote:
>
>> The image data in the data uri scheme decodes to images from
>> VectorBeta [3] like:
>>   VectorBeta/resources/typography/images/search-fade.png
>>   VectorBeta/resources/typography/images/tab-break.png
>>   VectorBeta/resources/typography/images/tab-current-fade.png
>>   VectorBeta/resources/typography/images/portal-break.png
>>
>
> These images are also part of the core Vector skin, where
> they sit at [mediawiki/core]/skins/vector/images.
>
> --
> Matma Rex
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LiquidThreads - how do we kill it?

2014-06-05 Thread Helder .
On Thu, Jun 5, 2014 at 5:38 PM, Jon Robson  wrote:
> permalinks with LiquidThreads are one thing that
> completely bugs me - they don't always take me to the correct place
They work fine for me. Do you have any specific examples where it fails?

Helder

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LiquidThreads - how do we kill it?

2014-06-05 Thread David Gerard
On 5 June 2014 21:38, Jon Robson  wrote:

> So from what I can see Flow pretty much does everything LiquidThreads
> does. Usually better (permalinks with LiquidThreads are one thing that
> completely bugs me - they don't always take me to the correct place)
> As I understand it there is a migration script that turns LiquidThread
> pages to Flow boards.


Speaking as admin of a wiki stuck with LQT: OH YES PLEASE.

(Assuming Flow won't end up all-but-abandoned as well.)


- d.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] LiquidThreads - how do we kill it?

2014-06-05 Thread Bartosz Dziewoński

On Thu, 05 Jun 2014 22:38:54 +0200, Jon Robson  wrote:


So I want to know:
* What are the blockers for doing this?
* Are there any use cases / killer features in LiquidThreads that are
not in Flow that need to be ported over?


One thing that immediately springs to mind is being able to enable it on a page 
without modifying cluster configuration. There may be others.

--
Matma Rex

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Hardening WP/WM against traffic analysis (take two)

2014-06-05 Thread David Gerard
On 5 June 2014 17:45, Zack Weinberg  wrote:

> I'd like to restart the conversation about hardening Wikipedia (or
> possibly Wikimedia in general) against traffic analysis.


Or, indeed, MediaWiki tarball version itself.



- d.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Hardening WP/WM against traffic analysis (take two)

2014-06-05 Thread Gabriel Wicke
On 06/05/2014 11:53 AM, Nick White wrote:
> As was mentioned, external resources like variously sized images 
> would probably be the trickiest thing to figure out good ways 
> around. IIRC SPDY has some inlining multiple resources in the same 
> packet sort of stuff, which we might be able to take advantage of to 
> help here (it's been ages since I read about it, though).


When using SPDY, browsers multiplex and interleave all requests over a single
TCP connection, while with plain HTTPS they typically open around six parallel
connections. HTTP pipelining also tends to be disabled in desktop
browsers, which makes it relatively easy to figure out the size of
individual requests & thus potentially the page viewed or edited.

With Apple finally adding SPDY support in the latest Safari release (after
IE ;) support should soon grow beyond the 67% of global requests claimed
currently [1], which is good for performance, security and architectural
simplicity.

Ops has a draft goal of experimental SPDY support in Q2, so it seems that
it's going to happen soon. Also see bug 33890 [2].

Gabriel

[1]: http://caniuse.com/spdy
[2]: https://bugzilla.wikimedia.org/show_bug.cgi?id=33890

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] LiquidThreads - how do we kill it?

2014-06-05 Thread Jon Robson
So from what I can see, Flow pretty much does everything LiquidThreads
does. Usually better (permalinks with LiquidThreads are one thing that
completely bugs me - they don't always take me to the correct place).

As I understand it, there is a migration script that turns LiquidThreads
pages into Flow boards.

It seems silly to maintain two pieces of software that do the same thing
on the Wikimedia clusters.

So I want to know:
* What are the blockers for doing this?
* Are there any use cases / killer features in LiquidThreads that are
not in Flow that need to be ported over?

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Hardening WP/WM against traffic analysis (take two)

2014-06-05 Thread C. Scott Ananian
Introducing my own working theory here; ignore if you wish.

I'd think that the *first* thing that would have to happen is that the page
and the images it contains would have to be delivered in one stream.  There
are both HTML5 (resource bundling) and protocol (SPDY) mechanisms for doing
this.  Some URL rearrangement to consolidate hosts might be required as
well. But in the interest of being forward-looking I'd suggest taking some
implementation of resource bundling as a given.  Similarly, I wouldn't
waste time profiling both compressed and uncompressed data; assume the
bundle is compressed.

That leaves a simpler question: if the contents of a page (including
images) are delivered as a compressed bundle over a single host connection,
how easy is it to tell what page I'm looking at from the size of the
response?  How much padding would need to be added to bundles to foil the
analysis?
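
As a rough way to put numbers on that last question, a sketch like the
following could be run over a set of measured bundle sizes (the
power-of-two bucketing rule is just one obvious choice picked for
illustration, not a recommendation):

    from collections import Counter

    def bucket(size):
        """Pad up to the next power of two (one possible padding rule)."""
        b = 1
        while b < size:
            b *= 2
        return b

    def padding_stats(sizes):
        # Pages that land in the same bucket are indistinguishable by
        # response size alone, so bucket populations are anonymity sets.
        buckets = Counter(bucket(s) for s in sizes)
        overhead = sum(bucket(s) - s for s in sizes) / sum(sizes)
        return buckets, overhead, min(buckets.values())

    sizes = [18_304, 22_901, 70_215, 131_072]  # made-up example sizes
    buckets, overhead, smallest = padding_stats(sizes)
    print('overhead: %.1f%%, smallest anonymity set: %d'
          % (100 * overhead, smallest))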

Some extensions:
1) Assume that it's not just a single page at a time, but a sequence of N
linked pages.  How does the required padding depend on N?
2) Can correlations be made? For example, once I've visited page A which
contained image X, visiting page B which also has image X will not require
X to be re-downloaded.  So can I distinguish certain pairs of "long
request/short request"?
3) What if the attacker is not interested in specific pages, but instead in
enumerating users looking at a *specific* page (or set of pages).  That is,
instead of looking at averages across all pages, look at the "unusual"
outliers.
4) Active targeted attack: assume that the attacker can modify pages or
images referenced by pages.  This makes it easier to tell when specific
pages are accessed.   How can this be prevented/discouraged?
5) As you mentioned, disclosing user identity from read-only traffic
patterns is also a concern.

I'm afraid the answers will be that traffic analysis is still very easy.
 But it would still be useful to know *how* easy.
 --scott
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Amanpreet Singh
Thanks a lot, Chris, for helping to solve this.


On Thu, Jun 5, 2014 at 11:37 PM, Amanpreet Singh <
amanpreet.iitr2...@gmail.com> wrote:

> Thanks Chris for the help,
> I have tried your script and I think this part atleast is fine, I have
> pinged you with output of the script.
>
>
> On Thu, Jun 5, 2014 at 11:05 PM, Chris Steipp 
> wrote:
>
>> On Thu, Jun 5, 2014 at 9:23 AM, Amanpreet Singh <
>> amanpreet.iitr2...@gmail.com> wrote:
>>
>> > Dear Chris
>> > I tried this but still no result it gives same error NULL,
>> > I also copied your entire demo and pasted it to test but it also
>> returned
>> > same result, maybe its something related to canonicalServerUrl,
>> > I also tried Magnus one it gives 'Error retrieving token:
>> > mwoauth-oauth-exception'.
>> >
>>
>> One of the best ways to debug this is to start using a local instance, so
>> you can look at the server's debug log. Then you can see what error the
>> server is giving. MediaWiki vagrant has an oauth role, so you can just
>> enable role and start using it.
>>
>> canonicalServerUrl is only used for the identity check-- it sounds like
>> you're not even to that point yet, so that shouldn't be an issue (if it
>> was, the client library would say you have an invalid JWT).
>>
>> Feel free to ping me on IRC (csteipp) and I can try to walk you through.
>> You may want to try this script here:
>>
>> https://www.mediawiki.org/wiki/User:CSteipp/OAuth_debug_client
>>
>> That should at least prove it's not a connectivity / curl issue.
>>
>>
>> >
>> >
>> > On Thu, Jun 5, 2014 at 9:14 PM, Chris Steipp 
>> > wrote:
>> >
>> > > On Thursday, June 5, 2014, Amanpreet Singh <
>> amanpreet.iitr2...@gmail.com
>> > >
>> > > wrote:
>> > >
>> > > > Thanks for quick reply,
>> > > > I am just getting NULL after making an OAuth call and that callback
>> > > wasn't
>> > > > confirmed, I hope I am making call to correct url which is
>> > > >
>> > > >
>> > >
>> >
>> https://mediawiki.org/wiki/index.php?title=Special:OAuth/initiate&format=json&oauth_callback=oob
>> > > > What I should get back is a token with key and a secret.
>> > > >
>> > >
>> > > Try using www.mediawiki.org-- otherwise the redirect will happen the
>> > > signature won't verify.
>> > >
>> > >
>> > > >
>> > > >
>> > > > On Thu, Jun 5, 2014 at 8:27 PM, Magnus Manske <
>> > > magnusman...@googlemail.com
>> > > > >
>> > > > wrote:
>> > > >
>> > > > > If all you want is some quick code infusion, I can offer my PHP
>> > class:
>> > > > >
>> > > > >
>> > > > >
>> > > >
>> > >
>> >
>> https://bitbucket.org/magnusmanske/magnustools/src/ecb01ddc26c8129737d260d0491ccb410c4c62a3/public_html/php/oauth.php?at=master
>> > > > >
>> > > > > You'd have to patch the "loadIniFile" method to point at your ini
>> > file,
>> > > > but
>> > > > > the rest should work as is. High-level method are towards the end,
>> > > > usually
>> > > > > self-explaining like "setLabel".
>> > > > >
>> > > > > Cheers,
>> > > > > Magnus
>> > > > >
>> > > > >
>> > > > > On Thu, Jun 5, 2014 at 3:51 PM, Andre Klapper <
>> > aklap...@wikimedia.org
>> > > > >
>> > > > > wrote:
>> > > > >
>> > > > > > Hi,
>> > > > > >
>> > > > > > On Thu, 2014-06-05 at 20:13 +0530, Amanpreet Singh wrote:
>> > > > > > > I need some help regarding my GSoC project in which I need to
>> > > > implement
>> > > > > > an
>> > > > > > > OAuth login system for a browser based plugin, so we can
>> identify
>> > > > > users.
>> > > > > > > But I am stuck and not able to get anything here >
>> > > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
>> > > > > > > . Kindly help me, and tell me if further information is
>> needed.
>> > > > > >
>> > > > > > What is the problem you're facing, what have you tried already,
>> > > what's
>> > > > > > the output you get, etc.?
>> > > > > >
>> > > > > > andre
>> > > > > > --
>> > > > > > Andre Klapper | Wikimedia Bugwrangler
>> > > > > > http://blogs.gnome.org/aklapper/
>> > > > > >
>> > > > > >
>> > > > > > ___
>> > > > > > Wikitech-l mailing list
>> > > > > > Wikitech-l@lists.wikimedia.org 
>> > > > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>> > > > > >
>> > > > >
>> > > > >
>> > > > >
>> > > > > --
>> > > > > undefined
>> > > > > ___
>> > > > > Wikitech-l mailing list
>> > > > > Wikitech-l@lists.wikimedia.org 
>> > > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Amanpreet Singh,
>> > > > IIT Roorkee
>> > > > ___
>> > > > Wikitech-l mailing list
>> > > > Wikitech-l@lists.wikimedia.org 
>> > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>> > > ___
>> > > Wikitech-l mailing list
>> > > Wikitech-l@lists.wikimedia.org
>> > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Hardening WP/WM against traffic analysis (take two)

2014-06-05 Thread Nick White
Hi Zack,

On Thu, Jun 05, 2014 at 12:45:11PM -0400, Zack Weinberg wrote:
> I'd like to restart the conversation about hardening Wikipedia (or
> possibly Wikimedia in general) against traffic analysis.  I brought
> this up ... last November, I think, give or take a month?  but it got
> lost in a larger discussion about HTTPS.

This sounds like a great idea to me, thanks for thinking about it 
and sharing it. Privacy of people's reading habits is critical, and 
the more we can do to ensure it the better.

> With that data in hand, the next phase would be to develop some sort
> of algorithm for automatically padding HTTP responses to maximize
> eavesdropper confusion while minimizing overhead.  I don't yet know
> exactly how this would work.  I imagine that it would be based on
> clustering the database into sets of pages with similar length but
> radically different contents.  The output of this would be some
> combination of changes to MediaWiki core (for instance, to ensure that
> the overall length of the HTTP response headers does not change when
> one logs in) and an extension module that actually performs the bulk
> of the padding.  I am not at all a PHP developer, so I would need help
> from someone who is with this part.

I'm not a big PHP developer, but given the right project I can be 
enticed into doing some, and I'd be very happy to help out with 
this. Ensuring any changes didn't add complexity would be very 
important, but that should be do-able.

As was mentioned, external resources like variously sized images 
would probably be the trickiest thing to figure out good ways 
around. IIRC SPDY has some support for inlining multiple resources in 
the same packet, which we might be able to take advantage of to help 
here (it's been ages since I read about it, though).

Nick

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Amanpreet Singh
Thanks Chris for the help.
I have tried your script and I think this part at least is fine; I have
pinged you with the output of the script.


On Thu, Jun 5, 2014 at 11:05 PM, Chris Steipp  wrote:

> On Thu, Jun 5, 2014 at 9:23 AM, Amanpreet Singh <
> amanpreet.iitr2...@gmail.com> wrote:
>
> > Dear Chris
> > I tried this but still no result it gives same error NULL,
> > I also copied your entire demo and pasted it to test but it also returned
> > same result, maybe its something related to canonicalServerUrl,
> > I also tried Magnus one it gives 'Error retrieving token:
> > mwoauth-oauth-exception'.
> >
>
> One of the best ways to debug this is to start using a local instance, so
> you can look at the server's debug log. Then you can see what error the
> server is giving. MediaWiki vagrant has an oauth role, so you can just
> enable role and start using it.
>
> canonicalServerUrl is only used for the identity check-- it sounds like
> you're not even to that point yet, so that shouldn't be an issue (if it
> was, the client library would say you have an invalid JWT).
>
> Feel free to ping me on IRC (csteipp) and I can try to walk you through.
> You may want to try this script here:
>
> https://www.mediawiki.org/wiki/User:CSteipp/OAuth_debug_client
>
> That should at least prove it's not a connectivity / curl issue.
>
>
> >
> >
> > On Thu, Jun 5, 2014 at 9:14 PM, Chris Steipp 
> > wrote:
> >
> > > On Thursday, June 5, 2014, Amanpreet Singh <
> amanpreet.iitr2...@gmail.com
> > >
> > > wrote:
> > >
> > > > Thanks for quick reply,
> > > > I am just getting NULL after making an OAuth call and that callback
> > > wasn't
> > > > confirmed, I hope I am making call to correct url which is
> > > >
> > > >
> > >
> >
> https://mediawiki.org/wiki/index.php?title=Special:OAuth/initiate&format=json&oauth_callback=oob
> > > > What I should get back is a token with key and a secret.
> > > >
> > >
> > > Try using www.mediawiki.org-- otherwise the redirect will happen the
> > > signature won't verify.
> > >
> > >
> > > >
> > > >
> > > > On Thu, Jun 5, 2014 at 8:27 PM, Magnus Manske <
> > > magnusman...@googlemail.com
> > > > >
> > > > wrote:
> > > >
> > > > > If all you want is some quick code infusion, I can offer my PHP
> > class:
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
> https://bitbucket.org/magnusmanske/magnustools/src/ecb01ddc26c8129737d260d0491ccb410c4c62a3/public_html/php/oauth.php?at=master
> > > > >
> > > > > You'd have to patch the "loadIniFile" method to point at your ini
> > file,
> > > > but
> > > > > the rest should work as is. High-level method are towards the end,
> > > > usually
> > > > > self-explaining like "setLabel".
> > > > >
> > > > > Cheers,
> > > > > Magnus
> > > > >
> > > > >
> > > > > On Thu, Jun 5, 2014 at 3:51 PM, Andre Klapper <
> > aklap...@wikimedia.org
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > On Thu, 2014-06-05 at 20:13 +0530, Amanpreet Singh wrote:
> > > > > > > I need some help regarding my GSoC project in which I need to
> > > > implement
> > > > > > an
> > > > > > > OAuth login system for a browser based plugin, so we can
> identify
> > > > > users.
> > > > > > > But I am stuck and not able to get anything here >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
> > > > > > > . Kindly help me, and tell me if further information is needed.
> > > > > >
> > > > > > What is the problem you're facing, what have you tried already,
> > > what's
> > > > > > the output you get, etc.?
> > > > > >
> > > > > > andre
> > > > > > --
> > > > > > Andre Klapper | Wikimedia Bugwrangler
> > > > > > http://blogs.gnome.org/aklapper/
> > > > > >
> > > > > >
> > > > > > ___
> > > > > > Wikitech-l mailing list
> > > > > > Wikitech-l@lists.wikimedia.org 
> > > > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > undefined
> > > > > ___
> > > > > Wikitech-l mailing list
> > > > > Wikitech-l@lists.wikimedia.org 
> > > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Amanpreet Singh,
> > > > IIT Roorkee
> > > > ___
> > > > Wikitech-l mailing list
> > > > Wikitech-l@lists.wikimedia.org 
> > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > > ___
> > > Wikitech-l mailing list
> > > Wikitech-l@lists.wikimedia.org
> > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > >
> >
> >
> >
> > --
> > Amanpreet Singh,
> > IIT Roorkee
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Chris Steipp
On Thu, Jun 5, 2014 at 9:23 AM, Amanpreet Singh <
amanpreet.iitr2...@gmail.com> wrote:

> Dear Chris
> I tried this but still no result it gives same error NULL,
> I also copied your entire demo and pasted it to test but it also returned
> same result, maybe its something related to canonicalServerUrl,
> I also tried Magnus one it gives 'Error retrieving token:
> mwoauth-oauth-exception'.
>

One of the best ways to debug this is to start using a local instance, so
you can look at the server's debug log. Then you can see what error the
server is giving. MediaWiki-Vagrant has an oauth role, so you can just
enable the role and start using it.

canonicalServerUrl is only used for the identity check-- it sounds like
you're not even to that point yet, so that shouldn't be an issue (if it
was, the client library would say you have an invalid JWT).

Feel free to ping me on IRC (csteipp) and I can try to walk you through.
You may want to try this script here:

https://www.mediawiki.org/wiki/User:CSteipp/OAuth_debug_client

That should at least prove it's not a connectivity / curl issue.


>
>
> On Thu, Jun 5, 2014 at 9:14 PM, Chris Steipp 
> wrote:
>
> > On Thursday, June 5, 2014, Amanpreet Singh  >
> > wrote:
> >
> > > Thanks for quick reply,
> > > I am just getting NULL after making an OAuth call and that callback
> > wasn't
> > > confirmed, I hope I am making call to correct url which is
> > >
> > >
> >
> https://mediawiki.org/wiki/index.php?title=Special:OAuth/initiate&format=json&oauth_callback=oob
> > > What I should get back is a token with key and a secret.
> > >
> >
> > Try using www.mediawiki.org-- otherwise the redirect will happen the
> > signature won't verify.
> >
> >
> > >
> > >
> > > On Thu, Jun 5, 2014 at 8:27 PM, Magnus Manske <
> > magnusman...@googlemail.com
> > > >
> > > wrote:
> > >
> > > > If all you want is some quick code infusion, I can offer my PHP
> class:
> > > >
> > > >
> > > >
> > >
> >
> https://bitbucket.org/magnusmanske/magnustools/src/ecb01ddc26c8129737d260d0491ccb410c4c62a3/public_html/php/oauth.php?at=master
> > > >
> > > > You'd have to patch the "loadIniFile" method to point at your ini
> file,
> > > but
> > > > the rest should work as is. High-level method are towards the end,
> > > usually
> > > > self-explaining like "setLabel".
> > > >
> > > > Cheers,
> > > > Magnus
> > > >
> > > >
> > > > On Thu, Jun 5, 2014 at 3:51 PM, Andre Klapper <
> aklap...@wikimedia.org
> > > >
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > On Thu, 2014-06-05 at 20:13 +0530, Amanpreet Singh wrote:
> > > > > > I need some help regarding my GSoC project in which I need to
> > > implement
> > > > > an
> > > > > > OAuth login system for a browser based plugin, so we can identify
> > > > users.
> > > > > > But I am stuck and not able to get anything here >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
> > > > > > . Kindly help me, and tell me if further information is needed.
> > > > >
> > > > > What is the problem you're facing, what have you tried already,
> > what's
> > > > > the output you get, etc.?
> > > > >
> > > > > andre
> > > > > --
> > > > > Andre Klapper | Wikimedia Bugwrangler
> > > > > http://blogs.gnome.org/aklapper/
> > > > >
> > > > >
> > > > > ___
> > > > > Wikitech-l mailing list
> > > > > Wikitech-l@lists.wikimedia.org 
> > > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > undefined
> > > > ___
> > > > Wikitech-l mailing list
> > > > Wikitech-l@lists.wikimedia.org 
> > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > > >
> > >
> > >
> > >
> > > --
> > > Amanpreet Singh,
> > > IIT Roorkee
> > > ___
> > > Wikitech-l mailing list
> > > Wikitech-l@lists.wikimedia.org 
> > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
>
>
>
> --
> Amanpreet Singh,
> IIT Roorkee
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Hardening WP/WM against traffic analysis (take two)

2014-06-05 Thread Chris Steipp
On Thu, Jun 5, 2014 at 9:45 AM, Zack Weinberg  wrote:

> I'd like to restart the conversation about hardening Wikipedia (or
> possibly Wikimedia in general) against traffic analysis.  I brought
> this up ... last November, I think, give or take a month?  but it got
> lost in a larger discussion about HTTPS.
>

Thanks Zack, I think this is research that needs to happen, but the WMF
doesn't have the resources to do it itself right now. I'm very interested in
seeing the results you come up with.


>
> For background, the type of attack that it would be nice to be able to
> prevent is described in this paper:
>
> http://sysseclab.informatics.indiana.edu/projects/sidebuster/sidebuster-final.pdf
>  Someone is eavesdropping on an encrypted connection to
> LANG.wikipedia.org.  (It's not possible to prevent the attacker from
> learning the DNS name and therefore the language the target reads,
> short of Tor or similar.  It's also not possible to prevent them from
> noticing accesses to ancillary servers, e.g. Commons for media.)  The
> attacker's goal is to figure out things like
>
> * what page is the target reading?
> * what _sequence of pages_ is the target reading?  (This is actually
> easier, assuming the attacker knows the internal link graph.)
> * is the target a logged-in user, and if so, which user?
> * did the target just edit a page, and if so, which page?
> * (... y'all are probably better at thinking up these hypotheticals than
> me ...)
>

Anything in the logs-- Account creation is probably an easy target.


>
> Wikipedia is different from a tax-preparation website (the case study
> in the above paper) in that all of the content is public, and edit
> actions are also public.  The attacker can therefore correlate their
> eavesdropping data with observations of Special:RecentChanges and the
> like.  This may mean it is impossible to prevent the attacker from
> detecting edits.  I think it's worth the experiment, though.
>
> What I would like to do, in the short term, is perform a large-scale
> crawl of one or more of the encyclopedias and measure what the above
> eavesdropper would observe.  I would do this over regular HTTPS, from
> a documented IP address, both as a logged-in user and an anonymous
> user.  This would capture only the reading experience; I would also
> like to work with prolific editors to take measurements of the traffic
> patterns generated by that activity.  (Bot edits go via the API, as I
> understand it, and so are not reflective of "naturalistic" editing by
> human users.)
>

Make sure to respect typical bot rate limits. Anonymous crawling should be
fine, although logged-in crawling could cause issues. But if you're doing
this from a single machine, I don't think there's too much harm you can do.
Thanks for warning us in advance!
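
For the crawling phase, something along these lines would respect those
limits while recording exactly what a size-measuring eavesdropper sees (a
minimal sketch; the User-Agent string and the one-second delay are my own
placeholder choices):

    import time
    import requests

    HEADERS = {'User-Agent': 'traffic-analysis-study/0.1 (research crawl)'}

    def crawl(urls, delay=1.0):
        """Fetch pages sequentially, at most one request per `delay` seconds."""
        for url in urls:
            resp = requests.get(url, headers=HEADERS)
            # The response size is the signal available to the eavesdropper.
            yield url, len(resp.content)
            time.sleep(delay)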

Also, mobile looks very different from desktop. May be worth analyzing it
as well.


>
> With that data in hand, the next phase would be to develop some sort
> of algorithm for automatically padding HTTP responses to maximize
> eavesdropper confusion while minimizing overhead.  I don't yet know
> exactly how this would work.  I imagine that it would be based on
> clustering the database into sets of pages with similar length but
> radically different contents.  The output of this would be some
> combination of changes to MediaWiki core (for instance, to ensure that
> the overall length of the HTTP response headers does not change when
> one logs in) and an extension module that actually performs the bulk
> of the padding.  I am not at all a PHP developer, so I would need help
> from someone who is with this part.
>

Padding the page in OutputPage would be a pretty simple extension,
although ensuring the page size after the web server gzips it is a
specific size would be more difficult to do efficiently. However, iirc the
most obvious fingerprinting technique was just looking at the number and
sizes of images loaded from Commons. Making sure those are consistent sizes
is likely going to be hard.
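
The gzip point more or less forces a compress-measure-pad loop. A minimal
sketch of the idea (the 64-byte chunk size is arbitrary, and this only
demonstrates the measurement loop; real padding would have to live
somewhere harmless, such as a response header or an HTML comment):

    import gzip
    import os

    def pad_compressed(body, bucket):
        """Grow the payload until its gzipped size reaches `bucket` bytes."""
        padded = body
        while len(gzip.compress(padded)) < bucket:
            # Random bytes are (nearly) incompressible, so each chunk
            # reliably moves the compressed size toward the target.
            padded += os.urandom(64)
        return padded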


>
> What do you think?  I know some of this is vague and handwavey but I
> hope it is at least a place to start a discussion.
>

One more thing to take into account is that the WMF is likely going to
switch to SPDY, which will completely change the characteristics of the
traffic. So developing a solid process that you can repeat next year would
be time well spent.


>
> zw
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Hardening WP/WM against traffic analysis (take two)

2014-06-05 Thread Zack Weinberg
I'd like to restart the conversation about hardening Wikipedia (or
possibly Wikimedia in general) against traffic analysis.  I brought
this up ... last November, I think, give or take a month?  but it got
lost in a larger discussion about HTTPS.

For background, the type of attack that it would be nice to be able to
prevent is described in this paper:
http://sysseclab.informatics.indiana.edu/projects/sidebuster/sidebuster-final.pdf
 Someone is eavesdropping on an encrypted connection to
LANG.wikipedia.org.  (It's not possible to prevent the attacker from
learning the DNS name and therefore the language the target reads,
short of Tor or similar.  It's also not possible to prevent them from
noticing accesses to ancillary servers, e.g. Commons for media.)  The
attacker's goal is to figure out things like

* what page is the target reading?
* what _sequence of pages_ is the target reading?  (This is actually
easier, assuming the attacker knows the internal link graph.)
* is the target a logged-in user, and if so, which user?
* did the target just edit a page, and if so, which page?
* (... y'all are probably better at thinking up these hypotheticals than me ...)

Wikipedia is different from a tax-preparation website (the case study
in the above paper) in that all of the content is public, and edit
actions are also public.  The attacker can therefore correlate their
eavesdropping data with observations of Special:RecentChanges and the
like.  This may mean it is impossible to prevent the attacker from
detecting edits.  I think it's worth the experiment, though.

What I would like to do, in the short term, is perform a large-scale
crawl of one or more of the encyclopedias and measure what the above
eavesdropper would observe.  I would do this over regular HTTPS, from
a documented IP address, both as a logged-in user and an anonymous
user.  This would capture only the reading experience; I would also
like to work with prolific editors to take measurements of the traffic
patterns generated by that activity.  (Bot edits go via the API, as I
understand it, and so are not reflective of "naturalistic" editing by
human users.)

With that data in hand, the next phase would be to develop some sort
of algorithm for automatically padding HTTP responses to maximize
eavesdropper confusion while minimizing overhead.  I don't yet know
exactly how this would work.  I imagine that it would be based on
clustering the database into sets of pages with similar length but
radically different contents.  The output of this would be some
combination of changes to MediaWiki core (for instance, to ensure that
the overall length of the HTTP response headers does not change when
one logs in) and an extension module that actually performs the bulk
of the padding.  I am not at all a PHP developer, so I would need help
from someone who is with this part.
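
For the clustering step, a very crude baseline might look like the sketch
below: sort pages by response size, group them into fixed-size anonymity
sets, and pad every member to its set's maximum (checking that contents
within a set are actually dissimilar is left out here):

    def cluster_by_size(pages, set_size=16):
        """pages: iterable of (title, size_in_bytes) pairs.

        Every page in a cluster would be padded to the cluster's maximum
        size, so an eavesdropper seeing only padded sizes cannot tell
        cluster members apart.
        """
        ordered = sorted(pages, key=lambda p: p[1])
        clusters = [ordered[i:i + set_size]
                    for i in range(0, len(ordered), set_size)]
        return [(max(size for _, size in c), c) for c in clusters]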

What do you think?  I know some of this is vague and handwavey but I
hope it is at least a place to start a discussion.

zw

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Amanpreet Singh
Dear Chris
I tried this but still no result; it gives the same error, NULL.
I also copied your entire demo and pasted it to test, but it also returned
the same result. Maybe it's something related to canonicalServerUrl.
I also tried Magnus's one; it gives 'Error retrieving token:
mwoauth-oauth-exception'.


On Thu, Jun 5, 2014 at 9:14 PM, Chris Steipp  wrote:

> On Thursday, June 5, 2014, Amanpreet Singh 
> wrote:
>
> > Thanks for quick reply,
> > I am just getting NULL after making an OAuth call and that callback
> wasn't
> > confirmed, I hope I am making call to correct url which is
> >
> >
> https://mediawiki.org/wiki/index.php?title=Special:OAuth/initiate&format=json&oauth_callback=oob
> > What I should get back is a token with key and a secret.
> >
>
> Try using www.mediawiki.org-- otherwise the redirect will happen the
> signature won't verify.
>
>
> >
> >
> > On Thu, Jun 5, 2014 at 8:27 PM, Magnus Manske <
> magnusman...@googlemail.com
> > >
> > wrote:
> >
> > > If all you want is some quick code infusion, I can offer my PHP class:
> > >
> > >
> > >
> >
> https://bitbucket.org/magnusmanske/magnustools/src/ecb01ddc26c8129737d260d0491ccb410c4c62a3/public_html/php/oauth.php?at=master
> > >
> > > You'd have to patch the "loadIniFile" method to point at your ini file,
> > but
> > > the rest should work as is. High-level method are towards the end,
> > usually
> > > self-explaining like "setLabel".
> > >
> > > Cheers,
> > > Magnus
> > >
> > >
> > > On Thu, Jun 5, 2014 at 3:51 PM, Andre Klapper  > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > On Thu, 2014-06-05 at 20:13 +0530, Amanpreet Singh wrote:
> > > > > I need some help regarding my GSoC project in which I need to
> > implement
> > > > an
> > > > > OAuth login system for a browser based plugin, so we can identify
> > > users.
> > > > > But I am stuck and not able to get anything here >
> > > > >
> > > >
> > >
> >
> https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
> > > > > . Kindly help me, and tell me if further information is needed.
> > > >
> > > > What is the problem you're facing, what have you tried already,
> what's
> > > > the output you get, etc.?
> > > >
> > > > andre
> > > > --
> > > > Andre Klapper | Wikimedia Bugwrangler
> > > > http://blogs.gnome.org/aklapper/
> > > >
> > > >
> > > > ___
> > > > Wikitech-l mailing list
> > > > Wikitech-l@lists.wikimedia.org 
> > > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > > >
> > >
> > >
> > >
> > > --
> > > undefined
> > > ___
> > > Wikitech-l mailing list
> > > Wikitech-l@lists.wikimedia.org 
> > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > >
> >
> >
> >
> > --
> > Amanpreet Singh,
> > IIT Roorkee
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org 
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>



-- 
Amanpreet Singh,
IIT Roorkee
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Requests with data URIs appended to proper URLs

2014-06-05 Thread Bartosz Dziewoński

On Thu, 05 Jun 2014 15:36:07 +0200, Christian Aistleitner 
 wrote:


The image data in the data uri scheme decodes to images from
VectorBeta [3] like:
  VectorBeta/resources/typography/images/search-fade.png
  VectorBeta/resources/typography/images/tab-break.png
  VectorBeta/resources/typography/images/tab-current-fade.png
  VectorBeta/resources/typography/images/portal-break.png


These images are also part of the core Vector skin, where
they sit at [mediawiki/core]/skins/vector/images.

--
Matma Rex

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Chris Steipp
On Thursday, June 5, 2014, Amanpreet Singh 
wrote:

> Thanks for quick reply,
> I am just getting NULL after making an OAuth call and that callback wasn't
> confirmed, I hope I am making call to correct url which is
>
> https://mediawiki.org/wiki/index.php?title=Special:OAuth/initiate&format=json&oauth_callback=oob
> What I should get back is a token with key and a secret.
>

Try using www.mediawiki.org -- otherwise the redirect will happen and the
signature won't verify.


>
>
> On Thu, Jun 5, 2014 at 8:27 PM, Magnus Manske  >
> wrote:
>
> > If all you want is some quick code infusion, I can offer my PHP class:
> >
> >
> >
> https://bitbucket.org/magnusmanske/magnustools/src/ecb01ddc26c8129737d260d0491ccb410c4c62a3/public_html/php/oauth.php?at=master
> >
> > You'd have to patch the "loadIniFile" method to point at your ini file,
> but
> > the rest should work as is. High-level method are towards the end,
> usually
> > self-explaining like "setLabel".
> >
> > Cheers,
> > Magnus
> >
> >
> > On Thu, Jun 5, 2014 at 3:51 PM, Andre Klapper  >
> > wrote:
> >
> > > Hi,
> > >
> > > On Thu, 2014-06-05 at 20:13 +0530, Amanpreet Singh wrote:
> > > > I need some help regarding my GSoC project in which I need to
> implement
> > > an
> > > > OAuth login system for a browser based plugin, so we can identify
> > users.
> > > > But I am stuck and not able to get anything here >
> > > >
> > >
> >
> https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
> > > > . Kindly help me, and tell me if further information is needed.
> > >
> > > What is the problem you're facing, what have you tried already, what's
> > > the output you get, etc.?
> > >
> > > andre
> > > --
> > > Andre Klapper | Wikimedia Bugwrangler
> > > http://blogs.gnome.org/aklapper/
> > >
> > >
> > > ___
> > > Wikitech-l mailing list
> > > Wikitech-l@lists.wikimedia.org 
> > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > >
> >
> >
> >
> > --
> > undefined
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org 
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
>
>
>
> --
> Amanpreet Singh,
> IIT Roorkee
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org 
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Amanpreet Singh
Dear Magnus,
Thanks for this, I will have a look.


On Thu, Jun 5, 2014 at 8:34 PM, Amanpreet Singh <
amanpreet.iitr2...@gmail.com> wrote:

> Thanks for quick reply,
> I am just getting NULL after making an OAuth call and that callback wasn't
> confirmed, I hope I am making call to correct url which is
> https://mediawiki.org/wiki/index.php?title=Special:OAuth/initiate&format=json&oauth_callback=oob
> What I should get back is a token with key and a secret.
>
>
> On Thu, Jun 5, 2014 at 8:27 PM, Magnus Manske  > wrote:
>
>> If all you want is some quick code infusion, I can offer my PHP class:
>>
>>
>> https://bitbucket.org/magnusmanske/magnustools/src/ecb01ddc26c8129737d260d0491ccb410c4c62a3/public_html/php/oauth.php?at=master
>>
>> You'd have to patch the "loadIniFile" method to point at your ini file,
>> but
>> the rest should work as is. High-level method are towards the end, usually
>> self-explaining like "setLabel".
>>
>> Cheers,
>> Magnus
>>
>>
>> On Thu, Jun 5, 2014 at 3:51 PM, Andre Klapper 
>> wrote:
>>
>> > Hi,
>> >
>> > On Thu, 2014-06-05 at 20:13 +0530, Amanpreet Singh wrote:
>> > > I need some help regarding my GSoC project in which I need to
>> implement
>> > an
>> > > OAuth login system for a browser based plugin, so we can identify
>> users.
>> > > But I am stuck and not able to get anything here >
>> > >
>> >
>> https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
>> > > . Kindly help me, and tell me if further information is needed.
>> >
>> > What is the problem you're facing, what have you tried already, what's
>> > the output you get, etc.?
>> >
>> > andre
>> > --
>> > Andre Klapper | Wikimedia Bugwrangler
>> > http://blogs.gnome.org/aklapper/
>> >
>> >
>> > ___
>> > Wikitech-l mailing list
>> > Wikitech-l@lists.wikimedia.org
>> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>> >
>>
>>
>>
>> --
>> undefined
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
>
>
>
> --
> Amanpreet Singh,
> IIT Roorkee
>



-- 
Amanpreet Singh,
IIT Roorkee
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Amanpreet Singh
Thanks for the quick reply.
I am just getting NULL after making an OAuth call, and the callback wasn't
confirmed. I hope I am making the call to the correct URL, which is
https://mediawiki.org/wiki/index.php?title=Special:OAuth/initiate&format=json&oauth_callback=oob
What I should get back is a token with a key and a secret.
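
For comparison, here is roughly what a signed initiate request looks like
from Python with oauthlib (an untested sketch: CONSUMER_KEY and
CONSUMER_SECRET are placeholders for your registered consumer, the
callback is supplied via the client rather than the query string, and the
URL uses the www host and /w/ script path so the redirect Chris mentions
above doesn't break the signature):

    import requests
    from oauthlib.oauth1 import Client

    CONSUMER_KEY = '...'     # placeholder
    CONSUMER_SECRET = '...'  # placeholder

    url = ('https://www.mediawiki.org/w/index.php'
           '?title=Special:OAuth/initiate&format=json')
    client = Client(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                    callback_uri='oob')
    signed_url, headers, body = client.sign(url)

    # Print the raw response: it should contain a token key and secret,
    # or an error message explaining why the request was rejected.
    print(requests.get(signed_url, headers=headers).text)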


On Thu, Jun 5, 2014 at 8:27 PM, Magnus Manske 
wrote:

> If all you want is some quick code infusion, I can offer my PHP class:
>
>
> https://bitbucket.org/magnusmanske/magnustools/src/ecb01ddc26c8129737d260d0491ccb410c4c62a3/public_html/php/oauth.php?at=master
>
> You'd have to patch the "loadIniFile" method to point at your ini file, but
> the rest should work as is. High-level method are towards the end, usually
> self-explaining like "setLabel".
>
> Cheers,
> Magnus
>
>
> On Thu, Jun 5, 2014 at 3:51 PM, Andre Klapper 
> wrote:
>
> > Hi,
> >
> > On Thu, 2014-06-05 at 20:13 +0530, Amanpreet Singh wrote:
> > > I need some help regarding my GSoC project in which I need to implement
> > an
> > > OAuth login system for a browser based plugin, so we can identify
> users.
> > > But I am stuck and not able to get anything here >
> > >
> >
> https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
> > > . Kindly help me, and tell me if further information is needed.
> >
> > What is the problem you're facing, what have you tried already, what's
> > the output you get, etc.?
> >
> > andre
> > --
> > Andre Klapper | Wikimedia Bugwrangler
> > http://blogs.gnome.org/aklapper/
> >
> >
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
>
>
>
> --
> undefined
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>



-- 
Amanpreet Singh,
IIT Roorkee
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Magnus Manske
If all you want is some quick code infusion, I can offer my PHP class:

https://bitbucket.org/magnusmanske/magnustools/src/ecb01ddc26c8129737d260d0491ccb410c4c62a3/public_html/php/oauth.php?at=master

You'd have to patch the "loadIniFile" method to point at your ini file, but
the rest should work as is. High-level methods are towards the end, usually
self-explanatory, like "setLabel".

Cheers,
Magnus


On Thu, Jun 5, 2014 at 3:51 PM, Andre Klapper 
wrote:

> Hi,
>
> On Thu, 2014-06-05 at 20:13 +0530, Amanpreet Singh wrote:
> > I need some help regarding my GSoC project in which I need to implement
> an
> > OAuth login system for a browser based plugin, so we can identify users.
> > But I am stuck and not able to get anything here >
> >
> https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
> > . Kindly help me, and tell me if further information is needed.
>
> What is the problem you're facing, what have you tried already, what's
> the output you get, etc.?
>
> andre
> --
> Andre Klapper | Wikimedia Bugwrangler
> http://blogs.gnome.org/aklapper/
>
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>




Re: [Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Andre Klapper
Hi,

On Thu, 2014-06-05 at 20:13 +0530, Amanpreet Singh wrote:
> I need some help regarding my GSoC project in which I need to implement an
> OAuth login system for a browser based plugin, so we can identify users.
> But I am stuck and not able to get anything here >
> https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
> . Kindly help me, and tell me if further information is needed.

What is the problem you're facing, what have you tried already, what's
the output you get, etc.?

andre
-- 
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/



[Wikitech-l] Help: Needed in OAuth

2014-06-05 Thread Amanpreet Singh
Hi,
I need some help regarding my GSoC project in which I need to implement an
OAuth login system for a browser based plugin, so we can identify users.
But I am stuck and not able to get anything here >
https://github.com/apsdehal/Mediawiki-login-api/blob/master/lib/OAuth/MWOAuthClient.php#L77
. Kindly help me, and tell me if further information is needed.

-- 
Amanpreet Singh,
IIT Roorkee

[Wikitech-l] Requests with data URIs appended to proper URLs

2014-06-05 Thread Christian Aistleitner
Hi,

requests matching

  http://\(es\|pt\).wikipedia.org/wiki/[dD]ata:image/png;base64,iVBORw0K.*

are on the increase. Currently, ~500K/day.

I cannot make sense of those requests, and they look wrong, as they
seem to be a data URI appended to a proper URL [1].
Corresponding bug is 66112 [2].

The requests' User-Agent identifies them as Firefox and Chrome, both
on various flavors of Windows.

These are not ancient browsers: the biggest share identifies as
Firefox 29 (~60%) and Chrome 35 (~31%).

It does not seem to be simple bots faking User-Agents, as the number
of requests shows a strong weekly pattern, the client IPs match the
countries of the target wikis, and the IPs themselves differ a lot,
covering 200-500 /24 nets per day in the sampled-1000 stream.

Requests go to the desktop site of eswiki (~58%) and ptwiki (~38%).

Referrers are mostly empty (~97%).

The image data in the data URI scheme decodes to images from
VectorBeta [3], such as:

  VectorBeta/resources/typography/images/search-fade.png
  VectorBeta/resources/typography/images/tab-break.png
  VectorBeta/resources/typography/images/tab-current-fade.png
  VectorBeta/resources/typography/images/portal-break.png
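
For reference, a quick way to verify that decoding is a small PHP
sketch (pass one of the full request URLs from [1] below):

<?php
// Usage: php decode.php 'http://es.wikipedia.org/wiki/data:image/png;base64,...'
$url = $argv[1];
$payload = substr( $url, strpos( $url, 'base64,' ) + strlen( 'base64,' ) );
// urldecode() handles the %2B / %3D variants seen in some of the requests.
$png = base64_decode( urldecode( $payload ) );
file_put_contents( 'decoded.png', $png );
// Compare this hash against the files under
// VectorBeta/resources/typography/images/ to identify the image.
echo sha1( $png ) . "\n";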

Any clues?

Is this issue on our end, or can, for example, rogue User-JS account
for that many bogus requests?

Have fun,
Christian


P.S.: On stat1002, there are TSVs from the sampled-1000 stream,
filtered to the relevant requests for May and June, at

  /home/qchris/data-uris



[1] Since they are just UI images, here are some concrete examples:

http://es.wikipedia.org/wiki/data:image/png;base64,iVBORw0KGgoNSUhEUgEuCAIAAABmjeQ9RElEQVR42mVO2wrAUAhy/f8fz+niVMTYQ3hLKkgGgN/IPvgIhUYYV/qogdP75J01V+JwrKZr/5YPcnzN3e6t7l+2K+EFX91B1daOi7sASUVORK5CYII=

http://pt.wikipedia.org/wiki/Data:image/png;base64,iVBORw0KGgoNSUhEUgEuCAIAAABmjeQ9RElEQVR42mVO2wrAUAhy/f8fz%2BniVMTYQ3hLKkgGgN/IPvgIhUYYV/qogdP75J01V%2BJwrKZr/5YPcnzN3e6t7l%2B2K%2BEFX91B1daOi7sASUVORK5CYII%3D

http://es.wikipedia.org/wiki/data:image/png;base64,iVBORw0KGgoNSUhEUgEQCAIAAABY/YLgJUlEQVQIHQXBsQEAAAjDoND/73UWdnerhmHVsDQZJrNWVg3Dqge6bgMe6bejNABJRU5ErkJggg==

http://es.wikipedia.org/wiki/Data:image/png;base64,iVBORw0KGgoNSUhEUgEQCAIAAABY/YLgJUlEQVQIHQXBsQEAAAjDoND/73UWdnerhmHVsDQZJrNWVg3Dqge6bgMe6bejNABJRU5ErkJggg%3D%3D

[2] https://bugzilla.wikimedia.org/show_bug.cgi?id=66112

[3] But that's not to say that it's a VectorBeta issue. It might be,
for example, our JS (or user JS) walking the DOM and firing off
strange requests.



-- 
 quelltextlich e.U.  \\  Christian Aistleitner 
   Companies' registry: 360296y in Linz
Christian Aistleitner
Gruendbergstrasze 65aEmail:  christ...@quelltextlich.at
4040 Linz, Austria   Phone:  +43 732 / 26 95 63
 Fax:+43 732 / 26 95 63
 Homepage: http://quelltextlich.at/
---



Re: [Wikitech-l] MediaWiki RC3 & 1.23.0 (Special:SpecialPages seg fault)

2014-06-05 Thread Thomas Fellows
Should also mention I came across this:

https://bugs.launchpad.net/ubuntu/+source/php5/+bug/1102366
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=694473

though slightly different, as mine is a GET request on a different version
of PHP, and POSTs (saving, previewing) work fine. Starting, stopping, or
restarting Apache does not fix the problem.

-Tom


On Thu, Jun 5, 2014 at 9:21 AM, Thomas Fellows 
wrote:

> Hey All,
>
> Upgraded from 1.20 to 1.23rc3 yesterday, ran update.php, everything worked
> fine -- except for Special:SpecialPages.  Disabled all extensions to see if
> that was doing it, but no change.  It comes back with a Connection Reset
> error.  I went to Apache's error logs and came across this for each failed
> attempt:
>
> [Thu Jun 05 09:10:51 2014] [notice] child pid 18983 exit signal
> Segmentation fault (11)
>
> Uh oh.
>
> Thought that maybe 1.23.0 would fix the issue, woke up this morning,
> updated again, and the page loaded!!  However, with warnings:
>
> PHP Warning:  Illegal offset type in isset or empty in
> /var/www/ops_dev/includes/User.php on line 1390
> PHP Warning:  Illegal offset type in isset or empty in
> /var/www/ops_dev/includes/User.php on line 1390
> PHP Warning:  Illegal offset type in isset or empty in
> /var/www/ops_dev/includes/User.php on line 1390
> PHP Warning:  Illegal offset type in isset or empty in
> /var/www/ops_dev/includes/User.php on line 1390
> PHP Warning:  Illegal offset type in isset or empty in
> /var/www/ops_dev/includes/User.php on line 1390
>
>
> I thought, great! Maybe one of the extensions is causing the error.  So I
> went and disabled all extensions, and reloaded the page and got the
> 'Connection reset'.
>
> [Thu Jun 05 09:15:11 2014] [notice] child pid 19200 exit signal
> Segmentation fault (11)
>
>
> Re-enabled extensions, Seg fault still.
>
> Tried again a few seconds ago, Warning: Illegal offset type in isset or
> empty in /var/www/ops_dev/includes/User.php on line 1390 (x10)
>
>
> Any help would be greatly appreciated!
>
> Thanks,
> Tom
>
> ---more info
> MediaWiki 1.23.0 (1346cdb) 16:57, 4 June 2014
> PHP 5.3.6-13ubuntu3.10 (apache2handler)
> MySQL 5.1.52
>
> Ubuntu 11.10
>
> Server version: Apache/2.2.20 (Ubuntu)
> Server built:   Mar  8 2013 15:58:04
> Server's Module Magic Number: 20051115:28
> Server loaded:  APR 1.4.5, APR-Util 1.3.12
> Compiled using: APR 1.4.5, APR-Util 1.3.12
> Architecture:   64-bit
> Server MPM: Prefork
>   threaded: no
> forked: yes (variable process count)
>
>

[Wikitech-l] MediaWiki RC3 & 1.23.0 (Special:SpecialPages seg fault)

2014-06-05 Thread Thomas Fellows
Hey All,

Upgraded from 1.20 to 1.23rc3 yesterday, ran update.php, everything worked
fine -- except for Special:SpecialPages.  Disabled all extensions to see if
that was doing it, but no change.  It comes back with a Connection Reset
error.  I went to Apache's error logs and came across this for each failed
attempt:

[Thu Jun 05 09:10:51 2014] [notice] child pid 18983 exit signal
Segmentation fault (11)

Uh oh.

Thought that maybe 1.23.0 would fix the issue, woke up this morning,
updated again, and the page loaded!!  However, with warnings:

PHP Warning:  Illegal offset type in isset or empty in
/var/www/ops_dev/includes/User.php on line 1390
PHP Warning:  Illegal offset type in isset or empty in
/var/www/ops_dev/includes/User.php on line 1390
PHP Warning:  Illegal offset type in isset or empty in
/var/www/ops_dev/includes/User.php on line 1390
PHP Warning:  Illegal offset type in isset or empty in
/var/www/ops_dev/includes/User.php on line 1390
PHP Warning:  Illegal offset type in isset or empty in
/var/www/ops_dev/includes/User.php on line 1390


I thought, great! Maybe one of the extensions is causing the error.  So I
went and disabled all extensions, and reloaded the page and got the
'Connection reset'.

[Thu Jun 05 09:15:11 2014] [notice] child pid 19200 exit signal
Segmentation fault (11)


Re-enabled extensions, Seg fault still.

Tried again a few seconds ago, Warning: Illegal offset type in isset or
empty in /var/www/ops_dev/includes/User.php on line 1390 (x10)
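
For context, that warning is what PHP emits when the key passed to
isset() or empty() is neither an int nor a string. A minimal sketch,
not the actual User.php code:

<?php
$prefs = array( 'watchdefault' => true );
$badKey = new stdClass();               // an object used as an array key
var_dump( isset( $prefs[$badKey] ) );
// PHP Warning:  Illegal offset type in isset or empty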


Any help would be greatly appreciated!

Thanks,
Tom

---more info
MediaWiki 1.23.0 (1346cdb) 16:57, 4 June 2014
PHP 5.3.6-13ubuntu3.10 (apache2handler)
MySQL 5.1.52

Ubuntu 11.10

Server version: Apache/2.2.20 (Ubuntu)
Server built:   Mar  8 2013 15:58:04
Server's Module Magic Number: 20051115:28
Server loaded:  APR 1.4.5, APR-Util 1.3.12
Compiled using: APR 1.4.5, APR-Util 1.3.12
Architecture:   64-bit
Server MPM: Prefork
  threaded: no
forked: yes (variable process count)

[Wikitech-l] MediaWiki 1.23.0 released

2014-06-05 Thread Markus Glaser
Hello everyone,

I am happy to announce the availability of the first stable release of the new 
MediaWiki 1.23 release series.

MediaWiki 1.23 is a large release that contains many new features and bug 
fixes. This is a summary of the major changes of interest to users. You can 
consult the RELEASE-NOTES-1.23 file for the full list of changes in this 
version.

This is a Long Term Support release (LTS) and will be supported until May 2017.

Our thanks to everyone who helped to improve MediaWiki by testing the release 
candidates and submitting bug reports.

== What's new? ==

* MediaWiki 1.23 includes all changes released in the smaller 1.23wmfX software 
deployments to Wikimedia sites.

=== Skin autodiscovery deprecated ===

Skin autodiscovery, the legacy skin installation mechanism used by MediaWiki 
since very early versions (around 2004), has been officially deprecated and 
will be removed in MediaWiki 1.25.
* MediaWiki 1.23 will emit warnings in production if a skin using the 
deprecated mechanism is found.
* See Manual:Skin autodiscovery for more information and a migration guide for 
site admins and skin developers.

=== Notifications ===

With 1.23, MediaWiki starts to behave more like a modern website with
regard to notifications, keeping the editors of your wiki engaged and up
to date about what interests them. Previously, this required several
custom settings.
* (bug 45020) Make preferences "Add pages I create and files I upload to my 
watchlist" and "pages and files I edit" true by default.
* (bug 45022) Make preference "Email me when a page or file on my watchlist is 
changed" true by default.
* (bug 49719) Watch user page and user talk page by default.
This will allow your new users to immediately start benefiting from the 
watchlist and email notification features, without first having to read 
the documentation to discover how useful they are.
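
For comparison, these are the kinds of LocalSettings.php overrides that
are no longer needed for this (a sketch; option names as commonly used
in core preferences, so double-check them against your version):

$wgDefaultUserOptions['watchdefault'] = 1;         // watch pages and files I edit
$wgDefaultUserOptions['watchcreations'] = 1;       // watch pages I create
$wgDefaultUserOptions['watchuploads'] = 1;         // watch files I upload
$wgDefaultUserOptions['enotifwatchlistpages'] = 1; // email me on watchlist changes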

=== Merged extensions ===

Merged into 1.23:
* ExpandTemplates (bug 28264).
* AssertEdit (bug 27841) - documented at API:Assert.

=== Interface ===

* (bug 42026) Add option to only show page creations in Special:Contributions 
(and API).
* Add new special page to list duplicate files, Special:ListDuplicatedFiles.
* (bug 60333) Add new special page listing tracking categories 
(Special:TrackingCategories).

=== Editing ===

* A new special page Special:Diff was added, allowing users to create internal 
links to revision comparison pages using syntax such as Special:Diff/12345, 
Special:Diff/12345/prev or Special:Diff/12345/98765.

=== Help pages ===

With 1.23, MediaWiki begins a process of consolidating its help pages. Now, 
most use the Translate extension and can be easily translated and updated 
in hundreds of languages.

In the coming months, we'll focus on making more of the central help pages 
translatable and on linking them from the relevant MediaWiki interfaces for 
better discoverability. Please help: add your own translations; update existing 
pages and cover missing MediaWiki topics.

Traditionally, help pages have been scattered on countless wikis and poorly 
translated; most of those on mediawiki.org were migrated with the help of some 
Google Code-in students.

=== CSS refresh for Vector ===

* Various Vector CSS properties have been converted to LESS variables.
* The font size of #bodyContent/.mw-body-content has 
been increased to 0.875em.
* The line-height of #bodyContent/.mw-body-content 
has been increased to 1.6.
* The line-height of superscript (sup) and subscript (sub) is now set to 1.
* The default color for content text (but not the headers) is now #252525 
(dark grey).
* All headers have updated sizes and margins.
* H1 and H2 headers now use a serif font.
* Body font is "sans-serif" as always.

For more information see Typography refresh.

=== Configuration ===

Add Config and GlobalConfig classes:
* Allows configuration options to be fetched from context.
* Only one implementation, GlobalConfig, is provided, which simply returns 
$GLOBALS[$name]. There may be more classes in the future, possibly a 
database-backed one. For convenience, the "wg" prefix is automatically added.
* This adds the $wgConfigClass global variable which is used to determine which 
implementation of Config to use by default.
* The ContextSource getConfig and setConfig methods were introduced.
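
A minimal sketch of how this is meant to be used (based on the notes
above; treat the example class as illustrative, not as shipped code):

class SpecialExample extends SpecialPage {
	public function execute( $par ) {
		// GlobalConfig adds the "wg" prefix, so this reads $GLOBALS['wgSitename'].
		$sitename = $this->getConfig()->get( 'Sitename' );
		$this->getOutput()->addWikiText( "This wiki is called '''$sitename'''." );
	}
}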

Full release notes:
https://git.wikimedia.org/blob/mediawiki%2Fcore.git/1.23.0/RELEASE-NOTES-1.23
https://www.mediawiki.org/wiki/Release_notes/1.23


**
Download:
http://download.wikimedia.org/mediawiki/1.23/mediawiki-1.23.0.tar.gz

GPG signatures:
http://download.wikimedia.org/mediawiki/1.23/mediawiki-1.23.0.tar.gz.sig

Public keys:
https://www.mediawiki.org/keys/keys.html

Markus Glaser
(Release Team)


Re: [Wikitech-l] [RFC] Extension registration

2014-06-05 Thread Gilles Dubuc
>
> What do people think of stopping caring about register_globals
>

I don't see any reason to continue supporting register_globals at any
level. It has been off by default since 4.2.0 (circa 2002), was deprecated
in 5.3.0, and was removed in 5.4.0. Maintaining that support is not a good
use of our resources.
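
For anyone who has not seen the failure mode, the classic
register_globals pitfall looks like this (a sketch; check_password and
show_admin_page are stand-ins):

<?php
// With register_globals = On, a request like page.php?authorized=1
// pre-populates $authorized before any of this code runs, so the
// password check below can be bypassed entirely.
if ( isset( $_POST['pw'] ) && check_password( $_POST['pw'] ) ) {
	$authorized = true;
}
if ( !empty( $authorized ) ) {
	show_admin_page();
}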


On Wed, Jun 4, 2014 at 6:23 PM, Lee Worden  wrote:

> Date: Wed, 04 Jun 2014 02:12:30 -0700
>> From: Daniel Friesen
>>
>>
>> On 2014-06-04, 1:29 AM, Legoktm wrote:
>>
>>> >== Extension locations ==
>>> >We agreed that we should require extensions to all be in the same
>>> >directory, but that directory should be configurable. By default it
>>> >will point to $IP/extensions.
>>>
>> I still do NOT like this idea.
>>
>> By all means there should be one directory for extensions that are
>> managed by a web/cli installer and the method of loading extensions from
>> that one directory should be simple even when we're still using a php
>> settings file. But when someone is intentionally not using that and
>> doing complex config then we shouldn't stop them from saying to load an
>> extension from a specific directory.
>>
>
> +1
>
>
>
>