Re: [Wikitech-l] Why does api.php surround text with paragraph tags?

2013-08-12 Thread Brian Wolff
On 8/12/13, Daniel Barrett d...@vistaprint.com wrote:
 The MediaWiki API seems to add paragraph tags when it parses wikitext, but
 only sometimes. Example:

 http://www.mediawiki.org/w/api.php?action=parse&page=Extension:Header/version&format=json&prop=text

 This page (Extension:Header/version) contains only the text 1.0 but gets
 turned into <p>1.0</p> when parsed via API.  On the other hand, the <p>
 tag is absent if you transclude {{Extension:Header/version}} into another
 wiki article.

 This doesn't happen if your wiki page contains just a table, so something
 tricky is going on here. :-)

 Is there a way to make api.php suppress the <p> tag in my first example?

 Thanks,
 DanB




If you just have the text 1.0 in the page, then that text is part of
a paragraph. If you transclude the text into the middle of another page,
then it might not be in the context of a paragraph (or it may be in the
middle of one already begun). If it's the first thing transcluded, you
still get the <p> tags. If your article starts with a table, a table is a
block element, not text, so there is no paragraph.

Basically, random text by itself forms a paragraph, so it gets wrapped in a <p>.

You can't really disable that. Certain places in MediaWiki core regex
out the <p> in places where it doesn't make sense (like
Message::parse).
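
As a rough illustration, a caller that wants the bare text can strip the
wrapper itself. A minimal PHP sketch (the regex MediaWiki core actually uses
internally may differ; this only handles the single-paragraph case):

function stripOuterParagraph( $html ) {
    // Remove one outer <p>...</p> pair wrapping the whole string.
    if ( preg_match( '!^<p>(.*)\n?</p>\n?$!sU', $html, $m ) ) {
        return $m[1];
    }
    return $html;
}

echo stripOuterParagraph( "<p>1.0\n</p>" ); // prints "1.0"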

--bawolff


Re: [Wikitech-l] MediaWiki Parser: Assistance resolving ToC/extension conflict

2013-08-16 Thread Brian Wolff
On 8/16/13, C. Scott Ananian canan...@wikimedia.org wrote:
 And I really should have suggested reporting the feature request in
 bugzilla as well.  It is helpful when patches can be associated with a bug
 number.
   --scott

He listed a bug in his previous email:
https://bugzilla.wikimedia.org/show_bug.cgi?id=45317

I don't think he has a patch, but rather an idea for how to go about
fixing the issue, and he wants advice on whether his proposed approach
would be accepted.

--bawolff


Re: [Wikitech-l] Wikimedia's anti-surveillance plans: site hardening

2013-08-16 Thread Brian Wolff
On 8/16/13, Zack Weinberg za...@cmu.edu wrote:
 Hi, I'm a grad student at CMU studying network security in general and
 censorship / surveillance resistance in particular. I also used to work
 for Mozilla, some of you may remember me in that capacity. My friend
 Sumana Harihareswara asked me to comment on Wikimedia's plans for
 hardening the encyclopedia against state surveillance. I've read the
 discussion to date on this subject, but it was kinda all over the map,
 so I thought it would be better to start a new thread. Actually I'm
 going to start two threads, one for general site hardening and one
 specifically about traffic analysis. This is the one about site
 hardening, which should happen first. Please note that I am subscribed
 to wikitech-l but not wikimedia-l (but I have read the discussion over
 there).

 The roadmap at
 https://blog.wikimedia.org/2013/08/01/future-https-wikimedia-projects/
 looks to me to have the right shape, but there are some missing things
 and points of confusion.

 The first step really must be to enable HTTPS unconditionally for
 everyone (whether or not logged in). I see on the roadmap that there is
 concern that this will lock out large groups of users, e.g. from China;
 a workaround simply *must* be found for this. Everything else that is
 worth doing is rendered ineffective if *any* application layer data is
 *ever* transmitted over an insecure channel. There is no point worrying
 about traffic analysis when an active man-in-the-middle can inject
 malicious JavaScript into unsecured pages, or a passive one can steal
 session cookies as they fly by in cleartext.

 As part of the engineering effort to turn on TLS for everyone, you
 should also provide SPDY, or whatever they're calling it these days.
 It's valuable not only for traffic analysis' sake, but because it offers
 server-side efficiency gains that (in theory anyway) should mitigate the
 TLS overhead somewhat.

 After that's done, there's a grab bag of additional security refinements
 that are deployable immediately or with minimal-to-moderate engineering
 effort. The roadmap mentions HTTP Strict Transport Security; that should
 definitely happen. All cookies should be tagged both Secure and HttpOnly
 (which renders them inaccessible to accidental HTTP loads and to page
 JavaScript); now would also be a good time to prune your cookie
 requirements, ideally to just one which does not reveal via inspection
 whether or not someone is logged in. You should also do
 Content-Security-Policy, as strict as possible. I know this can be a
 huge amount of development effort, but the benefits are equally huge -
 we don't know exactly how it was done, but there's an excellent chance
 CSP on the hidden service would have prevented the exploit discussed
 here:
 https://blog.torproject.org/blog/hidden-services-current-events-and-freedom-hosting
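
As a minimal sketch of the cookie and header hardening described above
(PHP, with placeholder values; not actual Wikimedia configuration):

// Force HTTPS for a year, including subdomains (HSTS).
header( 'Strict-Transport-Security: max-age=31536000; includeSubDomains' );
// A deliberately strict example policy; a real CSP needs tuning per site.
header( "Content-Security-Policy: default-src 'self'; object-src 'none'" );
// Session cookie marked Secure (HTTPS only) and HttpOnly (hidden from page JS).
$sessionId = bin2hex( openssl_random_pseudo_bytes( 16 ) );
setcookie( 'session', $sessionId, 0, '/', '', true, true );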

 Several people raised concerns about Wikimedia's certificate authority
 becoming compromised (whether by traditional hacking, social
 engineering, or government coercion). The best available cure for this
 is called certificate pinning, which is unfortunately only doable by
 talking to browser vendors right now; however, I imagine they would be
 happy to apply pins for Wikipedia. There's been some discussion of an
 HSTS extension that would apply a pin
 (http://tools.ietf.org/html/draft-evans-palmer-key-pinning-00) and it's
 also theoretically doable via DANE (http://tools.ietf.org/html/rfc6698);
 however, AFAIK no one implements either of these things yet, and I rate
 it moderately likely that DANE is broken-as-specified. DANE requires
 DNSSEC, which is worth implementing for its own sake (it appears that
 the wikipedia.org. and wikimedia.org. zones are not currently signed).

 Perfect forward secrecy should also be considered at this stage. Folks
 seem to be confused about what PFS is good for. It is *complementary* to
 traffic analysis resistance, but it's not useless in the absence of it.
 What it does is provide defense in depth against a server compromise by
 a well-heeled entity who has been logging traffic *contents*. If you
 don't have PFS and the server is compromised, *all* traffic going back
 potentially for years is decryptable, including cleartext passwords and
 other equally valuable info. If you do have PFS, the exposure is limited
 to the session rollover interval.  Browsers are fairly aggressively
 moving away from non-PFS ciphersuites (see
 https://briansmith.org/browser-ciphersuites-01.html; all of the
 non-deprecated suites are PFS).

 Finally, consider paring back the set of ciphersuites accepted by your
 servers. Hopefully we will soon be able to ditch TLS 1.0 entirely (all
 of its ciphersuites have at least one serious flaw).  Again, see
 https://briansmith.org/browser-ciphersuites-01.html for the current
 thinking from the browser side.

 zw


Re: [Wikitech-l] Wikimedia's anti-surveillance plans: site hardening

2013-08-17 Thread Brian Wolff

 hi faidon, i do not think you personally and WMF are particularly
 helpful in accepting contributions. because you:
 * do not communicate openly the problems
 * do not report upstream publically
 * do not ask for help, and even if it gets offered you just ignore it
 with quite some arrogance

 let me give you an example as well. git.wikimedia.org broke, and you,
 faidon, did _absolutely nothing_ to give good feedback to upstream to
 improve the gitblit software. you and colleagues did though adjust
 robots.txt to reduce the traffic arriving at the git.wikimedia.org.
 which is, in my opinion, paying half of the rent. see
 * our bug: https://bugzilla.wikimedia.org/show_bug.cgi?id=51769,
 includes details how to take a stack trace
 * upstream bug:
 https://code.google.com/p/gitblit/issues/detail?id=294, no stacktrace
 reported


Let's not point fingers at specific people. It's really unhelpful and causes
defensiveness.

In the case of gitblit, the problem at this point has been identified (web
spiders accidentally DoSing us). It's really not surprising that creating
~100 MB zip files on the fly is expensive. It doesn't really seem that
likely that a stack trace would help solve such a problem, and really it's
more a config issue on our end than a problem with gitblit.

I really don't see anything wrong with what any of the wmf folks did on
that bug.

-bawolff

Re: [Wikitech-l] Key community metrics to influence our plans

2013-08-19 Thread Brian Wolff
 In demographics we can see pretty good information about all data
 sources community. Mailing list members are going down also (mailing
 lists are used less? where is the support of the projects moving on?),


 Data about mailing lists, IRC and even wikis is good to have but I'm not
sure they are key. These channels support the real deal: code repositories
(and Bugzilla too). For instance, a % of discussions that happen now on
Gerrit probably took place before at wikitech-l.

I find that hard to believe (unless you are going back to the
pre-Special:CodeReview era). Perhaps there is an increase in people talking
face to face or in internal WMF meetings instead of using wikitech-l. Perhaps
the (possibly imagined) increase in bikeshedding is making people less likely
to post to the list.

-bawolff

Re: [Wikitech-l] Veracity check for Tech news #34

2013-08-23 Thread Brian Wolff
On 8/23/13, Guillaume Paumier gpaum...@wikimedia.org wrote:
 Hi,

 It would be great if a few pairs of eyes could take a look at
 https://meta.wikimedia.org/wiki/Tech/News/2013/34 before I send it to
 translators, to check that I haven't missed anything super-important
 or misunderstood what the commits are about.

 The tech newsletter is aimed at non-expert Wikimedians whose knowledge
 of English may be limited, so the language may seem vague or naive to
 developers. If you see factual errors, please correct them (or let me
 know directly), but please keep the language simple :)

 Many thanks for your help.

 --
 Guillaume Paumier
 Technical Communications Manager — Wikimedia Foundation


Note there are two issues with file redirects. The one that was fixed
applies to local images in addition to foreign ones. The fixed issue
is that redirects on commons didn't start to work until 24 hours after
they were created. The issue that still remains is that after a file
moves, someone has to edit (or purge) the page using the redirected
name before the image works again. (I've submitted a patch for that -
https://gerrit.wikimedia.org/r/#/c/80135/ but it's still pending
review).
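
Until that lands, the usual workaround for an affected page is a null edit
or an API purge with a link update, along these lines (the title is just an
example):

api.php?action=purge&titles=Some_page_using_the_redirect&forcelinkupdate=1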

--bawolff


Re: [Wikitech-l] WMFs stance on non-GPL code

2013-08-25 Thread Brian Wolff
On 2013-08-25 6:20 PM, Jeremy Baron jer...@tuxmachine.com wrote:

 On Mon, Aug 26, 2013 at 12:15 AM, Jeroen De Dauw jeroended...@gmail.com
wrote:
  I'm curious what the stance of WMF is on BSD, MIT and MPL licensed
code. In
  particular, could such code be deployed on WMF servers?

 I'm sure it is already deployed on WMF servers. Can you elaborate?

 e.g. deployed is not the same as being part of MediaWiki. We use
 [[Jenkins (software)]] on WMF servers and the enwiki article says it
 is MIT licensed.

 What exactly would you like to do?

 -Jeremy


My understanding is that several extensions that are deployed are under the
dwtfywwi license.

This is obviously just my personal opinion (which means nothing), but I
can't imagine there being a problem with a GPL-compatible license that
wasn't the GPL. I'd be surprised if there was a problem with any open
source license.

-bawolff

Re: [Wikitech-l] Javascript to find other articles having the same external link

2013-08-26 Thread Brian Wolff
On 2013-08-26 8:55 AM, Mark Holmquist mtrac...@member.fsf.org wrote:

 On Mon, Aug 26, 2013 at 10:07:16AM +0200, Ole Palnatoke Andersen wrote:
  I'd love to see a similar thing for articles linking to the same book
via
  ISBN.

 You can do that in the JavaScript just by adding to the selector at the
 beginning, and you can also get other magic links at the same time.

 jQuery( 'a.external, a.mw-magiclink-isbn, a.mw-magiclink-pmid, a.mw-magiclink-rfc' ).after( function() {
   return jQuery( '<a>' )
 .text( '⎆' )
 // Shorter, relative link (could also use mw.Title here maybe)
 .attr( 'href', '/wiki/Special:Linksearch/' + this.href )
 .before( ' ' );
 } );

 But it looks like Special:Linksearch doesn't support searching for magic
 links, at least not yet. So I'm afraid this is all for nought.

 I'm going to hope that CirrusSearch will fix this in some capacity, since
 it looks pretty simple to fix, and if Chad would like some help with that,
 he knows where to find me...

 --
 Mark Holmquist
 Software Engineer, Multimedia
 Wikimedia Foundation
 mtrac...@member.fsf.org
 https://wikimediafoundation.org/wiki/User:MHolmquist


For reference that's https://bugzilla.wikimedia.org/show_bug.cgi?id=49537

Arguably they aren't really external links and shouldn't be tracked with
them; on the other hand, a table just for magic links seems like overkill.

-bawolff

[Wikitech-l] A metadata API module for commons

2013-08-31 Thread Brian Wolff
Hi all,

I've been working on an api module/extension to extract metadata from
commons image description pages, and display it in the API. I know
this is an area that various people have thought about from time to
time, so I thought it would be of interest to this list.

The specific goals I have:
*Should be usable for a light-box type feature (MediaViewer) that
needs to display information like Author and license. [1] (This is the
primary use case.)
*Should be generic where possible, so that better metadata access can
be had by all wikis, even if they don't follow commons conventions.
For example, it should generically support exif data from files where
possible/appropriate, overriding the exif data when more reliable
sources of information are available.
*Should be compatible with a future wikidata-on-commons thing. [2]
**In particular, I want to read existing description page formatting,
not try to force people to use new parser functions or formatting
conventions, since they may become outdated in the near future when
wikidata arrives.
**Hopefully Wikidata would be able to hook into my system (while at the
same time providing its own native interface).
*Since descriptions on commons are formatted data (wikilinks are
especially common), it needs to be able to output formatted data. I
think html is the easiest format to use, much easier than, say,
wikitext (however this is perhaps debatable).


What I've come up with is a new api metadata property (Currently
pending review in gerrit) called extmetadata that has a hook
extensions can hook into. [3] [4] [5] Additionally I developed an
extension for reading information from commons description pages. [6]
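
As a sketch of what hooking into that property could look like for an
extension (the hook name and signature here are assumptions for
illustration; the gerrit changes above define the real interface):

$wgHooks['GetExtendedMetadata'][] = function ( &$results, $file, $context ) {
    // Prefer the description-page value over whatever was embedded in the file.
    $results['Artist'] = array(
        'value' => 'Example photographer',
        'source' => 'commons-desc-page',
    );
    return true;
};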

It combines information from both the file's metadata and from any
extensions. For example, if the Exif data has an author specified
(Artist in exif speak), and the commons description page also has
one specified, the description page takes precedence, under the
assumption it's more reliable. The module outputs html, since that's
the type of data stored in the image description page (except that it
uses full urls instead of local ones).

The downside to this is that in order to effectively get metadata out of
commons given the current practices, one essentially has to screen
scrape and do slightly ugly things (look ahead for a brighter tomorrow
with wikidata!).

As an example, given a query like
api.php?action=query&prop=imageinfo&iiprop=extmetadata&titles=File:Schwedenfeuer_Detail_04.JPG&format=xmlfm&iiextmetadatalanguage=en
 it would produce something like [7]

So thoughts? /me eagerly awaits mail tearing my plans apart :)

[1] https://www.mediawiki.org/wiki/Multimedia/Media_Viewer
[2] https://commons.wikimedia.org/wiki/Commons:Wikidata_for_media_info
[3] https://gerrit.wikimedia.org/r/#/c/81598/
[4] https://gerrit.wikimedia.org/r/#/c/78162/
[5] https://gerrit.wikimedia.org/r/#/c/78926/
[6] https://gerrit.wikimedia.org/r/#/c/80403/
[7] http://pastebin.com/yh5286iR

--
Bawolff


Re: [Wikitech-l] A metadata API module for commons

2013-09-04 Thread Brian Wolff
On 8/31/13, James Forrester jforres...@wikimedia.org wrote:
 However, how much more work would it be to insert it directly into Wikidata
 right now? I worry about doing the work twice if Wikidata could take it now
 - presumably the hard work is the reliable screen-scraping, and building
 the tool-chain to extract from this just to port it over to Wikidata in a
 few months' time would be a pity.


Part of this is meant as a hold over, until Wikidata solves the
problem in a more flexible way. However, part of it is meant to still
work with wikidata. The idea I have is that this api could be used by
any wiki (the base part is in core), and then various extensions can
extend it. That way we can make extensions (or even core features)
relying on this metadata that can work even on wikis without
wikidata or the commons metadata extension I started. The basic features
of the api would be available for anyone who needed metadata, and it
would return the best information available, even if that means only
the exif data. It would also mean that getting the metadata would be
independent of the backend used to extract/get the metadata. (I would
of course still expect wikidata to introduce its own more flexible
APIs).

 This looks rather fun. For VisualEditor, we'd quite like to be able to
 pull in the description of a media file in the page's language when it's
 inserted into the page, to use as the default caption for images. I was
 assuming we'd have to wait for the port of this data to Wikidata, but this
 would be hugely helpful ahead of that. :-)


Interesting.

[tangent]
One idea that sometimes comes up related to this is a way of
specifying default thumbnail parameters on the image description page.
For example, on pdfs, sometimes people want to specify a default page
number. Often it's proposed to be able to specify a default alt text
(although some argue that would be bad for accessibility, since alt
text should be context dependent). Another use: sometimes people
propose having a sharpen/no-sharpen parameter to control whether
sharpening of thumbnails should take place (photos should be sharpened,
line art should not be; currently we do it based on file type).

It could be interesting to have a magic word like
{{#imageParameters:page=3|Description|alt=Alt text}} on the image
description page, to specify defaults. (Although I imagine the visual
editor folks don't like the idea of adding more in-page metadata).
[end not entirely fully thought out tangent]

-
--bawolff


Re: [Wikitech-l] A metadata API module for commons

2013-09-04 Thread Brian Wolff
On 9/1/13, Jean-Frédéric jeanfrederic.w...@gmail.com wrote:
[..]

 The downside to this is in order to effectively get metadata out of
 commons given the current practises, one essentially has to screen
 scrape and do slightly ugly things


 This [1] looks quite acrobatic indeed. Can’t we make better use of the
 machine-readable markings provided by templates?
 https://commons.wikimedia.org/wiki/Commons:Machine-readable_data

 [1] https://gerrit.wikimedia.org/r/#/c/80403/4/CommonsMetadata_body.php


It is using the machine-readable data from that page. (Although it's
debatable how machine-readable "look for a <td> with this id, and then
look at the contents of the next sibling <td> you encounter" really is.)

I'm somewhat of a newb though with extracting microformat-style
metadata, so it's quite possible there is a better way, or some higher
level parsing library I could use (something like XPath maybe,
although it's not really xml I'm looking at).

-- 
-bawolff


Re: [Wikitech-l] A metadata API module for commons

2013-09-09 Thread Brian Wolff
On 9/6/13, Daniel Kinzler dan...@brightbyte.de wrote:
 The only thing I'm slightly worried about is the data model and
 representation
 of the metadata. Swapping one backend for another will only work if they are
 conceptually compatible.

The data model I was using was simple key-value pairs. Specifically it
was using the various properties defined by Exif (and other metadata
things that MediaWiki extracts from files) as the key names. I imagine
wikidata would allow for much more complex types of metadata. I was
thinking this api module would serve to gather the basic
information, and wikidata would have its own querying endpoints for
the complex view of its metadata.


 Can you give a brief overview of how you imagine the output of the API would
 be
 structured, and what information it would contain?

As an example, for the url of the license:
<LicenseUrl source="commons-desc-page" translatedName="URL for
copyright license" hidden=""
xml:space="preserve">http://creativecommons.org/licenses/by-sa/3.0/at/deed.en</LicenseUrl>

Which contains the key name (LicenseUrl), the place where the data
was retrieved from ("commons-desc-page", as opposed to "file-metadata"
if it came from the CC:LicenseUrl property of XMP data embedded in the
file), the translated name of the key ("URL for copyright
license", coming from the MediaWiki:Exif-licenseurl message), whether or
not this property is hidden when displayed on the image description page
(true in the example), and the value of the property
(http://creativecommons.org/licenses/by-sa/3.0/at/deed.en).
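
For comparison, the same property in a format=json response might look
roughly like this (illustrative structure only; field meanings as described
above):

{
    "LicenseUrl": {
        "value": "http://creativecommons.org/licenses/by-sa/3.0/at/deed.en",
        "source": "commons-desc-page",
        "hidden": ""
    }
}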



 Also, your original proposal said something about outputting HTML. That
 confuses
 me - an API module would return structured data, why would you use HTML to
 represent the metadata? That makes it a lot harder to process...

It does. Part of the reason is that I wanted something that could
instantly be displayed to the user, hence more user-friendly than
machine-friendly (for example, human-readable timestamps instead of iso
timestamps, and human-readable flash firing values instead of a
constant). The second reason is the source of the data. If we look at
the description field on a commons image page, we have things like:

Front and western side of the house located at 912 E. First Street in
{{w|Bloomington, Indiana|Bloomington}}, {{w|Indiana}}, {{w|United
States}}.  Built in 1925, it is part of the locally-designated Elm
Heights Historic District.

Which has links in it. There are a couple of options for what we can do
with that. We can give it out as is, or we could expand templates and
return:

Front and western side of the house located at 912 E. First Street in
[[:w:Bloomington, Indiana|Bloomington]], [[:w:Indiana|Indiana]],
[[:w:United States|United States]].  Built in 1925, it is part of the
locally-designated Elm Heights Historic District.

Or we could return html:
Front and western side of the house located at 912 E. First Street in
<a href="//en.wikipedia.org/wiki/Bloomington,_Indiana" class="extiw"
title="w:Bloomington, Indiana">Bloomington</a>, <a
href="//en.wikipedia.org/wiki/Indiana" class="extiw"
title="w:Indiana">Indiana</a>, <a
href="//en.wikipedia.org/wiki/United_States" class="extiw"
title="w:United States">United States</a>.  Built in 1925, it is part
of the locally-designated Elm Heights Historic District.

Or we could ditch the html entirely:

Front and western side of the house located at 912 E. First Street in
Bloomington, Indiana, United States. Built in 1925, it is part of the
locally-designated Elm Heights Historic District.

I think returning the html is the option that is most honest to the
original data, while still being easy to process. Sometimes the
formatting in the description field is more complex than just simple
links.

Given that the use cases of showing data to the user and having metadata
that is easy for computers to process are slightly different, perhaps
it makes sense to have two different modules: one that returns html
(and human-formatted things for timestamps, etc.), and another that
returns more machine-oriented data (including perhaps the version of
the description tag with all html stripped out).

--bawolff


Re: [Wikitech-l] [RFC]: Clean URLs- dropping /wiki/ and /w/index.php?title=..

2013-09-16 Thread Brian Wolff
On 2013-09-16 7:12 PM, Gabriel Wicke gwi...@wikimedia.org wrote:

 Hi,

 while tinkering with a RESTful content API I was reminded of an old pet
 peeve of mine: The URLs we use in Wikimedia projects are relatively long
 and ugly. I believe that we now have the ability to clean this up if we
 want to.

 It would be nice to

 * drop the /wiki/ prefix
   https://en.wikipedia.org/Foo instead of
   https://en.wikipedia.org/wiki/Foo

 * use simple action urls
   https://en.wikipedia.org/Foo?action=history instead of
   https://en.wikipedia.org/w/index.php?title=Fooaction=history

 The details of this proposal are discussed in the following RFC:



 I'm looking forward to your input!

 Gabriel


While I'm not particularly fond of this idea (probably because I'm stuck in
my ways more than anything else), I do think that making
en.wikipedia.org/foo an instant http redirect instead of the "did you
mean"/redirect-in-5-seconds message we currently have might make sense.

Additionally, there are some security issues in ie6 when doing foo?action=raw
if I recall correctly.

-bawolff

Re: [Wikitech-l] [Release workflow recommendation] A public releases JSON file.

2014-09-18 Thread Brian Wolff
On 9/18/14, Daniel Friesen dan...@nadir-seen-fire.com wrote:
 Every once in awhile I've found an idea for a service which would for
 one reason or another need to know what releases of MediaWiki exist and
 which ones are obsolete.

 As far as I know, we don't have any sort of API or machine readable
 metadata at all declaring this information for services to use.

Once upon a time we did:
https://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/MWReleases/?pathrev=81635
(I believe that was actually deployed around 1.17-ish).

I guess you could still just fetch
https://www.mediawiki.org/wiki/Template:MW_stable_release_number to
get the info, but having a json file somewhere sounds like a good idea,
provided there are actually people wanting to use it and it wouldn't
inconvenience whoever is charged with updating it too much.
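
Just to make the idea concrete, such a file could be as simple as something
like this (a hypothetical structure with illustrative version numbers, not
an existing file or agreed format):

{
    "stable": "1.23.4",
    "legacy": "1.22.11",
    "lts": "1.19.19",
    "obsolete": [ "1.18", "1.17", "1.16" ]
}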

--bawolff


Re: [Wikitech-l] Come develop Video, *the* video embedding extension for MediaWiki!

2014-09-23 Thread Brian Wolff
On Sep 23, 2014 8:41 AM, Jack Phoenix j...@countervandalism.net wrote:

 tl,dr: MediaWiki needs a more human-friendly interface for using videos in
 wiki pages. https://www.mediawiki.org/wiki/Extension:Video significantly
 improves the video experience in MediaWiki. The extension is not
 feature-complete yet, but *you* can help!

 Now for the longer version:
 Basically all video player extensions for MediaWiki are parser tags, and
as
 such, they're not the easiest to use, and some of the less popular ones
 might have security issues and whatnot, given the lack of sufficient
 attention from skilled developers. It's not obvious to the layperson that
 <youtube>oUCEN-XvC7g</youtube> renders the YouTube video "First hands-on
 with the Nokia X family", published by Nokia [1]. Better yet, choosing
 "Embed" on YouTube gives you the following code snippet, which --
 obviously -- doesn't render the video on an average MediaWiki
 installation: <iframe width="560" height="315"
 src="//www.youtube.com/embed/oUCEN-XvC7g" frameborder="0"
 allowfullscreen></iframe>

 David Pean (of ArmchairGM/social tools fame [2]) identified this problem
 back in 2007 and he wrote the Video extension to solve the problem.
Despite
 being enabled on ArmchairGM until AGM was migrated to the standard Wikia
 codebase (in late 2010/early 2011 [3]), this great extension was
 unfortunately never quite finished, and while the basic concept mostly
 works, plenty of areas could use developer attention to make this *the*
 best video embedding extension out there for MediaWiki.
 Unlike your average parser hook for embedding videos, the Video extension
 adds a new Video: namespace and two special pages for handling videos.
 Videos are added via Special:AddVideo, which doesn't actually upload the
 underlying .flv/.mpg/.avi/.mp4/.whatever, but rather stores some metadata
 about the video and uploader with the unique, user-supplied name.
Therefore
 after adding the video to the wiki via Special:AddVideo, [1] could be
 embedded via syntax such as [[Video:First hands-on with the Nokia X
 family|300px]] on a wiki page.

 In late 2011 I published a cleaned-up version of the Video extension, and
 John Du Hart signficantly improved the extension's architecture and
reduced
 code duplication. Despite this, the extension received basically only
minor
 maintenance commits until today/yesterday [4], when I got rid of some
 legacy code and further improved the video undeletion workflow [5].
Certain
 key elements are nevertheless missing from the extension's current
 implementation and I know I'm not able to code all of these on my own,
 which is precisely why I'm asking you to consider helping out with this
 project.
 Things such as Special:WhatLinksHere support for videos, better undeletion
 code (the current implementation is a very dirty core hack that does the
 job, but it's not what we want to aim for in the long run), more i18n,
 support for *your* favorite video service...and of course, bug fixes such
 as having the relevant caches correctly purged when a video is deleted so
 that deleted videos no longer show up on pages that embed them, for
example.

 Some of the related, FOSS-licensed extensions from which code could be
 borrowed include:
 * WikiVid [6], a similar special page based approach to video embedding,
 written by profilic MediaWiki developer Daniel Friesen (Dantman); never
 finished, supports embedding via wiki link syntax and has some code
related
 to tracking which pages use which videos (the download link doesn't appear
 to be working, but I have a copy of the source code if there's ever a need
 for that)
 * WikiaVideo [7], Wikia's older video extension which was at some point
 enabled on all Wikia wikis (before being deprecated in favor of newer
video
 extensions). It seemed to have been based on David's Video extension,
 though it required some core hacks [8] and whatnot. I'm not sure when it
 got removed from the repository and I was too lazy to dig up the precise
 date and/or commit, but the linked version should give you a general idea
 of how the extension worked. Unlike David's Video extension, this
extension
 required no custom database tables, but it instead reused core tables such
 as filearchive, image or oldimage for storing information about the
videos.
 An interesting approach, but nevertheless not the one I'd have gone with.
 Some parts of WikiaVideo have already been incorporated into Video, such
as
 Hulu provider code (/extensions/Video/providers/HuluVideo.php) or some of
 the Special:Undelete integration code
 (/extensions/Video/VideoPageArchive.php).

 Let's put the media back to MediaWiki!

 [1] https://www.youtube.com/watch?v=oUCEN-XvC7g
 [2] https://www.mediawiki.org/wiki/Social_tools
 [3] http://wikiindex.org/ArmchairGM
 [4] depending on your timezone, the position of the stars, and other
 factors to take into consideration when dealing with dates and times
 [5]


Re: [Wikitech-l] git review problems

2014-09-23 Thread Brian Wolff
On 9/23/14, Merlijn van Deen valhall...@arctus.nl wrote:
 On 23 Sep 2014 02:34, Isarra Yos zhoris...@gmail.com wrote:

 Apparently I spoke too soon - authentication fails with https too. So it
 really is unusable.


 Please note that the https password is *not* your normal wikitech password,
 but rather an application-specific password; you can find it under
 https://gerrit.wikimedia.org/r/#/settings/http-password .

Oh cool. I always assumed that access via https password was just
totally broken.


Re: [Wikitech-l] MediaWiki Security and Maintenance Releases: 1.19.19, 1.22.11 and 1.23.4

2014-09-26 Thread Brian Wolff
On 9/26/14, David Gerard dger...@gmail.com wrote:
 On 24 September 2014 23:28, Markus Glaser gla...@hallowelt.biz wrote:

 Patch to previous version (1.19.18):
 https://releases.wikimedia.org/mediawiki/1.19/mediawiki-1.19.19.patch.gz



 So I downloaded and applied this. gunzipped it, got this:

 -rw-r--r--  1 root root  13663 Sep 24 21:35 mediawiki-1.19.19.patch

 Applied it, and got this (on two different wikis):

 # patch -p1 --dry-run < mediawiki-1.19.19.patch
 patching file includes/DefaultSettings.php
 patching file includes/Sanitizer.php
 patching file includes/upload/UploadBase.php
 patching file includes/XmlTypeCheck.php
 Hunk #1 FAILED at 20.
 1 out of 4 hunks FAILED -- saving rejects to file
 includes/XmlTypeCheck.php.rej
 patching file RELEASE-NOTES-1.19
 patching file tests/phpunit/includes/upload/UploadTest.php

 The wikis are 1.19.18; I forget what I installed them as, but I've
 been applying the differential patches as they came out. This is the
 first one I've had the slightest hiccup with.

 So is it just me?


 - d.



I just tried the following:

bawolff@Bawolff-L:/var/www/w/git/includes$ git checkout 1.19.18
Checking out files: 100% (4472/4472), done.
Note: checking out '1.19.18'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at a107915... Updated release notes and version number to
MediaWiki 1.19.18
bawolff@Bawolff-L:/var/www/w/git/includes$ cd ../
bawolff@Bawolff-L:/var/www/w/git$ patch -p1 < ~/mediawiki-1.19.19.patch
patching file includes/DefaultSettings.php
patching file includes/Sanitizer.php
patching file includes/upload/UploadBase.php
patching file includes/XmlTypeCheck.php
patching file RELEASE-NOTES-1.19
patching file tests/phpunit/includes/upload/UploadTest.php

-

So provided that the 1.19.18 tarball/patch gets things up to precisely
where git has 1.19.18 tagged (a107915), the patch seems to work
fine.

--bawolff


Re: [Wikitech-l] Outreach Program for Women/Round 9

2014-09-29 Thread Brian Wolff
On 9/29/14, Roxana Necula necula.roxan...@gmail.com wrote:
 Hello,

 My name is Roxana and I am an engineering student at the Polytechnic
 University of Bucharest, Romania.
 I would like to be part of MediaWiki open-source community and participate
 in the Outreach Program for Women round 9.
 The project that interests me is Wikipedia article translation metrics
 https://www.mediawiki.org/wiki/FOSS_Outreach_Program_for_Women/Round_9#Wikipedia_article_translation_metrics
 But before diving into the code and do the suggested micro-task, I first
 wanted to fix some annoying little bugs, starting with bug #25163
 https://bugzilla.wikimedia.org/show_bug.cgi?id=25163. So far everything
 is working fine in my local development environment, but I am still trying
 to familiarize myself with the code review / patching part.

 So, to wrap things up, my question is if I am heading towards the right
 direction, and if not, what advice do you have.

 Thank you,
 Roxana.
 https://www.mediawiki.org/wiki/FOSS_Outreach_Program_for_Women/Round_9#Wikipedia_article_translation_metrics


Hi Roxana, thanks for your interest.

It does sound like you're heading in the right direction. Being able
to fix a small bug (any bug) is one of the most important steps for
GSOC/OPW potentials, as it demonstrates your interest in MediaWiki.
Submitting to code review (and gerrit in general) can be a little
confusing at first. Don't worry too much if you're not 100% sure
you're submitting something correctly; everything is undo-able, so if
you accidentally submit something incorrectly, it's very easy to fix.

The general process for submitting something (via git) is:
git checkout master
git pull
git checkout -b topicOfPatch
<edit various files here>
git commit -a
git show # This just shows what your patch looks like so you can check it
git review

There are pages on mediawiki.org that explain this in much more detail.
If you run into any trouble (Or if you just want to talk to MediaWiki
people) you can always get help with submitting patches in #mediawiki
or #wikimedia-dev irc channels on irc.freenode.net. I strongly
encourage you to try out irc, as it can really help to have real time
communication when learning things.

I also would encourage you to discuss the project you want to do with
the people offering to mentor it (For the one you linked to, that
would be Amir, who uses the nickname aharoni on irc)

--bawolff


Re: [Wikitech-l] Tor and Anonymous Users (I know, we've had this discussion a million times)

2014-09-30 Thread Brian Wolff
On 9/30/14, Derric Atzrott datzr...@alizeepathology.com wrote:
 Alright, this is a long email, and it acts to basically summarise all of the
 discussions that have already happened on this topic.  I'll be posting a
 copy
 of it to Mediawiki.org as well so that it will be easier to find out about
 what has already been proposed in the future.

 There is a policy side to this, Meta has the No open proxies policy, which
 would need to be changed, but I doubt that such policies will be changed
 unless those of us on this list can come up with a good way to allow Tor
 users
 to edit.  If we can come up with a way that solves most of the problems the
 community has, then I think there is a good chance that this policy can be
 changed.


I'd like to add an idea I've been thinking about to make TOR more acceptable.

A big part of the problem is that there are hundreds (thousands?) of
exit nodes, so if someone is being bad, they just have to wait 5
minutes to get a new one, making it very hard to block them.

So what we could do is map all tor connections to appear (to MW) as
if they are coming from a few private IP addresses. This way it's easy
to block temporarily (in case a whole slew of vandalism comes in), and
the political decision on whether to block or not becomes a local
problem (the best kind of solution to a problem is the type that makes
it somebody else's problem ;) I would personally hope that admins
would only give short-term blocks to such an address during waves of
vandalism, but ultimately it would be up to them.

To be explicit, the potential idea is as follows:
*User accesses via tor
*MediaWiki sees it's a tor request
*Try to do limited browser fingerprinting, to perhaps mitigate the
effect of an unclued user not using tor browser being bad and ruining it
for everyone. Say, take a hash of the user-agent and various accept
headers, and turn it into a number between 1 and 16.
*Make MW think the IP is 172.16.0.<number from previous step>

Then all the tor edits are all together, and easy to notice if
somebody is abusing them, and easy for a local admin to block all at
once if need be.
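
A minimal sketch of that mapping (a hypothetical helper, not an existing
MediaWiki interface; the header set and bucket count are just the ones
suggested above):

function synthesizeTorAddress( $userAgent, $accept, $acceptLanguage ) {
    // Hash a few request headers and reduce to a bucket between 1 and 16.
    $hash = sha1( $userAgent . '|' . $accept . '|' . $acceptLanguage );
    $bucket = ( hexdec( substr( $hash, 0, 6 ) ) % 16 ) + 1;
    // This private address is what MediaWiki would see instead of the exit node IP.
    return '172.16.0.' . $bucket;
}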

This would also make most of the rate limiting apply against all
people accessing via tor instead of doing rate limiting per exit node,
which is probably a good thing, and would prevent repetitive abuse,
people registering 10 billion accounts, etc. If we did this, we may
also want to make pretty much every action trigger a captcha for those
addresses (perhaps even if you are logged in from those addresses),
instead of the current lax captcha triggering (On the bright side, our
captchas are actually readable by people, unlike say cloudflare's
(recaptcha) which I can't make heads or tails of).

If there are further concerns about potential abuse, we could tag all
edits coming from TOR (including if user is logged in) with an edit
tag of tor (Although that might be in violation of privacy policy by
exposing how a logged in user is accessing the site).

Thoughts? Would this actually make TOR be acceptable to the Wikipedians?

--bawolff


Re: [Wikitech-l] Tor and Anonymous Users (I know, we've had this discussion a million times)

2014-09-30 Thread Brian Wolff

 We need to transition away from a framework where IP addresses are our only
 means to block problematic editors and towards a framework where we can do
 so via other less intrusive means.


And use what instead? Identities based on proof of possession of a
phone number? Surety bonds paid in bitcoin? Faxing a driver's license
to the foundation? PKI? A web-of-trust system where existing wikipedians
can invite people in?

As Tyler said, it is all about the collateral. While not using IPs
is great in principle, I'm not seeing anything equivalent to IP
addresses that we could use instead.

--bawolff

p.s. That Nymble thing is cool.


Re: [Wikitech-l] Wrapping signatures with a span for discoverability

2014-09-30 Thread Brian Wolff
On 9/30/14, Brion Vibber bvib...@wikimedia.org wrote:
 Some folks in #wikimedia-parsoid are still real excited about the idea
 though, so a couple more notes should people decide they like it anyway. :)

 * Consider either a wikitext wrapper like {{#sig:Username}} (my preference)
 or a markup tag like <sig>Username</sig> (Gabriel's preference I think?) to
 go in the wikitext; this will make the resulting sigs look less crufty in
 talk and vote pages (which do still exist, alas!)

 * Use a span as the actual HTML rendering that can be parsoid-friendly and
 thus VisualEditor-friendly.

 * Consider though whether the HTML should be spoofable and what happens
 if you do.

 * Consider that old revisions and archived talk pages will not have this
 markup, so there could be inconsistency. Beware what you use it for etc.

 -- brion

 On Tue, Sep 30, 2014 at 2:28 PM, Brion Vibber bvib...@wikimedia.org wrote:

 Please don't; signatures belong as a feature of the discussion and voting
 systems and don't belong in wikitext. They're crufty enough as is and I'd
 recommend against making them cruftier.

 -- brion

 On Tue, Sep 30, 2014 at 1:57 PM, Erik Bernhardson 
 ebernhard...@wikimedia.org wrote:

 There is currently a patch in gerrit,
 https://gerrit.wikimedia.org/r/#/c/130094/ , that has been hanging around
 for a few months.  To me it seems like an easy patch with some obvious
 benefits.

 JackMcbarn suggested this might need wider discussion/notice so putting
 it
 up here to get a little more visibility.

 Erik B.





Would this mean that if people had a fancy sig, and they changed it,
it would automatically update everywhere with this magic tag instead
of just applying to new signatures? (Which might be cool.)

Downside to that: you might have some tricky issues where people change
their sig after the fact to be something malicious (for some
definition of malicious), and then all the old sigs change without an
edit to track it, generally becoming a vehicle for mass vandalism.
(Didn't that used to be an issue on /. ?)

--bawolff


Re: [Wikitech-l] Tor and Anonymous Users (I know, we've had this discussion a million times)

2014-10-01 Thread Brian Wolff
On Oct 1, 2014 10:55 AM, Risker risker...@gmail.com wrote:

 This is something that has to be discussed *on the projects themselves*,
 not on mailing lists that have (comparatively) very low participation by
 active editors.

Unless people want to trial it on mw.org (assuming there is dev buy-in; not
sure we are there yet).

 There also needs to be a good answer to the attribution problem that has
 long been identified as a secondary concern related to Tor and other proxy
 systems.  The absence of a good answer to this issue may be sufficient in
 itself to derail any proposed trial.

Which problem is that?


 Not saying a trial can't happenjust making it clear that it's not
 something that is within the purview of developers (volunteer or staff)
 because the blocking of Tor has always been directly linked to behaviour
 and core policy, not to technical issues.

I agree that any such trial should have local community buy-in.

--bawolff

Re: [Wikitech-l] Tor and Anonymous Users (I know, we've had this discussion a million times)

2014-10-01 Thread Brian Wolff
On Oct 1, 2014 11:40 AM, Brad Jorsch (Anomie) bjor...@wikimedia.org
wrote:

 On Wed, Oct 1, 2014 at 10:29 AM, Brian Wolff bawo...@gmail.com wrote:

  On Oct 1, 2014 10:55 AM, Risker risker...@gmail.com wrote:
  
   This is something that has to be discussed *on the projects
themselves*,
   not on mailing lists that have (comparatively) very low participation
by
   active editors.
 
  Unless people want to trial on mw.org (assuming there is dev buy in, not
  sure we are there yet)
 

 Does mw.org receive the level of vandalism and other unhelpful edits
(where
 people would like to use Tor to avoid IP blocking in making those edits)
 that it would make for a useful test?

If we are testing something potentially very disruptive, no harm starting
small. At the very least it would show if we could enable tor on mw.org.
The results could help decide if further testing on more real wikis is
justified.


There also needs to be a good answer to the attribution problem
that
  has
   long been identified as a secondary concern related to Tor and other
  proxy
   systems.  The absence of a good answer to this issue may be
sufficient in
   itself to derail any proposed trial.
 
  Which problem is that?
 

 If I understand it correctly, right now we attribute edits made without an
 account to the IP address. Allowing edits via Tor should probably not be
 attributing such edits to the exit node's IP.


This quite frankly seems like a contrived problem. A random (normal) IP
address hardly associates an edit with a person unless you steal an ISP's
records. Wait a year and it would probably be impossible to figure out who
owned some random dynamic IP address no matter how hard you tried. I don't
think attributing edits to an exit node introduces any new attribution
issues that are not already present.

--bawolff

Re: [Wikitech-l] Tor and Anonymous Users (I know, we've had this discussion a million times)

2014-10-01 Thread Brian Wolff


 I wish it was a contrived problem.  However, this is the conceit by which
 the edits are attributed for licensing purposes, and it's a non-trivial
 matter.  While I'm fully supportive of finding another way to do this, it
 is a fundamental issue that would require fairly extensive
 legal consultation to change, given that we've been using IP address as
 assigned to a specific individual as the licensee for...what, almost 14
 years?

 We know that Tor exit nodes are (by definition) not IP addresses assigned
 to the contributor, and there is no reasonable prospect of tracing back to
 the original IP address (unlike many other anonymising proxies).  Thus the
 attribution issue.

Realistically there is no reasonable prospect of tracing back an
individual IP to a real person 80% of the time without a court order,
which is extremely unlikely to ever happen. Even then you can only
really link the IP to whoever is paying the bill, which is only weakly
and circumstantially related to who really owns the edit.

If we're going to consider the theoretical possibility that we
might be able to link back an IP to a person with certainty, we might
as well start considering that we might be able to do the same if we
get everyone in the tor circuit to collude...

--bawolff


Re: [Wikitech-l] Tor and Anonymous Users (I know, we've had this discussion a million times)

2014-10-01 Thread Brian Wolff
On Oct 1, 2014 3:56 PM, Derric Atzrott datzr...@alizeepathology.com
wrote:

 Another idea for a potential technical solution, this one provided
 by the user Mirimir on the Tor mailing list.  I thought this was
 actually a pretty good idea.

  Wikimedia could authenticate users with GnuPG keys. As part of the
  process of creating a new account, Wikimedia could randomly specify the
  key ID (or even a longer piece of the fingerprint) of the key that the
  user needs to generate. Generating the key would require arbitrarily
  great effort, but would impose negligible cost on Wikimedia or users
  during subsequent use. Although there's nothing special about such GnuPG
  keys as proof of work, they're more generally useful.

 As a proof of work I think it works out pretty well.  The cost of creating
 a key with a given fingerprint is non-trivial, but low enough that
 someone wishing to create an account to edit might well go through with
 it if they knew it would only be a one-time thing.
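
As a minimal sketch of the verification side of such a scheme (using PHP's
gnupg extension; the challenge format and suffix length are assumptions for
illustration):

// At signup, the server issues a random fingerprint suffix that the user's
// newly generated key must end with. Each extra hex digit roughly
// multiplies the expected key-generation work by 16.
$requiredSuffix = strtoupper( bin2hex( openssl_random_pseudo_bytes( 4 ) ) );

// Later, when the user submits their ASCII-armoured public key
// ($armoredKey is the key text pasted into the signup form):
$gpg = new gnupg();
$info = $gpg->import( $armoredKey );
$fingerprint = strtoupper( $info['fingerprint'] );
if ( substr( $fingerprint, -strlen( $requiredSuffix ) ) === $requiredSuffix ) {
    // Proof of work accepted; allow the Tor account creation to proceed.
}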

 This doesn't completely eliminate the issue of socks, but honestly if we
 make the key generation time reasonably long, it would probably deter
 most socks as they might as well just drive to the nearest Starbucks.

 Someone else on the Tor mailing list suggested that we basically relax
 IPBE, which, while not on topic for this list, I thought I'd mention
 just because it has been brought up.  They actually basically
 described our current system, except with the getting-IPBE stage being
 a lot easier.

 The following was also pointed out to me:

  [I]t's also trivial to evade using proxies, with or without Tor.
  Blocking Tor (or even all known proxies) only stops the clueless.
  Anyone serious about evading a block could just use a private proxy
  on AWS (via Tor). [snip] The bottom line is that blocking Tor harms
  numerous innocent users, and by no means excludes seriously malicious
  users.

 I did respond to this to explain our concerns, which is what netted
 the GPG idea.  Does anyone see any glaringly obvious problems with
 requiring an easily blockable and difficult to create proof of work
 to edit via Tor?

 Thank you,
 Derric Atzrott



The problem with proof of work things is that they kind of have the wrong
kind of scarcity for this problem.

*Someone legit wants to edit: it takes them hours to be able to. (Which is
not ideal.)
*Someone wants to abuse the system: they spend a couple of months beforehand
generating the work offline, then use it all at once for a thousand-strong
sock puppet army. (Which makes the system ineffective at preventing abuse.)

--bawolff

Re: [Wikitech-l] MediaWiki Security and Maintenance Releases: 1.19.20, 1.22.12 and 1.23.5

2014-10-01 Thread Brian Wolff
On 10/1/14, Markus Glaser gla...@hallowelt.biz wrote:
 Hello everyone,

 I would like to announce the release of MediaWiki 1.19.20, 1.22.12 and
 1.23.5. This is a security release. Download links are given at the end of
 this email.

 == Security ==
 * (bug 70672) SECURITY: OutputPage: Remove separation of css and js module
 allowance.


Hmm. Lots of third parties use CSS in MediaWiki:Common.css to make
significant theming customizations without making a real skin.
Perhaps the release notes should mention that users who do this will
have their login page suddenly look out of place.

Given that this change really only makes it mildly harder for a novice
attacker to do something evil, and there exist potential use cases it
breaks, perhaps it should be behind a config variable defaulting to the
more secure setting. (A moderately skilled attacker should easily be
able to think of ways around this to steal users' passwords. Once an
attacker can get javascript inserted, it's pretty much game over.
Trying to limit the damage of a malicious user modifying site js is
like trying to unbreak an egg. Once the egg is broken, well, you know
the story about humpty dumpty.)
--bawolff


Re: [Wikitech-l] Tor and Anonymous Users (I know, we've had this discussion a million times)

2014-10-02 Thread Brian Wolff
On 10/2/14, Kevin Wayne Williams kwwilli...@kwwilliams.com wrote:
 Derric Atzrott wrote on 2014/09/30 6:08:
 Hello everyone,
 [snip]
 There must be a way that we can allow users to work from Tor.
 [snip more]

 I think the first step is to work harder to block devices, not IP
 addresses. One jerk with a cell phone cycles through so many IP
 addresses so quickly in such active ranges that our current protection
 techniques are useless. Any child can figure out how to pull his cable
 modem out of the wall and plug it back in.

 Focusing on what signature we can obtain from (or plant on) the device
 and how to make that signature available to and manageable by admins is
 the key. Maybe we require a WMF supplied app before one can edit from a
 mobile device. Maybe we plant cookies on every machine that edits
 Wikipedia to allow us to track who's using the machine and block access
 to anyone that won't permit the cookies to be stored. There are probably
 other techniques. The thing to remember is that the vast majority of our
 sockpuppeteers are actually fairly stupid and the ones that aren't will
 make their way past any technique short of retina scanning. It doesn't
 matter whether a blocking technique allows a tech-savvy user to bypass
 it somehow. Anything is better than a system that anyone can bypass by
 turning his cable modem off and on.

 Once we have a system that allows us to block individual devices
 reasonably effectively, it won't matter whether those people are using
 Tor to get to us or not.

 KWW



So all we need is either:
A) Magic browser fingerprinting with no (or almost no) false positives
when used against everyone in the world. With the fingerprinting code
having at most access to javascript to run code (but preferably not
even needing that) and it has to be robust in the face of the user
being able to maliciously modify the code as they please.
B) tamper proof modules inside every device to uniquely identify it.
(Can we say police state?)

Arguably those aren't making the assumption that [users] are actually
fairly stupid. But even a simplified version of those requirements,
such as "must block on a per-device basis" and "must involve more work than
unplugging a cable modem to get unblocked", delves into the territory of
absurdly hard.

Although perhaps there is some subset of the population we could use
additional methods on. Cookies are pretty useless (if you think
getting a new IP is easy, you should see what it takes to delete a
cookie). Supercookies (e.g. Evercookie) might be more useful, but
many people view such things as evil. Certain browsers might have a
distinctive enough fingerprint to block based on that, but I doubt
we'd ever be able to do that for all browsers. These things are also
likely to be considered security vulnerabilities, so probably not
something to be relied on over the long term as people fix the issues
that allow people to be tracked this way.


 Once we have a system that allows us to block individual devices
 reasonably effectively, it won't matter whether those people are using
 Tor to get to us or not

If you can find a way to link a Tor user to the device they are using,
then you have essentially broken Tor, which is not an easy feat.

--bawolff

p.s. Obligatory xkcd https://xkcd.com/1425/

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tor exit node rejects port 443 of WM, but it is disabled for editing

2014-10-02 Thread Brian Wolff
On 10/2/14, Tyler Romeo tylerro...@gmail.com wrote:
 Well you can also edit from port 80 (unless we deployed site-wide SSL
 without me knowing).


Not to mention 198.35.26.96 and 2620:0:863:ed1a::1.

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] {{TemplatesThatWorkOnMobile}}

2014-10-07 Thread Brian Wolff
On Oct 7, 2014 10:03 PM, Dan Garry dga...@wikimedia.org wrote:

 Jon,

 Regarding editing specific templates, I'll echo Brion here. If a template
 is causing difficulties for your work and you feel like you know enough to
 fix it, then fix it! That's the wiki way. If you don't have the rights to
 edit the template (e.g. because it's protected), then speak to Tomasz and
 he will try to get you the rights that you need.

Umm, is that really a good idea? Protected templates are usually protected for a
reason (at least on projects that I've worked on; can't speak for
enwiki). {{Editprotected}} seems like a better option for those, even if
simply for political reasons, and if you find yourself making a lot of such
requests, why not ask the community how it feels about giving out advanced
rights.

Imho, WMF-granted advanced rights should be reserved for more serious
issues than purely aesthetic ones.

I agree with the sentiment of just being bold for unprotected templates.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Announcement: Bartosz Dziewoński joins Wikimedia as a Features Contractor

2014-10-08 Thread Brian Wolff
On 10/7/14, James Forrester jforres...@wikimedia.org wrote:
 Hello all,

 It is my great pleasure to announce that Bartosz Dziewoński[0] has joined
 the Wikimedia Foundation as a contractor in Features Engineering.


 Bartosz has worked as a volunteer developer for several years, specialising
 in front-end improvements to skins and core functionality, helping gadget
 authors, skin developers and extension maintainers alike, as well as being
 an active member of the Polish Wikipedia community. Bartosz is currently a
 student in Poland, studying Computer Science at the AGH University of
 Science and Technology in Kraków[1].

 Bartosz will be working as part of the Editing team[2] to continue our work
 extending the various editing tools, from WikiEditor and Poem to
 VisualEditor and Cite, and on improving the front-end infrastructure for
 skins, extensions and gadgets, as well as fixes to and support for
 MediaWiki core and other extensions as needed.

 Please join me in welcoming Bartosz to the Wikimedia Foundation.

 [0]: [[mw:User:Matma_Rex https://www.mediawiki.org/wiki/User:Matma_Rex]],
 now also [[mw:User:Bartosz_Dziewoński_(WMF)
 https://www.mediawiki.org/wiki/User:Bartosz_Dziewo%C5%84ski_(WMF)]]

 [1]: [[en:AGH University of Science and Technology
 https://en.wikipedia.org/wiki/AGH_University_of_Science_and_Technology]]
 [2]: [[mw:Editing https://www.mediawiki.org/wiki/Editing]]

 Yours,

 --
 James D. Forrester
 Product Manager, Editing
 Wikimedia Foundation, Inc.

 jforres...@wikimedia.org | @jdforrester
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Congrats!

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tech- and Tools-related IEG proposals

2014-10-09 Thread Brian Wolff
On 10/10/14, Patrick Earley pear...@wikimedia.org wrote:
 *(cross-posted to wikimedia-l)*

 Hello all,

 For our second round of Individual Engagement Grant applications in 2014,
 we have a great crop of ideas. Wikimedians have dropped by to offer
 feedback, support, or expertise to some of the proposals, but many
 proposals have not been reviewed by community members.  Over half of these
 proposals involve new tools, new uses of our databases, or have other
 technical elements. Some will be hosted on Labs if approved.

 Members of this list may have key insights for our proposers.  If there is
 an open proposal that interests you, that you have concerns about, or that
 involves an area where you have experience or expertise, please drop by the
 proposal page to share your views.  This will help the proposers better
 hone their strategies, and will assist the IEG Committee in evaluating some
 of these fresh new ideas to improve the Wikimedia projects.  Working with
 an IEG proposal may even inspire you to serve as a project advisor, or to
 propose one of your own for the next cycle!  Comments are requested until
 October 20th.

 Tools IEG proposals:


- IEG/Semi-automatically generate Categories for some small-scale 
medium-scale Wikis

 https://meta.wikimedia.org/wiki/Grants:IEG/Semi-automatically_generate_Categories_for_some_small-scale_%26_medium-scale_Wikis
- IEG/WikiBrainTools
https://meta.wikimedia.org/wiki/Grants:IEG/WikiBrainTools
- IEG/Dedicated Programming Compiler

 https://meta.wikimedia.org/wiki/Grants:IEG/Dedicated_Programming_Compiler
- IEG/Gamified Microcontributions
https://meta.wikimedia.org/wiki/Grants:IEG/Gamified_Microcontributions
- IEG/Enhance Proofreading for Dutch

 https://meta.wikimedia.org/wiki/Grants:IEG/Enhance_Proofreading_for_Dutch
- IEG/Tamil OCR to recognize content from printed books

 https://meta.wikimedia.org/wiki/Grants:IEG/Tamil_OCR_to_recognize_content_from_printed_books
- IEG/Easy Micro Contributions for Wiki Source

 https://meta.wikimedia.org/wiki/Grants:IEG/Easy_Micro_Contributions_for_Wiki_Source
- IEG/Citation data acquisition framework

 https://meta.wikimedia.org/wiki/Grants:IEG/Citation_data_acquisition_framework
- IEG/Global Watchlist
https://meta.wikimedia.org/wiki/Grants:IEG/Global_Watchlist
- IEG/Automated Notability Detection

 https://meta.wikimedia.org/wiki/Grants:IEG/Automated_Notability_Detection
- IEG/Piłsudski Institute of America GLAM-Wiki Scalable Archive Project

 https://meta.wikimedia.org/wiki/Grants:IEG/Pi%C5%82sudski_Institute_of_America_GLAM-Wiki_Scalable_Archive_Project
- IEG/Revision scoring as a service

 https://meta.wikimedia.org/wiki/Grants:IEG/Revision_scoring_as_a_service


 Full list:

- IEG Grants/Review
https://meta.wikimedia.org/wiki/Grants:IEG#ieg-reviewing

 Regards,



 wikitec...@wikipedia.org

 --
 Patrick Earley
 Community Advocate
 Wikimedia Foundation
 pear...@wikimedia.org
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

A lot of these proposals seem poorly written from the perspective of a
technical proposal. Many appear to be more like sales pitches intended
for a non-technical audience (which I suppose kind of makes sense: the
people who get lots of Wikimedians to endorse them win).

I'm generalizing here, as it seems there's a lot of variation, but
there's a lot of "what I am going to fix", not "how am I going to do
it". They mostly don't have mock-up screenshots for the ones that
propose new user-facing things, there is largely no schedule of
milestones, or even concrete minimum viable product specifications. If
they were GSoC proposals, they would largely be rejected GSoC
proposals.

For example, with
[[meta:Grants:IEG/Tamil_OCR_to_recognize_content_from_printed_books]]
you can't even tell that they intend to create a website instead of a
desktop app unless you read the talk page.

Second, it's hard to comment on the appropriateness of scope, since
there's not really any set criteria (that I've seen). In particular
it's unclear what is considered an appropriate asking amount for a
given amount of work. For example,
https://meta.wikimedia.org/wiki/Grants:IEG/Global_Watchlist asks for
$7000, which seems excessive to essentially make a user script that
has a for loop to get the user's watchlist on various wikis. That's
the sort of thing which I would expect to take about a week. A very
experienced developer might be able to pull it off in a day provided
the interface elements were minimalist. (Although that proposal has a
small little note about being able to mute/unmute (non-Flow) threads
on a per-thread basis, which, depending where you go with that, could
be the hardest aspect of the project.)

Similarly, people asking for thousands of dollars so they can get
computers to test the user script in different OS environments seems
like an odd 

Re: [Wikitech-l] Introduction for OPW (Collaborative spelling dictionary building tool)

2014-10-12 Thread Brian Wolff
On Oct 12, 2014 6:54 AM, Quim Gil q...@wikimedia.org wrote:

 Hi Ankita,

 On Fri, Oct 10, 2014 at 11:06 PM, Ankita Shukla ankitashukla...@gmail.com

 wrote:

  Hello everyone!
 
  I am Ankita Shukla and am a Computer Science and Engineering student
  pursuing
  the junior year of Bachelor of Technology.
 
  I am interested in working on the project: Collaborative spelling
  dictionary
  building tool, as given on the Featured Projects Section of MediaWiki
OPW
  page.
 


https://www.mediawiki.org/wiki/FOSS_Outreach_Program_for_Women/Round_9#Collaborative_spelling_dictionary_building_tool
 is mentored by Kartik and Amir (explicitly CCed above). There should have
 been a link to a Bugzilla report where you and other candidates interested
 could ask questions directly and discuss the project. I have created that
 report now:

 https://bugzilla.wikimedia.org/show_bug.cgi?id=71973

 Please follow up there.

 Let me also paste the description of that project idea, just in case other
 contributors in this list want to help defining it:

 There are extensive spelling dictionaries for the major languages of the
 world: English, Italian, French and some others; at various degrees of
 coverage, Mozilla has over a hundred
 https://addons.mozilla.org/firefox/dictionaries/,LibreOffice dozens
 https://wiki.documentfoundation.org/Language_support_of_LibreOffice.
They
 help make Wikipedia articles in these languages more readable and
 professional and provide an opportunity for participation in improving
 spelling. Many other languages, however, don’t have spelling dictionaries.
 One possible way to build good spelling dictionaries would be to employ
 crowdsourcing, and Wikipedia editors can be a good source for this, but
 this approach will also require a robust system in which language experts
 will be able to manage the submissions: accept, reject, filter and build
 new versions of the spelling dictionary upon them. This can be done as a
 MediaWiki extension integrated with VisualEditor, and possibly use
Wikidata
 as a backend.

- Skills: PHP, Web frontend. Bonus: Familiarity with VisualEditor and
Wikidata; experience in an existing dictionary-building community.
- Mentors: Amir Aharoni
https://wikimediafoundation.org/wiki/User:Aaharoni, Kartik Mistry
https://www.mediawiki.org/wiki/User:KartikMistry
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

How would such a thing interact with the Wiktionary community? Would it
harvest data from them or be a subproject somehow, or is it planned to be
entirely separate?

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Php files under the test directory

2014-10-12 Thread Brian Wolff
On Oct 12, 2014 10:13 AM, Divyanshi Kathuria divyanshikathu...@gmail.com
wrote:

 On Thu, Oct 09, 2014 at 09:54:03PM +0200, dan entous wrote:
  this would be the best way to achieve what's needed. in any case, i
believe this is the general idea antoine is getting at.

 Thank You so much for your help. I have tried to incorporate data
providers according to the changes you mentioned.
 Here is the file :
https://gist.github.com/divyanshikathuria/e5a09b50f65779f92c69
 Please see if it is correct and suggest the necessary changes.

 --
 Divyanshi Kathuria
 divyanshikathuria27.wordpress.com


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

No, that's not quite right (the indentation is wrong, and the @dataProvider
annotation needs to take the name of the function that is the provider, not
just "Provider"). I suggest you try looking at other code that uses
@dataProvider to find an example, or read through the section of the
PHPUnit docs on data providers.
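
For reference, here is a rough sketch of the usual shape (the class, method
and test data here are made up for illustration, not taken from your patch):

class SampleTest extends MediaWikiTestCase {
	public static function provideTitles() {
		// Each inner array is one set of arguments for the test method.
		return array(
			array( 'Main Page', true ),
			array( '', false ),
		);
	}

	/**
	 * @dataProvider provideTitles
	 */
	public function testTitleValidity( $input, $expected ) {
		$this->assertEquals( $expected, Title::newFromText( $input ) instanceof Title );
	}
}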

For this type of review-like question, it's probably better to ask on the
bug instead of the mailing list, unless no one is responding on the bug.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] FOSS OPW Mentor Contact

2014-10-16 Thread Brian Wolff
On Oct 16, 2014 7:02 PM, E.C Okpo eco...@gmail.com wrote:

 Hello, I am working on my application for the FOSS Outreach Program, but I
 am having some trouble getting in contact with the mentor for my chose
 project - Ori Livneh https://www.mediawiki.org/wiki/User:Ori.livneh. I
 have idled on IRC for quite a while trying to get in contact, but no luck.
 Would anyone know of an alternate means of contact?

 Thanks,
 Christy
 

https://www.mediawiki.org/wiki/FOSS_Outreach_Program_for_Women/Round_9#Wikimedia_Performance_portal
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Isn't there some giant meeting today and tomorrow of WMF platform people?
Maybe try again on Monday?

Otherwise I would suggest email. You can email him by using
Special:EmailUser on wiki, or find his email by looking it up in Git,
Gerrit, Bugzilla or old mailing list posts. (There is a good chance he may
even be reading this.)

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Phabricator for code review (defining the plan)

2014-10-19 Thread Brian Wolff
On Oct 19, 2014 11:52 AM, Rjd0060 rjd0060.w...@gmail.com wrote:

 On Sun, Oct 19, 2014 at 10:44 AM, MZMcBride z...@mzmcbride.com wrote:

  And, even though it should go without saying, Bugzilla will need to
remain
  online in a read-only format indefinitely post-migration.
 

 Why would this be necessary, assming everything is properly imported to
 Phab?

 Will *every* detail of a BZ ticket be moved?  Comments, attachments,
 history, etc?  If so, I wouldn't see a need to keep it laying around.  Is
 there one?

 --

 Ryan
 User:Rjd0060
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Well, there are a lot of links to it, first of all. And there is always the
possibility that something will be missed in the migration. Personally I
wouldn't call having a read-only Bugzilla stay up a hard requirement, but
it's something I would certainly want.

I still refer to Special:Code sometimes (most recently a couple of days ago)
even though we are long since migrated.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Looking for project status updates

2014-10-23 Thread Brian Wolff

 Terry wrote a piratey blog post about his departure here:
 http://terrychay.com/article/fair-winds-and-godspeed-me-hearties.shtml


Oh wow.

If Terry is still reading this list, I want to take this opportunity
to wish him the best of luck in whatever he has planned next.

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki Technical Debt

2014-10-23 Thread Brian Wolff

 Some more examples that might be useful in some form:
 - Global variable or function should never be used
 (This is probably the only really useful one, because removing global
 functions/variables would result in better testable code.)
 - PHP debug statements found

In maintenance scripts and things like eval.php that's fine. Using it
in the debug API format also seems fine, imo.

 - Logical operators should be avoided

I'll give it that one as legitimate.

 - sleep() should not be used

It's reasonable in a maintenance script, which is what was flagged.

 - Source code should not contain FIXME comments

Better than just not documenting things needing to be fixed. (However,
arguably the report is listing stuff that should be fixed, so it's
appropriate to list that; I just don't like the shoot-the-messenger
way it's phrased. Code marked FIXME is better than the same code with
no indication it needs to be fixed.)

Also, the time estimates to fix the issues it flagged (supposing they
are legit issues) seem rather high.

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Looking for project status updates

2014-10-23 Thread Brian Wolff
On 10/23/14, Dan Garry dga...@wikimedia.org wrote:
 Pine,

 Do you read the monthly engineering reports? They're useful to give you a
 high-level insight into the engineering efforts going on at the Wikimedia
 Foundation. For example, the September report is currently being written
 here:
 https://www.mediawiki.org/wiki/Wikimedia_Engineering/Report/2014/September

 Thanks,
 Dan


To be fair, he's asking about a team disbanded in October, and the
September report isn't even published yet. I love the monthly reports,
but they're perhaps not the most timely source of information.

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Requiring PHP = 5.3.3 for MediaWiki core

2014-10-23 Thread Brian Wolff
On 10/24/14, Daniel Friesen dan...@nadir-seen-fire.com wrote:
 On 2014-10-23 7:55 PM, MZMcBride wrote:
 Are there statistics about what versions of PHP exist in the wild among
 MediaWiki users or users of other large PHP applications (Drupal,
 WordPress, etc.)?
 https://wikiapiary.com/wiki/PHP_Versions
 https://wikiapiary.com/wiki/PHP_Versions/non-wmf

 ~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://danielfriesen.name/]


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Specifically,
https://wikiapiary.com/w/index.php?title=Special:SearchByPropertyoffset=0limit=500property=Has+PHP+Versionvalue=5.3.2
suggests there are currently about 489 such wikis that this
change could potentially affect (it's unclear how many of those are active
or how many of them use even remotely modern versions of MW. My
SMW-fu is not strong enough to figure out how to query that).

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How to add a new heading row in file history tables, server-side?

2014-10-26 Thread Brian Wolff
On 10/26/14, Ricordisamoa ricordisa...@openmailbox.org wrote:
 There is the ImagePageFileHistoryLine
 https://www.mediawiki.org/wiki/Manual:Hooks/ImagePageFileHistoryLine
 hook already, but it is for data rows only.
 Any help appreciated.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Well if you want to be really hacky, you can do something like:

function onImagePageFileHistoryLine( $imageHistoryList, $file, &$row, &$rowClass ) {
	if ( /* some condition that I want to insert a header here */ ) {
		$row .= '<th>header col 1</th><th>header col 2</th>...</tr><tr>';
	}
	return true;
}

Since the $row variable is considered raw HTML to insert between a
<tr>...</tr> pair.

Alternatively you could hook into ArticleFromTitle, replace ImagePage
with some subclass which overrides the imageHistory method, then possibly
make some different subclass of ImageHistoryPseudoPager which
overrides the getBody() method.

If neither of those two methods works for you, we could perhaps
introduce a new hook. However, I think I would recommend the
ArticleFromTitle / class-overriding approach if possible.
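
A rough sketch of what that second approach might look like (untested; the
subclass name is made up, and the method visibility may differ between
MediaWiki versions):

$wgHooks['ArticleFromTitle'][] = function ( $title, &$article ) {
	// Swap in our own page class for file description pages.
	if ( $title->getNamespace() == NS_FILE ) {
		$article = new MyImagePage( $title );
	}
	return true;
};

class MyImagePage extends ImagePage {
	protected function imageHistory() {
		// Build the file history yourself here, e.g. via a custom
		// subclass of ImageHistoryPseudoPager that overrides getBody(),
		// instead of (or in addition to) the default rendering.
		parent::imageHistory();
	}
}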

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikimedia-l] thank vs. like

2014-10-26 Thread Brian Wolff
On 10/26/14, Ricordisamoa ricordisa...@openmailbox.org wrote:
 Il 26/10/2014 20:45, Amir E. Aharoni ha scritto:
 In the Hebrew Wikipedia there's a discussion about the Thanks feature,
 which raises the following confusion among other things: Why does the
 person who is sending the thank-you gets a message saying $1 was notified
 that you liked his/her edit., and the person who receives the thank-you
 notification sees a message that uses the verb thank?

 The difference is in the original message in English, and I translated
 them
 accordingly, but I am wondering: Is this really good? Maybe both should
 use
 the same verb - thank?

 I can just send a Gerrit patch or open a bug, but it may be worth to
 discuss it a bit on the wide community level and not only with tech people
 :)

 --
 Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
 http://aharoni.wordpress.com
 ‪“We're living in pieces,
 I want to live in peace.” – T. Moore‬
 Let's use thank, please. We're not Farcebook.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

I'd recommend the verb appreciated.

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Obsolete MediaWiki extensions

2014-10-27 Thread Brian Wolff
If the old maintainer is still around, ask them if they are OK with you
taking over (just to avoid stepping on toes).

If they can't be reached, ask at
http://www.mediawiki.org/wiki/Gerrit/Project_ownership explaining why you
want to take over maintenance of the extension. It helps if you have
pre-existing commits to said extension you can point to (or even pending
commits), but generally the barrier should be pretty low for non-WMF-deployed
extensions that are clearly abandoned. (Imo.)

--bawolff

On Oct 27, 2014 1:58 PM, James Montalvo jamesmontal...@gmail.com wrote:

 If a project appears to be abandoned but is in Gerrit already, what is the
 process to become the maintainer?

 On Mon, Oct 27, 2014 at 11:38 AM, Ricordisamoa 
ricordisa...@openmailbox.org
  wrote:

  Il 27/10/2014 17:03, Siebrand Mazeland ha scritto:
 
   On Mon, Oct 27, 2014 at 8:53 AM, Ricordisamoa 
  ricordisa...@openmailbox.org
  wrote:
 
   Il 27/10/2014 16:38, Andre Klapper ha scritto:
 
   On Mon, 2014-10-27 at 16:08 +0100, Ricordisamoa wrote:
 
   There are currently several hundreds extensions hosted on SVN
  https://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/ and
  many
  of them are still working according to www.mediawiki.org.
  However, a quick glance shows that many of them are overlapping each
  other and likely to be unmaintained.
  It should be of interest to the MediaWiki Cooperation
  https://www.mediawiki.org/wiki/MediaWiki_Cooperation group to
merge
  extensions where appropriate and move them to git if still useful.
 
   If nobody volunteers to maintain them, what's the gain of
mass-moving?
 
  andre
 
   Of course they should be maintained!
  But if they're moved to Git, their i18n can be hosted on TranslateWiki
  and
  benefit from regular updates.
 
 
   Sometimes code, MediaWiki extensions that are no longer maintained
  included, and haven't been for years, should just rot and die.
  Unusable/unused code shouldn't be localised. No one should spend
valuable
  time getting code they don't want to use out of Subversion and into
  git/Gerrit. If someone REALLY has a use case for something that's
still in
  Subversion, they'll make themselves known.
 
  If you want to maintain a particular extension that is in Wikimedia's
  read-only Subversion, please request it to be moved to Gerrit, do the
work
  on it to bring it up to par with the current code of code, and update
the
  extension documentation page on MediaWiki.org. Please don't just dump
code
  from one place where it's not maintained into another place where it's
not
  maintained. Extension maintenance is not trivial, so you shouldn't
assume
  that someone will just volunteer to start maintaining tens or
hundreds
  of
  extensions no one worked on for years.
 
  Siebrand
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
  I think I was misunderstood. Most of them are just rubbish with no use
  cases, but I bet a few still have one.
  Unfortunately, if a user is running outdated code and can't just afford
to
  update it, they would stick with an outdated MediaWiki. This can block
  further uses of the software outside of the WMF. I keep seeing lots of
  similar cases in the Support desk https://www.mediawiki.org/
  wiki/Project:Support_desk.
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Including security fixes in MediaWiki

2014-11-02 Thread Brian Wolff
On Nov 1, 2014 8:52 PM, Mark A. Hershberger m...@nichework.com wrote:


 After some discussion in September, Quim created T480 in Phabricator[1].
 Markus polished up the Security Release section of the Release
 checklist[2] and we agreed to use it as the process for security
 releases from now on.

 Footnotes:
 [1]  https://phabricator.wikimedia.org/T480

 [2]
https://www.mediawiki.org/wiki/Release_checklist#Security_Release_.28minor_version_release.29

 --
 Mark A. Hershberger
 NicheWork LLC
 717-271-1084

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

What about marking the bugs as public? That is a step that is often missed
and should be done just prior to sending the release announcement.

From the list:
 Check for vulnerabilities

That could use clarification - does it mean check which branches need to be
patched? Does it mean verify that the exploit doesn't work on newly patched
branches? Or perhaps it could refer to some automated testing tool?

Given we want to minimize the time between a vulnerability being public and the
release, I'd recommend adding a step of "run unit tests locally in case they
fail", before making Jenkins do it publicly.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Our CAPTCHA is very unfriendly

2014-11-07 Thread Brian Wolff
On 11/7/14, Federico Leva (Nemo) nemow...@gmail.com wrote:
 +1 on Tim. FancyCaptcha is worse than useless.

 Nemo

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Literally an anti-captcha. Letting bots in and keeping humans out.

It's like security through obscurity, minus the obscurity bit, since it's
been publicly talked about how weak it is for at least 3 years now -
https://www.elie.net/go/p22a

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Feature request.

2014-11-08 Thread Brian Wolff
Honestly I don't think anyone's even tried to improve the conflict screen.
There's probably a lot of low-hanging fruit on the usability of edit
conflicts which could be pursued that has nothing to do with the hard,
real-time editing solutions (as cool as those are).

If someone is interested in trying to improve edit conflicts, I'd recommend
starting with:
*only showing relevant parts. If your conflict is in a section, and you
were doing a section edit, don't ask the user to resolve the entire page (this
is particularly painful on VP-type pages).
Furthermore: find some way to present only the conflicted lines (i.e. what
conflict markers show in a source control system) in a user-friendly way.

*better conflict resolution algorithms. This would require experimentation,
but I'm sure there exists something better for merging natural-language
documents than just shoving them to GNU diff3. Perhaps even just adding a
newline after every sentence (period) so that it merges on a more fine-grained
level would be appropriate (see the sketch below). I imagine there must be
papers written on this subject we could look to for advice.
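
(As a minimal sketch of that last idea, assuming a line-based merge helper
such as MediaWiki's wfMerge(), which shells out to diff3 - the sentence
splitting here is deliberately naive:)

function splitSentences( $text ) {
	// Put each sentence on its own line so diff3 merges at sentence granularity.
	return preg_replace( '/([.!?])\s+/', "$1\n", $text );
}

function mergeAtSentenceLevel( $base, $mine, $yours ) {
	$ok = wfMerge(
		splitSentences( $base ),
		splitSentences( $mine ),
		splitSentences( $yours ),
		$result
	);
	if ( !$ok ) {
		return false; // still a genuine conflict, fall back to the conflict screen
	}
	// Undo the artificial line breaks.
	return preg_replace( '/([.!?])\n/', '$1 ', $result );
}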

I'm unfamiliar with what WordPress does for this. Do you have a link
describing its process?

--bawolff
On Nov 8, 2014 4:31 PM, Nkansah Rexford nkansahrexf...@gmail.com wrote:

 One session I really looked forward to at the Wikimania was the one on
 Visual Editor and the talk on RealTime Editing (the one presented by
Erik).
 Of course, I enjoyed, seeing some of the nice future goals of how realtime
 editing could be possible, however  with very strong huddles to overcome.

 One slide pointed out the number of edit conflicts that happened in the
 month of July only:
 https://plus.google.com/107174506890941499078/posts/NCPzu4G5cbP

 There were *120k edit conflicts of about 23k registered user accounts*
 (anonymous conflicts might be higher or around this same value, or even
 less)

 The proposals and design concepts of how the concurrent editing could be
on
 Wikimedia projects were/are nice to see, and very hi-tech. However, I
 considered these proposals and design concepts to be far fetched, at
least,
 at least, looking at how pressing the issue of edit conflicts are.

 I might be the only person that suffers from that problem, thus I ask
 about. The simple solution to edit conflict in my own opinion isn't that
 complex, as at least, there's a living example of how it could be.

 The WordPress* way of resolving edit conflicts, for me, at this point in
 time, will look more promising and do much better than the highly advanced
 concepts that were presented, which are not even near to realization, at
 least for the next 5 years.

 Until those concepts presented are completed, how many more edit conflicts
 should be suffered? Losing lots (or even a line of edit) of edits because
 of editing conflict isn't something to take easily, at least when one has
 limited time and resources, but voluntarily decided to add content to an
 article.

 rexford

 **I'm no wordpress dev, neither have I ever even done coding in php. I
 mention the wordpress here because, as someone who uses wordpress a lot,
 and have seen how its editing conflict has been, in a typical way,
 resolved, I find it impressive, and think Wikipedia of all websites,
should
 have implement that approach long time ago, if not just after wordpress
did
 it.*
 On Aug 9, 2014 11:58 AM, Pine W wiki.p...@gmail.com
 javascript:_e(%7B%7D,'cvml','wiki.p...@gmail.com'); wrote:

  Rexford, it happens that there are 2 Wikimania sessions about concurrent
  editing starting at 17:00 today in Auditorum I.
 
  Pine
  On Tue, Aug 5, 2014 at 10:38 PM, Quim Gil q...@wikimedia.org
  javascript:_e(%7B%7D,'cvml','q...@wikimedia.org'); wrote:
   On Mon, Aug 4, 2014 at 9:50 PM, Pine W wiki.p...@gmail.com
  javascript:_e(%7B%7D,'cvml','wiki.p...@gmail.com'); wrote:
  
   I am asking Quim to provide us an update.
  
  
   Me? :) I'm just an editor who, like many of you, has suffered this
  problem
   occasionally.
  
   On Mon, Aug 4, 2014 at 10:02 AM, rupert THURNER 
  rupert.thur...@gmail.com
  javascript:_e(%7B%7D,'cvml','rupert.thur...@gmail.com');
   wrote:
  
   that would be a hullarious feature! which is btw available in some
  other
   opensoure and proprietory wikis.
  
  
   TWiki is an open source wiki and also has (had?) a concept of
blocking a
   page while someone else is editing. This feature might sound less than
   ideal in the context of, say, Wikipedia when a new Pope is being
  nominated,
   but I can see how many editors and MediaWiki adminis have missed such
   feature at some point.
  
   If I understood correctly, VisualEditor already represents an
improvement
   vs Wikitext because the chances of triggering conflicting edits are
   smaller, because of the way the actual content is modified and
updated in
   every edit.
 
  i'd have strong doubts here, from a technical standpoint :)
 
   Rupert, in any case you see that the trend is going in the direction
of
   being more 

Re: [Wikitech-l] Our CAPTCHA is very unfriendly

2014-11-08 Thread Brian Wolff

 For some more background, when we proposed something like that to Chris
 Steipp he was pretty iffy about it, and he's not wrong. At other sites
that
 don't have a CAPTCHA on signup (like Facebook, Quora, others) they avoid a
 spam problem in part because they require an email address and
 confirmation. For some irrational reason, even in the era of throwaway
 email accounts from web services, not requiring email is some kind of
 sacred cow among Wikimedians, even if it would make it an easy choice to
 throw away our wretched CAPTCHA.

Because spam bots can't answer email?

 If we want to avoid spam bots signing up, there is going to be a hit in
 ease of use somewhere. Our network of sites is just too large to avoid
 being a target. It's just a matter of testing to see how much we can
reduce
 that hit, and which method might be less easy.

There is really no evidence that the current hit in ease of use has any
effect on warding off spam bots.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Our CAPTCHA is very unfriendly

2014-11-09 Thread Brian Wolff
On Nov 9, 2014 5:39 AM, David Gerard dger...@gmail.com wrote:

 On 9 November 2014 09:27, aude aude.w...@gmail.com wrote:
  On Sun, Nov 9, 2014 at 8:51 AM, Pine W wiki.p...@gmail.com wrote:

  I'm curious, Risker: if you don't mind my asking, what about being
required
  to supply a throwaway email address would have discouraged you from
opening
  a Wikimedia account?

  I understand that it can be a bit scary to have to provide this
information
  (what about a throwaway?), but yet is a tradeoff.


 I realise it'd be a pile of extra work, and I'm not sure how to
 present it clearly - but what about a choice between solving the
 captcha or supplying a working email address?


 - d.



Does anyone have any attack scenario that is remotely plausible which
requiring a verified email would prevent?

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Feature request.

2014-11-12 Thread Brian Wolff
On Nov 12, 2014 9:44 AM, James Forrester jforres...@wikimedia.org wrote:

 On 8 November 2014 22:01, Brian Wolff bawo...@gmail.com wrote:

  Honestly i dont think anyone's even tried to improve the conflict
screen.
  There's probably a lot of low hanging fruit on the usability of edit
  conflicts which could be persued that have nothing to do with the hard,
  real time editing solutions (as cool as those are).
 
  If someone is intrested in trying to improve edit conflicts, id
reccomend
  starting with:
  *only showing relavent parts. If your conflict is in a section, and you
  were doing a section edit, dont ask user to resolve entire page (this is
  particularly painful on VP type pages).
 

 ​Yes, though this is normally triggered because the section isn't called
 what it used to be; if you're appending a new section to the end of the
 page I think it works fine.​

I think there are some cases where, if someone adds a new section while you
are editing the last line of the previously last section, it will conflict.
I guess more research is needed to even enumerate all the common edit
conflicts.


  Furthermore: find some way to present only the conflicted lines (ie what
  conflict markers show in a source control system) in a user friendly
way.
 

 ​The normal way to solve this UX problem is three column diff​, but that
 (a) isn't remotely good for mobile interfaces, and (b) adds Yet Another
 Interface which may confuse as much as it assists. We'd need a lot of
 painful UX research and a huge amount of developer time here, I feel.


I think you're right if we really want to do it well. But this might be one
of those cases where we can make it suck much less without quite making it
good, which might be worthwhile. Maybe.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] changing edit summaries

2014-11-13 Thread Brian Wolff
On Nov 13, 2014 11:43 AM, Derric Atzrott datzr...@alizeepathology.com
wrote:

  Indeed - I am somewhat surprised by James's firm opposition.

 I tend to agree with James on this one in that if the edit summaries
 are to be modified then they need a revision history.

  Typos in edit summary are fixed by releasing an errata corrige in a
  subsequent dummy edit.

 I question whether or not the ability to change edit summaries is
 really a needed feature though.  I would prefer the approach that
 Nemo recommend of making a dummy edit.

 For me it's less about vandalism et al. and more about the principle
 of revision tracking and audit trails.  When you make an edit that
 revision is fixed and should not be able to be modified.  This is
 one of the core principles that makes wikis work.

 Thank you,
 Derric Atzrott




+1. An edit summary represents something at a specific point in time. It's
important to know the context of an edit at that time. Editing edit
summaries allows someone to revise the context.

For comparison, how many revision control systems allow editing commit
messages?

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] changing edit summaries

2014-11-13 Thread Brian Wolff
On Nov 13, 2014 12:45 PM, Nathan nawr...@gmail.com wrote:

 I can see it being useful in two circumstances:

 1) As part of the oversight right, in order to edit an edit summary
without
 hiding the entire revision
 2) A right of a user to edit their own edit summaries, if the edit summary
 is blank

 Since it's possible and at least some people are interested in it, I don't
 see the downside of making it available in MediaWiki even if most
Wikimedia
 projects might not use it.


That sounds more like a good argument for making it an extension, rather
than a core feature.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Phabricator repository callsigns

2014-11-13 Thread Brian Wolff
On 11/13/14, Chad innocentkil...@gmail.com wrote:
 Please help me draft some guidelines for Phabricator repo callsigns.

 https://www.mediawiki.org/wiki/Phabricator/Callsign_naming_conventions

 The subpage on naming our existing repos should be especially fun:

 https://www.mediawiki.org/wiki/Phabricator/Callsign_naming_conventions/Existing_repositories

 Bikeshedding on the second hardest problem in CS? Who on this list can
 pass up a chance to join in there? ;-)

 -Chad
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Do we get full Unicode, including astral characters? If so, I vote
MediaWiki be 🌻 (U+1F33B).

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki Security and Maintenance Releases: 1.23.7, 1.22.14 and 1.19.22

2014-11-27 Thread Brian Wolff
On 11/26/14, Markus Glaser gla...@hallowelt.biz wrote:
 Hello everyone,

 I would like to announce the release of MediaWiki 1.23.7, 1.22.14 and
 1.19.22. This is a regular security and maintenance release. Download links
 are given at the end of this email.

 == Security fixes ==
 * (bugs 66776, 71478) SECURITY:  User PleaseStand reported a way to inject
 code into API clients that used format=php to process pages that underwent
 flash policy mangling. This was fixed along with improving how the mangling
 was done for format=json, and allowing sites to disable the mangling using
 $wgMangleFlashPolicy.
 https://phabricator.wikimedia.org/T68776
 https://phabricator.wikimedia.org/T73478

 * (bug 70901) SECURITY: User Jackmcbarn reported that the ability to update
 the content model for a page could allow an unprivileged attacker to edit
 another user's common.js under certain circumstances. The user right
 editcontentmodel was added, and is needed to change a revision's content
 model.
 https://phabricator.wikimedia.org/T72901

 * (bug 7) SECURITY: User PleaseStand reported that on wikis that allow
 raw HTML, it is not safe to preview wikitext coming from an untrusted source
 such as a cross-site request. Thus add an edit token to the form, and when
 raw HTML is allowed, ensure the token is provided before showing the
 preview.  This check is not performed on wikis that both allow raw HTML and
 anonymous editing, since there are easier ways to exploit that scenario.
 https://phabricator.wikimedia.org/T73111

 * (bug 7) SECURITY: Do not show log action when the entry is revdeleted
 with DELETED_ACTION. NOTICE: this may be reverted in a future release
 pending a public RFC about the desired functionality. This issue was
 reported by user Bawolff.
 https://phabricator.wikimedia.org/T74222


 == Bugfixes ==
 * (bug 71621) Make allowing site-wide styles on restricted special pages a
 config option.
 https://phabricator.wikimedia.org/T73621

 * (bug 42723) Added updated version history from 1.19.2 to 1.22.13
 https://phabricator.wikimedia.org/T44723

 * $wgMangleFlashPolicy was added to make MediaWiki's mangling of anything
 that might be a flash policy directive configurable.

 Full release notes for 1.23.7:
 https://www.mediawiki.org/wiki/Release_notes/1.23

 Full release notes for 1.22.14:
 https://www.mediawiki.org/wiki/Release_notes/1.22

 Full release notes for 1.19.22:
 https://www.mediawiki.org/wiki/Release_notes/1.19

 Public keys:
 https://www.mediawiki.org/keys/keys.html

 **
 1.23.7
 **
 Download:
 https://releases.wikimedia.org/mediawiki/1.23/mediawiki-1.23.7.tar.gz
 https://releases.wikimedia.org/mediawiki/1.23/mediawiki-core-1.23.7.tar.gz

 Patch to previous version (1.23.6):
 https://releases.wikimedia.org/mediawiki/1.23/mediawiki-1.23.7.patch.gz

 GPG signatures:
 https://releases.wikimedia.org/mediawiki/1.23/mediawiki-core-1.23.7.tar.gz.sig
 https://releases.wikimedia.org/mediawiki/1.23/mediawiki-1.23.7.tar.gz.sig
 https://releases.wikimedia.org/mediawiki/1.23/mediawiki-1.23.7.patch.gz.sig


 **
 1.22.14
 **
 Download:
 https://releases.wikimedia.org/mediawiki/1.22/mediawiki-1.22.14.tar.gz

 Patch to previous version (1.22.13):
 https://releases.wikimedia.org/mediawiki/1.22/mediawiki-1.22.14.patch.gz

 GPG signatures:
 https://releases.wikimedia.org/mediawiki/1.22/mediawiki-core-1.22.14.tar.gz.sig
 https://releases.wikimedia.org/mediawiki/1.22/mediawiki-1.22.14.tar.gz.sig
 https://releases.wikimedia.org/mediawiki/1.22/mediawiki-1.22.14.patch.gz.sig


 **
 1.19.22
 **
 Download:
 https://releases.wikimedia.org/mediawiki/1.19/mediawiki-1.19.22.tar.gz

 Patch to previous version (1.19.21):
 https://releases.wikimedia.org/mediawiki/1.19/mediawiki-1.19.22.patch.gz

 GPG signatures:
 https://releases.wikimedia.org/mediawiki/1.19/mediawiki-core-1.19.22.tar.gz.sig
 https://releases.wikimedia.org/mediawiki/1.19/mediawiki-1.19.22.tar.gz.sig
 https://releases.wikimedia.org/mediawiki/1.19/mediawiki-1.19.22.patch.gz.sig

 Mark Hershberger and Markus Glaser
 (Wiki Release Team)

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Several of these bugs are still marked as security restricted. Now
that the release has been made, can they be made public?

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki 1.24.0 released

2014-11-28 Thread Brian Wolff
 == Preferences made easier ==
 MediaWiki is known to be extremely flexible and customisable, but few
users use its full potential. In 1.24, we aim to make dozens obscure
preferences easily discoverable and obvious to use.


Umm, what does this mean? Were there code changes to make prefs more
obvious or is this a documentation project on mw.org? Something else?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Feature request.

2014-11-29 Thread Brian Wolff

 ​No. The Drafts extension (and any feature that puts hidden content on the
 servers) was veto'ed years ago by Legal. We need to stop beating this dead
 horse.

 J.

Err, what? Quick, don't tell Legal about Special:UploadStash or the
userjs- options API.

--

As a quick hack I made a little user script that can do auto-merging
of edit conflicts. It adds a button to the edit conflict page to
auto-merge the conflict.

To try it, add the following to [[Special:MyPage/common.js]] or
[[meta:Special:Mypage/global.js]]:

importScriptURI('https://meta.wikimedia.org/w/index.php?title=User:Bawolff/EditConflictAutoMerge.jsaction=rawctype=text/javascript');

The script is extremely basic and rather unpolished. I'd be
interested to hear if even something as basic as this is useful to
users. Remember if you try and test an edit conflict, you cannot
conflict with yourself, so when testing edit conflicts you need to use
a different user (or an anon) to conflict with you.

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Phabricator outage due to network issues, 11/29

2014-11-30 Thread Brian Wolff
Thanksgiving is only celebrated at this time in the US. Many of us don't
celebrate it.

That said, downtime happens, and it's a non-essential service during
non-working hours. While it may be frustrating, it's not the end of the world.
If anyone is desperately looking for a bug to fix, they can ask on IRC; I'm
sure the regulars can think of a hundred bugs off the top of their heads.

I don't think Peachy was grumbling so much as looking for an accurate time
frame for the solution.

Re selveta's comment about distributing the DB: there are standard ways of
doing that (e.g. the simplest would be to just use DB replication, and switch
masters on failure); I'm not sure if Phab is important enough to warrant
that. I would probably lean to "no, it isn't", personally. Obviously that
would be an operations call.

--bawolff
On Nov 30, 2014 5:13 AM, Gerard Meijssen gerard.meijs...@gmail.com
wrote:

 Hoi,
 Right ? so it is thanksgiving et al.. Be thankful that it is seen, It is
 not Wikipedia or any of the projects...so relax.. eat some left over
 turkey..
 Thanks,
  GerardM

 On 30 November 2014 at 09:59, K. Peachey p858sn...@gmail.com wrote:

  ASAP? when it's already hitting approx. five hours of down time?
 
  On 30 November 2014 at 18:14, Erik Moeller e...@wikimedia.org wrote:
 
   As noted in the server admin log [1], Phabricator is currently down
due
  to
   a network outage impacting one of our racks in the Ashburn
data-center.
   We're investigating and will aim to restore service ASAP.
  
   Erik
  
   [1] https://wikitech.wikimedia.org/wiki/Server_Admin_Log
  
   --
   Erik Möller
   VP of Product  Strategy, Wikimedia Foundation
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Phabricator outage due to network issues, 11/29

2014-11-30 Thread Brian Wolff
On Nov 30, 2014 2:07 PM, Gerard Meijssen gerard.meijs...@gmail.com
wrote:

 Hoi,
 The argument about non-working hours is problematic. When the only thing
 that counts are the working hours of staff in the USA you may be right. As
 it is, WIkimedia Germany has staff working at other times and they are
 affected. Affected are the non-professionals as well..

 My advise, do not go there. It is broken and it needs fixing.
 Thanks,
   GerardM


I simply meant that it didn't happen in the middle of the normal day of
work for the people responsible for fixing it (not necessarily the people
affected), so there are probably going to be fewer relevant people around
(given it's a weekend and a US holiday, although I imagine there are still
people on call); hence we should be gracious with our expectations. I in
no way meant to suggest it shouldn't be fixed, or that it shouldn't be fixed
quickly (in fact I was trying to argue against the "go eat turkey" sentiment).

I didn't think WM-DE had people working on Saturdays... but volunteers
certainly do work on Saturdays and were affected.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Community Suggestions Regarding the Project

2014-12-02 Thread Brian Wolff
On 12/2/14, Ankita Shukla ankitashukla...@gmail.com wrote:
 Hello!

 I am an OPW Intern for round#09 and will be working on a spelling
 dictionary project, the proposal of which is available here
 https://www.mediawiki.org/wiki/User:Ankitashukla/Proposal.
 Also, we'd be using this
 https://github.com/ankitashukla/spelling-dictionary-opw github repo for
 version controlling.

 Before we start off with the coding part, my mentors Kartik and Amir, and I
 thought it would be a great idea to have suggestions from everyone that
 might turnout to be very useful for us during the development of the
 project.
 We welcome all ideas of what your expectations are from the project, any
 specific design advice, any particular implementation or any advice, big or
 small, that might be useful to us.


 Thanks and regards,
 Ankita Shukla
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

The most immediate thing that comes to mind is: why create a new
interface where users can add words, instead of just scraping
Wiktionary? (I take it from your proposal you plan to create a new
project where users can submit words for consideration for inclusion
in the dictionary.)

Additionally, as for experts rejecting or accepting words:
*Is that actually needed?
*Do experts actually exist who would be willing to do that sort of
thing? (This varies depending on your definition of expert. For
example, if you mean people with PhDs in said language who will
verify the word is proper, the answer would be no. If you mean people
who are XX-3 or XX-N in the language then maybe, but I'm not really
sure how much of a benefit the review would provide relative to the
costs.)

I recognize scraping is difficult for a whole host of reasons (mostly
the fact it's semi-unstructured turns it into an NLP project, and that
standards aren't consistent across languages - however, in this case it
seems like the information needed would not be that hard [famous last
words] to extract simply by looking at categories). It seems like
making users add data to a new project is duplicating effort going on
in Wiktionary.

Even if this project can't use Wiktionary for some reason, it seems
to overlap slightly with either Wikidata or OmegaWiki, and could
perhaps re-use some work from those projects in terms of storing data.

Last of all, in your proposal you give some potential DB schemas. I
imagine the schema should have a language column for what language the
word is for (not to mention things get more complicated with related
languages, e.g. EN vs EN-US vs EN-CA vs EN-GB). Also, words can have
multiple meanings, so perhaps you might want to split the meaning off from
the word. It's not really needed if the meaning is immutable, but if
meanings can be modified, you may want some way to be able to identify
which individual meaning was edited (and then there are issues with
history, etc., which again leads back to seeing if you can find an
existing project that has already solved those issues for where the
data comes from, instead of making a new one).

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedia tool framework (php)

2014-12-08 Thread Brian Wolff
On Dec 8, 2014 5:10 PM, svetlana svetl...@fastmail.com.au wrote:

 In my view, if a Labs tool is a success, it should be written as a wiki
Extension and deployed to relevant wikis. What you're saying essentially
means that there is a need to make the wiki truly extensible without much
pain.

That's not necessarily true. There are many different types of tools with
different criteria for success. Some may be appropriate to be turned into
an extension, but some aren't.

A wrapper library that exports a MediaWiki-like interface for tools, so that
a tool could easily be ported to an extension, would be really cool. However
that's not necessarily what petrb is proposing (although that is one way to
implement what he's proposing).

 - I believe Wikibase (Wikidata) is meant to make database operations
easier. I have no idea what database toolkits they provide (you may want to
check with wikidata-l).

 --

Surely a generic DB abstraction layer is not Wikibase's primary function...

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] A new extension of content tree about Wikipedia

2014-12-09 Thread Brian Wolff
On Dec 9, 2014 1:35 PM, Eallan h.yi...@gmail.com wrote:

  Hi all!
 I am thinking about a new idea about a content tree extension of
  web/desktop/computer Wiki. When I use mobile phone to access website of
 Wikipedia, I can fold and unfold a content item of the content tree.
That's
 very convenient. However, when I use computer to access Wiki, there is no
 such feature, I can not fold or unfold a content item. So shall we add
this
 feature from mobile Wiki to the web/computer Wiki? In other words, let the
 user able to fold and unfold any content item at any level of the content
 tree. For example, I search Taipei, I get 1. History, 2. Geography, 1.1
 First settlements, and 1.2 Japaneses rule ect. So I would like to able to
 unfold 1. History, and 1.1 First settlements, or I just unfold 1. History
 and fold 1.1 First Settlements to see 1.2 Japanese rule which is unfolded.
 So I would like to fold and unfold any level content items to further the
 visit quality. The mobile side wiki also can improve, since it can only
 fold and unfold the first level content item.

 Also,add a go-back button which let the user go back to the content tree
 from any content item of any level in the content tree.

 Eallan
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

I'm unsure that would be wanted on the desktop wiki. Desktop doesn't have the
space constraints of mobile, and extra clicks should usually be avoided if
possible.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Visibility of action in API for deleted log entries

2014-12-09 Thread Brian Wolff
There's actually a good bit of information that is available from the API
 that isn't in the web UI (or isn't very visible there). For example,
 history pages only display timestamps to the minute while the API gives
 resolution to the second.

You can actually get up-to-the-second resolution in the web UI too, if you change your prefs.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] All non-api traffic is now served by HHVM

2014-12-09 Thread Brian Wolff
On 12/3/14, Giuseppe Lavagetto glavage...@wikimedia.org wrote:
 Hi all,

 it's been quite a journey since we started working on HHVM, and last
 week (November 25th) HHVM was finally introduced to all users who didn't
 opt-in to the beta feature.

 Starting on monday, we started reinstalling all the 150 remaining
 servers that were running Zend's mod_php, upgrading them from Ubuntu
 precise to Ubuntu trusty in the process. It seemed like an enormous task
 that would require me weeks to complete, even with the improved
 automation we built lately.

 Thanks to the incredible work by Yuvi and Alex, who helped me basically
 around the clock,  today around 16:00 UTC we removed the last of the
 mod_php servers from our application server pool: all the non-API
 traffic is now being served by HHVM.

 This new PHP runtime has already halved our backend latency and page
 save times, and it has also reduced significantly the load on our
 cluster (as I write this email, the average cpu load on the application
 servers is around 16%, while it was easily above 50% in the pre-HHVM era).

 The API traffic is still being partially served by mod_php, but that
 will not be for long!

 Cheers,

 Giuseppe
 --
 Giuseppe Lavagetto
 Wikimedia Foundation - TechOps Team

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Awesome.

Any chance the video scalers could be put near the top of the list for
servers to upgrade Ubuntu on? The really old version of libav on those
servers is causing problems for people uploading videos in certain
formats.

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki Security and Maintenance Releases: 1.24.1, 1.23.8, 1.22.15 and 1.19.23

2014-12-17 Thread Brian Wolff

 == Security fixes in 1.24.1, 1.23.8, 1.22.15 and 1.19.23 ==
 * (bug T76686) [SECURITY] thumb.php outputs wikitext message as raw HTML,
   which could lead to xss. Permission to edit MediaWiki namespace is
 required
   to exploit this.

Really? That's stretching the definition of a security bug.

(Remember that MediaWiki:Copyright is a raw HTML message that's
included on many more pages. Not to mention the whole
MediaWiki:Common.js thing.)

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki Security and Maintenance Releases: 1.24.1, 1.23.8, 1.22.15 and 1.19.23

2014-12-18 Thread Brian Wolff


 Not entirely. Unlike message copyright, the message used on thumb.php
 (badtitletext) is not a raw html message. It is meant to be parsed and
 displayed regularly. And always was. Except it was re-used for thumb.php,
 and
 forgotten to be parsed there. I won't go into details, but it's exploitable
 under the right circumstances.

 -- Krinkle

I don't disagree that it's a bug, but in order to exploit it, an attacker would have to:
*Convince a user to go to a rather obscure thumb.php page
*Already have the ability to add JavaScript to any page on the wiki

In which case, why wouldn't the malicious user just insert JavaScript
on the normal page everyone is looking at? That's both more effective,
and probably less noticeable. Thus I don't see how it exposes any new
security issues that aren't already present. Of course I may simply
just be missing the nature of the circumstances that you reference
in your comment.

--bawolff

P.S. Given that there is now a fix released, I think it's important to be
able to have frank discussions about security issues. After all, the
best way to prevent future security issues is to make sure everyone
understands the past issues, so that people don't make the same
mistake again.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: No more Architecture Committee?

2015-01-22 Thread Brian Wolff

 I would also suggest that an effort be made to find community members
 who are not WMF employees to participate in the ArchCom and then to have
 their voices heard during in quarterly planning.

I don't know if this is practical. As Chad noted earlier, WMF hires the best
and the brightest. Even if the entire architecture committee were hit by a bus,
the people who I think would logically be next in line are still employed by
the WMF. Even if those people got hit by a bus, I still think their logical
replacements would largely be WMF employees.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GPL upgrading to version 3

2015-02-07 Thread Brian Wolff
On Feb 7, 2015 3:57 PM, Thomas Mulhall thomasmulhall...@yahoo.com wrote:

  Hi should we upgrade GPL to version 3 since version 3 is more modern
then version 2. Should it be updated in extensions, skins and MediaWiki.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

There was a comment in 2008 that GPLv3 might be problematic for some
commercial users -
https://lists.wikimedia.org/pipermail/mediawiki-l/2008-June/027552.html . I
have no idea what sort of problems the user is referring to, but it is something
to keep in mind.

I personally see nothing wrong with GPLv2.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GPL upgrading to version 3

2015-02-08 Thread Brian Wolff
On Feb 8, 2015 8:17 AM, Tim Landscheidt t...@tim-landscheidt.de wrote:

 Tyler Romeo tylerro...@gmail.com wrote:

  One thing to point out is that:

  1) Even right now, under the GPL, if extensions do qualify as
“derivative works” or w/e, they do have to be GPL licensed.
  2) Source code only has to be provided to users of the
  program. So presuming this is some private wiki with a
  secret extension, source code does not have to be provided
  or published to the general public.

  [...]

 And if it is a non-private wiki?

 I think the general disadvantage of AGPL is that it forces
 you in a contract with your audience (who may be evil, or
 just obnoxious).  With the AGPL, you can't just customize or
 develop extensions without thinking about how to publish it,
 thus raising the bar for setting a up a wiki with MediaWiki.
 Even security fixes would need to be published immediately.

 Tim




This.

Furthermore, I think a not-insignificant portion of current reusers make
minor modifications to MediaWiki core code (no matter how much we
discourage it) and don't publish them (because they figure probably nobody
cares if you change a single condition check on line 1646 of some file).
They would be in violation of an AGPL-licensed MediaWiki.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Video Uploads to Commons

2015-02-03 Thread Brian Wolff
On Feb 3, 2015 8:43 PM, Nkansah Rexford seanmav...@gmail.com wrote:

 I couldn't find a tool to convert my videos from whatever format into .ogv
 outside my PC box before pushing to Commons. I guess there might exist
 something like that, but perhaps I can't find it. But I made one for
 myself. Maybe it might be useful to others too.

 I call it CommonsConvert. Upload any video format, enter username and
 password, and get the video converted into .ogv format plus pushed to
 Commons all in one shot. You upload the original video in whatever format,
 and get it on Commons in .ogv

 Some rough edges currently, such as can't/don't know/no available means to
 append license to uploaded video among other issues. Working on the ones I
 can asap.

 It uses mwclient module via Django based on Python. Django gives user info
 to Python, Python calls avconv in a subprocess, and the converted file is
 relayed to Commons via mwclient module via Media Wiki API.

 I think not everyone has means/technical know how/interest/time converting
 videos taken on their PC  to an Ogg-Vorbis-whatever format before
uploading
 to Commons.

 Doing the conversion on a server leaves room for user to focus on getting
 videos than processing them.

 I don't know if this is or will be of any interest that someone might
wanna
 use, but I personally would enjoy having a server sitting somewhere
convert
 my videos I want to save onto Commons, than using my local computer doing
 that boring task.

 In an email to this list a week or so ago, I 'ranted' about why commons
 wants a specific format (which if not for commons, I never come across
that
 format anywhere), but has no provision for converting any videos thrown at
 it into that format of its choice. Well

 Tool can be found here: http://khophi.co/commonsconvert/

 And this is sample video uploaded using the tool.
 https://commons.m.wikimedia.org/wiki/File:Testing_file_for_upload_last.ogv
 (will be deleted soon, likely)

 What I do not know or have not experimented yet is whether uploading using
 the api also has the 100mb upload restriction.

 Will appreciate feedback.
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Cool. Thanks for working on this sort of thing. The video uploading
process could certainly benefit from some love.

May I suggest using WebM (of the Vorbis/VP8 variety) as the output format?
WebM will give higher-quality video at a lower file size, so it is probably a
better choice when converting from another format.

For the 100 MB thing - there are multiple ways to upload things with the
API. The chunked method has a file size limit of 1 GB. All the other methods
have the 100 MB limit.
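
For the curious, the chunked method is just a loop over action=upload. Very
roughly (the parameter names are the upload API's; everything else below is a
placeholder, not code from your tool):

$csrfToken  = '+\\';      // placeholder; a real token comes from action=query&meta=tokens
$totalBytes = 157286400;  // e.g. a 150 MB source file
$firstChunk = '...';      // first slice of the file, sent as a multipart file part
$firstRequest = [
    'action'   => 'upload',
    'stash'    => 1,
    'filename' => 'My video.webm',
    'filesize' => $totalBytes,
    'offset'   => 0,
    'chunk'    => $firstChunk,
    'token'    => $csrfToken,
    'format'   => 'json',
];
// Each response returns a 'filekey'; subsequent chunks repeat the request with
// 'filekey' and an updated 'offset'. Once the last chunk is accepted, a final
// action=upload with 'filekey', 'filename' and 'comment' publishes the file.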

If you haven't already, I'd encourage mentioning this on [[Commons:VP]].
Real users would probably be able to give much more specific feedback.

Cheers,
Bawolff

P.S. I'm not sure, but I think User:Prolineserver might have been working on
something similar, in case you are looking for collaborators.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Getting the full URL of an image

2015-01-20 Thread Brian Wolff
On 1/19/15, Jeroen De Dauw jeroended...@gmail.com wrote:
 Hey,

 On my local wiki I have a page with the name File:Blue marker.png. The
 following code returns false:

 $title = Title::makeTitle( NS_FILE, $file );
 $title->exists();

 That used to return true in the past. Not sure what is broken - my wiki or
 MediaWiki itself.


You're sure about that? I think you're just mixing up $title->exists()
and $title->isKnown().

$title = Title::makeTitle( NS_FILE, $filename );
$file = wfLocalFile( $title );
$file->getUrl();

That's generally right, but depending on the exact context, the
following might be more appropriate:

$title = Title::makeTitle( NS_FILE, $filename );
$file = wfFindFile( $title );
$file->getFullUrl(); // or $file->getCanonicalUrl(); if you need the protocol.

wfLocalFile will always return a file object even if there is no file
by that name (and it will take its best guess at the URL even for
non-existent files). If the name given would normally be a foreign
file, wfLocalFile( 'Foo.jpg' )->getUrl() would return what the URL
would be if there were a local file of that name. On the other hand,
wfFindFile will return the appropriate (possibly foreign) file for
that name, or false if none exists.

In almost all cases you would probably want the wfFindFile behavior
and not wfLocalFile().
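
Putting that together, a minimal sketch of the wfFindFile() approach with the
missing-file case handled (illustrative only; assumes it runs inside
MediaWiki, e.g. from an extension or maintenance script):

$title = Title::makeTitle( NS_FILE, 'Blue marker.png' );
$file = wfFindFile( $title );
if ( $file ) {
    // A local or foreign file exists by that name.
    $url = $file->getFullUrl();
    // $url = $file->getCanonicalUrl(); // if you need the protocol forced
} else {
    $url = null; // no file by that name anywhere
}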

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki 2.0 (was: No more Architecture Committee?)

2015-01-20 Thread Brian Wolff
On Jan 20, 2015 4:22 PM, James Forrester jforres...@wikimedia.org wrote:

 On 20 January 2015 at 12:04, Jeroen De Dauw jeroended...@gmail.com
wrote:

  ​​
  - Get rid of wikitext on the server-side.
    - HTML storage only. Remove MWParser from the codebase. All extensions
      that hook into wikitext (so, almost all of them?) will need to be
      re-written.

  Just to confirm: this is not actually on the WMF roadmap right? :)

 ​It's certainly not what I'm working on for the next year or so. It is
 unlikely to be something we do for WMF usage; it's more valuable to people
 that want VisualEditor but want PHP-only (or don't want Node) for the
 server.

 J.
 --

Hypothetically, how would this work? Wouldn't you still need Parsoid to
verify the HTML corresponds to some legit wikitext? Otherwise how would you
know it's safe?

Since we are somewhat having a discussion about this (I recognize that this
isn't a real discussion in the sense that there isn't a full technical
proposal, or any concrete plans to actually do it in the near future, just a
wild idea that some people like), one of the reasons I'm somewhat skeptical
about this idea is that if there is an XSS issue, you seem much more screwed
with HTML storage, since now the bad stuff is in the canonical
representation, instead of wikitext storage where you can just fix the
parser, potentially purge the parser cache, and then you are 100% certain your
wiki is clean.

My second reason for being skeptical is that I'm mostly unclear on what the
benefits are over wikitext storage (this is the first time I've heard of the
VE-without-Parsoid thing. Are there other benefits? Simplifying things by
not having a parser cache?)

I may be misinterpreting how such a thing is proposed to work. I'm not very
familiar with Parsoid and associated things.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki 2.0 (was: No more Architecture Committee?)

2015-01-20 Thread Brian Wolff
On Jan 20, 2015 5:53 PM, Brian Wolff bawo...@gmail.com wrote:


 On Jan 20, 2015 4:22 PM, James Forrester jforres...@wikimedia.org
wrote:
 
  On 20 January 2015 at 12:04, Jeroen De Dauw jeroended...@gmail.com
wrote:
 
   ​​
  - ​Get rid of wikitext on the server-side.
  - HTML storage only. Remove MWParser from the codebase. All
  extensions that hook into wikitext (so, almost all of them?)
will
need to
  be re-written.
   
  
   Just to confirm: this is not actually on the WMF roadmap right? :)
  
 
  ​It's certainly not what I'm working on for the next year or so. It is
  unlikely to be something we do for WMF usage; it's more valuable to
people
  that want VisualEditor but want PHP-only (or don't want Node) for the
  server.
 
  J.
  --

 Hypothetically how would this work? Wouldnt you still need parsoid to
verify the html corresponds to some legit wikitext? Otherwise how would you
know its safe?

 Since we are somewhat having a discussion about this (i recognize that
this isnt a real discussion in the sense that there isnt a full technical
proposal, or any concrete plans to actually do it in near future, just a
wild idea that some people like), one of the reasons im somewhat skeptical
about this idea, is if there is an xss issue, you seem much more screwed
with html storage, since now the bad stuff is in the canonical
representation, instead of wikitext storage where you can just fix the
parser, potentially purge parser cache, and then you are 100% certain your
wiki is clean.

 My second reason for being skeptical is im mostly unclear on what the
benefits are over wikitext storage (this is the first time ive heard of the
ve w/o parsoid thing. Are there other benefits? Simplifying parser cache by
not having parser cache?)

 I may be misinterpreting how such a thing is proposed to work. Im not
very familar with parsoid and associated things.

 --bawolff

And the other thread, which I hadn't read at the time of writing this,
answers my question: HTML verification is something yet to be
determined :)

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Brion's role change within WMF

2015-01-20 Thread Brian Wolff
On Jan 20, 2015 12:34 PM, Brion Vibber bvib...@wikimedia.org wrote:

 Quick update:

 I've had a great experience working on our mobile apps, but it's time to
 get back to core MediaWiki and help clean my own house... now that we've
 got Mobile Apps fully staffed I'm leaving the mobile department and will
be
 reporting directly to Damon within WMF Engineering.

 First -- huge thanks to Monte and Dan and Kristen and Dmitry and Bernd and
 of course Tomasz!! and everybody else who's been awesome in Mobile Apps --
 and also to the rest of the mobile team, who have become too many to list
 in a concise email. :)


 For the moment I'm going to get back up to speed with the Architecture
 Committee and push at general MediaWiki issues. As we determine the fate
of
 committees and narrow down what are our priority projects, my focus may
 narrow a to getting some particular things done over the next months.


 A few general notes:

 * Working in mobile apps reminded me how important our APIs are -- our
 ability to make flexible interfaces that work in different form factors
and
 different technology stacks is dependent on maintaining a good API. This
 needs work. :)

 This doesn't just mean interacting with api.php -- we need clean
 configuration, data, etc interfaces as well, especially if we want people
 to contribute in ways other than raw text editing. There's a lot to clean
 up...

 * Mobile mobile mobile! I've heard some folks complain that while there's
a
 lot of talk about mobile-first and similar there aren't always concrete
 explanations yet of what that means. I hope to bring some of the
excitement
 we've seen in Mobile about Wikidata, better queries, better visual/user
 interaction design, and generally making things *work for users*.

 * Breaking or working around the PHP barrier for third-party MediaWiki
 users: I hope to get the question of services resolved one way or another
 -- either by us officially dropping shared PHP hosting support or by
 making sure we have pure PHP implementations of things that are required
 to operate -- which is mostly dependent on having good interfaces and APIs
 so that multiple implementations can be written and maintained
compatibly...

 * Future stuff -- new media types, narrow-field editing, natural language
 queries? WHO KNOWS! I'll be researching more crazy stuff in my additional
 time.


 I'll see many of you at the Dev Summit soon enough -- don't be shy about
 pestering me with concerns and ideas about priorities. :)

 -- brion
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Exciting times! I'm sure this will result in many new great things for
MediaWiki.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] wfRunHooks deprecation

2015-01-21 Thread Brian Wolff
On Jan 21, 2015 1:40 PM, Jeroen De Dauw jeroended...@gmail.com wrote:

 Hey,

 Does the new syntax offer any advantage over the old one?
  Assuming that we want to switch to non-static function calls eventually
  (which I hope is the case), wouldn't it be friendlier towards extension
  maintainers to only deprecate once we are there, instead of forcing
them to
  update twice?
 

 Good points and questions. While this deprecation is not as problematic as
 simply ditching the current hook system altogether, it does indeed seem a
 bit of busy work.

 The Hooks class has this comment Used to supersede $wgHooks, because
 globals are EVIL., which is quite amusing if you consider all fields and
 methods are static. So it's a switch from a global var to a global field,
 thus adding a second global to get rid of the first one. I have this
 presentation on static code which has a screenshot of this comment and
 class in it :)

 Cheers

 --
 Jeroen De Dauw - http://www.bn2vs.com
 Software craftsmanship advocate
 Evil software architect at Wikimedia Germany
 ~=[,,_,,]:3

I'll be honest, I don't understand the point of deprecating that. As you say,
the evil globalness is the same amount of evil regardless of the type of
global symbol. And really, I don't think global hooks cause too many
problems.
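
For what it's worth, the change being asked of callers is purely mechanical;
a quick sketch (the hook name here is invented for illustration, and this
assumes it runs inside MediaWiki):

// Old global-function style, now deprecated:
wfRunHooks( 'MyExtensionThingHappened', [ $wikiPage ] );

// The static-class style it is deprecated in favour of:
Hooks::run( 'MyExtensionThingHappened', [ $wikiPage ] );

// Registering a handler is unchanged either way:
$wgHooks['MyExtensionThingHappened'][] = function ( $wikiPage ) {
    // ... handler code ...
    return true;
};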

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-15 Thread Brian Wolff
On Jan 16, 2015 2:49 AM, Max Semenik maxsem.w...@gmail.com wrote:

 On Thu, Jan 15, 2015 at 10:38 PM, Bryan Davis bd...@wikimedia.org wrote:

 
  One of the bigger questions I have about the potential shift to
  requiring services is the fate of shared hosting deployments of
  MediaWiki. What will happen to the numerous MediaWiki installs on
  shared hosting providers like 1and1, Dreamhost or GoDaddy when running
  MediaWiki requires multiple node.js/java/hack/python stand alone
  processes to function? Is the MediaWiki community making a conscious
  decision to abandon these customers? If so should we start looking for
  a suitable replacement that can be recommended and possibly develop
  tools to easy the migration away from MediaWiki to another monolithic
  wiki application? If not, how are we going to ensure that pure PHP
  alternate implementations get equal testing and feature development if
  they are not actively used on the Foundation's project wikis?
 
 
 This is not even about shared hostings: it is pretty obvious that running
a
 bunch of heterogenous services is harder than just one PHP app, especially
 if you don't have dedicated ops like we at WMF do. Therefore, the question
 is: what will we gain and what will we lose by making MediaWiki unusable
by
 95% of its current user base?


 --
 Best regards,
 Max Semenik ([[User:MaxSem]])


I'm quite concerned by this. While shared hosting might be on the decline, I
still feel like most third-party users don't have the competence to do much
more than FTP some files to their server and run the install script.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: No more Architecture Committee?

2015-01-16 Thread Brian Wolff
On Jan 16, 2015 9:21 AM, Mark A. Hershberger m...@nichework.com wrote:

 Ori Livneh o...@wikimedia.org writes:

  The model I do think we should consider is Python 3. Python 3 did not
  jettison the Python 2 codebase. The intent behind the major version
change
  was to open up a parallel development track in which it was permissible
to
  break backward-compatibility in the name of making a substantial
  contribution to the coherence, elegance and utility of the language.

 I like the idea, but this makes it sound like we have some commitment
 in the current codebase to backwards compatibility.

 Currently, though, just as Robla points out that there is no clear
 vision for the future, there is no clear mandate to support interfaces,
 or what we usually call backwards compatibility.

 So, yes, let's have a parallel MW 2.0 development track that will allow
 developers to try out new things.  But let that be accompanied with a MW
 1.0 track so that makes stability a priority.

 Now, the question is: what will Wikipedia run: MW 2.0 or MW 1.0?  And,
 if they focus on MW 2.0 (My sense is that is where the WMF devs will
 want to be), then how do those of us with more conservative clients keep
 MW 1.0 viable?

 Mark.

 --
 Mark A. Hershberger
 NicheWork LLC
 717-271-1084



This seems like a solution in search of a problem. Does anyone actually have
anything they want that is difficult to do currently and requires a mass
compat break? Proposing to rewrite MediaWiki because we can, without even a
notion of what we would want to do differently, seems silly.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-16 Thread Brian Wolff
On Jan 16, 2015 11:07 AM, Brad Jorsch (Anomie) bjor...@wikimedia.org
wrote:

 On Fri, Jan 16, 2015 at 2:49 AM, Stas Malyshev smalys...@wikimedia.org
 wrote:

  However, with clear API architecture we could maybe have
  alternatives - i.e. be able to have the same service performed by a
  superpowered cluster or by a PHP implementation on the URL on the same
  host.


 The problem there is that the PHP implementation is likely to code-rot,
 unless we create really good unit tests that actually run for it.

 It would be less bad if the focus were on librarization and daemonization
 (and improvement) of the existing PHP code so both methods could share a
 majority of the code base, rather than rewriting things from scratch in an
 entirely different language.

 and maybe reduced feature set.


 That becomes a hazard when other stuff starts to depend on the non-reduced
 feature set.


 --
 Brad Jorsch (Anomie)
 Software Engineer
 Wikimedia Foundation


I like this approach. After all, it worked well enough for image
thumbnailing and the job queue.

Writing things in different languages just reminds me of what a mess math
support used to be.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki 2.0 (was: No more Architecture Committee?)

2015-01-17 Thread Brian Wolff
On Jan 16, 2015 1:05 PM, James Forrester jforres...@wikimedia.org wrote:

 [Moving threads for on-topic-ness.]

 On 16 January 2015 at 07:01, Brian Wolff bawo...@gmail.com wrote:

  Does anyone actually have
  anything they want that is difficult to do currently and requires a mass
  compat break?


 ​Sure.

 ​Three quick examples of things on the horizon (I'm not particularly
saying
 we'd actually do these for Wikimedia's use, but if you're going to ask for
 straw man arguments… :-)):

- Get rid of wikitext on the server-side.
  - HTML storage only. Remove MWParser from the codebase. All extensions
    that hook into wikitext (so, almost all of them?) will need to be
    re-written.
- Real-time collaborative editing.
  - Huge semantic change to the concept of a 'revision'; we'd probably
    need to re-structure the table from scratch. Breaking change for many
    tools in core and elsewhere.
- Replace local PHP hooks with proper services interfaces instead.
  - Loads of opportunities for improvements here (anti-spam tools 'as a
    service', Wordpress style; pre-flighting saves), but again, pretty much
    everything will need re-writing; this would likely be progressive,
    happening one area at a time where it's useful/wanted/needed, but it's
    still a huge breaking change for many extensions.




Woo strawmen for me to shoot down! :)

Actually, the revision thing is fair. It's pretty ingrained that pages
have linear revisions and each one has a single author; allowing multiple
authors or branching and remerging would probably be a big enough change to
warrant calling it 2.0. And it kind of fits, since the last time the structure
of revisions was really rearranged was, AFAIK, 1.5.

Without going into the merits/drawbacks of HTML storage - I don't see that
being rewrite-worthy. Whether the blob of data in the text table/ES is HTML
or wikitext doesn't really matter to MW, especially with ContentHandler.

Hooks are a very active area of code - the sort of area where I would guess
that adding an extra 20ms per hook invocation to call an
external service would not be OK (since there are hundreds of hook calls in
a request). I don't think the notion of a service fits with how hooks are
used, at least for many hooks. Of course, someone could make a heavier-weight
version of hooks for where it's a good idea, or even just a wrapper
object that forwards things to a service. I don't think this is worthy of a
2.0.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] The future of shared hosting

2015-01-17 Thread Brian Wolff
On Jan 16, 2015 5:14 PM, Ryan Lane rlan...@gmail.com wrote:

 On Fri, Jan 16, 2015 at 1:05 PM, Brad Jorsch (Anomie) 
bjor...@wikimedia.org
  wrote:

  On Fri, Jan 16, 2015 at 12:27 PM, Ryan Lane rlan...@gmail.com wrote:
 
   What you're forgetting is that WMF abandoned MediaWiki as an Open
Source
   project quite a while ago (at least 2 years ago).
 
 
  {{citation needed}}
 

 There was a WMF engineering meeting where it was announced internally. I
 was the only one that spoke against it. I can't give a citation to it
 because it was never announced outside of WMF, but soon after that third
 party support was moved to a grant funded org, which is the current status
 quo.

 Don't want to air dirty laundry, but you asked ;).

 - Ryan


The MW release team, or whatever it's called, is hardly at the centre of
MediaWiki the open source project. They do regular releases - which is great,
but that's essentially all they do (which is fine; that is their function).
They aren't handling very much third-party support (e.g. on IRC) or doing much
in the way of third-party dev work or even planning. But yet these things
still happen. When they do happen, it's usually volunteers, but also some WMF
staff (perhaps in a volunteer capacity), who do them.

MediaWiki as an open source project may be in tough times in many ways, but
it is not dead (yet).

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: No more Architecture Committee?

2015-01-22 Thread Brian Wolff
On Jan 22, 2015 6:43 PM, Brian Wolff bawo...@gmail.com wrote:


 On Jan 22, 2015 2:08 PM, Tyler Romeo tylerro...@gmail.com wrote:
 
  I think that’s kind of insulting to those of us who don’t work at the
WMF. Just because they hire the “best and the brightest” does not mean
there are not people out there who are just as intelligent, if not more,
but do not or cannot work for the WMF for whatever reason. Restricting
Archcom to WMF employees is just about the stupidest thing you could do for
an open source software project. It defeats the entire purpose of MediaWiki
being open-source.
 

I apologize; I didn't mean to imply non-WMF employees are any less bright
than WMF employees.

What I more meant to say (which I didn't express very well) is that the
architecture committee (essentially BDFL-by-committee in my understanding - not
just about architecture but also about vision for MediaWiki) should be composed
of leaders of the community who have been in the MediaWiki community a long
time, and have fairly universal respect due to demonstrating wisdom over the
long term.

I don't think the architecture committee should be composed solely of WMF'ers;
I think selection should be made entirely independent of affiliation (so
working for the WMF should not disqualify someone). It just happens that the
people who I think are likely candidates all happen to currently work for the
WMF/WMDE.

This assumes, of course, that the WMF won't force its employees to have certain
opinions. I don't think they have any intention of doing so.

After all, look at the current dev summit attendance list. How many people
on that list:
*have been fairly regularly active devs for at least 5 years
*have demonstrated wisdom (however you define that)
*do not currently work for the WMF

OTOH, perhaps other people have a different conception of what the architecture
committee should be, or what the criteria for membership should be.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] need review and co-mentor volunteers for GSoC Accuracy review proposal

2015-02-12 Thread Brian Wolff
On 2/12/15, James Salsman jsals...@gmail.com wrote:
 I invite review of this preliminary proposal for a Google Summer of
 Code project:
  http://www.mediawiki.org/wiki/Accuracy_review

 If you would like to co-mentor this project, please sign up. I've been
 a GSoC mentor every year since 2010, and successfully mentored two
 students in 2012 resulting in work which has become academically
 relevant, including in languages which I can not read, i.e.,
 http://talknicer.com/turkish-tablet.pdf .) I am most interested in
 co-mentors at the WMF or Wiki Education Foundation involved with
 engineering, design, or education.

 Synopsis:

 Create a Pywikibot to find articles in given categories, category
 trees, and lists. For each such article, add in-line templates to
 indicate the location of passages with (1) facts and statistics which
 are likely to have become out of date and have not been updated in a
 given number of years, and (2) phrases which are likely unclear. Use a
 customizable set of keywords and the DELPH-IN LOGIN parser
 [http://erg.delph-in.net/logon] to find such passages for review.
 Prepare a table of each word in article dumps indicating its age.
 Convert flagged passages to GIFT questions
 [http://microformats.org/wiki/gift] for review and present them to one
 or more subscribed reviewers. Update the source template with the
 reviewer(s)' answers to the GIFT question, but keep the original text
 as part of the template. When reviewers disagree, update the template
 to reflect that fact, and present the question to a third reviewer to
 break the tie.

 Possible stretch goals for Global Learning Xprize Meta-Team systems
 [http://www.wiki.xprize.org/Meta-team#Goals] integration TBD.

 Best regards,
 James Salsman

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Have you run this by Wikipedians? (I'm assuming English Wikipedia would be
your target audience.) I would recommend making sure that enwiki
is politically OK with this first, since it involves adding a bunch of
templates to articles; it would suck for a GSoC student if their
work wasn't used due to politics happening at the end.

Prepare a table of each word in article dumps indicating its age.

This in itself is a non-trivial problem (for a GSoC student, anyway),
assuming you need it for the entire English Wikipedia and you need it up to
date as soon as people edit. Even getting the student sufficient
storage and CPU resources to actually compute that could potentially
be difficult (maybe?).

Convert flagged passages to GIFT questions for review and present them to one 
or more subscribed reviewers

Wouldn't you want to give the reviewers an actual form where they can
fill out the questions, not something in a markup language? (Unless you
mean you want to store it in that form internally, which seems
like a rather minor implementation detail.)

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Technical question about FQDN

2015-01-28 Thread Brian Wolff
On Jan 28, 2015 6:18 AM, Ilario Valdelli valde...@gmail.com wrote:

 May someone help me?

 I have read the short documentation of variables like:

 $wgInternalServer
 $wgServer

 But I don't understand how combine them to get the Mediawiki accessible
 internally with a local IP AND externally with a FQDN without broking the
 absolute links of the CSS and javascripts.

 Thank you

 --
 Ilario Valdelli
 Wikimedia CH
 Verein zur Förderung Freien Wissens
 Association pour l’avancement des connaissances libre
 Associazione per il sostegno alla conoscenza libera
 Switzerland - 8008 Zürich
 Wikipedia: Ilario https://meta.wikimedia.org/wiki/User:Ilario
 Skype: valdelli
 Facebook: Ilario Valdelli https://www.facebook.com/ivaldelli
 Twitter: Ilario Valdelli https://twitter.com/ilariovaldelli
 Linkedin: Ilario Valdelli http://www.linkedin.com/profile/view?id=6724469

 Tel: +41764821371
 http://www.wikimedia.ch
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

$wgInternalServer is for what Squid/Varnish sees MediaWiki as, never what
any user sees it as.

Try setting neither in LocalSettings.php. The autodetected defaults will
probably do what you want.
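
A minimal sketch of what that looks like in LocalSettings.php (hostnames are
made up, and this assumes no caching proxy sits in front of the wiki):

// With $wgServer left unset, MediaWiki derives it from the incoming Host
// header, so links, CSS and JS URLs work whether the wiki was reached as
// http://192.168.0.10/wiki or http://wiki.example.org/wiki.
// $wgServer = 'http://wiki.example.org';  // hard-coding it pins every
//                                         // absolute URL to that one name
// $wgInternalServer only matters when Squid/Varnish sits in front of the
// wiki and purge requests need to be sent to a different internal hostname:
// $wgInternalServer = 'http://127.0.0.1';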

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Stance on Social Media

2015-01-09 Thread Brian Wolff
On Jan 9, 2015 3:24 PM, Rob Moen rm...@wikimedia.org wrote:

 Currently our approach on social media is that Social media websites
 aren't useful for spreading news and reaching out to potential users and
 contributors. [1] I challenge this though.  Is it really true?   Twitter
 has 254 million active monthly users, with 500 million tweets sent per day
 [2], Facebook has 1.35 billion active monthly users users. [3]


 I know there are many active Wikipedians who frequent both of these sites.
 Should we be more actively encouraging people to share?


 The history of the Social Media page indicates this was added as an
‘initial
 dump’ back in November of 2011. [4]  But I wonder if it might be worth
 revisiting or refreshing this decision in light of the current world we
 live in.


 What do others think?   What would the reaction be to a sharethis.com type
 service where any site could engage ?   Would this be more valuable on
 mobile than than desktop?



 1: https://www.mediawiki.org/wiki/Social_media

 2: https://about.twitter.com/company

 3: http://newsroom.fb.com/company-info

 4:

https://www.mediawiki.org/w/index.php?title=Social_mediadiff=1318877oldid=458857


I think it's important to separate two types of social media interaction:
*allowing people to post their favourite article (share-this links)
*meta-level interaction (stuff about the community)

Nobody objects to the second, AFAIK. The first is like proposing NSFW
filters on Commons (i.e. get ready for the pitchforks).

As far as I can tell, the arguments (on enwiki) usually boil down to:
*providing a share-this link is a tacit endorsement/free advertisement of a
website we don't like, and selecting which ones to show could present
neutrality issues
*privacy concerns (this is usually a knee-jerk reaction. I think that many
of our users have some notion that third-party cookies and remote
JavaScript loading = bad, without entirely understanding how those things
work, and not realizing that any proposal would almost certainly not involve
the common approach of loading external resources)
*contradicting the serious tone
In my experience, some Wikipedians (esp. on enwiki) feel the wiki should
have a very formal tone, and that share-this links are out of place. I've
always wondered if that's partially in response to all the "Wikipedia is
unreliable" talk from academics when 'pedia first became popular, causing
people to want Wikipedia to have a dry academic feel associated with
reliability.

Anyhow, this list is not the one you have to convince, and I believe that
historically user opinion has varied significantly from developer opinion
on this issue.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: Unsolicited digital currency donations

2015-01-09 Thread Brian Wolff
 Your current balance is 0.11 XPM.

Woo, a whole eighth of a cent (USD). Soon you'll be able to buy a little
pseudo-brass token, then a private floating island!

(Personally, I think tipping for commits is an interesting idea, although
tipping for bugs would seem better, as those are concrete issues
where a commit could be anything. But this is literally throwing pennies at
people, except pennies are more valuable. Requiring people to claim within 30
days, especially when the user hasn't opted in to receive tips, seems
sketchy.)

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] New feature: tool edit

2015-02-14 Thread Brian Wolff

 -- Every registered user would be able to flag edit as tool edit (bot
 needs special user group)

Vandals would have fun with that, but a bot group could be set up like that
(e.g. a flood group).

 -- The flag wouldn't be intended for use by robots, but regular users
 who used some automated tool in order to make the edit

The semantics of flags are up to wiki communities. They can make a flag mean
whatever they desire.

 -- Users could optionally mark any edit as tool edit through API only

So like the bot flag (if they have the rights) :p

 other suggestion that was somewhere else in the thread about retroactively
marking edits

Sounds kind of like the little-known bot rollback feature, minus the
rollback aspect.

--
This sounds like you are either proposing the bot flag with a minor
variation in the user-given semantics, or proposing multiple levels of
bot-flaggedness so that tool edits could be independently hidden in RC,
separate from bot edits.
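
For comparison, this is roughly how the existing bot flag gets set through the
API today (sketch only - the endpoint, page and token are placeholders, and a
real request needs a logged-in session with the bot right):

$csrfToken = '+\\'; // placeholder; a real token comes from action=query&meta=tokens
$params = [
    'action'     => 'edit',
    'title'      => 'Project:Sandbox',
    'appendtext' => "\nAutomated test edit.",
    'bot'        => 1,   // only honoured if the account has the bot right
    'token'      => $csrfToken,
    'format'     => 'json',
];
$ch = curl_init( 'https://wiki.example.org/w/api.php' );
curl_setopt_array( $ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query( $params ),
    CURLOPT_RETURNTRANSFER => true,
] );
$result = curl_exec( $ch );

A "tool edit" flag would presumably be exposed the same way, as just another
edit parameter.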

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Use of hreflang in the head

2015-02-09 Thread Brian Wolff
On Feb 9, 2015 7:07 AM, Daniel Friesen dan...@nadir-seen-fire.com wrote:

 On 2015-02-09 2:16 AM, Erik Moeller wrote:
  This of course would add some additional payload to pages with lots of
  language links, but could help avoid results like [2] where the English
  language version of an article is #1 and the Indonesian one makes no
  appearance at all. Results vary greatly and it's hard to say how big a
  problem this is, but even if boosts discoverability of content in the
  user's language by only 10% or so, that would still be a pretty big win
for
  local content.
 I think we used to have links in the head (though the exact
 implementation may have been wrong), if it wasn't a problem then it
 wouldn't be one now.


I think we still do for lang converter.
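
For reference, the mechanics of emitting the hreflang alternates being
discussed are simple on the MediaWiki side; a rough sketch (the wiring and URL
here are made up, but OutputPage::addLink() is the relevant method):

// $out is an OutputPage, e.g. from a hook; one of these per language link.
$out->addLink( [
    'rel'      => 'alternate',
    'hreflang' => 'id',
    'href'     => 'https://id.wikipedia.org/wiki/Halaman_contoh',
] );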

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] need review and co-mentor volunteers for GSoC Accuracy review proposal

2015-02-13 Thread Brian Wolff

 Furthermore, the initial limited subtask would be much more difficult
 to evaluate as a strategy without a working prototype, including by
 the Bot Approvals Group which demands working code before making a
 final decision on implementation. Trying to second guess the BAG is
 presumptuous.

I'm not saying you need formal approval from the BAG before you begin.
I'm saying you should have an informal discussion on VP or somewhere to make
sure that relevant stakeholders think the idea would potentially be useful
in principle.

Six years is a long time; people change, things change, especially for a
proposal that, while it didn't garner a lot of opposition, didn't garner people
jumping up and down in support either.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Boil the ocean, be silly, throw the baby out with bathwater, demolish silos, have fun

2015-02-13 Thread Brian Wolff
On Feb 13, 2015 6:15 PM, Max Semenik maxsem.w...@gmail.com wrote:

 Sorry for all the trolling, but why instead of discussing how we need a
 responsive skin for MediaWiki and waiting for Winter to come don't we just
 do it:
 * Move Minerva out of MobileFrontend
 * Leave all mobile-specific improvements, improvements and hacks in MF
 * Polish Minerva to do everythig a normal desktop skin does
 * Bundle it with MW by default

 
 [0] https://en.wikipedia.org/wiki/James_Randi?useskin=minerva

 --
 Best regards,
 Max Semenik ([[User:MaxSem]])
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Seems reasonable (particularly in a 'perfect is the enemy of good' way).
But can we keep Vector as the default, please (or at least until there have
been a couple of releases with Minerva as an option, so we can evaluate
whether people are switching to it and it's meeting people's needs as a
desktop skin)?

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What does it take to have a project hosted on the Wikimedia git server?

2015-03-13 Thread Brian Wolff
Yuvi used to have a bot for that, I thought...

--bawolff
On Mar 13, 2015 4:10 PM, Florian Schmidt 
florian.schmidt.wel...@t-online.de wrote:

 Iirc, but i'm not sure, actually it's not possible to merge pull requests
from github :/

 Florian

 -Original Message-
 From: Strainu [mailto:strain...@gmail.com]
 Sent: Friday, 13 March 2015 19:48
 To: florian.schmidt.wel...@t-online.de; Wikimedia developers
 Subject: Re: [Wikitech-l] What does it take to have a project hosted on
the Wikimedia git server?

 2015-03-13 14:27 GMT+02:00 florian.schmidt.wel...@t-online.de
 florian.schmidt.wel...@t-online.de:
  WMF git repos are already (automatically) mirrored to github :) See,
  e.g. https://github.com/wikimedia/mediawiki-extensions-MobileFrontend
  as a mirror of
  https://gerrit.wikimedia.org/r/#/admin/projects/mediawiki/extensions/M
  obileFrontend (also available on git.wikimedia.org
  https://git.wikimedia.org/summary/mediawiki%2Fextensions%2FMobileFront
  end )

 I knew about those, but I always assumed these are downstream-only (e.g.
only pull from Gerrit, but no push). If we could accept pull-requests from
github and have them pushed to Gerrit we would basically have the best of
both worlds. :)

 Strainu

 
  Best,
  Florian
 
  Kind regards
  Florian Schmidt
  -Original Message-
  Subject: Re: [Wikitech-l] What does it take to have a project hosted on
the Wikimedia git server?
  Date: Fri, 13 Mar 2015 13:16:06 +0100
  From: Strainu strain...@gmail.com
  To: Wikimedia developers wikitech-l@lists.wikimedia.org
 
  2015-03-13 13:53 GMT+02:00 Magnus Manske magnusman...@googlemail.com:
  Why not github, or bitbucket?
 
  They're on the list as well, we're exploring all our options. Any
  special reason why you'd prefer those over Wikimedia?
 
  2015-03-13 13:54 GMT+02:00 Brian Wolff bawo...@gmail.com:
  See https://www.mediawiki.org/wiki/Gerrit/New_repositories
 
  Basically you just have to ask.
 
  I think its a nice thing to keep wiki related code, including bots in
  our git repos as that makes it easier for others to find.
 
  I am personally a bit worried about the complexity of the process on
  gerrit, but I hope that as long as we don't require formal code review
  it should be as simple as git pull/git push, right?
 
  Another related question would be: how hard is it to maintain a github
  mirror of a WMF repository?
 
  Thanks,
 Strainu
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tor proxy with blinded tokens

2015-03-10 Thread Brian Wolff
On Mar 10, 2015 10:21 PM, Risker risker...@gmail.com wrote:

 Thanks, Chris.  But if the account is obviously not a normal account, I'd
 suspect that this special  kind of user account would quickly become very
 obvious to those who snoop and would actually increase the level of
 scrutiny on the account, both internally and externally. I'm not really
all
 that sure it's an overall improvement in safety.

 Risker/Anne


That's going to depend on your threat model.

If secret agents are watching you through binoculars, then nothing is going
to save you.

If a government wants to track down all users who are suspicious (not very
far-fetched in the current political climate), then yes, using Tor may make
you stand out. This is probably already the case for anyone using Tor at all
(esp. if not using a bridge, if I understand things correctly).

If your use case is that you want to upload pictures of a pro-democracy
protest in some fascist country, where the pictures are likely to get you
arrested, and the fascist government has a list of all IPs accessing Wikimedia
servers for the specific time period, then Tor might help you (emphasis on the
maybe: if you are the only person in the country at that time using Tor, and
they are able to detect you're using Tor, then you're dead; or if you are in
the picture, or any of the rest of a long list of operational security details
the paranoid have to deal with).

Re Kevin's comment about worth the risk:
whether or not it's worth the risk is the prerogative of the person taking
the risk. Maybe they even consider whatever they are doing important enough
that they would still do it even without the protection of Tor, if Tor is
not an option. Not that long ago, thousands of people were taking risks by
buying illicit drugs on the Silk Road, using Tor for protection. I find it
easy to imagine that many people in repressive places would consider
spreading information a much more necessary risk than what Silk Road
patrons thought was an acceptable risk in using that service.

Or perhaps Tor users have nothing to hide and simply feel that what they do
online is nobody's business. Or maybe they want to increase the
anonymity pool for those who really do have a legitimate reason to hide.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What does it take to have a project hosted on the Wikimedia git server?

2015-03-13 Thread Brian Wolff
On Mar 13, 2015 6:05 AM, Strainu strain...@gmail.com wrote:

 Hi,

 The Romanian Wikipedia has a code repository used mainly for robots of
 interest to the local community. So far, it has been hosted on Google
 Code. Since that site is closing, we are considering replacements and
 one of the possibilities was the Wikimedia git server.

 We would like to know what are the steps to follow in order to have a
 project created?

 Thanks,
Strainu

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

See https://www.mediawiki.org/wiki/Gerrit/New_repositories

Basically you just have to ask.

I think it's a nice thing to keep wiki-related code, including bots, in our
git repos, as that makes it easier for others to find.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Org changes in WMF's Platform Engineering group

2015-03-25 Thread Brian Wolff
On Mar 25, 2015 1:18 PM, Gilles Dubuc gil...@wikimedia.org wrote:

 
  Does this mean there will no longer be a multimedia team/nobody
responsible
  or working on what the multimedia team was working on previously?
 

 We'll keep supporting the extensions that multimedia used to cover, in
 terms of addressing emergencies. This will probably be done by *the people
 formerly known as multimedia*, because for now there isn't any better pick
 than people who've worked with those extensions as first responders on
 breaking issues. That might change when a reactor team is formed and
 staffed later down the line.

 As far as active development goes, some things are still in the pipeline,
 like addressing technical issues that affect UploadWizard's funnel. This
is
 assigned to the API team but isn't the top priority for the coming quarter
 (I think it's 3rd in the list, but someone might know better).

 More ambitious projects like structured data on commons will need a
 dedicated team. Resourcing for such projects will depend on
 organization-wide priorities.

 My humble opinion is that it's a good thing to have less catch-all teams
 like core and multimedia and rather have teams focused on narrower,
 well-understood scopes. Multimedia was a vague term and it made us spread
 ourselves thin across many unrelated extensions and projects. It also gave
 the illusion that we were going to take care of everything, but we were
 really too small to undertake ambitious things like bringing video support
 into the modern era or making commons data structured. As much as it must
 feel disappointing that these projects are on the backburner, they already
 were, because of how small the multimedia team was. Maintaining the
 illusion wasn't a good thing, I think.


I'm a little sad to hear that, but I agree 100% with what you are saying.
As the saying goes, it's better to do a couple of things well than to do too
many things poorly.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Extension:Gather launching on beta for English WP mobile users.

2015-03-26 Thread Brian Wolff

 It seems only fair that people actually try the feature before ripping it
 to shreds. Right?

In principle, no. People should rip things to shreds early and often. After
the extension is done is the worst time to fix errors from the requirements
phase. (Although if you mean this particular case, where a demo exists, then
I'm more inclined to agree, but the demo link was posted after the more
rip-to-shreds emails.)

We've had extensions to create alternative content
 curation features in the past, such as books, and this is hardly the first
 time an experimental feature was launched on mobile first. So hardly seems
 like time to grab the pitchforks before you even give it a fair shot.


Just because we have done it before doesn't necessarily imply we should do
it again. Wikimedia has done lots of things that were bad ideas, especially
in retrospect.

That said, given the existence of an iteration, I agree we should all try
it before engaging in strong criticism.

--bawolff

 On Thu, Mar 26, 2015 at 8:00 AM Jon Robson jrob...@wikimedia.org wrote:

  I should add you can opt into mobile beta using this link:
  http://en.m.wikipedia.beta.wmflabs.org/wiki/Special:MobileOptions
 
  The design still has kinks and the extension still needs work before.
  Releasing to mobile beta will get us an audience to identify those
issues
  and fix them. I'd be keen to get this as a desktop beta feature too if
  anyone is willing to help me.
  On 26 Mar 2015 7:41 am, Jon Robson jrob...@wikimedia.org wrote:
 
   A few notes:
   * lists are public in the first version but there is APIs to make them
   private. Public lists is something that will have moderation problems
and
   interesting to explore.
   * the feature is _launching_ on mobile. It's built to work on desktop
and
   with a tiny bit of work I can turn it on as a beta feature on desktop
  (the
   issue is how to replace the existing watchstar on desktop).
   * We considered doing it straight in core based on the watchlist code
but
   we figured it would be more responsible to experiment in an extension,
  fine
   tune it against a completely different use case to watchlist and then
  make
   a proposal to move the good parts/all parts into core. I'm still
  personally
   dedicated to resolving the RFC [1]. We have worked hard so that the
api
  to
   gather is backwards compatible with watchlist methods.
   * you can play with it on betalabs:
   ** http://en.m.wikipedia.beta.wmflabs.org/wiki/Special:
  Gather/by/Jdlrobson
   ** http://en.m.wikipedia.beta.wmflabs.org/wiki/Special:Gather
   * I'm personally excited to make multiple watchlists a reality using
this
   extension at Lyon if anyone is keen to help me there. The
infrastructure
   required is all in Gather.
  
   [1]
   https://m.mediawiki.org/wiki/Requests_for_comment/Support_
  for_user-specific_page_lists_in_core
   On 26 Mar 2015 7:20 am, Brian Wolff bawo...@gmail.com wrote:
  
   On Mar 26, 2015 11:04 AM, Brian Wolff bawo...@gmail.com wrote:
   
   
On Mar 26, 2015 9:58 AM, MZMcBride z...@mzmcbride.com wrote:

 Hi.

 Moushira Elamrawy wrote:
 The Extension will keep the name Gather and internally the team
was
   more
 inclined to name the feature Stacks. However, a survey study
has
   been
 carried out by the design research team and Collections, as a
name
   for
   a
 feature, scored far better than the other suggested
alternatives.
   Full
 survey information and results are documented here
 
   https://www.mediawiki.org/wiki/Extension%CB%90Gather/renaming_survey
.

 Right... in the January 2015 thread you linked, it was quickly
  pointed
   out
 that Extension:Collection already exists. The mobile team, in
  typical
 form, decided to ignore any previous work and instead make its
own
 project. At least we were able to shout loudly enough to stop
this
 functionality from being part of the MobileFrontend extension.

   
Hey, count your blessings its not called collections with just
an s
  at
   the end to distinguish it...
   
 This is a new experiment in content curation, which hopefully
helps
   with
 learning new users behavior on mobile web. We are looking
forward
  to
 learning awesome lessons from this beta launch.

 As was also previously pointed out, we've had curation support
for a
   long
 time in the form of categories (another feature that could have
been
 improved rather than making a new extension). Or making a list of
   pages
 using wikilinks. Or tagging pages with templates, which
  auto-generates
   an
 index. Perhaps you can explain why this new feature is limited to
   mobile?

   
I dont know if this criticism is fair. Many users have been asking
for
   multiple watchlist type functionality for years despite the option of
   creating a subpage or category and throwing
special:recentchangeslinked.
   Categories dont really have per user namespace, and i think its
  important

Re: [Wikitech-l] Transfering domain cswp.cz to WMF

2015-03-03 Thread Brian Wolff
On Feb 28, 2015 10:21 PM, Krinkle krinklem...@gmail.com wrote:

 I don't think Wikimedia is currently looking at maintaining a url
shortening service.

 However, a redirect domain (distinguished by whether or not the target
url can be trivially derived from the request url) seems much more
feasible. Especially considering Wikimedia is actually maintaining dozens
(if not hundreds) of those already.


https://github.com/wikimedia/operations-puppet/blob/12617ad/modules/mediawiki/files/apache/sites/redirects/redirects.dat

https://github.com/wikimedia/operations-puppet/blob/12617ad/modules/mediawiki/files/apache/sites/redirects.conf

 It'd be a two-line redirect using RewriteRule.

 — Krinkle


Well, Extension:ShortUrl is already deployed on a couple of wikis (though
not on cswiki as of yet).
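
For illustration, a minimal sketch of what such a two-line RewriteRule might
look like, assuming the request path maps directly onto a cs.wikipedia.org
page title (hypothetical only; the real rule would be written by ops in the
redirects config linked above):

  # Hypothetical: send cswp.cz/<title> to the corresponding cs.wikipedia.org article
  RewriteEngine On
  RewriteRule ^/(.*)$ https://cs.wikipedia.org/wiki/$1 [R=301,L]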

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] post project funding

2015-02-22 Thread Brian Wolff


 Couple quick clarifications:
 1. There have been many IEGs that focus on tool development, including
 those from the most recent round
 https://meta.wikimedia.org/wiki/Grants:IEG#ieg-engaging. There's no
 tradition of denying software projects: they're quite well represented
 among completed IEG projects too
 https://meta.wikimedia.org/wiki/Category:IEG/Proposals/Completed. In the
 past, there have been concerns from members of Product/Engineering that
 IEGs would divert resources from established development priorities, so
 projects that rely on MediaWiki integration were sometimes a tough sell.

I'm aware there are tool projects and gadget projects. While these are
important, and can potentially have a big impact, they are ultimately a
sideshow to our main technology (hopefully no one takes that the wrong way;
our tool creators do amazing things). My post concerns MediaWiki-related
projects. The problem is not that they are a tough sell; the problem is
that they are categorically rejected, regardless of how much sense they may
or may not make.

And yes, the original thread was about a tool. I suppose I've totally
hijacked this thread...

 2. IEG accepts applications twice a year; this coming round (April) the
 focus will be on gender-gap themed projects. The focus of the September
 2015 round, if there is one, has not been established yet. But it's
 unlikely to be gender gap.

I apologize, I was relying on rumour; I should have verified. Nonetheless,
if every round has a theme, it makes it difficult for people to get funding
for a specific project that inherently interests them. However, I suppose
that's going off topic.

--
Bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Possible parser test breakage in cases using gallery

2015-02-26 Thread Brian Wolff
On Feb 26, 2015 2:03 PM, Brion Vibber bvib...@wikimedia.org wrote:

 The gallery tag generation has been updated to include srcset attributes
 for high-density displays: https://phabricator.wikimedia.org/T64709

 An unfortunate consequence is that if extensions have parser test cases
 including a gallery they will need to be updated for the new HTML.

 The only one I noticed on my setup was in Cite, so I submitted a patch to
 fix that; but keep an eye out for surprise test failures elsewhere.


 It's an easy fix -- run the parser tests, find the failing test, and
 copy-paste the 'srcset' attribute from the generated HTML into the test
 case expected output.

 -- brion
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Oh, I should have totally thought of that.

DynamicPageList (Wikimedia) also has gallery tests, but its test suite is
probably pretty broken altogether.
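
For anyone updating an extension's parser tests, the change to the expected
output is roughly of this shape; the file paths and widths below are purely
illustrative, so copy the srcset value your own wiki actually generates
rather than this sketch:

  <img src="/images/thumb/a/ab/Foo.jpg/120px-Foo.jpg"
       srcset="/images/thumb/a/ab/Foo.jpg/180px-Foo.jpg 1.5x, /images/thumb/a/ab/Foo.jpg/240px-Foo.jpg 2x"
       width="120" height="90" />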

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Types of allowed projects for grant funding (renamed)

2015-02-21 Thread Brian Wolff
On 2/21/15, Pine W wiki.p...@gmail.com wrote:
 (Now continuing this discussion on Wikimedia-l also, since we are
 discussing grant policies.)

 For what it's worth, I repeatedly advocated for allowing IEG to support a
 broader range of tech projects when I was on IEGCom. I had the impression
 that there was a lot of concern about limited code review staff time, but
 it seems to me that WMF has more than enough funds to pay for staffing
 for code review if that is the bottleneck for tech-focused IEGs (as well as
 other code changes).

 I also think that the grant scope policies in general seem too conservative
 with regard to small grants (roughly $30k and under). WMF has millions of
 dollars in reserves, there is plenty of mission-aligned work to be done,
 and WMF itself  frequently hires contractors to perform technical,
 administrative, communications, legal and organizing work. It seems to me
 that the scope of allowed funding for grants should be similar to the scope
 of allowed work for contractors, and it would serve the purposes that
 donors have in mind when they donate to WMF if the scope of allowed
 purposes for grants is expanded, particularly given WMF's and the
 community's increasing skills with designing and measuring projects for
 impact.

That's actually debatable. There's grumbling about WMF code review
practices not being sufficient for WMF's own code (or not as sufficient as
some people would like), and code review is definitely a severe
bottleneck right now for existing volunteer contributions.

However, that's not a reason to have no IEG grants for tech projects
ever; it's just a reason for code review to be specifically addressed
in the grant proposal, and for the grantee to have a plan. Maybe that
plan involves having a (volunteer) friend who has +2 do most of the
code review. Maybe that plan involves a staff member getting their
manager to allow them one day a week to review code from the grant
(assuming the project aligns with whatever priorities that staff
member's team has, such an arrangement does not seem unreasonable).
Maybe the grant includes funds for hiring code review resources
(i.e. non-WMF people with +2; we exist!). Maybe there is some other
sort of arrangement that can be made that's specific to the project in
question. Every project is different, and has different needs.

I do not think expecting WMF engineering to devote significant
resources to IEG grants is viable, as I simply doubt it's something
that WMF engineering is willing to do (and honestly I don't blame
them; they have their own projects to concentrate on). IEGs are
independent projects, and must be able to stand mostly on their own
with minimal help. I do think getting WMF to perform a final once-over
for security/performance of a project prior to deployment, at the
end, is reasonable (provided the code follows MediaWiki standards, is
clean, and has mostly already been reviewed for issues by someone in
our community). At most, I think bringing back 20% time, with that time
devoted to doing code review of IEGs, would be the most that we could
reasonably expect WMF to devote (but even if they didn't want to do
that, I don't think that's a reason not to do IEG tech grants).

Code review is an inherent risk to project success, much like user
acceptability. It should be planned around, and considered. We should
not give up just because there is risk. There is always risk. Instead
we must manage risk as best we can.


--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] post project funding

2015-02-21 Thread Brian Wolff
On 2/21/15, Pine W wiki.p...@gmail.com wrote:
 In general WMF has a conservative grant policy (with the exception of IEG,
 grant funding seems to be getting more conservative every year, and some
 mission-aligned projects can't get funding because they don't fit into the
 current molds of the grants programs). Spontaneous cash awards for previous
 work are unlikely. However, if there is an existing project that could use
 some developer time, it may be possible to get grant funding for future
 work.


[Rant]

I find this kind of doubtful when IEGs (which, for an individual
developer doing a small project, are really the type of funding that
applies) have traditionally been denied for anything that even
remotely touches WMF infrastructure. (Arguably the original question
was about Tool Labs things, which are far enough away from WMF
infrastructure to be allowed as an IEG grant, but I won't let that
stop my rant...) Furthermore, it appears that IEGs now seem to be
focusing primarily on gender-gap grants.

I find it odd that we give grants through GSoC and OPW to people who
are largely newbies (although there are exceptions), and probably
not in a position to do anything major. IEG provides grants as long
as they are far enough away from the main site to not actually change
much. But we do not provide grants to normal contributors who want to
improve the technology of our websites in big or important ways.

Ostensibly this is done in the name of:
Any technical components must be standalone or completed on-wiki. Projects are
completed without assistance or review from WMF engineering, so MediaWiki
Extensions or software features requiring code review and integration cannot be
funded. On-wiki tech work (templates, user scripts, gadgets) and completely
standalone applications without a hosting dependency are allowed.

Which on one hand is understandable. WMF tech has its own priorities,
and can't spend all its time babysitting whatever random ideas get
funded, so I understand the fear that brought this about. On the other
hand it is silly, since a grant to existing tech contributors is going
to have much less review burden than GSoC/OPW, and many projects might
have minimal review burden, especially because most review could
perhaps be done by non-WMF employees with +2, requiring only a final
security/performance sign-off. In fact, we already provide very
limited review to whoever submits code to us over the internet
(regardless of whether or how they are funded). If IEG grants were
allowed in this area, it would be something that the grantee would
have to plan and account for, with the understanding that nobody is
going to provide a team of WMF developers to make someone else's
grant happen. We should be providing the same amount of support to IEG
grantees that we would to anyone who submitted code to us. That is,
not much, but perhaps a little, with the amount dependent on how good
their ideas are and how clean their code is.


[End rant]

Politically, I think it's dangerous that WMF seems, more and more, to
be becoming the only stakeholder in MediaWiki development (not that
there is anything wrong with the WMF; I just don't like there being
only one stakeholder). One way to get a more diverse group of
interests is to allow grants to groups with goals consistent with
Wikimedia's. While not exactly super diverse (all the groups would
have similar goals), at least there would then be more groups, and
that would hopefully result in more interesting and radical projects.

--bawolff

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Extension:Gather launching on beta for English WP mobile users.

2015-03-26 Thread Brian Wolff
On Mar 26, 2015 9:58 AM, MZMcBride z...@mzmcbride.com wrote:

 Hi.

 Moushira Elamrawy wrote:
 The Extension will keep the name Gather and internally the team was more
 inclined to name the feature Stacks. However, a survey study has been
 carried out by the design research team and Collections, as a name for a
 feature, scored far better than the other suggested alternatives.  Full
 survey information and results are documented here
 https://www.mediawiki.org/wiki/Extension%CB%90Gather/renaming_survey.

 Right... in the January 2015 thread you linked, it was quickly pointed out
 that Extension:Collection already exists. The mobile team, in typical
 form, decided to ignore any previous work and instead make its own
 project. At least we were able to shout loudly enough to stop this
 functionality from being part of the MobileFrontend extension.


Hey, count your blessings it's not called Collections with just an s at
the end to distinguish it...

 This is a new experiment in content curation, which hopefully helps with
 learning new users behavior on mobile web. We are looking forward to
 learning awesome lessons from this beta launch.

 As was also previously pointed out, we've had curation support for a long
 time in the form of categories (another feature that could have been
 improved rather than making a new extension). Or making a list of pages
 using wikilinks. Or tagging pages with templates, which auto-generates an
 index. Perhaps you can explain why this new feature is limited to mobile?


I don't know if this criticism is fair. Many users have been asking for
multiple-watchlist functionality for years, despite the option of creating
a subpage or category and throwing Special:RecentChangesLinked at it.
Categories don't really have a per-user namespace, and I think it's
important to have interfaces that encourage users to do this sort of thing,
rather than making them figure out that they are physically able to, and
allowed to, do it.

I do agree that it's odd that this isn't developed in core for all users.
The FAQ entry is unconvincing.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Monthly MediaWiki releases

2015-03-26 Thread Brian Wolff
On Mar 26, 2015 5:47 PM, Greg Grossmeier g...@wikimedia.org wrote:

 quote name=Thomas Mulhall date=2015-03-26 time=20:30:30 +
   Hi it has been 3-4 months since the last mediawiki releases when will
they have a new release since it has been a long time.

 Sorry about the delay; the Wikimedia Foundation had a third-party
 security audit of our site and code (including MediaWiki). This turned
 up a number of security issues that we have been/are fixing. Some are
 taking longer than expected.

 The stated understanding before the security audit was to:
 * Have the audit completed (Done)
 * Address all issues found (in-progress)
 * Release both a security release of MW (and extensions) along with the
   security audit for public consumption.


Oh cool. That should be an interesting read.

--bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Org changes in WMF's Platform Engineering group

2015-03-25 Thread Brian Wolff
On Mar 24, 2015 10:45 PM, Rob Lanphier ro...@wikimedia.org wrote:

 Hi folks,

 First things first:  I'm not burying the lede in this email, so if you
 aren't interested in the inner workings of WMF's Platform Engineering
team,
 feel free to ignore the rest of this.  :-)

 We're making a few changes effective in April for Platform Engineering,
 which you all care deeply about because you're still reading.

 We're looking to give the teams a little more clarity of scope.
 Previously, among other teams in Platform Engineering, we had a large
 MediaWiki Core team, and a smaller Multimedia team.  We played a big game
 of musical chairs, and everyone from those teams is part of a new team.
 Additionally, the Parsoid team got into the fun, getting a new member as a
 result.


 * Performance - This team is shooting for all page views in under 1000ms
   (https://docs.google.com/presentation/d/1MtDBNTH1g7CZzhwlJ1raEJagA8qM3uoV7ta6i66bO2M/present?slide=id.g3eb97ca8f_10).
   The team plans to establish key frontend and backend performance metrics
   and assume responsibility for their curation and upkeep, and get a handle
   on web page rendering performance. Right now, it's all about VisualEditor,
   but over time, this is going to be a more generalized function.
   Members: Ori Livneh, Gilles Dubuc (soon!), now hiring!

 * Availability - Make MediaWiki backend failures diminishingly infrequent,
   and prevent end users from noticing the ones that do by making recovery
   as easy and automated as possible. This team does ops-facing work that
   contributes to the overall stability and maintainability of the system:
   things like multi-datacenter support, and migrating off of outdated
   technology to newer, more reliable tech.
   Members: Aaron Schulz and Gilles Dubuc (for now, until he wraps up work
   on multi-datacenter)

 * MediaWiki API - This team's goal will be to make user interface
   innovation+evolution easier and make life easier for our sites' robot
   overlords by making all business logic for MediaWiki available via a
   well specified API. Some APIs will be in PHP and some external over
   HTTP, depending on the needs of other teams.
   Members: Brad Jorsch, Kunal Mehta, Gergo Tisza, Mark Holmquist. Stas
   Malyshev plans to join this team when his work on Wikidata Query wraps
   up. Bryan Davis plans to join as soon as his role as interim Product
   Manager for Platform wraps up.

 * Search - Provide unique and highly relevant search results on Wikimedia
   sites, increasing the value of our content to readers and providing
   tools that help editors make our content better. The team will continue
   working on the existing backlog of CirrusSearch/Elasticsearch bugs and
   improvements, plus Wikidata Query.
   Members: Nik Everett, Stas Malyshev (for now...), James Douglas, also
   now hiring!

 * Security - Making life hard for the people that want to do harm to our
   sites or the people that use them.
   Members: Chris Steipp, also now hiring!

 * Programs support - Support our non-tech programs with tools that delight
   our users and maintain the privacy and security of our community,
   providing infrastructure for things like Wikimania scholarships, grant
   program applications, and ContactForm.
   Members: Niharika Kohli, Bryan Davis (20%)

 * Parsing (renamed from Parsoid) - There are a number of changes to our
   PHP parser that would make things easier for VisualEditor and Parsoid,
   while at the same time offering a more powerful and easy-to-use authoring
   environment for our editors (even those using wikitext). Having Tim on a
   rebranded “Parsing” team gives that team agency to start evolving
   wikitext again, in a way that is supported by Parsoid HTML from day one.
   Members: Existing Parsoid team (Subbu Sastry, Marc Ordinas i Llopis,
   Arlo Brenault, and C. Scott Ananian), plus (new) Tim Starling


 You'll notice that some of these teams are pretty small, especially given
 their scope.  This is likely to be at least a little fluid for a while as
 we make sure we have the balance of work right and as we figure out the FY
 2015-16 budget.

 Let us know if you have any questions about this.  I say us because I'll
 actually be traveling shortly.  Feel free to ask the individual members of
 the teams what's up, or if you don't know who to go to, Bryan Davis will
be
 filling in for my duties while I'm out.

 Thanks
 Rob
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Thanks for sending this; it's always good to be able to figure out who is
working on what.

Does this mean there will no longer be a multimedia team/nobody responsible
or working on what the multimedia team was working on previously?
