Re: [Wikitech-l] Alternative editing interfaces using write API (was: Re: Watchlistr.com, an outside site that asks for Wikimedia passwords)

2009-07-23 Thread Brianna Laugher
2009/7/23 Alex mrzmanw...@gmail.com:
 The OAuth provider typically has a page on the service (say en.wp)
 that lists all the third party apps you have granted authorisation to.
 This authorisation can be time-limited in itself, but if an app starts
 misbehaving (say, doing edits you didn't tell it to do), you can
 revoke its authorisation from the service directly (rather than having
 to change your password to stop it).
 That doesn't greatly reduce the level of trust you'd need to have in a
 service to authorize it to edit under your name.  Oh, great, if it
 goes rogue it can get my account desysopped/blocked and seriously
 confuse or annoy a large number of people who know me, but at least I
 won't have to change my password!

 I imagine you could also have it so that actions made via the API
 identify where they are made from. (a la Twitter's "from web", "from
 twhirl", etc.)

 In that case, if that information was exposed in the UI, it would be
 easy to identify rogue applications and block them completely across
 the site.

 The damage is still done. There might be hundreds of edits to clean up,
 accounts that need to be unblocked, emails wondering why dozens of
 high-profile articles are filled with shock porn, etc.

Then we use something like Special:Nuke to mass-undo edits according
to some criteria (like if they came from a particular OAuth-API-using
app).
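
For illustration, a rough sketch of what such a mass-undo tool could
look like (everything here is hypothetical: the "rogue-app" tag, an
rctag-style tag filter on recentchanges, and a logged-in client with
rollback rights):

<?php
// Sketch only: find recent edits carrying the rogue app's change tag
// and roll each one back. Error handling omitted.
$api = 'http://en.wikipedia.org/w/api.php';
$url = $api . '?action=query&list=recentchanges&rctag=rogue-app'
     . '&rcprop=title%7Cuser%7Cids&rclimit=500&format=json';
$rc = json_decode( file_get_contents( $url ), true );
foreach ( $rc['query']['recentchanges'] as $edit ) {
    // For each tagged edit: fetch a rollback token for the page, then
    // POST action=rollback with title, user and token (omitted here).
    echo "Would roll back {$edit['title']} (edit by {$edit['user']})\n";
}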

All the potential problems posed are ones that Wikipedia faces every
day just because it lets people edit, period. I don't see how doing it
via an API adds some massive new risk.

 In fact that would be far better than the case where you just hand
 over your password, and there is zero information about where that
 edit really came from.

 Or people could just do neither.

So, if someone builds a cool, *useful* 3rd party app, users are just
going to not use it. Right.

If we provide the write API, surely we are implicitly saying to third
parties, "It is OK to build an app that uses this." Why else would we
provide it?

 Well it sounds to me like you are opposed to the whole principle of
 having a write API. No?

 The write API has plenty of valid uses that don't require users to hand
 partial control of their account to 3rd parties.

Really, what are they?

Probably it's good for bots. But that is really limited, compared to
what might be possible.

IIRC the write API was originally developed for/by a phone company to
develop a mobile editing platform. Is that acceptable?


2009/7/23 Aryeh Gregor simetrical+wikil...@gmail.com:
 On Thu, Jul 23, 2009 at 3:21 AM, Brianna
 Laugherbrianna.laug...@gmail.com wrote:
 The value is that you don't train your users that it's OK to give
 their password away to random 3rd parties.

 No, instead you train them to give away the ability to edit using
 their account to random third parties, without giving away their
 password.

Yes, and you put controls around it, so that the potential for damage
is limited and controllable.

 At least most people have had "Don't ever tell any third
 party your password" drilled into their head enough that they'll think
 twice before doing it.

Right... so you never received any of that social networking spam,
just because one of your email contacts put his Hotmail/Yahoo/Gmail
password into some random site just so it could look for additional
contacts?

If the thing is useful enough, people will give away their password.
And currently they don't even have a choice not to.

 I imagine you could also have it so that actions made via the API
 identify where they are made from. (a la Twitter's "from web", "from
 twhirl", etc.)

 In that case, if that information was exposed in the UI, it would be
 easy to identify rogue applications and block them completely across
 the site.

 Okay, so you'd be able to identify the source.  The fact that it's
 possible at all for a third party to create such chaos is still
 unacceptable.  Even worse would be more subtle impersonation, which
 isn't obviously linked to the service (i.e., where the user would be
 suspected of having authorized it even if it was known that it was
 done through the service).

It's not unacceptable... it's how Wikipedia works!

But that is even *more* likely if you don't have OAuth and people have
to hand over their passwords.

 In fact that would be far better than the case where you just hand
 over your password, and there is zero information about where that
 edit really came from.

 False dichotomy.  The legitimate alternatives I presented are
 client-side apps, MediaWiki enhancements, and toolserver tools, not
 handing out your password.  Any site found harvesting Wikipedia users'
 passwords should be immediately blocked at the server level.

So it's OK for a desktop (client-side) app to harvest passwords, but
not a web app. Why?

MediaWiki enhancements - there is necessarily a high barrier to having
an extension accepted into MediaWiki. The type of application I
mentioned, which is only applicable to one topic area, is not likely
to be accepted.

Re: [Wikitech-l] Translate extension, jQuery

2009-07-23 Thread Tisza Gergő
Glanthor glanthor at gmail.com writes:

 another big question is why we don't include jQuery so that it loads
 automatically with _every_ page. Right now at least two wikis load jQuery
 v1.3.2 from common.js (see
 http://zh.wikipedia.org/wiki/MediaWiki:Common.js,
 http://hu.wikipedia.org/wiki/MediaWiki:Common.js), the
 UsabilityInitiative extension uses jQuery v1.3.2 with the cookie and
 browser plugins, and the Collection extension uses jQuery v1.3.2. Three
 jQuery loads, two versions. This is not good...

Collection uses 1.2.5. I suppose we are waiting for ScriptLoader to be ready,
but it would be really helpful if jQuery were included normally in the
meantime, so that site admins can start rewriting site-wide scripts. It has a
very small footprint (20k minified and gzipped) and is unobtrusive (AFAIK 1.3
doesn't even conflict with other jQuery versions), but it cannot be loaded
through importScript() due to a bug in older versions of Internet Explorer,
which leaves pasting it in full into MediaWiki:Common.js - not a nice sight...
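
In the meantime a site could even do it server-side with a tiny hook; a
sketch only (it assumes a locally copied jquery-1.3.2.min.js under
skins/common/, and this is not how core will eventually ship jQuery):

<?php
// In LocalSettings.php. addScript() takes raw HTML, so it works on
// current releases; adjust the path to wherever jQuery is copied.
$wgHooks['BeforePageDisplay'][] = 'efAddJQuery';
function efAddJQuery( &$out ) {
    global $wgStylePath;
    $out->addScript( '<script type="text/javascript" src="'
        . htmlspecialchars( "$wgStylePath/common/jquery-1.3.2.min.js" )
        . '"></script>' . "\n" );
    return true;
}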




Re: [Wikitech-l] Incorporating Third Party Search into MW

2009-07-23 Thread Tod
On 7/17/2009 11:25 AM, Brion Vibber wrote:
 Tod wrote:
 This is my first post and I think I've selected the appropriate list. 
 Please let me know if there is a better place to post my question.

 I have a client with an installed search engine that they don't want to 
 part with.  I have used it to index their installed instance of MW.  Is 
 there a way to integrate the searching of their repository into the 
 onboard MW search?
 
 You can write a SearchEngine plugin which will call out to your search 
 backend and display results within Special:Search on the wiki; see the 
 MWSearch extension for an example -- that's what we use on Wikipedia to 
 hit our Lucene-based search servers.
 
 http://www.mediawiki.org/wiki/Extension:MWSearch
 
 
 Alternatively you can set $wgSearchForwardUrl to simply pass search reqs 
 on to an outside URL. However, search results then won't be shown in 
 MediaWiki user interface, and won't be available through the API interface.
 
 http://www.mediawiki.org/wiki/Manual:$wgSearchForwardUrl
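 
 For example, in LocalSettings.php ("search.example.com" is just a 
 placeholder for your engine's endpoint; $1 is replaced with the 
 URL-encoded search term):
 
 $wgSearchForwardUrl = 'http://search.example.com/results?q=$1';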
 
 -- brion
 

Thanks Brion, just what I was looking for.

- Tod



Re: [Wikitech-l] Known to fail interactions with compare and record

2009-07-23 Thread dan nessett

Thanks. Just to clarify, I am not changing --fuzz. I am testing --ktf-to-fail 
in conjunction with other parserTests options to ensure there is no 
interference. The chances of such interference are very small, but since I have 
been preaching the importance of regression testing, I thought I should eat my 
own dog food.

--- On Wed, 7/22/09, Tim Starling tstarl...@wikimedia.org wrote:

 From: Tim Starling tstarl...@wikimedia.org
 Subject: Re: [Wikitech-l] Known to fail interactions with compare and record
 To: wikitech-l@lists.wikimedia.org
 Date: Wednesday, July 22, 2009, 10:17 PM
 dan nessett wrote:
  Well, it isn't all that clear to me, but I really don't care. I'll
  change it to whatever people want. Call me anything you like, but
  don't call me late for dinner.
 
  Can someone tell me how the --fuzz option is supposed to behave? I am
  cross-testing the new parserTests parameter in conjunction with its
  other parameters. I have tested --quick and --quiet. They seem to
  work fine with --ktf-to-fail. When I test --fuzz, parserTests seems
  to go on walkabout in the Great Australian desert, periodically
  spewing out stuff like:
 
  100: 100/100 (mem: 36%) 200: 200/200 (mem: 37%) 300: 300/300 (mem: 37%)
  400: 400/400 (mem: 37%) 500: 500/500 (mem: 38%) 600: 600/600 (mem: 38%)
 
  Is this expected behavior? Is parserTests supposed to finish when you
  use --fuzz, or is this some kind of stress test that never finishes?
 
 It runs forever, unless it runs out of memory or hits a fatal PHP
 error. It's not a stress test, it's a fuzz test, hence the name. It
 logs exceptions generated by the parser for random input.
 
 Maybe if there's an undocumented option that you don't understand, you
 should leave it alone. Otherwise some day your wiki will end up with
 all its articles deleted, or with all the text converted to
 ISO-2022-JP or something.
 
 -- Tim Starling
 
 


[Wikitech-l] Do no harm

2009-07-23 Thread dan nessett

A fundamental principle of medicine is "do no harm." It has a long history and 
you can find it in the Hippocratic oath with slightly different wording.

This is also an important principle of software development. If you add a new 
feature or fix a bug, make sure the resulting code isn't worse off than before. 
"Do no harm" is the basic motivation behind regression testing.

I have been thinking about Brion's suggestion of fixing the bug in 
WebRequest::extractTitle(). It is a reasonable point. Don't just whine about a 
problem. Fix it. He even provided the best strategy for accomplishing this. 
Make sure $wgScriptPath gets properly escaped when initialized. I am sure 
doing this would not require a significant amount of coding. But, how would 
changing the way $wgScriptPath is formatted affect the rest of the code base?

I decided to do a multi-file search for $wgScriptPath in phase3 and extensions 
[r53650]. There are 439 references to it in phase3 and extensions combined. In 
phase3 alone, there are 47 references. Roughly 1/3 of these are in global 
declarations, so phase3 has about 30 active references and in phase3 and 
extensions combined there are roughly 300. [By active I mean references in 
which the value of $wgScriptPath affects the code's logic.]
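
For the record, the raw count is a one-liner with GNU grep (-F treats
the pattern as a fixed string, not a regex):

$ grep -rnF '$wgScriptPath' phase3 extensions | wc -l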

So, if I were to change the formatting of $wgScriptPath, there are potentially 
30 places in the main code and roughly 270 more in extensions where problems 
might occur. To ensure the change does no harm, I would have to observe the 
effect of the change on up to 300 places in the distribution. This is a pretty 
onerous requirement. My guess is very few developers would take the time 
to do it.

On the other hand, if there were regression tests for the main code and for the 
most important extensions, I could make the change, run the regression tests 
and see if any break. If some do, I could focus my attention on those problems. 
I would not have to find every place the global is referenced and see if the 
change adversely affects the logic.


  



Re: [Wikitech-l] Do no harm

2009-07-23 Thread Gregory Maxwell
On Thu, Jul 23, 2009 at 11:07 AM, dan nessettdness...@yahoo.com wrote:
[snip]
 On the other hand, if there were regression tests for the main code and for 
 the most important extensions, I could make the change, run the regression 
 tests and see if any break. If some do, I could focus my attention on those 
 problems. I would not have to find every place the global is referenced and 
 see if the change adversely affects the logic.

This only holds if the regression test would fail as a result of the
change. This is far from a given for many changes and many common
tests.

Not to mention the practical complications -- many extensions have
complicated configuration and/or external dependencies. A "make
test_all_extensions" is not especially realistic.

Automated tests are good, necessary even, but they don't relieve you
of the burden of directly evaluating the impact of a broad change.


Re: [Wikitech-l] Do no harm

2009-07-23 Thread Aryeh Gregor
On Thu, Jul 23, 2009 at 11:07 AM, dan nessettdness...@yahoo.com wrote:
 On the other hand, if there were regression tests for the main code and for 
 the most important extensions, I could make the change, run the regression 
 tests and see if any break. If some do, I could focus my attention on those 
 problems. I would not have to find every place the global is referenced and 
 see if the change adversely affects the logic.

We are all aware of the benefits of regression tests.



[Wikitech-l] SVN help

2009-07-23 Thread jeroen De Dauw
Hey,

I'm one of the GSoC students for Wikimedia Foundation this
year, and have just released the first versions of my extensions [0, 1].

I do not know how to add them to the SVN repository though. (I have never 
worked with SVN before.)

My mentor pointed out that I should place a request here [2], which I have
done. Since he's not really familiar with Windows (which I'm using), he
was not able to help me with the following things:
- What's the easiest way to generate an SSH public key on Windows?
- What's a good SVN client to use for Windows?

Any help with those would be greatly appreciated.

[0] http://www.mediawiki.org/wiki/Extension:Maps
[1] http://www.mediawiki.org/wiki/Extension:Semantic_Maps
[2] http://www.mediawiki.org/wiki/Commit_access_requests#Current_requests
 
Cheers,
De Dauw '[RTS]BN+VS*' Jeroen

 
Forum: code.bn2vs.com
Blog: blog.bn2vs.com

Xfire: bn2vs ; Skype: rts.bn.vs 

Don't panic. Don't be evil.
70 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 6C 69 66 65!


  


Re: [Wikitech-l] Do no harm

2009-07-23 Thread dan nessett

True. Regression tests do not guarantee bugs are not introduced by changes. 
However, they are a fundamental piece of the QA puzzle.

--- On Thu, 7/23/09, Gregory Maxwell gmaxw...@gmail.com wrote:

 From: Gregory Maxwell gmaxw...@gmail.com
 Subject: Re: [Wikitech-l] Do no harm
 To: Wikimedia developers wikitech-l@lists.wikimedia.org
 Date: Thursday, July 23, 2009, 9:50 AM
 On Thu, Jul 23, 2009 at 11:07 AM, dan nessett dness...@yahoo.com wrote:
 [snip]
  On the other hand, if there were regression tests for the main code
  and for the most important extensions, I could make the change, run
  the regression tests and see if any break. If some do, I could focus
  my attention on those problems. I would not have to find every place
  the global is referenced and see if the change adversely affects the
  logic.
 
 This only holds if the regression test would fail as a result of the
 change. This is far from a given for many changes and many common
 tests.
 
 Not to mention the practical complications -- many extensions have
 complicated configuration and/or external dependencies. A "make
 test_all_extensions" is not especially realistic.
 
 Automated tests are good, necessary even, but they don't relieve you
 of the burden of directly evaluating the impact of a broad change.
 

Re: [Wikitech-l] Do no harm

2009-07-23 Thread dan nessett

The reason I started this conversation is I want to write an extension. I also 
want to be a good citizen and do this in a way that doesn't break things (this 
would also have the desirable effect of making it more likely that some MW 
installation would use the extension).

So, since, as you point out, everyone is aware that regression tests are 
beneficial, and since, except for parserTests, there don't seem to be any 
substantive regression tests available, what are some practical steps that 
would improve the situation?
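
For instance, a regression test for the $wgScriptPath issue might look 
something like this sketch (a hypothetical test; it assumes PHPUnit and 
MediaWiki's FauxRequest test double):

<?php
// Sketch only. A real test would exercise WebRequest::extractTitle()
// against a crafted REQUEST_URI; FauxRequest just fakes request data.
class ScriptPathRegressionTest extends PHPUnit_Framework_TestCase {
    public function testTitleSurvivesOddScriptPath() {
        global $wgScriptPath;
        $old = $wgScriptPath;
        $wgScriptPath = '/w iki';  // a path that needs escaping
        $request = new FauxRequest( array( 'title' => 'Main Page' ) );
        $this->assertEquals( 'Main Page', $request->getText( 'title' ) );
        $wgScriptPath = $old;
    }
}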

--- On Thu, 7/23/09, Aryeh Gregor simetrical+wikil...@gmail.com wrote:

 From: Aryeh Gregor simetrical+wikil...@gmail.com
 Subject: Re: [Wikitech-l] Do no harm
 To: Wikimedia developers wikitech-l@lists.wikimedia.org
 Date: Thursday, July 23, 2009, 9:51 AM
 On Thu, Jul 23, 2009 at 11:07 AM, dan nessett dness...@yahoo.com wrote:
  On the other hand, if there were regression tests for the main code
  and for the most important extensions, I could make the change, run
  the regression tests and see if any break. If some do, I could focus
  my attention on those problems. I would not have to find every place
  the global is referenced and see if the change adversely affects the
  logic.
 
 We are all aware of the benefits of regression tests.
 


Re: [Wikitech-l] Do no harm

2009-07-23 Thread William Allen Simpson
Here's what I do in similar circumstances.  Create another variable,
$wgScriptPathEscaped.  Then, gradually make the changes.  Wait for tests.
Change some more.  Eventually, most of the old ones will be gone.

By inspection, many of the uses will be terminal, not passed to other
routines, with no side effects.  Those should be done first.
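
Concretely, something like this (a sketch only; the right escaping
depends on the context each use sits in):

<?php
// Define the escaped twin once, at setup time:
$wgScriptPathEscaped = htmlspecialchars( $wgScriptPath );

// Then convert terminal uses first -- places that print the value
// directly and don't pass it on:
//   before: echo "<a href=\"$wgScriptPath/index.php\">link</a>";
//   after:  echo "<a href=\"$wgScriptPathEscaped/index.php\">link</a>";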

Sure, it might take a month or three.  But wishing for some universal
regression suite is going to be about the same as waiting for the
single pass parser.



Re: [Wikitech-l] Alternative editing interfaces using write API (was: Re: Watchlistr.com, an outside site that asks for Wikimedia passwords)

2009-07-23 Thread Aryeh Gregor
On Thu, Jul 23, 2009 at 2:20 AM, Brianna
Laugherbrianna.laug...@gmail.com wrote:
 All the potential problems posed are ones that Wikipedia faces every
 day just because it lets people edit, period. I don't see how doing it
 via an API adds some massive new risk.

Well.  If you had some way to clearly distinguish which automated tool
made the edit, and a way for admins to block  all edits from a
specific tool as easily as they can currently block or revert all
edits from a specific user, and no way to take dangerous admin-only
actions (e.g. editing interface messages) using the tool -- then on
reflection, I'll grant that I don't see any problems with it.  The
only serious risk, then, would be to a user's reputation, if the tool
author is subtly malicious.  That only affects the specific user, and
is a risk they can decide to take or not.

That's a considerable amount of infrastructure that would be needed,
though.  I'm not sure it's worth the effort just for the sake of
enabling web-based editing tools.  Remember, for desktop tools this is
pointless.  They can already steal your password directly in a hundred
different ways, so letting them edit directly using your credentials
is as safe as running them at all.  There are plenty of desktop tools
that are already used as editing aids.  I doubt the gain from allowing
web-based tools as well would be worth implementing this whole
authentication system.

 So, if someone builds a cool, *useful* 3rd party app, users are just
 going to not use it. Right.

Sure they're not, if we block its IP address at the firewall as soon
as it's reported to us.  Practically speaking, I haven't heard of many
such services becoming widespread, despite the fact that they're
entirely possible if users can be persuaded to part with their
passwords.  It seems like MediaWiki enhancements *plus* toolserver
tools *plus* client-side code (including custom JavaScript) are enough
to keep pretty much everyone happy.  Each of the three has its own
limitations, but together they give fairly good coverage of the
features people want.

 IIRC the write API was originally developed for/by a phone company to
 develop a mobile editing platform. Is that acceptable?

Again, there's no increase in attack surface, because the one running
the service is your ISP.  They can already sniff your password unless
you use SSL, if they're malicious.  The problem is added points of
failure.  Currently the only way you could edit under someone else's
name would be either to compromise their desktop, compromise
Wikimedia, or compromise some party in between.  Anything that only
depends on the security of those three points is no worse than our
current security.

Giving anyone on the Internet the ability to gather massive amounts of
editing credentials adds a *new* point of failure.  Not only that, but
the new point of failure is much more serious than any of the existing
three.  We can (have to, really) assume that Wikimedia and large ISPs
are hard to compromise; and while a desktop might be easy to
compromise, it will have very limited access (to just one or a few
accounts).  A poorly-administered third-party site that has the
ability to edit as thousands of different established users could be
easy to compromise *and* have a big impact.

This is manageable if we allow such services to be monitored and
blocked easily, but not otherwise.  If you can't tell the third-party
service from normal edits, then you'd be forced to just block all the
misbehaving users -- but those might well include many of the admins
who would normally do the blocking!  That's why it's scary.  If you
can stop the service easily, then it becomes acceptable.  I personally
doubt it's worth the effort, but if someone's willing to do it, I
don't see any insurmountable problems.

 Right... so you never received any of that social networking spam,
 just because one of your email contacts put his Hotmail/Yahoo/Gmail
 password into some random site just so it could look for additional
 contacts?

I said "think twice", not "refrain from doing it in all cases."

 If the thing is useful enough, people will give away their password.

Except in practice, they don't do it very often at all, at least for
Wikipedia, and at least that I've heard of.  Do you have any
counterexamples beyond the one that triggered this thread?

 So it's OK for a desktop (client-side) app to harvest passwords, but
 not a web app. Why?

I already explained this in detail.  I'm not sure what part you don't
get.  A desktop app can impersonate you no matter what.  Giving your
password to it makes no appreciable difference to security.  Using
OAuth for desktop apps gives you no protection.

 Toolserver tools - as previously mentioned, these are not allowed to
 harvest login info, so I don't understand their relevance here. Anyone
 can write a non-login-info-using, API-using 3rd party app whether or
 not it is hosted by the toolserver.

No, because toolserver tools have direct database access. 

Re: [Wikitech-l] Watchlistr.com, an outside site that asks for Wikimedia passwords

2009-07-23 Thread Tim Starling
Message from the developer. I will see if he's interested in
subscribing, but a forward will do for now.

 Original Message 
Subject: Re: Watchlistr
Date: Thu, 23 Jul 2009 11:20:19 -0500
From: Cody Jung funkyca...@gmail.com
To: Tim Starling tstarl...@wikimedia.org

Hey there Tim,
Apologies, I am not actually sure how to post to a mailing list; if
you would, could you post this for me?


I completely understand the hesitation (and, indeed, the outright
repulsion) toward my application. Although I am confident in the security
of Watchlistr, I realize that, out of the blue, it seems very
suspicious. When I saw the post by MrZaius on the Wikipedia Bounty
Board I thought to myself, "Why hasn't anyone done this before? It
seems really easy to implement!"

Now I see why.

Therefore, I would like to address several points brought up by the
Wikitech-l mailing list users. I will start at the top of the thread
and work down, address various comments as I go.

To Sage Ross:
Although I have very little editing experience, as far as the
Wikimedia projects go, anyway, when I saw the request for a transwiki
watchlist tool, I thought "this is how I can help improve Wikipedia."
This is something I _know_ how to do, and well. I want to assure
everyone that my intentions were good (if a little misguided), and
I have no intention of phishing for anyone's accounts.

To Michael Rosenthal:
I have looked at gWatch, but the fundamental issue I see with it is
the fact that you have to watch something twice -- you must manually
enter pages to watch, and that just seems a little silly.

To Gregory Maxwell and Aryeh Gregor:
Until such time as my application can be a) proven trustworthy, or b)
improved to *not* use passwords, I have removed all user accounts (all
4 of them...), and frozen registrations. I do, however, ask that you
_please_ do not block the IP addresses at the server level. I am
on a shared hosting solution, and doing that could very well create
issues for other users of my host.

To help in the "proving trustworthy, or else" process, I have released
the source code of Watchlistr - please take a look at it. You will see
that I take the utmost care in securing user information. The wiki
logins are encrypted with AES in our database. The key used to encrypt
each user's login list is their site username, which is stored as a
SHA1 hash in our database. If a cracker were to, somehow, gain access
to the database, they would be left with a pile of garbage.

Here's how the site works:

1. The user logs in: their username is hashed and checked against the
database; if it matches, we create a session with that username as a
variable for later access.
2. When the user accesses their aggregate watchlist for the first time
each session, we take the username, decrypt the wiki list, and log
them in to their sites. The cURL cookies that result are then stored
above the web server root, in a protected directory. The passwords are
not used for the rest of the session (the stored cookies are used
instead).
3. When the user logs out, the session is destroyed and the cURL
cookie jar is deleted.
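
In (simplified) code, the storage side is roughly this - a sketch, not
the literal Watchlistr source (mcrypt is the usual PHP AES binding):

<?php
// Simplified sketch of the scheme described above.
$key        = $siteUsername;              // encryption key = username
$lookup     = sha1( $siteUsername );      // what the database stores
$iv         = mcrypt_create_iv( 16, MCRYPT_DEV_URANDOM );
$ciphertext = mcrypt_encrypt( MCRYPT_RIJNDAEL_128, $key,
    serialize( $wikiLogins ), MCRYPT_MODE_CBC, $iv );
// store ($lookup, $iv, $ciphertext); decrypting requires the username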

As for the other solutions that were presented - I was really trying
to create a cross-platform, cross-browser solution that would not
hinge on one particular technology. Javascript would be great, but
what if someone doesn't have JS enabled? OAuth and a read-only API
would be close-to-ideal, but they currently don't work with/don't
exist on the Wikimedia servers. I am, however, open to other workable
solutions that are presented - let me know.

Apologies once again for the uproar I have caused,
Cody Jung
Developer, Watchlistr


On Wed, Jul 22, 2009 at 10:48 PM, Tim
Starlingtstarl...@wikimedia.org wrote:
 Please comment on the wikitech-l discussion about whether or not to
 block watchlistr.com from Wikimedia servers:

 http://lists.wikimedia.org/pipermail/wikitech-l/2009-July/044238.html





Re: [Wikitech-l] SVN help

2009-07-23 Thread Aryeh Gregor
On Thu, Jul 23, 2009 at 1:00 PM, jeroen De Dauwjeroen_ded...@yahoo.com wrote:
 - What's the easiest way to generate an SSH public key on Windows?

PuTTY.

 - What's a good SVN client to use for Windows?

TortoiseSVN.



Re: [Wikitech-l] Watchlistr.com, an outside site that asks for Wikimedia passwords

2009-07-23 Thread Aryeh Gregor
On Thu, Jul 23, 2009 at 1:37 PM, Tim Starlingtstarl...@wikimedia.org wrote:
 To help in the proving trustworthy, or else process, I have released
 the source code of Watchlistr - please take a look at it. You will see
 that I take the utmost care in securing user information. The wiki
 logins are encrypted with AES in our database. The key used to encrypt
 each user's login list is their site username, which is stored as a
 SHA1 hash in our database. If a cracker were to, somehow, gain access
 to the database, they would be left with a pile of garbage.

They would only have to get the site usernames to decrypt the login
info.  They could get those the next time each user logs in, if
they're not detected immediately.  There's no way around this; if your
program can log in as the users, so can an attacker who's able to
subvert your program.

 As for the other solutions that were presented - I was really trying
 to create a cross-platform, cross-browser solution that would not
 hinge on one particular technology. Javascript would be great, but
 what if someone doesn't have JS enabled? OAuth and a read-only API
 would be close-to-ideal, but they currently don't work with/don't
 exist on the Wikimedia servers. I am, however, open to other workable
 solutions that are presented - let me know.

I would suggest you apply for a toolserver account:

https://wiki.toolserver.org/view/Account_approval_process

Once you have a toolserver account, I'd be willing to work with you to
arrange for some form of direct access to all wikis' watchlist tables
(I'm a toolserver root).  You then wouldn't need to possess any login
info.



Re: [Wikitech-l] Do no harm

2009-07-23 Thread dan nessett

Sounds like a plan. Be my guest.

--- On Thu, 7/23/09, William Allen Simpson william.allen.simp...@gmail.com 
wrote:

 From: William Allen Simpson william.allen.simp...@gmail.com
 Subject: Re: [Wikitech-l] Do no harm
 To: Wikimedia developers wikitech-l@lists.wikimedia.org
 Date: Thursday, July 23, 2009, 10:21 AM
 Here's what I do in similar circumstances.  Create another variable,
 $wgScriptPathEscaped.  Then, gradually make the changes.  Wait for
 tests.  Change some more.  Eventually, most of the old ones will be
 gone.
 
 By inspection, many of the uses will be terminal, not passed to other
 routines, with no side effects.  Those should be done first.
 
 Sure, it might take a month or three.  But wishing for some universal
 regression suite is going to be about the same as waiting for the
 single pass parser.
 


Re: [Wikitech-l] [Foundation-l] Britain or Ukraine? What UK stands for in Wikimedia jargon

2009-07-23 Thread Tim Starling
Aryeh Gregor wrote:
 On Wed, Jul 22, 2009 at 7:08 PM, Thomas Daltonthomas.dal...@gmail.com wrote:
 2009/7/22 Pavlo Shevelo pavlo.shev...@gmail.com:
 There should not be any real problem to link wikimedia.org.uk directly
 to Wikimedia UK chapter wiki (wherever it's hosted).
 It depends on how the WMF has everything set up. They have a
 complicated setup for hosting multiple wikis, it may well be
 hard-coded that they all use the WMF domains. I'm cross-posting this
 to wikitech-l, hopefully someone there can clarify the situation. Can
 a wiki hosted on the WMF servers use a non-WMF domain?
 
 Of course it can.  There are plenty of domains for Wikimedia wikis,
 there's no reason you couldn't add as many more as you felt like.
 Likewise some subdomains of Wikimedia domains are hosted on
 non-Wikimedia servers.  You can point domain names however you like.
 (Disclaimer: I'm not a shell user, but this is still right.  :P)

Well, there's a policy question, which is potentially more complicated
than the technical question, but I asked it internally when Wikimedia
Australia was setting up and the answer was yes.

-- Tim Starling




[Wikitech-l] What to do about --compare and --record. Second request for comments

2009-07-23 Thread dan nessett

So far no one has responded to my question about how --ktf-to-fail should 
interact with --compare and --record. Also, at least one commenter has 
suggested a different name for --ktf-to-fail. Before I open a bug and attach 
the patches, I would like to resolve these issues. Since Brion suggested this 
task, would he comment?


  



Re: [Wikitech-l] Alternative editing interfaces using write API (was: Re: Watchlistr.com, an outside site that asks for Wikimedia passwords)

2009-07-23 Thread Alex
Brianna Laugher wrote:
 2009/7/23 Alex mrzmanw...@gmail.com:
 The OAuth provider typically has a page on the service (say en.wp)
 that lists all the third party apps you have granted authorisation to.
 This authorisation can be time-limited in itself, but if an app starts
 misbehaving (say, doing edits you didn't tell it to do), you can
 revoke its authorisation from the service directly (rather than having
 to change your password to stop it).
 That doesn't greatly reduce the level of trust you'd need to have in a
 service to authorize it to edit under your name.  Oh, great, if it
 goes rogue it can get my account desysopped/blocked and seriously
 confuse or annoy a large number of people who know me, but at least I
 won't have to change my password!
 I imagine you could also have it so that actions made via the API
 identify where they are made from. (a la Twitter's "from web", "from
 twhirl", etc.)

 In that case, if that information was exposed in the UI, it would be
 easy to identify rogue applications and block them completely across
 the site.
 The damage is still done. There might be hundreds of edits to clean up,
 accounts that need to be unblocked, emails wondering why dozens of
 high-profile articles are filled with shock porn, etc.
 
 Then we use something like Special:Nuke to mass-undo edits according
 to some criteria (like if they came from a particular Oauth-API-using
 app).
 
 All the potential problems posed are ones that Wikipedia faces every
 day just because it lets people edit, period. I don't see how doing it
 via an API adds some massive new risk.
 

Really? When was the last time a large number of accounts belonging to
established users were hijacked and used for vandalism? Typically when
vandalism happens, it's coming from a very new account, so people don't
think twice about it. If accounts belonging to established users start
to vandalize, it's going to cause quite a bit of confusion. At least the
first couple of times. I imagine after a few instances, communities may
start to prohibit people from using such services.

 In fact that would be far better than the case where you just hand
 over your password, and there is zero information about where that
 edit really came from.
 Or people could just do neither.
 
 So, if someone builds a cool, *useful* 3rd party app, users are just
 going to not use it. Right.
 
 If we provide the write API, surely we are implicitly saying to third
 parties, "It is OK to build an app that uses this." Why else would we
 provide it?

So if people /really/ want a security hole, we should provide it for
them? I don't think so.

Just because we provide an API doesn't mean we're asking for this. Would
you say that because we allow anyone to edit, we're implicitly saying
"Please, come vandalize our site"? We provide the API so that
programmers can have a stable interface and their code won't break every
time there's a slight change to the UI. We're making no assumptions as
to who those programmers are.

 Well it sounds to me like you are opposed to the whole principle of
 having a write API. No?
 The write API has plenty of valid uses that don't require users to hand
 partial control of their account to 3rd parties.
 
 Really, what are they?
 
 Probably it's good for bots. But that is really limited, compared to
 what might be possible.
 
 IIRC the write API was originally developed for/by a phone company to
 develop a mobile editing platform. Is that acceptable?

Yes, it's very good for bots. It's also used with JavaScript. You make it
sound like these are trivial uses.

A mobile editing platform is different from the applications being
discussed here. A mobile editing platform should not require you to give
any access to your account to a third party. All it should need is an
app installed on the phone that basically just provides a simplified
editing interface. Or at worst, you're giving the information to a
company that has an obligation to keep your personal data secure, and
that you already trust with far more sensitive information, like your
home address and credit card number. So that's totally different from
some random website operated by some unknown person.

On a related note, however, there's no reason why such an interface
should require a 3rd party at all. There's been a lot of work done
lately on a mobile version for Wikimedia sites. I believe it's read-only
at the moment, but I imagine the eventual goal is to have most of the
capabilities of the regular site.


-- 
Alex (wikipedia:en:User:Mr.Z-man)



Re: [Wikitech-l] Watchlistr.com, an outside site that asks for Wikimedia passwords

2009-07-23 Thread Cody Jung
 On Thu, Jul 23, 2009 at 1:37 PM, Tim Starlingtstarling at 
wikimedia.org wrote:
 
 They would only have to get the site usernames to decrypt the login
 info.  They could get those the next time each user logs in, if
 they're not detected immediately.  There's no way around this; if your
 program can log in as the users, so can an attacker who's able to
 subvert your program.

Wouldn't adding a salt fix this? They would have to have the 
username, the database, and the salt value to decrypt the wiki list.

 
 I would suggest you apply for a toolserver account:
 
 https://wiki.toolserver.org/view/Account_approval_process
 
 Once you have a toolserver account, I'd be willing to work with you to
 arrange for some form of direct access to all wikis' watchlist tables
 (I'm a toolserver root).  You then wouldn't need to possess any login
 info.
 

I attempted to apply for a toolserver account, but it appears that the 
server at http://toolserver.org/accountrequest is down (as of 1:27pm CDT).

~Cody





Re: [Wikitech-l] Watchlistr.com, an outside site that asks for Wikimedia passwords

2009-07-23 Thread Happy-melon


Aryeh Gregor simetrical+wikil...@gmail.com wrote in message 
news:7c2a12e20907231051s638dd2f9v399ac2a79e185...@mail.gmail.com...
 On Thu, Jul 23, 2009 at 1:37 PM, Tim Starlingtstarl...@wikimedia.org 
 wrote:
 To help in the proving trustworthy, or else process, I have released
 the source code of Watchlistr - please take a look at it. You will see
 that I take the utmost care in securing user information. The wiki
 logins are encrypted with AES in our database. The key used to encrypt
 each user's login list is their site username, which is stored as a
 SHA1 hash in our database. If a cracker were to, somehow, gain access
 to the database, they would be left with a pile of garbage.

 They would only have to get the site usernames to decrypt the login
 info.  They could get those the next time each user logs in, if
 they're not detected immediately.  There's no way around this; if your
 program can log in as the users, so can an attacker who's able to
 subvert your program.

Or, since the set of registered Wikimedia users is both vastly smaller than 
the superset of all possible usernames (remember it's restricted to users 
with a global login AFAICT), and readily accessible through a 
high-throughput API, a brute-force attack would be, if not trivial, 
certainly extremely feasible.
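
A sketch of what I mean (list=allusers is a real API module; the
decryption attempt itself is left abstract):

<?php
// Enumerate usernames via the API and try each one as the AES key
// against a ciphertext lifted from the stolen database.
$api = 'http://en.wikipedia.org/w/api.php?action=query&list=allusers'
     . '&aulimit=500&format=json';
$data = json_decode( file_get_contents( $api ), true );
foreach ( $data['query']['allusers'] as $user ) {
    $candidateKey = $user['name'];
    // attempt decryption with $candidateKey; a sensible plaintext
    // identifies that account's stored logins
}
// follow $data['query-continue'] to walk the full user list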

 As for the other solutions that were presented - I was really trying
 to create a cross-platform, cross-browser solution that would not
 hinge on one particular technology. Javascript would be great, but
 what if someone doesn't have JS enabled? OAuth and a read-only API
 would be close-to-ideal, but they currently don't work with/don't
 exist on the Wikimedia servers. I am, however, open to other workable
 solutions that are presented - let me know.

 I would suggest you apply for a toolserver account:

 https://wiki.toolserver.org/view/Account_approval_process

 Once you have a toolserver account, I'd be willing to work with you to
 arrange for some form of direct access to all wikis' watchlist tables
 (I'm a toolserver root).  You then wouldn't need to possess any login
 info.

This looks like a *much* more acceptable system.  Although how would you 
authenticate without collecting proscribed data...?

--HM 





Re: [Wikitech-l] Watchlistr.com, an outside site that asks for Wikimedia passwords

2009-07-23 Thread Marco Schuster
On Thu, Jul 23, 2009 at 8:50 PM, Happy-melon happy-me...@live.com wrote:



 Aryeh Gregor simetrical+wikil...@gmail.com wrote in message
 news:7c2a12e20907231051s638dd2f9v399ac2a79e185...@mail.gmail.com...
  On Thu, Jul 23, 2009 at 1:37 PM, Tim Starlingtstarl...@wikimedia.org
  wrote:
  To help in the proving trustworthy, or else process, I have released
  the source code of Watchlistr - please take a look at it. You will see
  that I take the utmost care in securing user information. The wiki
  logins are encrypted with AES in our database. The key used to encrypt
  each user's login list is their site username, which is stored as a
  SHA1 hash in our database. If a cracker were to, somehow, gain access
  to the database, they would be left with a pile of garbage.
 
  They would only have to get the site usernames to decrypt the login
  info.  They could get those the next time each user logs in, if
  they're not detected immediately.  There's no way around this; if your
  program can log in as the users, so can an attacker who's able to
  subvert your program.

 Or, since the set of registered Wikimedia users is both vastly smaller than
 the superset of all possible usernames (remember it's restricted to users
 with a global login AFAICT), and readily accessible through a
 high-throughput API, a brute-force attack would be, if not trivial,
 certainly extremely feasible.
 
  As for the other solutions that were presented - I was really trying
  to create a cross-platform, cross-browser solution that would not
  hinge on one particular technology. Javascript would be great, but
  what if someone doesn't have JS enabled? OAuth and a read-only API
  would be close-to-ideal, but they currently don't work with/don't
  exist on the Wikimedia servers. I am, however, open to other workable
  solutions that are presented - let me know.
 
  I would suggest you apply for a toolserver account:
 
  https://wiki.toolserver.org/view/Account_approval_process
 
  Once you have a toolserver account, I'd be willing to work with you to
  arrange for some form of direct access to all wikis' watchlist tables
  (I'm a toolserver root).  You then wouldn't need to possess any login
  info.

 This looks like a *much* more acceptable system.  Although how would you
 authenticate without collecting proscribed data...?


Let the user prove account ownership by a talk page edit. This is the way
Interiot did it in his old edit counter... (is that one still active?)

Marco


-- 
VMSoft GbR
Nabburger Str. 15
81737 München
Geschäftsführer: Marco Schuster, Volker Hemmert
http://vmsoft-gbr.de

Re: [Wikitech-l] Translate extension, jQuery

2009-07-23 Thread Brion Vibber
On 07/22/2009 05:45 AM, Glanthor wrote:
  another big question is why we don't include jQuery so that it loads
  automatically with _every_ page?

Because the version of MediaWiki currently in production doesn't have or 
use jQuery, and the merging of the new-upload/scriptloader branch which 
begins to use and provide it is still in progress on dev trunk.

-- brion



Re: [Wikitech-l] Watchlistr.com, an outside site that asks for Wikimedia passwords

2009-07-23 Thread Brion Vibber
On 07/22/2009 05:11 PM, Ryan Lane wrote:
 On Wed, Jul 22, 2009 at 3:49 PM, Gregory Maxwellgmaxw...@gmail.com  wrote:
 If it has your credentials it can impersonate you, which is bad.

 It addressed by making it possible for the site to generate access
 cookies for particular resources which you could share.  I.e.
 generate a code that gives someone read only access to my watchlist.


 What about OpenID + OAuth?

In theory yes, I'd like to support that sort of thing.

(For those unfamiliar: this would allow third party tools or sites to 
request limited access on a user's behalf, without exposing the user's 
password credentials to that third-party tool. The user would need to 
agree to exactly which information would be provided to the tool, and 
would be able to revoke the access in the future.

This is broadly similar to the authorization for Flickr API clients and 
Facebook apps, but lots of sites are transitioning from their older 
proprietary protocols for this to OpenID+OAuth.)
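
As a sketch of the client side, using the PECL OAuth extension and
hypothetical endpoint URLs (nothing like this exists on our servers yet):

<?php
// Three-legged OAuth flow, client side. Endpoints are hypothetical.
$oauth = new OAuth( $consumerKey, $consumerSecret );
$req = $oauth->getRequestToken(
    'http://en.wikipedia.org/w/oauth/request_token' );
// Send the user to .../oauth/authorize?oauth_token=... to approve,
// then exchange the request token for an access token:
$acc = $oauth->getAccessToken(
    'http://en.wikipedia.org/w/oauth/access_token' );
$oauth->setToken( $acc['oauth_token'], $acc['oauth_token_secret'] );
// API calls are now signed per-app; the wiki can list and revoke the
// token without the user ever sharing a password.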

-- brion



Re: [Wikitech-l] Watchlistr.com, an outside site that asks for Wikimedia passwords

2009-07-23 Thread Brion Vibber
On 07/22/2009 06:39 PM, Aryeh Gregor wrote:
 On Thu, Jul 23, 2009 at 1:02 AM, Ryan Lanerlan...@gmail.com  wrote:
 Check out how the Flickr API works. Users can give web and desktop
 apps privileges (read/write/delete).

 It isn't really that bizarre of a concept.

 Read/write/delete access to what?  The only cases where read access
 would be relevant would be what, watchlist and preferences, pretty
 much?

At the moment, yes. However additional information is likely to end up 
existing in the future; some more social features (friend graph, 
mentor/mentee relationships, private messaging) would have obvious 
benefits to making new-user workflow smoother.

-- brion



Re: [Wikitech-l] Alternative editing interfaces using write API (was: Re: Watchlistr.com, an outside site that asks for Wikimedia passwords)

2009-07-23 Thread Brion Vibber
On 07/22/2009 08:21 PM, Brianna Laugher wrote:
 I imagine you could also have it so that actions made via the API
 identify where they are made from. (a la Twitter's from web, from
 twhirl etc)

 In that case, if that information was exposed in the UI, it would be
 easy to identify rogue applications and block them completely across
 the site.

Exactly. :)

Permissions can be as fine grained as we want and it can be quite easy 
to revoke access on an individual or site basis.

-- brion



Re: [Wikitech-l] What to do about --compare and --record. Second request for comments

2009-07-23 Thread Brion Vibber
On 07/23/2009 11:00 AM, dan nessett wrote:

 So far no one has responded to my question about how --ktf-to-fail should 
 interact with --compare and --record. Also, at least one commenter has 
 suggested a different name for --ktf-to-fail. Before I open a bug and attach 
 the patches, I would like to resolve these issues. Since Brion suggested this 
 task, would he comment?

Offhand, I'm not sure I see a need for a switch specifically.

A couple of thoughts:

* There appears to already be a "disabled" option which can be added to 
test cases (see the example after this list). Since this already exists, 
it doesn't need to be developed and could simply be added to the tests we 
know don't currently work.

* If there's a desire to run those tests anyway, I'd probably call the 
option --run-disabled. This should be easy to add.

* Not sure there's any need for specific handling w/ compare and record; 
we can just record whatever we run.
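
For reference, a disabled case in the parser test file format looks 
something like this (hypothetical test body):

!! test
Example of a known-to-fail case
!! options
disabled
!! input
[[Foo|bar]]
!! result
<p>expected output the parser doesn't produce yet</p>
!! end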


If on the other hand we want to run and record these tests, but not 
whinge at the user about them, then we'd want another option on them. 
Probably just having another completion state for the output would do it 
(grouping known-to-fail tests separately from others that fail). I'm not 
sure how important that is, though.

-- brion



Re: [Wikitech-l] Alternative editing interfaces using write API (was: Re: Watchlistr.com, an outside site that asks for Wikimedia passwords)

2009-07-23 Thread Brianna Laugher
2009/7/24 Aryeh Gregor simetrical+wikil...@gmail.com:
 On Thu, Jul 23, 2009 at 2:20 AM, Brianna
 Laugherbrianna.laug...@gmail.com wrote:
 All the potential problems posed are ones that Wikipedia faces every
 day just because it lets people edit, period. I don't see how doing it
 via an API adds some massive new risk.

 Well.  If you had some way to clearly distinguish which automated tool
 made the edit, and a way for admins to block  all edits from a
 specific tool as easily as they can currently block or revert all
 edits from a specific user, and no way to take dangerous admin-only
 actions (e.g. editing interface messages) using the tool -- then on
 reflection, I'll grant that I don't see any problems with it.  The
 only serious risk, then, would be to a user's reputation, if the tool
 author is subtly malicious.  That only affects the specific user, and
 is a risk they can decide to take or not.

Yay!

 That's a considerable amount of infrastructure that would be needed,
 though.  I'm not sure it's worth the effort just for the sake of
 enabling web-based editing tools.  Remember, for desktop tools this is
 pointless.  They can already steal your password directly in a hundred
 different ways, so letting them edit directly using your credentials
 is as safe as running them at all.  There are plenty of desktop tools
 that are already used as editing aids.  I doubt the gain from allowing
 web-based tools as well would be worth implementing this whole
 authentication system.

Well, I don't know that I agree with this argument that we should just
assume desktops are already compromised, but I'm not that interested
in desktop applications, so I will leave it aside.

Given that
* the write API has only been enabled on Wikimedia sites since August
2008 (less than a year),
* we don't do very much/any promotion of our API, and
* our data is extremely complex (especially compared to, say, Twitter's),
I am not at all surprised that no web apps have yet sprung up (or, only
Watchlistr). I don't think the fact that no web apps have been created
yet means that the idea has been judged as not-that-useful. I think it
will take a while, and a few examples, for developers to start to get
the idea of being creative with the MW API.


 I love the idea of the write API because it removes the necessity to
 have MediaWiki as the only way to interact with Wikimedia content. The
 write API lets us innovate at the interface level just as we
 collaboratively innovate at the content level.

 The write API doesn't allow anything new.  It just makes some things
 easier and more reliable.  Anything you could do with the write API,
 you could do by screen-scraping, just maybe less quickly and reliably.
  (With maybe a very small number of narrow exceptions.)

If you make something orders of magnitude easier, it is like a new thing.

Anyway I am glad that we have come to some kind of agreement. I
expanded some info at http://www.mediawiki.org/wiki/OAuth based on
this discussion.

cheers
Brianna

-- 
They've just been waiting in a mountain for the right moment:
http://modernthings.org/



Re: [Wikitech-l] Watchlistr.com, an outside site that asks for Wikimedia passwords

2009-07-23 Thread Aryeh Gregor
On Thu, Jul 23, 2009 at 2:32 PM, Cody Jungfunkyca...@gmail.com wrote:
 Wouldn't adding a salt fix this? They would have to have both the
 username, the database, and the salt value to decrypt the wiki list.

In other words, they would have to have access to your server, nothing
more.  No, it wouldn't fix it.

After some discussion in #wikimedia-toolserver, Duesentrieb pointed
out that a) this issue would be solved if MediaWiki just allowed RSS
feeds for watchlists, and b) it would probably take less work for me
to add that feature to MediaWiki than to develop an authentication
framework that would allow users to securely permit toolserver apps
access to their watchlists.  MrZ-man helpfully pointed out that the
API already supports watchlist feeds, so I was able to hack on support
for token-based authentication pretty easily:

http://www.mediawiki.org/wiki/Special:Code/MediaWiki/53703

Major limitations right now are 1) the default is an empty string,
which means "don't use", so it's opt-in; 2) the URL for the feed isn't
actually output anywhere.  Watchlist aggregators should now be easy to
set up, plus people can just use their favorite feed reader.
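
Once a user sets a token, the feed can be fetched with something like:

http://en.wikipedia.org/w/api.php?action=feedwatchlist&feedformat=rss&wlowner=ExampleUser&wltoken=0123456789abcdef

(wlowner and wltoken are the feedwatchlist parameters involved; the
token value here is of course made up.)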

On Thu, Jul 23, 2009 at 6:47 PM, Brion Vibberbr...@wikimedia.org wrote:
 At the moment, yes. However additional information is likely to end up
 existing in the future; some more social features (friend graph,
 mentor/mentee relationships, private messaging) would have obvious
 benefits to making new-user workflow smoother.

I hope MediaWiki doesn't start tacking on random social networking
features, though!
