Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Neil Kandalgaonkar
On 7/20/10 8:08 PM, Tim Starling wrote:

> The Firefogg chunking
> protocol itself is poorly thought-out and buggy, it's not the sort of
> thing you'd want to use by choice, with a non-Firefogg client.

What in your view would a better version look like?

The PLupload protocol seems quite similar. I might be missing some 
subtle difference.


> I'd still be
> more comfortable promoting better-studied client-side extensions, if
> we have to promote a client-side extension at all.

I don't think we should be relying on extensions per se. Firefogg does 
do some neat things that nothing else does, like converting video formats. 
But it's never going to be installed by a large percentage of our users.

As far as making uploads generally easier, PLupload's approach is way 
more generic since it abstracts away the "helper" technologies. It will 
work out of the box for maybe >99% of the web and provides a path to 
eventually transitioning to pure JS solutions. It's a really interesting 
approach and the design looks very clean. I wish I'd known about it 
before I started this project.

That said, it went public in early 2010, and a quick visit to its forums 
will show that it's not yet bug-free software either.

Anyway, thanks for the URL. We've gone the free software purist route 
with our uploader, but we may yet learn something from PLupload or 
incorporate some of what it does.

-- 
Neil Kandalgaonkar  |) 




Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Neil Kandalgaonkar
On 7/20/10 6:34 PM, Aryeh Gregor wrote:
> On Tue, Jul 20, 2010 at 2:28 PM, Platonides  wrote:
>> Or a modern browser using FileReader.
>>
>> http://hacks.mozilla.org/2010/06/html5-adoption-stories-box-net-and-html5-drag-and-drop/
>
> This would be best, but unfortunately it's not yet usable for large
> files -- it has to read the entire file into memory on the client.
> [...]
> But I don't think it actually addresses our use-case.  We'd want the
> ability to slice up a File object into Blobs and handle those
> separately, and I don't see it in the specs.  I'll ask.  Anyway, I
> don't think this is feasible just yet, sadly.

Here's a demo which implements an EXIF reader for JPEGs in Javascript, 
which reads the file as a stream of bytes.

   http://demos.hacks.mozilla.org/openweb/FileAPI/

So, as you can see, we do have a form of BLOB access.

So you're right that these newer Firefox File* APIs aren't what we want 
for uploading extremely large images (>50MB or so). But I can easily see 
using this to slice up anything smaller for chunk-oriented APIs.

-- 
Neil Kandalgaonkar  |) 



Re: [Wikitech-l] Upload file size limit (backend)

2010-07-20 Thread Neil Kandalgaonkar
On 7/20/10 9:57 AM, Michael Dale wrote:
> * The reason for the 100 MB limit has to do with how PHP and Apache
> store the uploaded POST in memory, so setting the limit higher would
> risk increasing the chances of Apache processes hitting swap if multiple
> uploads happened on a given box.

I've heard others say that -- this may have been true before, but I'm 
pretty sure it's no longer true in PHP 5.2 or greater.

I've been doing some tests with large uploads (around 50MB) and I don't 
observe any Apache process getting that large. Instead it writes a 
temporary file. I checked out the source where it handles uploads and 
they seem to be taking care not to slurp the whole thing into memory. 
(lines 1061-1106)

http://svn.php.net/viewvc/php/php-src/trunk/main/rfc1867.c?view=markup

So, there may be other reasons not to upload a very large file, but I 
don't think this is one of them.
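
For illustration, here is a minimal standalone handler (not MediaWiki's 
actual upload path; the form field name and destination are made up) showing 
what PHP hands the script: a temp file already on disk, with the effective 
ceiling coming from php.ini rather than from memory use:

    <?php
    // upload-test.php -- minimal sketch, assumes a form POSTs a file as "bigfile".
    // The real size limits live in php.ini:
    echo 'upload_max_filesize=', ini_get( 'upload_max_filesize' ), "\n";
    echo 'post_max_size=', ini_get( 'post_max_size' ), "\n";

    if ( isset( $_FILES['bigfile'] ) && $_FILES['bigfile']['error'] === UPLOAD_ERR_OK ) {
        // By the time our code runs, the body has already been streamed to disk.
        $tmp = $_FILES['bigfile']['tmp_name'];
        echo 'received ', $_FILES['bigfile']['size'], " bytes in $tmp\n";
        if ( is_uploaded_file( $tmp ) ) {
            move_uploaded_file( $tmp, '/tmp/upload-test.bin' ); // made-up destination
        }
    }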

-- 
Neil Kandalgaonkar  |) 



Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Tim Starling
On 21/07/10 00:32, Daniel Kinzler wrote:
> Lars Aronsson schrieb:
>> What are the plans for increasing this limit? Would it be
>> possible to allow 500 MB or 1 GB for these file formats,
>> and maintain the lower limit for other formats?
> 
> As far as I know, we are hitting the limits of http here. Increasing the
> upload limit as such isn't a solution, and a per-file-type setting doesn't
> help, since the limit strikes before php is even started. It's on the
> server level.

The problem is just that increasing the limits in our main Squid and
Apache pool would create DoS vulnerabilities, including the prospect
of "accidental DoS". We could offer this service via another domain
name, with a specially-configured webserver, and a higher level of
access control compared to ordinary upload to avoid DoS, but there is
no support for that in MediaWiki.

We could theoretically allow uploads of several gigabytes this way,
which is about as large as we want files to be anyway. People with
flaky internet connections would hit the problem of the lack of
resuming, but it would work for some.

-- Tim Starling




Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Tim Starling
On 21/07/10 00:30, Roan Kattouw wrote:
> There is support for chunked uploading in MediaWiki core, but it's
> disabled for security reasons AFAIK. With chunked uploading, you're
> uploading your file in chunks of 1 MB, which means that the impact of
> failure for large uploads is vastly reduced (if a chunk fails, you
> just reupload that chunk) and that progress bars can be implemented.
> This does need client-side support, e.g. using the Firefogg extension
> for Firefox or a bot framework that knows about chunked uploads. This
> probably means the upload limit can be raised, but don't quote me on
> that.

Firefogg support has been moved out to an extension, and that
extension was not complete last time I checked. There was chunked
upload support in the API, but it was Firefogg-specific, no
client-neutral protocol has been proposed. The Firefogg chunking
protocol itself is poorly thought-out and buggy, it's not the sort of
thing you'd want to use by choice, with a non-Firefogg client.

Note that it's not necessary to use Firefogg to get chunked uploads,
there are lots of available technologies which users are more likely
to have installed already. See the "chunking" line in the support
matrix at http://www.plupload.com/

When I reviewed Firefogg, I found an extremely serious CSRF
vulnerability in it. They say they have fixed it now, but I'd still be
more comfortable promoting better-studied client-side extensions, if
we have to promote a client-side extension at all.

-- Tim Starling




Re: [Wikitech-l] "Take me back" too hip

2010-07-20 Thread James Salsman
jida...@jidanni.org wrote:
>
> The first words the user* sees on every page are "Take me back"

I agree that link should be renamed with different text.  I've already
seen two people confuse it with the back button's functionality,
thinking they needed to click it after logging in to get back to the
page they had to log in to create.  Those users, who were never in
Monobook to begin with, were taken "back" to somewhere they had never
been before, didn't know what to do when they got there, and weren't
very happy about either of those facts.

May I suggest, "Use legacy interface" or "Abandon new interface"?



Re: [Wikitech-l] Changes to the new installer

2010-07-20 Thread Tim Starling
On 20/07/10 19:28, Jeroen De Dauw wrote:
> Hey,
> 
> Basically splitting core-specific stuff from general installer functionality
> (so the general stuff can also be used for extensions). Also making initial
> steps towards filesystem upgrades possible.
> 
> The point of this mail is not discussing what I want to do though, but
> rather avoiding commit conflicts, as I don't know which people are working
> on the code right now, and who has uncommitted changes.

There's still quite a lot of work to do to get the new installer ready
for 1.17. I think we should focus on that, and avoid expanding the
scope of the project until we've reached that milestone.

There are the issues discussed here:

http://www.mediawiki.org/wiki/New-installer_issues

and more will become apparent as more testing is done.

If the new installer is not ready to replace the old installer when it
comes time to branch 1.17, I will move it out of trunk, back to a
development branch. Hopefully that won't be necessary.

-- Tim Starling




[Wikitech-l] "Take me back" too hip

2010-07-20 Thread jidanni
The first words the user* sees on every page are "Take me back".
Back to Mom's house? Back to the future? Back to the previous page?
Why can't you just use "Older interface", "Traditional interface", etc.?

*Wikipedia, Commons, etc. users, who have chosen the Vector interface,
especially those who haven't then afterward logged in again for a month
and don't now remember what this is all about.



Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Aryeh Gregor
On Tue, Jul 20, 2010 at 2:28 PM, Platonides  wrote:
> Or a modern browser using FileReader.
>
> http://hacks.mozilla.org/2010/06/html5-adoption-stories-box-net-and-html5-drag-and-drop/

This would be best, but unfortunately it's not yet usable for large
files -- it has to read the entire file into memory on the client.
This post discusses a better interface that's being deployed:

http://hacks.mozilla.org/2010/07/firefox-4-formdata-and-the-new-file-url-object/

But I don't think it actually addresses our use-case.  We'd want the
ability to slice up a File object into Blobs and handle those
separately, and I don't see it in the specs.  I'll ask.  Anyway, I
don't think this is feasible just yet, sadly.



Re: [Wikitech-l] Upload file size limit (frontend)

2010-07-20 Thread Neil Kandalgaonkar
I hope to begin to address this problem with the new UploadWizard, at 
least the frontend issues. This isn't really part of our mandate, but I 
am hoping to add in chunked uploads for bleeding-edge browsers like 
Firefox 3.6+ and 4.0. Then you can upload files of whatever size you want.

I've written it to support what I'm calling multiple "transport" 
mechanisms: some use simple HTTP uploads, and others use more exotic 
methods like Mozilla's FileAPI.

At this point, we're not considering adding any new technologies like 
Java or Flash to the mix, although these are the standard ways that 
people do usable uploads on the web. Flash isn't considered open enough, 
and Java seemed like a radical break.

I could see a role for "helper" applets or SWFs, but it's not on the 
agenda at this time. Right now we're trying to deliver something that 
fits the bill, using standard MediaWiki technologies (HTML, JS, and PHP).

I'll post again to the list if I get a FileAPI upload working. Or, if 
someone is really interested, I'll help them get started.


On 7/20/10 11:28 AM, Platonides wrote:
> Roan Kattouw wrote:
>> 2010/7/20 Max Semenik:
>>> On 20.07.2010, 19:12 Lars wrote:
>>>> Requiring special client software is a problem. Is that really
>>>> the only possible solution?
>>>
>>> There's also Flash that can do it, however it's being ignored due
>>> to its proprietary nature.
>>>
>> Java applet?
>>
>> Roan Kattouw (Catrope)
>
> Or a modern browser using FileReader.
>
> http://hacks.mozilla.org/2010/06/html5-adoption-stories-box-net-and-html5-drag-and-drop/
>
>
>


-- 
Neil Kandalgaonkar  |) 



Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Mike.lifeguard
On 2010-07-20 03:59 PM, Roan Kattouw wrote:
> Java applet?

I was just about to say: surely a FLOSS & decent Java applet has already
been written?!

Or maybe I have too much hope :D

-Mike



Re: [Wikitech-l] [Semediawiki-user] Maps: marker specific parameters

2010-07-20 Thread Markus Krötzsch
On 19/07/2010 07:58, Jeroen De Dauw wrote:
> Hey,
>
> Currently the Maps extension [0] allows you to specify marker-specific data
> [1] (a title, further text, and the icon to use). The way this is done is
> not very clean, and needs improvement. Someone poked me about this at
> Wikimania, but I don't know how to improve upon the current syntax, hence
> this discussion.
>
> What's bad about the current approach:
> * A 3rd level of parameter is needed, which is rather insane. (The first
> level are the regular parser function parameters, separated with vertical
> lines. The second level are coordinates or addresses, separated by
> semicolons. The third level for the marker specific data uses tildes as
> delimiters.)
> * Not very readable.
> * Unsuited for long text.
> * Excludes all the delimiters from usage in the text.
> * Problems with links in the wikitext.
>
> Does anyone have an idea what a better approach would be? You can either
> reply here, or write out your ideas on the discussion page [2].

An idea that has not been explored a lot yet is to introduce dedicated 
parser functions for such nested parameter blocks instead of changing 
the separators on each level of nesting. In the most basic case, one 
could use a parser function call to merely insert the idiosyncratic 
separator types of the deeper levels, where the parser function takes 
care of possible escaping. A cleaner approach would be to use the MW 
parser to get parameters via the DOM so that no actual serialisation of 
the nested parameters is required at all. In this case, one would not 
even need different parser functions for each new nesting level.
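
As a rough sketch of that basic case (the {{#marker:...}} function name and 
everything else here are hypothetical, not existing Maps code; magic word / 
i18n registration is omitted):

    <?php
    # Hypothetical {{#marker:title|text|icon}} parser function: the user writes
    # ordinary pipe-separated parameters and the function rebuilds the
    # tilde-delimited value the extension currently expects, taking care of
    # escaping so the delimiter never has to be typed by hand.
    $wgHooks['ParserFirstCallInit'][] = 'efMarkerParserInit';

    function efMarkerParserInit( $parser ) {
        $parser->setFunctionHook( 'marker', 'efMarkerRender' );
        return true;
    }

    function efMarkerRender( $parser, $title = '', $text = '', $icon = '' ) {
        $parts = array( $title, $text, $icon );
        foreach ( $parts as $i => $part ) {
            // escape the inner delimiter; the exact escaping scheme is a detail
            // for the real implementation
            $parts[$i] = str_replace( '~', '&#126;', $part );
        }
        return implode( '~', $parts );
    }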

This would result in a syntax with nested parser functions, of course, 
which might not seem so compelling to everybody. But it seems to be the 
most MediaWiki way of handling the problem (maximise the amount of {{}} 
per page, but avoid custom syntax).

-- Markus

> [0] http://www.mediawiki.org/wiki/Extension:Maps
> [1] http://mapping.referata.com/wiki/Help:Marker_data
> [2] http://mapping.referata.com/wiki/Help_talk:Marker_data
>
> Cheers
>
> --
> Jeroen De Dauw
> * http://blog.bn2vs.com
> * http://wiki.bn2vs.com
> Don't panic. Don't be evil. 50 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 
> 6C 69
> 66 65!
> --




Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Platonides
Roan Kattouw wrote:
> 2010/7/20 Max Semenik :
>> On 20.07.2010, 19:12 Lars wrote:
>>> Requiring special client software is a problem. Is that really
>>> the only possible solution?
>>
>> There's also Flash that can do it, however it's being ignored due
>> to its proprietary nature.
>>
> Java applet?
> 
> Roan Kattouw (Catrope)

Or a modern browser using FileReader.

http://hacks.mozilla.org/2010/06/html5-adoption-stories-box-net-and-html5-drag-and-drop/





Re: [Wikitech-l] CodeReview auto-deferral regexes

2010-07-20 Thread Platonides
Roan Kattouw wrote:
> Auto-deferral regexes for CodeReview were implemented in r63277 [1],
> and I deployed this feature three weeks ago. It allows us to set an
> array of regexes that will be matched against the path of each new
> commit; if one of them matches, the commit is automatically marked as
> 'deferred' instead of 'new'.

It should probably allow different statuses.


> There are a few limitations to this implementation that are important
> to understand:
> * it only matches paths, not commit summaries. This means
> auto-deferring e.g. TranslateWiki exports is harder
> * it only matches the root path of the commit, which is very often an
> uninformative one like /trunk/phase3 , /trunk/extensions , /trunk or
> even / . This means you can't e.g. auto-defer all commits touching a
> certain file or path

Why not base it also on $rev->mPaths?
We don't touch so many files that it would be problematic for server
resources, and an auto-defer for commits with all files matching
"(^/trunk/phase3/languages/messages/|.i18n.php$)" would nicely skip all
translatewiki updates.

Although a hook entry may be a more appropriate way.
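
Roughly, the all-paths check suggested above would look like this (an
illustrative sketch only, not the actual CodeReview code; $paths stands in
for the list of files touched by one commit, cf. $rev->mPaths):

    <?php
    function wfWouldAutoDefer( array $paths, $regex ) {
        if ( !count( $paths ) ) {
            return false;
        }
        foreach ( $paths as $path ) {
            if ( !preg_match( $regex, $path ) ) {
                return false;   // one non-matching file keeps the commit "new"
            }
        }
        return true;            // every touched file matched: defer it
    }

    // e.g. a pure l10n export would match:
    $regex = '!(^/trunk/phase3/languages/messages/|\.i18n\.php$)!';
    // wfWouldAutoDefer( $rev->mPaths, $regex ) === true for such a commit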




Re: [Wikitech-l] CodeReview auto-deferral regexes

2010-07-20 Thread Markus Krötzsch
On 20/07/2010 16:25, Jeroen De Dauw wrote:
> Hey,
>
> That would be completely awesome!

Yes, that would help a lot, although it might be appropriate to still 
auto-defer reviews for some extensions in this queue, depending on 
whether or not enough reviewing happens for them (calling an extension 
"Semantic..." does not imply that SMW developers will review it ;-).

Markus

>
> Cheers
>
> --
> Jeroen De Dauw
> * http://blog.bn2vs.com
> * http://wiki.bn2vs.com
> Don't panic. Don't be evil. 50 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 6C 69
> 66 65!
> --
>
>
> On 20 July 2010 17:20, Max Semenik  wrote:
>
>> On 20.07.2010, 17:20 Chad wrote:
>>
>>> On Tue, Jul 20, 2010 at 9:16 AM, Jeroen De Dauw wrote:
>>>> Hey,
>>>>
>>>> About the semantic extensions:
>>>>
>>>> It would actually be nice if they did not get marked deferred at all, and be
>>>> reviewed by people that are familiar with them to some extent. I'm willing
>>>> to do that for all commits not made by myself. Assuming this would not
>>>> interfere too much with WMF code review of course :)
>>>>
>>>> Cheers
>>>>
>>> If someone's going to start doing code review, that's fine. They've
>>> just all been getting deferred because nobody's been reviewing
>>> them so far.
>>
>> We could create a separate review queue for it.
>>
>> --
>> Best regards,
>>   Max Semenik ([[User:MaxSem]])
>>




Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Michael Dale
A few points:

* The reason for the 100 MB limit has to do with how PHP and Apache 
store the uploaded POST in memory, so setting the limit higher would 
risk increasing the chances of Apache processes hitting swap if multiple 
uploads happened on a given box.

* Modern HTML5 browsers are starting to be able to natively split files 
up into chunks and do separate 1 MB XHR POSTs. The Firefogg extension does 
something similar with extension JavaScript.

* The server-side chunked-upload API support was split out into an 
extension by Mark Hershberger (cc'ed).

* We should really get the chunked uploading "reviewed" and deployed. Tim 
expressed some concerns with the chunked-upload protocol, which we 
addressed client side, but I don't think he had time to follow up with the 
proposed changes that we made for the server API. At any rate I think the 
present protocol is better than a normal HTTP POST for large files: we get 
lots of manageable 1 MB chunks, a reset connection does not result in 
re-sending the whole file, and it works with vanilla PHP and Apache 
(other resumable HTTP upload protocols are more complicated and require 
PHP or Apache mods). A rough sketch of such a chunk endpoint follows below.

* The backend storage system will not be able to handle a large influx of 
large files for an extended period of time. All of Commons is only 10 TB 
or so and is on a "single" storage system. So an increase of the upload 
size should be accompanied by an effort / plan to re-architect the backend 
storage system.
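
For the chunk-protocol point, here is a rough sketch of what a vanilla-PHP 
chunk endpoint amounts to (field names, token handling and paths are made 
up; this is not the code of the actual extension):

    <?php
    // chunk-append.php -- each ~1 MB POST is appended to a per-upload temp
    // file, so a dropped connection only costs the piece that was in flight.
    if ( isset( $_POST['token'], $_FILES['chunk'] )
        && $_FILES['chunk']['error'] === UPLOAD_ERR_OK
    ) {
        $token = preg_replace( '/[^0-9a-f]/', '', $_POST['token'] );
        $dest  = "/tmp/chunked-$token.part";
        $in  = fopen( $_FILES['chunk']['tmp_name'], 'rb' );
        $out = fopen( $dest, 'ab' );              // append to what we already have
        stream_copy_to_stream( $in, $out );
        fclose( $in );
        fclose( $out );
        clearstatcache();
        echo filesize( $dest );                   // lets the client verify its offset
    }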

--michael

On 07/20/2010 09:32 AM, Daniel Kinzler wrote:
> Lars Aronsson schrieb:
>
>> What are the plans for increasing this limit? Would it be
>> possible to allow 500 MB or 1 GB for these file formats,
>> and maintain the lower limit for other formats?
>>  
> As far as I know, we are hitting the limits of http here. Increasing the
> upload limit as such isn't a solution, and a per-file-type setting doesn't
> help, since the limit strikes before php is even started. It's on the
> server level.
>
> The solution is "chunked uploads", which people have been working on for a
> while, but I have no idea what the current status is.
>
> -- daniel
>




Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Daniel Kinzler
Lars Aronsson schrieb:
> On 07/20/2010 04:30 PM, Roan Kattouw wrote:
>> This does need client-side support, e.g. using the Firefogg extension
>> for Firefox or a bot framework that knows about chunked uploads.
> 
> Requiring special client software is a problem. Is that really
> the only possible solution?

It appears to be the only good solution for really large files. Anything with a
progress bar requires client side support. Flash or a Java applet would be
enough, but both suck pretty badly.

I'd say: if people really need to upload huge files, it's ok to ask them to
install a browser plugin.

> I understand that a certain webserver or PHP configuration can
> be a problem, in that it might receive the entire file in /tmp (that
> might get full) before returning control to some upload.php script.

IIRC, PHP even tends to buffer the entire file in RAM(!) before writing it to
/tmp. Which is totally insane, but hey, it's PHP. I think that was the original
reason behind the low limit, but I might be wrong.

> But I don't see why HTTP in itself would set a limit at 100 MB.

HTTP itself doesn't. I guess as long as we stay in the 31 bit range (about 2GB),
HTTP will be fine. Larger files may cause overflows in sloppy software.

However, HTTP doesn't allow people to resume uploads or watch progress (the
latter could be done by browsers - sadly, I have never seen it). Thus, it sucks
for very large files.

> What decides this particular limit? Why isn't it 50 MB or 200 MB?

I think it was raised from 20 to 100 a year or two ago. It could be raised a bit
again I guess, but a real solution for really large files would be better, don't
you think?

> Some alternatives would be to open a separate anonymous FTP
> upload ("requires special client software" -- from the 1980s,
> still in use by the Internet Archive) or a get-from-URL
> (server would download the file by HTTP GET from the user's
> server at a specified URL).

Making this sane and safe, while making sure we always know which user did what,
etc., would be quite expensive. I have been thinking about this kind of thing for
mass uploads (i.e. upload a TAR via FTP, have it unpack on the server, import).
But that's another kettle of fish. Finishing chunked upload is better for the
average user (using FTP to upload stuff is harder on the average guy than
installing a Firefox plugin...)

-- daniel




Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Roan Kattouw
2010/7/20 Max Semenik :
> On 20.07.2010, 19:12 Lars wrote:
>
>> Requiring special client software is a problem. Is that really
>> the only possible solution?
>
> There's also Flash that can do it, however it's being ignored due
> to its proprietary nature.
>
Java applet?

Roan Kattouw (Catrope)



Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Huib Laurens
Hello Lars,

I don't think the problem is raising it to 200 MB or 150 MB, but 500 MB or
1 GB are a lot higher and can cause problems.

Anonymous FTP access sounds like a very, very, very bad and evil solution...



-- 
Huib "Abigor" Laurens

Tech team

www.wikiweet.nl - www.llamadawiki.nl - www.forgotten-beauty.com -
www.wickedway.nl - www.huiblaurens.nl - www.wikiweet.org


Re: [Wikitech-l] CodeReview auto-deferral regexes

2010-07-20 Thread Jeroen De Dauw
Hey,

That would be completely awesome!

Cheers

--
Jeroen De Dauw
* http://blog.bn2vs.com
* http://wiki.bn2vs.com
Don't panic. Don't be evil. 50 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 6C 69
66 65!
--


On 20 July 2010 17:20, Max Semenik  wrote:

> On 20.07.2010, 17:20 Chad wrote:
>
> > On Tue, Jul 20, 2010 at 9:16 AM, Jeroen De Dauw wrote:
> >> Hey,
> >>
> >> About the semantic extensions:
> >>
> >> It would actually be nice if they did not get marked deferred at all, and be
> >> reviewed by people that are familiar with them to some extent. I'm willing
> >> to do that for all commits not made by myself. Assuming this would not
> >> interfere too much with WMF code review of course :)
> >>
> >> Cheers
> >>
>
> > If someone's going to start doing code review, that's fine. They've
> > just all been getting deferred because nobody's been reviewing
> > them so far.
>
> We could create a separate review queue for it.
>
>
> --
> Best regards,
>  Max Semenik ([[User:MaxSem]])
>
>


Re: [Wikitech-l] CodeReview auto-deferral regexes

2010-07-20 Thread Max Semenik
On 20.07.2010, 17:20 Chad wrote:

> On Tue, Jul 20, 2010 at 9:16 AM, Jeroen De Dauw wrote:
>> Hey,
>>
>> About the semantic extensions:
>>
>> It would actually be nice if they did not get marked deferred at all, and be
>> reviewed by people that are familiar with them to some extent. I'm willing
>> to do that for all commits not made by myself. Assuming this would not
>> interfere too much with WMF code review of course :)
>>
>> Cheers
>>

> If someone's going to start doing code review, that's fine. They've
> just all been getting deferred because nobody's been reviewing
> them so far.

We could create a separate review queue for it.


-- 
Best regards,
  Max Semenik ([[User:MaxSem]])




Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Max Semenik
On 20.07.2010, 19:12 Lars wrote:

> Requiring special client software is a problem. Is that really
> the only possible solution?

There's also Flash that can do it, however it's being ignored due
to its proprietary nature.

-- 
Best regards,
  Max Semenik ([[User:MaxSem]])




Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Lars Aronsson
On 07/20/2010 04:30 PM, Roan Kattouw wrote:
> This does need client-side support, e.g. using the Firefogg extension
> for Firefox or a bot framework that knows about chunked uploads.

Requiring special client software is a problem. Is that really
the only possible solution?

I understand that a certain webserver or PHP configuration can
be a problem, in that it might receive the entire file in /tmp (that
might get full) before returning control to some upload.php script.
But I don't see why HTTP in itself would set a limit at 100 MB.
What decides this particular limit? Why isn't it 50 MB or 200 MB?

Some alternatives would be to open a separate anonymous FTP
upload ("requires special client software" -- from the 1980s,
still in use by the Internet Archive) or a get-from-URL
(server would download the file by HTTP GET from the user's
server at a specified URL).


-- 
   Lars Aronsson (l...@aronsson.se)
   Aronsson Datateknik - http://aronsson.se





Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Daniel Kinzler
Lars Aronsson schrieb:
> What are the plans for increasing this limit? Would it be
> possible to allow 500 MB or 1 GB for these file formats,
> and maintain the lower limit for other formats?

As far as I know, we are hitting the limits of http here. Increasing the upload
limit as such isn't a solution, and a per-file-type setting doesn't help, since
the limit strikes before php is even started. It's on the server level.

The solution is "chunked uploads", which people have been working on for a
while, but I have no idea what the current status is.

-- daniel



Re: [Wikitech-l] Upload file size limit

2010-07-20 Thread Roan Kattouw
2010/7/20 Lars Aronsson :
> Time and again, the 100 MB limit on file uploads is a problem,
> in particular for multipage documents (scanned books) in PDF
> or Djvu, and for video files in OGV.
>
> What are the plans for increasing this limit? Would it be
> possible to allow 500 MB or 1 GB for these file formats,
> and maintain the lower limit for other formats?
>
There is support for chunked uploading in MediaWiki core, but it's
disabled for security reasons AFAIK. With chunked uploading, you're
uploading your file in chunks of 1 MB, which means that the impact of
failure for large uploads is vastly reduced (if a chunk fails, you
just reupload that chunk) and that progress bars can be implemented.
This does need client-side support, e.g. using the Firefogg extension
for Firefox or a bot framework that knows about chunked uploads. This
probably means the upload limit can be raised, but don't quote me on
that.
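
To illustrate why chunking makes failure cheap, here is a rough sketch of
what a chunk-aware client or bot would do (the endpoint, field names and
token handling are made up, not the real protocol; error handling is trimmed):

    <?php
    function uploadInChunks( $file, $url, $token, $chunkSize = 1048576 ) {
        $fp = fopen( $file, 'rb' );
        while ( !feof( $fp ) ) {
            $data = fread( $fp, $chunkSize );       // one ~1 MB piece at a time
            if ( $data === '' || $data === false ) {
                break;
            }
            for ( $try = 0; $try < 3; $try++ ) {    // only this chunk is retried
                $ch = curl_init( $url );
                curl_setopt( $ch, CURLOPT_POST, true );
                curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
                curl_setopt( $ch, CURLOPT_POSTFIELDS,
                    array( 'token' => $token, 'chunk' => $data ) );
                $ok = ( curl_exec( $ch ) !== false );
                curl_close( $ch );
                if ( $ok ) {
                    break;
                }
            }
        }
        fclose( $fp );
    }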

Roan Kattouw (Catrope)



[Wikitech-l] Upload file size limit

2010-07-20 Thread Lars Aronsson
Time and again, the 100 MB limit on file uploads is a problem,
in particular for multipage documents (scanned books) in PDF
or Djvu, and for video files in OGV.

What are the plans for increasing this limit? Would it be
possible to allow 500 MB or 1 GB for these file formats,
and maintain the lower limit for other formats?


-- 
   Lars Aronsson (l...@aronsson.se)
   Aronsson Datateknik - http://aronsson.se





Re: [Wikitech-l] Memcached for interwiki transclusion

2010-07-20 Thread Daniel Kinzler
Hi Peter

For your info: I have been working on a system for transclusion of data records
(as opposed to wikitext) from external sources (which may again be wikis). The
idea is kind of complementary, but the caching issues may be similar.

Have a look at  if you
like.

-- daniel



Re: [Wikitech-l] Memcached for interwiki transclusion

2010-07-20 Thread Billinghurst
On 20 Jul 2010 at 15:07, Peter17 wrote:

> I am currently working on interwiki transclusion [1].
> 
> In the proposed approach, we currently retrieve distant templates:
> 1) using wfGetLB('wikiid')->getConnection if the distant wiki has a
> wikiid in the interwiki table
> 2) using the API in the opposite case
> 
> In case 1, it seems that retrieving a template from a distant DB is
> just as expensive as retrieving it from the local DB. So, we don't
> store the wikitext of the template locally.
> 
> In case 2, the retrieved wikitext is cached in the transcache table
> for an arbitrary time.
> 
> I have two questions about this system:
> * Is it better to use the transcache table or to use memcached for the
> API-retrieved remplates?
> * Should we cache the DB-retrieved templates with memcached?
> 
> An advantage of memcached here is that it is shared by all the WMF
> wikis, whereas the transcache table is owned by a wiki for itself.
> 
> Thanks in advance
> 
> Best regards
> 
> --
> Peter Potrowl
> http://www.mediawiki.org/wiki/User:Peter17
> 
> 
> [1] 
> http://www.mediawiki.org/wiki/User:Peter17/Reasonably_efficient_interwiki_transclusion
> 
A question for you Peter (and I have no opinion or knowledge on your question)
about your project.

At Wikisource, we undertake a lot of transclusion, and a fair amount of it
utilising Sanbeg's Extension:Labeled Section Transclusion [2]. Do you see that
this may be part of what you are looking towards, or is it solely templates?

My reason for asking is that it would be possible to identify components of WS
works where one may wish to incorporate part of a page into a page at another
wiki.

I also see that there is scope for further display of the same texts in
different translations.

Regards, Andrew

[2] http://www.mediawiki.org/wiki/Extension:Labeled_section_transclusion




Re: [Wikitech-l] Memcached for interwiki transclusion

2010-07-20 Thread Roan Kattouw
2010/7/20 Chad :
> That won't work. Wikimedia doesn't use the text table, so you
> can't just query the text table. When calling things like getRevText()
> locally, it's actually accessing external storage, not querying
> the database.
>
Wikimedia actually does use the text table; it's just that old_text
contains a URL-like reference to external storage and old_flags
contains 'external' to indicate this. You would still need to access
the text table on the remote wiki in order to be able to retrieve the
text from ES.

>> * Should we cache the DB-retrieved templates with memcached?
>>
No. At Wikimedia, we already cache revision texts in memcached so we
don't have to query ES too much. Caching DB-retrieved templates in
memcached would duplicate this at Wikimedia at least, and generally
not make a great deal of sense even without the duplication.

>> An advantage of memcached here is that it is shared by all the WMF
>> wikis, whereas the transcache table is owned by a wiki for itself.
>>
>
> It's also generally faster, handles its own expiries and allows for a bit
> more flexibility in cache times (some things can be cached longer
> than others, potentially).
>
Note that memcached is also segmented by the choice of keys:
wfMemcKey() prefixes the keys it generates with the wiki ID so wikis
don't interfere with each other's keys. To share memcached entries
between wikis, you'd either need to not use wfMemcKey() (bad) or hack
it to optionally replace the wiki ID with something else (better).
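
To illustrate the key layout (the shared-key helper below is hypothetical,
not an existing MediaWiki function):

    <?php
    // wfMemcKey() builds per-wiki keys:
    //   wfMemcKey( 'transcache', 'fr', 'Template:Foo' )
    //   e.g. => "enwiki:transcache:fr:Template:Foo"  (prefix differs per wiki)
    // A shared entry needs a wiki-independent prefix instead:
    function wfInterwikiMemcKey( /* ... */ ) {
        $args = func_get_args();
        return 'global:' . implode( ':', $args );
    }
    // wfInterwikiMemcKey( 'transcache', 'fr', 'Template:Foo' )
    // => "global:transcache:fr:Template:Foo" on every wiki in the farm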

Roan Kattouw (Catrope)



Re: [Wikitech-l] CodeReview auto-deferral regexes

2010-07-20 Thread Chad
On Tue, Jul 20, 2010 at 9:16 AM, Jeroen De Dauw  wrote:
> Hey,
>
> About the semantic extensions:
>
> It would actually be nice if they did not get marked deferred at all, and be
> reviewed by people that are familiar with them to some extent. I'm willing
> to do that for all commits not made by myself. Assuming this would not
> interfere too much with WMF code review of course :)
>
> Cheers
>

If someone's going to start doing code review, that's fine. They've
just all been getting deferred because nobody's been reviewing
them so far.

-Chad



Re: [Wikitech-l] Memcached for interwiki transclusion

2010-07-20 Thread Chad
On Tue, Jul 20, 2010 at 9:07 AM, Peter17  wrote:
> Hello to all!
>
> I am currently working on interwiki transclusion [1].
>
> In the proposed approach, we currently retrieve distant templates:
> 1) using wfGetLB('wikiid')->getConnection if the distant wiki has a
> wikiid in the interwiki table

That won't work. Wikimedia doesn't use the text table, so you
can't just query the text table. When calling things like getRevText()
locally, it's actually accessing external storage, not querying
the database.

> 2) using the API in the opposite case
>
> In case 1, it seems that retrieving a template from a distant DB is
> just as expensive as retrieving it from the local DB. So, we don't
> store the wikitext of the template locally.
>
> In case 2, the retrieved wikitext is cached in the transcache table
> for an arbitrary time.
>

Currently the transcache table stores any entries, regardless of
where they were transcluded from (farm or remote wiki). This
probably doesn't need to be the situation for case 1, like you
said.

> I have two questions about this system:
> * Is it better to use the transcache table or to use memcached for the
> API-retrieved remplates?
> * Should we cache the DB-retrieved templates with memcached?
>

Memcached, imho. It's faster than the DB and handles expiries on its
own. And for sites without Memcached, they can always use CACHE_DB
or CACHE_ACCEL.

> An advantage of memcached here is that it is shared by all the WMF
> wikis, whereas the transcache table is owned by a wiki for itself.
>

It's also generally faster, handles its own expiries and allows for a bit
more flexibility in cache times (some things can be cached longer
than others, potentially).
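
A sketch of what the memcached variant for case 2 (API-retrieved templates)
could look like; $wgMemc->get()/set() and wfMemcKey() are the standard calls,
but the key layout and fetchTemplateViaApi() are made up for the example:

    <?php
    global $wgMemc;
    $prefix = 'fr';                  // interwiki prefix of the distant wiki
    $title  = 'Template:Foo';
    $key  = wfMemcKey( 'interwiki-template', $prefix, $title );
    $text = $wgMemc->get( $key );
    if ( $text === false ) {
        $text = fetchTemplateViaApi( $prefix, $title );  // hypothetical helper
        $wgMemc->set( $key, $text, 3600 );   // expiry can differ per kind of entry
    }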

-Chad



Re: [Wikitech-l] CodeReview auto-deferral regexes

2010-07-20 Thread Jeroen De Dauw
Hey,

About the semantic extensions:

It would actually be nice if they did not get marked deferred at all, and be
reviewed by people that are familiar with them to some extent. I'm willing
to do that for all commits not made by myself. Assuming this would not
interfere too much with WMF code review of course :)

Cheers

--
Jeroen De Dauw
* http://blog.bn2vs.com
* http://wiki.bn2vs.com
Don't panic. Don't be evil. 50 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 6C 69
66 65!
--


On 20 July 2010 15:10, Chad  wrote:

> On Tue, Jul 20, 2010 at 7:46 AM, Daniel Kinzler wrote:
> > Roan Kattouw schrieb:
> >> Despite these limitations, this feature could be quite useful for
> >> autodeferring at least some large parts of the repository we don't
> >> care about review-wise. Any suggestions for paths to auto-defer?
> >
> > http://svn.wikimedia.org/svnroot/mediawiki/trunk/WikiWord/ is basically
> my
> > personal pet project. If you can keep it out of the review queue, that
> would
> > probably be an improvement for everyone :)
> >
> > -- daniel
> >
>
> I was going to suggest that. And might I be so bold as to suggest
> the Semantic* extensions. They're generally marked as deferred
> without review.
>
> I also want to make a quick disclaimer. This doesn't mean that
> these code paths don't matter. It just means they aren't actively
> reviewed on CR, so we want to just automate what reviewers are
> doing manually anyway
>
> -Chad
>


Re: [Wikitech-l] CodeReview auto-deferral regexes

2010-07-20 Thread Chad
On Tue, Jul 20, 2010 at 7:46 AM, Daniel Kinzler  wrote:
> Roan Kattouw schrieb:
>> Despite these limitations, this feature could be quite useful for
>> autodeferring at least some large parts of the repository we don't
>> care about review-wise. Any suggestions for paths to auto-defer?
>
> http://svn.wikimedia.org/svnroot/mediawiki/trunk/WikiWord/ is basically my
> personal pet project. If you can keep it out of the review queue, that would
> probably be an improvement for everyone :)
>
> -- daniel
>

I was going to suggest that. And might I be so bold as to suggest
the Semantic* extensions. They're generally marked as deferred
without review.

I also want to make a quick disclaimer. This doesn't mean that
these code paths don't matter. It just means they aren't actively
reviewed on CR, so we want to just automate what reviewers are
doing manually anyway

-Chad



[Wikitech-l] Memcached for interwiki transclusion

2010-07-20 Thread Peter17
Hello to all!

I am currently working on interwiki transclusion [1].

In the proposed approach, we currently retrieve distant templates:
1) using wfGetLB('wikiid')->getConnection if the distant wiki has a
wikiid in the interwiki table
2) using the API in the opposite case

In case 1, it seems that retrieving a template from a distant DB is
just as expensive as retrieving it from the local DB. So, we don't
store the wikitext of the template locally.

In case 2, the retrieved wikitext is cached in the transcache table
for an arbitrary time.

I have two questions about this system:
* Is it better to use the transcache table or to use memcached for the
API-retrieved remplates?
* Should we cache the DB-retrieved templates with memcached?

An advantage of memcached here is that it is shared by all the WMF
wikis, whereas the transcache table is owned by a wiki for itself.

Thanks in advance

Best regards

--
Peter Potrowl
http://www.mediawiki.org/wiki/User:Peter17


[1] 
http://www.mediawiki.org/wiki/User:Peter17/Reasonably_efficient_interwiki_transclusion



Re: [Wikitech-l] Changes to the new installer

2010-07-20 Thread Chad
On Tue, Jul 20, 2010 at 5:28 AM, Jeroen De Dauw  wrote:
> Hey,
>
> Basically splitting core-specific stuff from general installer functionality
> (so the general stuff can also be used for extensions). Also making initial
> steps towards filesystem upgrades possible.
>
> The point of this mail is not discussing what I want to do though, but
> rather avoiding commit conflicts, as I don't know which people are working
> on the code right now, and who has uncommitted changes.
>
> Cheers
>
> --
> Jeroen De Dauw
> * http://blog.bn2vs.com
> * http://wiki.bn2vs.com
> Don't panic. Don't be evil. 50 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 6C 69
> 66 65!
> --
>
>

It really depends on the change, and I'd rather review diffs on this
before applying them to new-installer. It's pretty feature-complete
and we're in the bugfixing stage. I'd rather not introduce a *whole*
lot of new code; a branch might be better suited until 1.17 is
branched, then could quickly be integrated into trunk (1.18alpha?).
That being said, feel free to ask. Good code is good code and if
things can be put in easily and quickly then I'm for it.

On a more general note, the reason I'm being cautious is this:
1.16 saw a lot of feature creep. Part of the reason it took so long
was a lack of reviewing manpower with Brion gone. The other
reason is that developers just kept shoving more features in
(myself included). With no 1.16 deadline looming, it was easy to
keep adding new features to the release...delaying it even more.
When I took up new-installer, I started down the same road with
the schema abstraction work. It quickly grew out of scope and
was moved to a branch.

An actual roadmap would help solve these issues, but I'm getting
OT :)

-Chad



Re: [Wikitech-l] CodeReview auto-deferral regexes

2010-07-20 Thread Daniel Kinzler
Roan Kattouw schrieb:
> Despite these limitations, this feature could be quite useful for
> autodeferring at least some large parts of the repository we don't
> care about review-wise. Any suggestions for paths to auto-defer?

http://svn.wikimedia.org/svnroot/mediawiki/trunk/WikiWord/ is basically my
personal pet project. If you can keep it out of the review queue, that would
probably be an improvement for everyone :)

-- daniel



[Wikitech-l] CodeReview auto-deferral regexes

2010-07-20 Thread Roan Kattouw
Auto-deferral regexes for CodeReview were implemented in r63277 [1],
and I deployed this feature three weeks ago. It allows us to set an
array of regexes that will be matched against the path of each new
commit; if one of them matches, the commit is automatically marked as
'deferred' instead of 'new'.
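
The matching itself is roughly the following (an illustrative sketch; the
variable names are made up rather than the extension's real configuration):

    <?php
    $autoDeferRegexes = array( '!^/trunk/WikiWord/!' );   // example pattern
    $commitRootPath   = '/trunk/WikiWord/some/file';      // path of a new commit
    $status = 'new';
    foreach ( $autoDeferRegexes as $regex ) {
        if ( preg_match( $regex, $commitRootPath ) ) {
            $status = 'deferred';
            break;
        }
    }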

There are a few limitations to this implementation that are important
to understand:
* it only matches paths, not commit summaries. This means
auto-deferring e.g. TranslateWiki exports is harder
* it only matches the root path of the commit, which is very often an
uninformative one like /trunk/phase3 , /trunk/extensions , /trunk or
even / . This means you can't e.g. auto-defer all commits touching a
certain file or path

Despite these limitations, this feature could be quite useful for
autodeferring at least some large parts of the repository we don't
care about review-wise. Any suggestions for paths to auto-defer?

Roan Kattouw (Catrope)

[1] http://www.mediawiki.org/wiki/Special:Code/MediaWiki/63277



Re: [Wikitech-l] Question about oldest Wikipedia dump available for different Wikipedias

2010-07-20 Thread Ilmari Karonen
On 07/20/2010 09:51 AM, paolo massa wrote:
> Thanks Gregor and yes, you are right.
> I didn't think about your suggestion before, sorry.
> The fact is that I wrote a script running on the
> pages-meta-current.xml because it is much smaller and manageable but,
> you are right: I can use the revision of the page I'm interested that
> is in pages-meta-history.xml

If you're only interested in a small number of pages, you can get an 
up-to-date "mini dump" through Special:Export.  See 
 and 
 for details.

Alternatively, you can also fetch page histories through the API: 
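
For example, a rough sketch of pulling the last few revisions of a page with
action=query&prop=revisions (the wiki URL and page title are just placeholders,
and a real client should identify itself and paginate):

    <?php
    $url = 'http://en.wikipedia.org/w/api.php?action=query&prop=revisions'
         . '&titles=' . urlencode( 'Main Page' )
         . '&rvprop=ids|timestamp|user|comment&rvlimit=10&format=xml';
    $xml = simplexml_load_string( file_get_contents( $url ) );
    foreach ( $xml->query->pages->page->revisions->rev as $rev ) {
        echo $rev['user'], ' @ ', $rev['timestamp'], "\n";
    }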


-- 
Ilmari Karonen



Re: [Wikitech-l] Changes to the new installer

2010-07-20 Thread Jeroen De Dauw
Hey,

Basically splitting core-specific stuff from general installer functionality
(so the general stuff can also be used for extensions). Also making initial
steps towards filesystem upgrades possible.

The point of this mail is not discussing what I want to do though, but
rather avoiding commit conflicts, as I don't know which people are working
on the code right now, and who has uncommitted changes.

Cheers

--
Jeroen De Dauw
* http://blog.bn2vs.com
* http://wiki.bn2vs.com
Don't panic. Don't be evil. 50 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 6C 69
66 65!
--


On 20 July 2010 11:14, Max Semenik  wrote:

> On 20.07.2010, 13:08 Jeroen wrote:
>
> > Hey,
>
> > I want to make several structural changes to the new installer, to make
> it
> > fit in a more general deployment model. This will involve moving code and
> > renaming things. Does anyone have objections to me doing this? (I don't
> want
> > to undermine work people are currently doing.)
>
> > Cheers
>
> Hard to tell without knowing what you're proposing:)
>
> --
> Best regards,
>  Max Semenik ([[User:MaxSem]])
>
>


Re: [Wikitech-l] Changes to the new installer

2010-07-20 Thread Max Semenik
On 20.07.2010, 13:08 Jeroen wrote:

> Hey,

> I want to make several structural changes to the new installer, to make it
> fit in a more general deployment model. This will involve moving code and
> renaming things. Does anyone have objections to me doing this? (I don't want
> to undermine work people are currently doing.)

> Cheers

Hard to tell without knowing what you're proposing:)

-- 
Best regards,
  Max Semenik ([[User:MaxSem]])




[Wikitech-l] Changes to the new installer

2010-07-20 Thread Jeroen De Dauw
Hey,

I want to make several structural changes to the new installer, to make it
fit in a more general deployment model. This will involve moving code and
renaming things. Does anyone have objections to me doing this? (I don't want
to undermine work people are currently doing.)

Cheers

--
Jeroen De Dauw
* http://blog.bn2vs.com
* http://wiki.bn2vs.com
Don't panic. Don't be evil. 50 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 6C 69
66 65!
--