[Wikitech-l] Re: Maintenance scripts are moving to Kubernetes

2024-10-20 Thread Tim Starling

On 26/9/24 13:10, Reuven Lazarus wrote:
> Starting a maintenance script looks like this:
>
>   rzl@deploy2002:~$ mwscript-k8s --comment="T341553" -- Version.php --wiki=enwiki
>
> Any options for the mwscript-k8s tool, as described below, go before
> the --.
>
> After the --, the first argument is the script name; everything else
> is passed to the script. This is the same as you're used to passing
> to mwscript.

Is that a limitation of Python's command line parsing?

I mean, the obvious way to do it from the viewpoint of usability is to
take options after the first argument as belonging to the script.
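In other words, stop parsing wrapper options at the first positional
argument. A minimal sketch of that behaviour (in PHP purely for
illustration; mwscript-k8s itself is a Python tool):

    // Everything before the first positional argument belongs to the
    // wrapper; the positional argument is the script name, and the rest
    // is passed through to the script untouched.
    $wrapperOpts = [];
    $i = 1;
    for ( ; $i < count( $argv ); $i++ ) {
        if ( substr( $argv[$i], 0, 1 ) !== '-' ) {
            break; // first positional argument: the script name
        }
        $wrapperOpts[] = $argv[$i];
    }
    $script = $argv[$i] ?? null;
    $scriptArgs = array_slice( $argv, $i + 1 );

With that scheme, no -- separator would be needed. Python's argparse
can express the same thing (e.g. with argparse.REMAINDER), so it is
presumably a design choice rather than a parser limitation.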


-- Tim Starling

[Wikitech-l] Re: Regenerating LocalSettings.php

2024-09-17 Thread Tim Starling

On 17/9/24 17:04, Tim Starling wrote:
> Has anyone ever regenerated their LocalSettings.php using the
> installer for a reason other than testing that feature?
>
> I am thinking about removing it. It adds a fair amount of code and
> complexity.


To clarify, I have in my sights two separate features: unconfigured 
upgrade, in which upgrade is done with no LocalSettings.php if tables 
are detected in the database, and LocalSettings.php regeneration, 
which optionally happens after that.


My idea is to have the DBConnect page simply fail validation if tables 
exist. The charset and engine detection code in 
MysqlInstaller::preUpgrade() could then be removed, along with a few 
other smaller chunks of code.
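A sketch of the check I have in mind; the method shape and the message
key here are hypothetical, not the actual installer API:

    // Hypothetical DBConnect-step validation: refuse to continue with a
    // fresh install if the target database already contains core tables.
    public function validateConnection( IDatabase $conn ): Status {
        if ( $conn->tableExists( 'page', __METHOD__ ) ) {
            return Status::newFatal( 'config-install-tables-exist' );
        }
        return Status::newGood();
    }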


Unconfigured upgrade is a rare operation which is unlikely to be 
helpful for users.


-- Tim Starling



[Wikitech-l] Regenerating LocalSettings.php

2024-09-17 Thread Tim Starling
Has anyone ever regenerated their LocalSettings.php using the 
installer for a reason other than testing that feature?


I found this support desk topic 
<https://www.mediawiki.org/wiki/Topic:Ti70pv3negn5s53e> and its 
associated task, but that person was misunderstanding how upgrades are 
meant to work and should not have been using the feature.


I am thinking about removing it. It adds a fair amount of code and 
complexity.


It could be kept in a different form, but there's no point if nobody 
uses it.


-- Tim Starling

[Wikitech-l] Request MediaWiki +2 for Paladox

2024-01-22 Thread Tim Starling
Please consider my request for Paladox to be given +2 rights in 
MediaWiki repositories:


https://phabricator.wikimedia.org/T355619

-- Tim Starling



[Wikitech-l] PHP 8.1 tests are now voting

2022-11-09 Thread Tim Starling
Per T316078 <https://phabricator.wikimedia.org/T316078>, Quibble tests
(PHPUnit, Selenium, etc.) now need to pass on PHP 8.1 for automatic
merges of changes in Gerrit.


Thanks to everyone who helped to make that happen.

PHP 8.1 support is a focus of the Wikimedia Performance Team. We want 
to unblock the WMF production migration to PHP 8.1, which will be led 
by Service Operations. Hopefully Service Operations will be able to 
migrate WMF production to PHP 8.1 in the first half of next year.


PHP 8.1 will bring performance improvements which we would like our 
users to benefit from. And bringing production closer to the upstream 
master branch allows us to more effectively participate in PHP's 
community-driven development process.


For the benefit of developers and third party users, I think we should 
try to support the latest stable release of PHP. So now is a good time 
to start thinking about PHP 8.2 support. General availability of PHP
8.2 is expected around November 24, according to the PHP wiki
<https://wiki.php.net/todo/php82>. There is a migration guide
<https://www.php.net/manual/en/migration82.php>.


-- Tim Starling

[Wikitech-l] Re: Feedback wanted: PHPCS in a static types world

2022-11-09 Thread Tim Starling

On 29/10/22 01:03, Lucas Werkmeister wrote:
> Proposition 2: *Adding types as static types is generally
> preferable.* Unlike doc comments, static types are checked at
> runtime and thus guaranteed to be correct (as long as the code runs
> at all); the small runtime cost should be partially offset by
> performance improvements in newer PHP versions, and otherwise
> considered to be worth it. New code should generally include static
> types where possible, and existing code may have static types added
> as part of other work on it. I believe this describes our current
> development practice as MediaWiki developers.


I generally don't add return type declarations to methods that I add, 
and I have pushed back on CR requests to add them, except when the 
exception is reachable, i.e. a TypeError may actually be thrown if the 
class is misused by an external caller. The reasons for this are 
clutter and performance.


Clutter, because it's redundant to add a return type declaration when 
the return type is already in the doc comment. If we stop requiring 
doc comments as you propose, then fine, add a return type declaration 
to methods with no doc comment. But if there is a doc comment, an 
additional return type declaration just pads out the file for no reason.
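To illustrate with a made-up accessor (not code from core):

    /**
     * @return string
     */
    public function getName(): string { // ": string" repeats the doc comment
        return $this->name;
    }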


The performance impact is measurable for hot functions. In gerrit 
820244 <https://gerrit.wikimedia.org/r/c/mediawiki/core/+/820244> I 
removed parameter type declarations from a private method for a 
benchmark improvement of 2%.


I would prefer to return the performance benefits of newer PHP 
versions to our users, rather than to fully consume them ourselves by 
increasing abstraction.


-- Tim Starling

[Wikitech-l] Re: Multi-DC deployment

2022-08-31 Thread Tim Starling

On 12/8/22 12:02, Tim Starling wrote:
> Deployment plan:
>
>   * Stage 1: test.wikipedia.org and test2.wikipedia.org. Already done.
>   * Stage 2: mediawiki.org. Planned for deployment on August 15.
>   * Stage 3: traffic percentage. A small percentage of all requests
>     will be sent to the nearest DC. Date undecided, but could be as
>     early as August 22.
>   * Stage 4: full deployment. Date TBA. But it will be soon. If you
>     need to update your tools, please start updating.
>
> For more details, see T279664.

I'm aiming to do stages 3 and 4 on September 6.

-- Tim Starling

[Wikitech-l] Re: Deletion of 5000+ pages forbidden

2022-08-23 Thread Tim Starling
On 23/8/22 21:29, Martin Domdey wrote:
> Hi,
>
> please tell me, what is the thought behind the impossibility, that a
> normal admin can delete pages with more than 5000 revisions.

The introduction of the limit was announced in 2008 at WP:VPT archive
16
<https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)/Archive_16#Deletion_restrictions_for_pages_with_long_histories>.
IIRC the main problem was replication lag.

Later, the queries were broken up into batches with a wait for
replication. This meant that deleting large articles was merely slow
(tens of seconds) and prone to failure, but it no longer immediately
broke the whole site. The bigdelete right was created and was granted to
some groups, but often, deleting articles with many revisions required
the use of a server-side maintenance script, since a normal request
would time out and the database writes would roll back.

In 2018, deleting pages with many revisions became asynchronous,
deferred via the job queue (T198176
<https://phabricator.wikimedia.org/T198176>). So it became feasible to
delete these pages via the web.

I don't think there has been a discussion since then on the value of
$wgDeleteRevisionsLimit or the groups given the bigdelete right.
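For reference, the relevant knobs look roughly like this in
LocalSettings.php (the values are illustrative, not a recommendation):

    // Deleting a page with more revisions than this requires bigdelete
    $wgDeleteRevisionsLimit = 5000;
    // Granting the right to an additional group
    $wgGroupPermissions['bureaucrat']['bigdelete'] = true;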

-- Tim Starling

[Wikitech-l] Multi-DC deployment

2022-08-11 Thread Tim Starling
The performance team are progressively deploying the MediaWiki
multi-DC project. This project allows us to make use of servers in the
Dallas data center which were previously idle. When the project is
fully deployed, most MediaWiki backend GET requests will be routed to
whichever data center is nearest to the CDN node at which the request
is received.

Deployment plan:

  * Stage 1: test.wikipedia.org and test2.wikipedia.org. Already done.
  * Stage 2: mediawiki.org. Planned for deployment on August 15.
  * Stage 3: traffic percentage. A small percentage of all requests
will be sent to the nearest DC. Date undecided, but could be as
early as August 22.
  * Stage 4: full deployment. Date TBA. But it will be soon. If you
need to update your tools, please start updating.

For more details, see T279664 <https://phabricator.wikimedia.org/T279664>.

I'm not aware of any blockers. As far as I know, we could fully deploy
it now and it would more or less work. If you think otherwise, please
let us know.

-- Tim Starling

[Wikitech-l] Re: PHP RFC: Deprecate dynamic properties

2021-11-14 Thread Tim Starling
The right place for this kind of feedback is the PHP internals mailing
list.

On 14/11/21 2:00 am, Thiemo Kreuz wrote:
> Hm.
> * Can we get this the other way around, being able to mark classes
> with #[DisallowDynamicProperties]?

I did suggest it in my post on October 13. I don't think it is likely
to happen.

> * I would expect this to be the standard behavior on "final" classes.
> Unfortunately the RFC doesn't mention the word "final" anywhere. What
> do you think?

I don't recall this being discussed.

-- Tim Starling



[Wikitech-l] PHP RFC: Deprecate dynamic properties

2021-11-12 Thread Tim Starling
FYI: the PHP RFC to deprecate dynamic property assignment is now in
the voting stage. If this RFC is accepted, dynamic property assignment
will raise E_DEPRECATED in PHP 8.2 unless the
#[AllowDynamicProperties] attribute is set on the class.
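Concretely, the behaviour proposed for PHP 8.2 is (a sketch):

    class Legacy {
    }

    $a = new Legacy();
    $a->foo = 1; // raises E_DEPRECATED under this RFC

    #[AllowDynamicProperties]
    class OptedIn {
    }

    $b = new OptedIn();
    $b->foo = 1; // no deprecation notice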

I'm voting no on this, on account of the amount of migration work I
think it will create for us.

https://wiki.php.net/rfc/deprecate_dynamic_properties

-- Tim Starling



[Wikitech-l] Re: Goto for microoptimisation

2021-08-01 Thread Tim Starling
On 1/8/21 4:04 pm, rupert THURNER wrote:
> you triggered me reading more about it though. the commit comment
> states it takes 30% less instructions:
>   Measuring instruction count per iteration with perf stat, averaged over
>   10M iterations, PS1. Test case:
>   Html::openElement('a', [ 'class' => [ 'foo', 'bar' ] ] )
>
>   * Baseline: 11160.7265433
>   * in_array(): 10390.3837233
>   * dropDefaults() changes: 9674.1248824
>   * expandAttributes() misc: 9248.1947500
>   * implode/explode and space check: 8318.9800417
>   * Sanitizer inline: 8021.7371794
>
> does this mean these changes bring 30% speed improvement? that is
> incredible! 

Well, 30% reduction in instruction count. Time reduction is about 25%,
although you can take the reciprocal of that (1/0.75) and call it 34%
speed improvement.

I used instruction count rather than time because you can get 4-5
significant figures of accuracy, i.e. the first 4-5 digits stay the
same between runs, despite background activity, so you can measure
small changes very accurately.

> how often is this part called to retrieve one article?

Errr... let's just say there's no need to name a second day after me.
It's a small change.

The broader context is T284274
<https://phabricator.wikimedia.org/T284274>-- I'm trying to make sure
you can view the history page with limit=5000 without seeing a
timeout. My change to Thanks probably cut render time for big history
pages by 50% -- that's how much the 95th and 99th percentile service
times dropped by. Html::openElement() is a smaller piece of a smaller
piece of the puzzle.

-- Tim Starling


[Wikitech-l] Goto for microoptimisation

2021-07-30 Thread Tim Starling
For performance sensitive tight loops, such as parsing and HTML
construction, to get the best performance it's necessary to think
about what PHP is doing on an opcode by opcode basis.

Certain flow control patterns cannot be implemented efficiently in PHP
without using "goto". The current example in Gerrit 708880
<https://gerrit.wikimedia.org/r/c/mediawiki/core/+/708880/5/includes/Html.php#545>
comes down to:

if ( $x == 1 ) {
action1();
} else {
action_not_1();
}
if ( $x == 2 ) {
action2();
} else {
action_not_2();
}

If $x==1 is true, we know that the $x==2 comparison is unnecessary and
is a waste of a couple of VM operations.

It's not feasible to just duplicate the actions, they are not as
simple as portrayed here and splitting them out to a separate function
would incur a function call overhead exceeding the proposed benefit.

I am proposing

if ( $x == 1 ) {
action1();
goto not_2; // avoid unnecessary comparison $x == 2
} else {
action_not_1();
}
if ( $x == 2 ) {
action2();
} else {
not_2:
action_not_2();
}

I'm familiar with the cultivated distaste for goto. Some people are
just parroting the textbook or their preferred authority, and others
are scarred by experience with other languages such as old BASIC
dialects. But I don't think either rationale really holds up to scrutiny.

I think goto is often easier to read than workarounds for the lack of
goto. For example, maybe you could do the current example with break:

do {
do {
if ( $x === 1 ) {
action1();
break;
} else {
action_not_1();
}
if ( $x === 2 ) {
action2();
break 2;
}
} while ( false );
action_not_2();
} while ( false );

But I don't think that's an improvement for readability.

You can certainly use goto in a way that makes things unreadable, but
that goes for a lot of things.

I am requesting that goto be considered acceptable for micro-optimisation.

When performance is not a concern, abstractions can be introduced
which restructure the code so that it flows in a more conventional
way. I understand that you might do a double-take when you see "goto"
in a function. Unfamiliarity slows down comprehension. That's why I'm
suggesting that it only be used when there is a performance justification.

-- Tim Starling


[Wikitech-l] Breaking changes without deprecation in the Shellbox patch

2021-02-04 Thread Tim Starling
The following changes will be made without deprecation in gerrit
626548 <https://gerrit.wikimedia.org/r/c/mediawiki/core/+/626548>:

  * FirejailCommand was removed. CodeSearch reveals no usages.
  * Command::execute() now returns a Shellbox\Command\UnboxedResult
instead of a MediaWiki\Shell\Result. I added a class alias, but
type hints don't necessarily cause PHP to load the alias when it
parses the file, so any such type hints should be manually
updated (see the sketch below). CodeSearch finds no affected code.
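The compatibility shim amounts to something like this (a sketch, not
the exact patch):

    // Old type hints on MediaWiki\Shell\Result keep working at runtime
    // once the alias is registered, but PHP only registers it when the
    // file containing this call is loaded, hence the caveat above.
    class_alias( Shellbox\Command\UnboxedResult::class, 'MediaWiki\Shell\Result' );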

Context:

This is part of T260330 <https://phabricator.wikimedia.org/T260330>
"PHP microservice for containerized shell execution" a.k.a. Shellbox.
The patch moves the traditional shell execution code from MediaWiki to
the Shellbox library, and mildly refactors it. Rigorous backwards
compatibility was a goal of this change. We took the opportunity to
make a few minor interface changes, but we don't think anyone will be
affected.

-- Tim Starling


[Wikitech-l] TechCom Radar 2020-09-23

2020-09-24 Thread Tim Starling
The minutes from TechCom's triage meeting on 2020-09-23.


Present: Tim S, Dan A, Daniel K, Niklas L, Timo T.


RFC: Parsoid Extension API

  * https://phabricator.wikimedia.org/T260714
  * TS: on the basis of Subbu’s comment listing the different
    consultations, should go to last call.
  * TT: fine to put on last call
  * DK: no objections, doing
  * On Last Call to be approved on Oct 7.


RFC: Associated namespaces

  * https://phabricator.wikimedia.org/T487
  * DK: use cases mentioned easier to implement with MCR, suggest to
    close this RFC.
  * TT: some other theoretical use cases have also been covered by
    https://phabricator.wikimedia.org/T165149
  * TT: decline with last call?
  * DK: wouldn’t be opposed to it if someone needed it or would do the
    work, but does have merit, just no buy-in.
  * TT: currently, if something doesn’t get resourced, it just stays
    in P1.
  * On Last Call to be declined on Oct 7.


Next week IRC office hours

No IRC discussion scheduled for next week.


You can also find our meeting minutes on MediaWiki.org.

See also the TechCom RFC board.

If you prefer you can subscribe to our newsletter on the wiki.



Re: [Wikitech-l] Allow HTML email

2020-09-22 Thread Tim Starling
OK done, and it seems to be working.

Sorry to pre-empt the discussion, but I really wanted to send that
triage email as HTML.

We still haven't heard from Faidon who, last I heard, still reads his
emails by piping telnet into less or something. But I think he can
make sense of multipart/alternative as long as it's not base-64
encoded. You should send the plain text as the first part so he
doesn't have to page down too far  ;)

-- Tim Starling

On 23/9/20 2:31 pm, Tito Dutta wrote:
> Yes, that would be helpful.
>
> On Wed, 23 September 2020 at 9:05 AM, MusikAnimal
> <musikani...@gmail.com> wrote:
>
>> Agreed! The word wrapping especially drives me nuts. My phone is
>> just small enough that the last word or two of each line gets
>> wrapped natively, on top of Mailman's wrapping, making any sizable
>> email a difficult read.
>>
>> ~ MA
>>
>> On Tue, Sep 22, 2020 at 11:26 PM Gergő Tisza <gti...@gmail.com> wrote:
>>
>> > Yes please. A mere fifty years after the invention of hyperlinks,
>> > it would be great to adopt them here.




Re: [Wikitech-l] TechCom meeting 2020-09-23

2020-09-22 Thread Tim Starling
Resending in HTML format. Hopefully I've got the settings right now.

This is the weekly TechCom board review in preparation of our meeting
on Wednesday. If there are additional topics for TechCom to review,
please let us know by replying to this email. However, please keep
discussion about individual RFCs to the Phabricator tickets.

Activity since Monday 2020-09-13 on the following boards:

https://phabricator.wikimedia.org/tag/techcom/
https://phabricator.wikimedia.org/tag/techcom-rfc/

Committee inbox:

  * Two tasks have been sitting in there for multiple weeks

Committee board activity:

  * RFCs only, see below

New RFCs:

  * None

Phase progression:

  * Timo hid the "Old" column and moved old RFCs to P1. Some had
additional changes or comments.
  * Several were closed by Timo with status "declined", which James
Forrester changed to "invalid".
  * It is hard to find justification for these actions in the linked
RFC process document. There is no mention of it in the committee
meeting minutes for the last three weeks.
  * T157402 Provide a reliable way to pass information between hook
    handlers, "hooked" objects
  o An RFC that was stalled since 2019, closed "declined"
  * T487 Associated namespaces
  o Timo asks if it can be merged with something.
  * T96384
  * T154675 Introduce a listener interface for LinkRenderer hooks
  o Closed
  * T259771 Drop support for database upgrade older than two LTS releases
  o Moved to P3
  * T252091 Site-wide edit rate limiting with PoolCounter
  o Moved to P2. Timo asks who is stewarding it.
  * T240775 Support PHP 7.4 preload
  o Moved to P3
  * T128351 Notifications
should be in core
  o Some back and forth over the status of this, ending up with it
being P1 and "stalled"
  * T215046 Use Github
login for mediawiki.org
  o Timo closed "declined".
  * T105766 Dependency
graph storage; sketch: adjacency list in DB
  o Timo closed due to lack of owner
  * T484 Scoped language converter
  o Timo closed due to lack of owner
  * T114662 Per-language
URLs for multilingual wiki pages
  o Timo closed due to lack of owner
  * T120380 Allow JSON
values to be included in the API results
  o Timo closed due to lack of owner
  * T193690 How should we
fix the undeletion system?
  o Timo moved to P1 and stalled
  * T113034 Overhaul
Interwiki map, unify with Sites and WikiMap
  o Old -> P1
  * T119043 Graph/Graphoid/Kartographer - data storage architecture
  o Timo closed due to lack of owner, Yurik's RFC superseded by
Dan's RFC
  * T196950 Pages do not
have stable identifiers
  o Timo closed due to lack of owner
  * T158360 Reevaluate
LocalisationUpdate extension for WMF
  o Old -> P1
  * T181451 WebAssembly and
compiled JS code best practices
  o Old -> P1
  * T114445 Balanced templates
  o Old -> P1
  * T213345 Spin off
(Parsoid) language variants functionality as a microservice?
  o Timo closed due to lack of owner
  * T202673 Multiblocks -
let admins create multiple, overlapping blocks on a single user
  o Old -> P1
  * T111588 API-driven web
front-end
  o Timo closed due to lack of owner
  * T117550 Content bundler
  o Timo closed due to lack of owner
  * T111604: Split parser tests into multiple files
  o Timo closed due to lack of owner
  * T106099 Page
composition using service workers and server-side JS fall-back
  o Timo closed due to lack of owner
  * T40010 Re-evaluate
librsvg as SVG renderer on Wikimedia wikis
  o Old -> P1
  * T347 CentralNotice Caching
Overhaul - Frontend Proxy
  o Timo closed due to lack of

[Wikitech-l] TechCom meeting 2020-09-23

2020-09-22 Thread Tim Starling
This is the weekly TechCom board review in preparation of our meeting
on Wednesday. If there are additional topics for TechCom to review,
please let us know by replying to this email. However, please keep
discussion about individual RFCs to the Phabricator tickets.

Activity since Monday 2020-09-13 on the following boards:

https://phabricator.wikimedia.org/tag/techcom/
https://phabricator.wikimedia.org/tag/techcom-rfc/

Committee inbox:

  * Two tasks have been sitting in there for multiple weeks

Committee board activity:

  * RFCs only, see below

New RFCs:

  * None

Phase progression:

  * Timo hid the "Old" column and moved old RFCs to P1. Some had
additional changes or comments.
  * Several were closed by Timo with status "declined", which James
Forrester changed to "invalid".
  * It is hard to find justification for these actions in the linked
RFC process document. There is no mention of it in the committee
meeting minutes for the last three weeks.
  * T157402 Provide a reliable way to pass information between hook
    handlers, "hooked" objects
  o An RFC that was stalled since 2019, closed "declined"
  * T487 Associated namespaces
  o Timo asks if it can be merged with something.
  * T96384
  * T154675 Introduce a listener interface for LinkRenderer hooks
  o Closed
  * T259771 Drop support for database upgrade older than two LTS releases
  o Moved to P3
  * T252091 Site-wide edit rate limiting with PoolCounter
  o Moved to P2. Timo asks who is stewarding it.
  * T240775 Support PHP 7.4 preload
  o Moved to P3
  * T128351 Notifications
should be in core
  o Some back and forth over the status of this, ending up with it
being P1 and "stalled"
  * T215046 Use Github
login for mediawiki.org
  o Timo closed "declined".
  * T105766 Dependency
graph storage; sketch: adjacency list in DB
  o Timo closed due to lack of owner
  * T484 Scoped language converter
  o Timo closed due to lack of owner
  * T114662 Per-language
URLs for multilingual wiki pages
  o Timo closed due to lack of owner
  * T120380 Allow JSON
values to be included in the API results
  o Timo closed due to lack of owner
  * T193690 How should we
fix the undeletion system?
  o Timo moved to P1 and stalled
  * T113034 Overhaul
Interwiki map, unify with Sites and WikiMap
  o Old -> P1
  * T119043 Graph/Graphoid/Kartographer - data storage architecture
  o Timo closed due to lack of owner, Yurik's RFC superseded by
Dan's RFC
  * T196950 Pages do not
have stable identifiers
  o Timo closed due to lack of owner
  * T158360 Reevaluate
LocalisationUpdate extension for WMF
  o Old -> P1
  * T181451 WebAssembly and
compiled JS code best practices
  o Old -> P1
  * T114445 Balanced templates
  o Old -> P1
  * T213345 Spin off
(Parsoid) language variants functionality as a microservice?
  o Timo closed due to lack of owner
  * T202673 Multiblocks -
let admins create multiple, overlapping blocks on a single user
  o Old -> P1
  * T111588 API-driven web
front-end
  o Timo closed due to lack of owner
  * T117550 Content bundler
  o Timo closed due to lack of owner
  * T111604: Split parser tests into multiple files
  o Timo closed due to lack of owner
  * T106099 Page
composition using service workers and server-side JS fall-back
  o Timo closed due to lack of owner
  * T40010 Re-evaluate
librsvg as SVG renderer on Wikimedia wikis
  o Old -> P1
  * T347 CentralNotice Caching
Overhaul - Frontend Proxy
  o Timo closed due to lack of owner

IRC meeting request:

  * None

Other RFC activity:

  * T2607

[Wikitech-l] Allow HTML email

2020-09-22 Thread Tim Starling
* Should Mailman collapse multipart/alternative to its first part content?
* Should Mailman convert text/html parts to plain text? This
conversion happens after MIME attachments have been stripped.

These mailman options are both enabled for this list. I would argue
that they should both be disabled. It is nice to write emails with
cutting-edge features like client-side word wrapping and links.

Are there any objections to this change?

-- Tim Starling




Re: [Wikitech-l] Logging everyone out

2020-06-25 Thread Tim Starling
On 26/6/20 3:26 pm, Steven Walling wrote:
> Thanks Tim,
> 
> 1. Does “saw the site” mean users actually had full or partial access to
> the accounts of other users, or simply were viewing a cached version of the
> site that appeared as if they were logged in as someone else? 

Users reportedly had full access to the accounts of other users.

> How many users were impacted?

We had three reports. We've added logging which should help to
determine whether anyone else was affected. So far, the indications
are that it is an extremely rare event.

> 2. Does the WMF hold incident review meetings and publish reports about
> what steps are taken to prevent repeat incidents with the same root cause?

Incidents are documented at
<https://wikitech.wikimedia.org/wiki/Incident_documentation>

Action items are tagged with the Incident Prevention tag in Phabricator:
<https://phabricator.wikimedia.org/project/view/4758/>

Whether there is an incident review meeting depends on the nature of
the incident.

-- Tim Starling



[Wikitech-l] Logging everyone out

2020-06-25 Thread Tim Starling
Everyone on Wikimedia wikis will shortly be logged out and will have
to log back in again.

We are resetting all sessions because we believe that, due to a
configuration error, session cookies may have been sent in cacheable
responses. Some users reported that they saw the site as if they were
logged in as someone else. We believe that the number of affected
users was very small. However, we believe that resetting all sessions
is a prudent measure to ensure that the impact is limited.

There are several layers of protection against something like this
happening, and we don't yet know how all of them failed, but we have
made a configuration change which should be sufficient to prevent it
from happening again.

-- Tim Starling



Re: [Wikitech-l] The end of Hooks::run(): what you need to know

2020-06-01 Thread Tim Starling
On 2/6/20 4:27 am, Physikerwelt wrote:
>> Hook runner classes give hooks machine-readable parameter names and types.
> 
> This sounds really cool. Does this mean one can read the parameter names
> and types without deep learning? ;-)

Yes. A large number of hook call sites initially failed Phan static
analysis type checks. Sometimes the documentation was at fault,
sometimes the call sites were at fault. Now that the migration patch
has been merged, Phan will enforce the correct types going forward.

-- Tim Starling



[Wikitech-l] The end of Hooks::run(): what you need to know

2020-05-31 Thread Tim Starling
Hooks::run() was soft-deprecated in Nikki Nikkhoui's HookContainer
patch, merged on April 17. [1] And my patch to remove almost all
instances of it in MediaWiki Core was finally merged over the weekend.
[2] That means that everyone writing core code now needs to learn how
to use the new hook system.

HookContainer is a new service which replaces the functionality which
was previously in static methods in the Hooks class. HookContainer
contains a generic run() method which runs a specified hook, analogous
to Hooks::run(). However, unlike Hooks::run(), you generally should
not use HookContainer::run() directly. Instead, you call a proxy
method in a hook runner class.

Hook runner classes give hooks machine-readable parameter names and types.


How to call a hook
--

In MediaWiki Core, there are two hook runner classes: HookRunner and
ApiHookRunner. ApiHookRunner is used in the Action API, and HookRunner
is used everywhere else.

How you get an instance of HookRunner depends on where you are:

* In classes that use dependency injection, a HookContainer object is
passed in as a constructor parameter. Then the class creates a local
HookRunner instance:

  $this->hookRunner = new HookRunner( $hookContainer );

* In big hierarchies like SpecialPage, there are always
getHookRunner() and getHookContainer() methods which you can use.

* Some classes use the ProtectedHookAccessor trait, which provides
getHookRunner() and getHookContainer() methods that get their
HookContainer from the global service locator. You can also call
MediaWikiServices::getHookContainer() in your own code, if dependency
injection is not feasible.

* There is a convenience method for static code called
Hooks::runner(), which returns a HookRunner instance using the global
HookContainer.

* Extensions should generally not use HookRunner, since the available
hooks may change without deprecation. Instead, extensions should have
their own HookRunner class which calls HookContainer::run().

Once you have a HookRunner object, you call the hook by simply calling
the relevant method.
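For example, with the MovePage hook described in the next section
(parameter names illustrative):

    // $this->hookRunner was created as shown above; firing the hook is
    // an ordinary typed method call:
    $this->hookRunner->onMovePage( $oldTitle, $newTitle, $user );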


How to add a hook
-

* Create an interface for the hook. The interface name is always the
hook name with "Hook" appended. The interface should contain a single
method, which is the hook name with a prefix of "on". So for example,
for a hook called MovePage, there will be an interface called
MovePageHook with a method called onMovePage(). The interface will
typically be in a "Hook" subnamespace relative to the caller namespace.

* Add an "implements" clause to HookRunner.

* Implement the method in HookRunner.

Note that the name of the interface is currently not enforced by CI.
Alphabetical sorting of interface names and method names in HookRunner
is also not enforced. Please be careful to follow existing conventions.
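Continuing the MovePage example, the interface looks something like
this (a sketch following the conventions above; the exact doc comments
vary):

    namespace MediaWiki\Hook;

    interface MovePageHook {
        /**
         * @param Title $old Old title
         * @param Title $new New title
         * @param User $user User doing the move
         * @return bool|void True or no return value to continue,
         *   false to abort
         */
        public function onMovePage( $old, $new, $user );
    }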


How to deprecate a hook
---

Hooks were previously deprecated by passing options to Hook::run().
They are now deprecated globally by adding the hook to an array in the
DeprecatedHooks class.
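Roughly like this (the hook name and version are made up; the array
shape is a sketch of the DeprecatedHooks registry):

    // In DeprecatedHooks.php
    private $deprecatedHooks = [
        'SomeOldHook' => [ 'deprecatedVersion' => '1.35' ],
    ];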


Using the new system in extensions
--

Extensions should create their own HookRunner classes and use them to
call hooks. HookContainer::run() should be used instead of Hooks::run().

As for handling hooks, I think it's too early for a mass migration of
extensions to the new registration system as described in the RFC.[3]
Extension authors who are keen to pilot the new system can give it a
go. Make sure you add Nikki and me as reviewers.

More information about the new system can be found in docs/Hooks.md
[4]. The patch to add it should soon be merged.


[1] https://gerrit.wikimedia.org/r/c/mediawiki/core/+/571297
[2] https://gerrit.wikimedia.org/r/c/mediawiki/core/+/581225
[3] https://phabricator.wikimedia.org/T240307
[4]
<https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/core/+/323ac073d38ec30a97b73b4a25999079b3a125d3/docs/Hooks.md>

-- Tim Starling




Re: [Wikitech-l] SelectQueryBuilder

2020-05-21 Thread Tim Starling
Yes, it would be interesting to add an expression builder. It is much
easier to do so now that the Database acts as a factory for the query
builder. Originally I was trying to make the data in
SelectQueryBuilder DBMS-independent. Now that we always have a
connection object in SelectQueryBuilder, it's possible to have it call
functions like addQuotes() while building the underlying condition array.
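For context, the pattern under discussion currently looks like this
(a sketch):

    // Manual quoting today, when a condition is not a simple equality:
    $conds = [
        'page_namespace' => $ns,
        'page_id > ' . $db->addQuotes( $minId ),
    ];

An expression builder could construct the comparison, including the
quoting, from structured input instead.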

-- Tim Starling

On 22/5/20 2:14 pm, Brian Wolff wrote:
> That's really cool.
> 
> It would be interesting if we could use this as an excuse to move away from
> what i would consider antipatterns in our sql layer e.g., having no high
> level way of specifying WHERE comparisons (x > "foo". Currently we do
> addQuotes() instead of automatic or `field1` = `field2`). Similarly the
> whole thing with field names only getting automatically
> addIdentifierQuotes() if it looks like its not sql, always seemed sketch.
> This might provide us a path forward to address those while still
> maintaining backwards compat.
> 
> --
> Brian
> 
> p.s. Now all i need to dream about is a fluent version of Html class.
> 
> On Thursday, May 21, 2020, Tim Starling  wrote:
> 
>> SelectQueryBuilder is a new fluent interface for constructing database
>> queries, which has been merged to master for release in MediaWiki
>> 1.35. Please consider using it in new code.
>>
>> SELECT page_id FROM page
>> WHERE page_namespace=$namespace AND page_title=$title
>>
>> becomes
>>
>> $id = $db->newSelectQueryBuilder()
>>->select( 'page_id' )
>>->from( 'page' )
>>->where( [
>>   'page_namespace' => $namespace,
>>   'page_title' => $title,
>>] )
>>->fetchField();
>>
>> As explained on the design task T243051, SelectQueryBuilder was
>> loosely based on the query builder in Doctrine, but I made an effort
>> to respect existing MediaWiki conventions, to make migration easy.
>>
>> SelectQueryBuilder is easy to use for simple cases, but has the most
>> impact on readability when it is used for complex queries. That's why
>> I chose to migrate the showIndirectLinks query in
>> Special:WhatLinksHere as a pilot -- it was one of the gnarliest
>> queries in core.
>>
>> SelectQueryBuilder excels at building joins, including parenthesized
>> (nested) joins and joins on subqueries.
>>
>> SelectQueryBuilder can be used as a structured alternative to the
>> "query info" pattern, in which the parameters to Database::select()
>> are stored in an associative array. It can convert to and from such
>> arrays. As a pilot of this functionality, I converted ApiQueryBase to
>> use a SelectQueryBuilder to store accumulated query info.
>>
>> Check it out!
>>
>> -- Tim Starling




[Wikitech-l] SelectQueryBuilder

2020-05-21 Thread Tim Starling
SelectQueryBuilder is a new fluent interface for constructing database
queries, which has been merged to master for release in MediaWiki
1.35. Please consider using it in new code.

SELECT page_id FROM page
WHERE page_namespace=$namespace AND page_title=$title

becomes

$id = $db->newSelectQueryBuilder()
   ->select( 'page_id' )
   ->from( 'page' )
   ->where( [
  'page_namespace' => $namespace,
  'page_title' => $title,
   ] )
   ->fetchField();

As explained on the design task T243051, SelectQueryBuilder was
loosely based on the query builder in Doctrine, but I made an effort
to respect existing MediaWiki conventions, to make migration easy.

SelectQueryBuilder is easy to use for simple cases, but has the most
impact on readability when it is used for complex queries. That's why
I chose to migrate the showIndirectLinks query in
Special:WhatLinksHere as a pilot -- it was one of the gnarliest
queries in core.

SelectQueryBuilder excels at building joins, including parenthesized
(nested) joins and joins on subqueries.
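For example, a simple join reads like this (a sketch against the new
interface, using table and field names from core):

    $res = $db->newSelectQueryBuilder()
       ->select( [ 'page_title', 'cl_to' ] )
       ->from( 'page' )
       ->join( 'categorylinks', null, 'cl_from=page_id' )
       ->where( [ 'page_namespace' => NS_MAIN ] )
       ->caller( __METHOD__ )
       ->fetchResultSet();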

SelectQueryBuilder can be used as a structured alternative to the
"query info" pattern, in which the parameters to Database::select()
are stored in an associative array. It can convert to and from such
arrays. As a pilot of this functionality, I converted ApiQueryBase to
use a SelectQueryBuilder to store accumulated query info.

Check it out!

-- Tim Starling



[Wikitech-l] RFC: one interface per hook

2019-12-15 Thread Tim Starling
I made this RFC: https://phabricator.wikimedia.org/T240307

TL;DR: we'll have one interface per hook, and that interface will be
used for both calling and handling the hook.

It may seem like overkill, but on closer analysis, it actually seems
to work pretty nicely. There will be a place for doc comments,
arguments will be type-hinted, and smart code editors will be able to
show the hook documentation when you call or handle it.

The main open questions are:

* Where to put the many core interfaces: together or grouped by module?
* Should we split up the core HookRunner class by module? It contains
one line of boilerplate code per hook.

Existing hook handling classes in extensions can be converted to the
proposed system by:

* Changing all the handlers from static to non-static functions.
* Adding the necessary "implements" clause.
* Renaming any methods that do not already match the onHookName() pattern.
* Tweaking extension.json slightly.

I'm going on vacation soon, so I'll be aiming to move this to last
call early in the new year.

-- Tim Starling



Re: [Wikitech-l] Dealing with composer dependencies in early MediaWiki initialization

2019-06-26 Thread Tim Starling
On 27/6/19 10:36 am, Brian Wolff wrote:
> Another option is just removing the $wgServer back compat value.
> 
> The installer will automatically set $wgServer in LocalSettings.php. The
> default value in DefaultSettings.php is mostly for compat with really old
> installs before 1.16.
> 
> Allowing autodetection is a security vulnerability - albeit mostly
> difficult to exploit. The primary method is via cache poisioning and then
> either redirecting or otherwise tricking users about the fake domain. See
> the original ticket https://phabricator.wikimedia.org/T30798 .

Interesting that I wrote there: "How about this: let's set $wgServer
in the installer in 1.18, and remove $wgServer autodetection from
DefaultSettings.php a bit later, say in 1.20."

It was indeed 1.18, not 1.16, in which $wgServer started being set in
LocalSettings.php. I added it to LocalSettingsGenerator.php here:

https://www.mediawiki.org/wiki/Special:Code/MediaWiki/90105

Anyway, it's past 1.20 so I guess that would be a good thing to do.
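For wikis still relying on autodetection, the remedy is one explicit
line in LocalSettings.php (sketch):

    // Set the canonical server rather than detecting it from request headers
    $wgServer = 'https://wiki.example.org';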

-- Tim Starling



[Wikitech-l] REST API last call

2019-06-10 Thread Tim Starling
MediaWiki core will soon have a REST API.

Some details of the route handler interface are in an RFC which is now
in its last call period:

https://phabricator.wikimedia.org/T221177

If you have any problems with this interface, please tell us as soon
as possible.

There is an associated Gerrit topic branch:

https://gerrit.wikimedia.org/r/q/topic:2019/rest

Several of the changes are almost ready to merge. We're not merging
the entry point (rest.php) just yet, that is still marked WIP.

-- Tim Starling



Re: [Wikitech-l] New Gerrit privilege policy

2019-03-17 Thread Tim Starling
On 17/3/19 11:25 pm, MA wrote:
> Hello,
> 
> Would <https://www.mediawiki.org/wiki/Topic:Uvuzn1y39ik2f7ko> still be
> acceptable?
> 
> Creating repos also involve self-merging stuff (.gitreview files
> mostly; also sometimes importing from GitHub).

If you're doing it already and nobody cares, it's probably fine. To
repeat, the section on self merging is merely descriptive of the
current situation. It has plenty of wiggle room to allow things that
are happening already.

I also want to emphasize that we're not going to suddenly revoke +2
access of a valued contributor for violating some narrowly interpreted
clause in the policy. I can't speak for all situations or all
committee members, but if someone complains that you shouldn't be
doing a particular variety of self merges, the obvious outcome of that
is to ask you to stop doing it.

The policy defines an escalation path for complaints which allows them
to be handled with common sense and without needless public humiliation.

-- Tim Starling



Re: [Wikitech-l] New Gerrit privilege policy

2019-03-17 Thread Tim Starling
On 17/3/19 7:43 am, Amir Sarabadani wrote:
> I just want to point out that this wasn't "handed down by a CTO", it was a
> RFC [1] that and was open for discussion to everyone and was discussed
> extensively (and the RFC changed because of these discussions, look at the
> history of the page), then had an IRC meeting that was also open to
> everyone, then had a "last call" period for raising any objections which
> was open to everyone too. That passed with no objection being raised and
> then it also got approved by the CTO.
> 
> I might be wrong, but if I understand the structure of TechCom correctly
> (correct me if I'm wrong), it's open and transparent, the CTO can veto
> changes (which hasn't happened so far), but it's not like a CTO would just
> implement a new policy without discussion. This process is more open and
> transparent than most companies and non-profits.

Yes, that's correct. Daniel raised in TechCom the fact that the Gerrit
privilege policy needed a review. I volunteered to lead the project.
We didn't think it was strictly within the purview of TechCom to make
a binding decision on this, which is why we structured it as a
TechCom-facilitated discussion leading to a recommendation presented
for CTO approval.

-- Tim Starling




Re: [Wikitech-l] New Gerrit privilege policy

2019-03-16 Thread Tim Starling
On 17/3/19 12:48 am, Merlijn van Deen (valhallasw) wrote:
> On Sat, 16 Mar 2019 at 03:01, Tim Starling  wrote:
> 
>> No, managing +2 permissions is not up to the maintainer of the tool,
>> that's the whole point of the change.
>>
> 
> I feel that this policy, although well-meaning, and a step forwards for
> MediaWiki and other WMF-production software, is unreasonably being applied
> as a 'one-size-fits-all' solution to situations where it doesn't make sense.
> 
> Two examples where the policy does not fit the Toolforge situation:
> 
> 1. According to the policy, self-+2'ing is grounds for revocation of Gerrit
> privileges. For a Toolforge tool, self +2-ing is common and expected: the
> repository is hosted on Gerrit to allow for CI and to make contributions
> from others easier, not necessarily for the code review features.

Merging your own code without review is grounds for revocation, with
several exceptions. One of the exceptions is for code that's not
deployed to the Wikimedia cluster. A toolforge tool would fall under
that exception.

In general, if self-merging is normal policy in some repository, we
are not trying to change that here. The +2 policy section is mostly
copied from the previous policy and is meant to be descriptive of the
current situation.

> 2. Giving someone +2 access to a repository now needs to pass through an
> extended process with checks and balances. At the same time, I can *directly
> and immediately give someone deployment access to the tool.*
> 
> Effectively, this policy forces me to move any tool repositories off Gerrit
> and onto GitHub: time and effort better spent otherwise.

The reason we wanted to make this change is because we didn't want to
repeat GitHub's mistakes. This case of a malware being added to an NPM
package used by many people was fresh in our minds:

https://github.com/dominictarr/event-stream/issues/115

The original maintainer had stopped caring about this package some
time before the incident. He gave contributor access to the first
person who asked, without any sort of check. Even after the malware
was discovered, the original maintainer was dismissive, leaving it for
others to clean up.

We've had an incident on Gerrit of a known malicious user, a Wikipedia
vandal, submitting code with a security vulnerability, using a
previously unknown pseudonym. We don't really want such a person to be
summarily given +2 access to a repository.

I don't think it's a huge inconvenience to list your proposed
contributors on a Phabricator ticket and then to wait a week.

-- Tim Starling



Re: [Wikitech-l] New Gerrit privilege policy

2019-03-15 Thread Tim Starling
On 14/3/19 1:00 pm, Gergo Tisza wrote:
> On Wed, Mar 13, 2019 at 5:33 PM Tim Starling 
> wrote:
> 
>> File a task in Phabricator under the Gerrit-Privilege-Requests
>> project, recommending that the person be given access, giving your
>> reasons and mentioning that you are the maintainer of the project in
>> question. Wait for at least a week for comments. Then a Gerrit
>> administrator should add the person and close the task.
>>
> 
> Does the policy apply to Toolforge tools at all? The current text says "For
> extensions (and other projects) not deployed to the Wikimedia cluster, the
> code review policy is up to the maintainer or author of the extension." I'd
> assume that by extension managing +2 permissions is also up to them
> (although this is not explicitly stated, might be worth clarifying).

No, managing +2 permissions is not up to the maintainer of the tool,
that's the whole point of the change.

-- Tim Starling


Re: [Wikitech-l] New Gerrit privilege policy

2019-03-15 Thread Tim Starling
On 16/3/19 7:53 am, Andre Klapper wrote:
> On Wed, 2019-03-13 at 15:24 +1100, Tim Starling wrote:
>> * The Phabricator projects for requesting access have changed. I'm in
>> the process of moving the tickets over.
> 
> What is supposed to happen with
> https://phabricator.wikimedia.org/tag/repository-ownership-requests/ ?
> 
> Do you plan to archive that project as it's superseded by
> https://phabricator.wikimedia.org/tag/mediawiki-gerrit-group-requests/
> and https://phabricator.wikimedia.org/tag/gerrit-privilege-requests/ ?

I archived it.

-- Tim Starling



Re: [Wikitech-l] New Gerrit privilege policy

2019-03-13 Thread Tim Starling
File a task in Phabricator under the Gerrit-Privilege-Requests
project, recommending that the person be given access, giving your
reasons and mentioning that you are the maintainer of the project in
question. Wait for at least a week for comments. Then a Gerrit
administrator should add the person and close the task.

-- Tim Starling

On 13/3/19 11:18 pm, Zppix wrote:
> Hello,
> So what is the process for adding people to +2 on gerrit repos im the primary 
> maintainer of (for example my ZppixBot toolforge gerrit repo)
> 
> --
> Devin “Zppix” CCENT
> Volunteer Wikimedia Developer
> Africa Wikimedia Developers Member and Mentor
> Volunteer Mozilla Support Team Member (SUMO)
> Quora.com Partner Program Member
> enwp.org/User:Zppix
> **Note: I do not work for Wikimedia Foundation, or any of its chapters. I 
> also do not work for Mozilla, or any of its projects. ** 
> 
>> On Mar 13, 2019, at 12:40 AM, Physikerwelt  wrote:
>>
>>> On Wed, Mar 13, 2019 at 5:25 AM Tim Starling  
>>> wrote:
>>>
>>> Following approval by TechCom and WMF Interim CTO Erika Bjune, I've
>>> moved the new Gerrit privilege policy page out of my userspace to
>>>
>>> https://www.mediawiki.org/wiki/Gerrit/Privilege_policy
>>>
>>> This is a merge of two pages: [[Gerrit/+2]] and [[Gerrit/Project
>>> ownership]], with some additional changes. I've now redirected both of
>>> those pages to the new policy page.
>>>
>>> The main changes are:
>>>
>>> * The wmde LDAP group, representing WMDE staff members, will be given
>>> +2 access to mediawiki/* projects, similar to the rights given to WMF
>>> staff members.
>>
>> Great. This is the first step towards a global movement;-)
>>
>>>
>>> * The ability of ShoutWiki and Hallo Welt! to manage access to the
>>> extensions they maintain is described and formalised.
>>>
>>> * The ownership model for extensions is discouraged in favour of
>>> individual requests on Phabricator. An extension owner was able to
>>> promote developers to +2 access at their own discretion.
>>
>> I think this does not harm too much since many people use Microsofts
>> GitHub to maintain their non-WMF deployed extensions these days.
>>
>>
>> Physikerwelt
>>
>>>
>>> * The Phabricator projects for requesting access have changed. I'm in
>>> the process of moving the tickets over.
>>>
>>> * The revocation policy has been expanded, better describing the
>>> present situation and making several minor changes.
>>>
>>> -- Tim Starling



[Wikitech-l] New Gerrit privilege policy

2019-03-12 Thread Tim Starling
Following approval by TechCom and WMF Interim CTO Erika Bjune, I've
moved the new Gerrit privilege policy page out of my userspace to

https://www.mediawiki.org/wiki/Gerrit/Privilege_policy

This is a merge of two pages: [[Gerrit/+2]] and [[Gerrit/Project
ownership]], with some additional changes. I've now redirected both of
those pages to the new policy page.

The main changes are:

* The wmde LDAP group, representing WMDE staff members, will be given
+2 access to mediawiki/* projects, similar to the rights given to WMF
staff members.

* The ability of ShoutWiki and Hallo Welt! to manage access to the
extensions they maintain is described and formalised.

* The ownership model for extensions is discouraged in favour of
individual requests on Phabricator. An extension owner was able to
promote developers to +2 access at their own discretion.

* The Phabricator projects for requesting access have changed. I'm in
the process of moving the tickets over.

* The revocation policy has been expanded, better describing the
present situation and making several minor changes.

-- Tim Starling



[Wikitech-l] PHP time limits

2018-09-09 Thread Tim Starling
I noticed that WMF production accidentally had no PHP time limits.
Possibly it has been like that for as long as three years. The HHVM
configuration purportedly set the time limit to 60 seconds, but that
did not take effect.

I've deployed a change to set PHP time limits as follows:

* 60 seconds for GET requests
* 200 seconds for POST requests
* 20 minutes for ordinary job queue jobs
* 1 day for video scaler jobs
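As a generic illustration of how per-request limits of this kind work
(not the actual WMF configuration mechanism):

    // Sketch: choose a limit based on the request method.
    $limit = ( $_SERVER['REQUEST_METHOD'] ?? 'GET' ) === 'POST' ? 200 : 60;
    set_time_limit( $limit );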

If it really has been three years of no time limits, this change may
break some accumulated assumptions. But note that Varnish times out
waiting for Apache after 120 seconds, so for most requests longer than
this it would not have been obvious that HHVM continued to run, since
an error was delivered anyway.

The logs so far show a trickle of timeouts from the parser, which is
normal by historical standards.

https://phabricator.wikimedia.org/T97192

-- Tim Starling



Re: [Wikitech-l] Next RFC meeting: Modern Event Platform - Choose Schema Tech

2018-07-31 Thread Tim Starling
On 27/07/18 14:55, Tim Starling wrote:
> In the TechCom committee meeting yesterday, it was decided that next
> week's RFC meeting will discuss T198256 "Modern Event Platform -
> Choose Schema Tech"
> 
> https://phabricator.wikimedia.org/T198256
> 
> This relates to the choice between Avro and JSONSchema for the next
> iteration of EventLogging.
> 
> The meeting will be at 1pm Wednesday PST in #wikimedia-office.

Sorry, I should have said 2pm, that is, the usual time.

-- Tim Starling



[Wikitech-l] Next RFC meeting: Modern Event Platform - Choose Schema Tech

2018-07-26 Thread Tim Starling
In the TechCom committee meeting yesterday, it was decided that next
week's RFC meeting will discuss T198256 "Modern Event Platform -
Choose Schema Tech"

https://phabricator.wikimedia.org/T198256

This relates to the choice between Avro and JSONSchema for the next
iteration of EventLogging.

The meeting will be at 1pm Wednesday PST in #wikimedia-office.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] FW: Warning on behalf of Code of conduct committee

2018-06-24 Thread Tim Starling
On 25/06/18 07:46, MZMcBride wrote:
> Wikimedia Foundation Inc. employees have blocked the ability of new users
> to report bugs or file feature requests or even read the issue tracker.
> But yes, please focus on me calling Andre a troll for resetting the
> priority of <https://phabricator.wikimedia.org/T197550>. My single comment
> ("andre__: Such a troll.") is clearly what contributes to an unwelcoming
> environment for contributors, not blocking them from reading the site and
> demanding that they be vetted first. Great work, all.

MZMcBride, a few years ago, a number of people were thoroughly fed up
with your behaviour in technical communication spaces, and there was
serious discussion of banning you. I spoke in your favour at that time
because, when I talked to you 1:1 about the issues, you seemed
contrite and willing to improve yourself.

Although I think you've improved since then, I don't think you've
improved so far that you can now glibly reject a recommendation from
the Code of Conduct committee.

If you ask me, yes it is better to temporarily block newcomers than
expose them to an environment where staff members are called "trolls".
You can unblock but you cannot unoffend. The damage caused by toxicity
is not reversible.

I know you're passionate, and we need passionate people, but you have
to express your views in a civil manner. There's no easy solution to
T197550. We need to welcome newcomers but we also need to prevent
vandalism. Phabricator doesn't offer as many tools for this as
MediaWiki. We're all aware that it's not ideal, and while you're
ranting about civility, smart people are trying to find better
compromises.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting tomorrow: JavaScript package management

2018-01-30 Thread Tim Starling
Please join us tomorrow on the Freenode channel #wikimedia-office to
discuss "MediaWiki support for Composer equivalent for JavaScript
packages" <https://phabricator.wikimedia.org/T107561> at the usual
time of 21:00 UTC, 13:00 PST.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Last call: Bump PHP requirement to 5.6 in MW 1.31

2017-10-18 Thread Tim Starling
Today's RFC discussion was T172165, a proposal for MediaWiki 1.31 to
require PHP 7.0. There was no consensus on that proposal, due to the
opinion from Ops that it is not feasible to migrate all application
servers to Debian Stretch and PHP 7.0 by the expected release date of
June 2018.

However, there was consensus on the lesser goal of requiring PHP 5.6.
So, we have created a new RFC for PHP 5.6 (T178538) and are hereby
placing it into Last Call.

The proposal is: MediaWiki should bump its PHP requirement to 5.6 as
soon as possible, and at the latest in time for the 1.31 branch point
(i.e. April 2018).

"As soon as possible" means as soon as the few remaining uses of PHP
5.5 in the WMF cluster have been migrated to PHP 5.6 or later, or to
HHVM. We'd like to see this migration work be given a high priority.

If you have any objection to this proposal, please raise it on
Phabricator before the end of the Last Call period, which will be
October 31.

https://phabricator.wikimedia.org/T178538

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HHVM vs. Zend divergence

2017-09-28 Thread Tim Starling
On 19/09/17 20:56, Gilles Dubuc wrote:
> Should we have a TechComm-driven meeting about this ASAP?

We had an IRC meeting about this yesterday. Here is the log:

<https://tools.wmflabs.org/meetbot/wikimedia-office/2017/wikimedia-office.2017-09-27-21.03.log.html>

We mostly talked about the migration plan for production, including
the likely timeline.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HHVM vs. Zend divergence

2017-09-20 Thread Tim Starling
On 19/09/17 10:13, Tim Starling wrote:
> I'll run a benchmark

I upgraded the test wiki container on my laptop from Ubuntu 14.04 to
16.04, which also necessitated a platform switch from schroot to
systemd-nspawn. The benchmark is thus approximately native performance
on a Core i5 4210U @ 1.7 GHz. I killed the usual CPU hogs so that
everything was quiet in top. Then I ran benchmarkParse.php on a copy
of the [[Australia]] article, including templates, with --loops 3.
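
From memory, the invocation was along these lines (the exact script
path and argument form may vary between MediaWiki versions):

  $ php maintenance/benchmarks/benchmarkParse.php --loops 3 Australia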

The results were:

PHP 7.0:    1.59 seconds
HHVM 3.21:  1.75 seconds

So PHP 7 was significantly faster on this test.

Note that I ran HHVM with JIT enabled; total wall clock time including
compilation and warmup was about 75 seconds, compared to 13 seconds
for PHP 7.

The test wiki has Scribunto with LuaStandalone. Debug logging was
disabled for this test.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HHVM vs. Zend divergence

2017-09-19 Thread Tim Starling
On 20/09/17 12:19, C. Scott Ananian wrote:
> On Sep 19, 2017 9:45 PM, "Tim Starling"  wrote:
> 
> Facebook have been inconsistent with
> HHVM, and have made it clear that they don't intend to cater to our needs.
> 
> 
> I'm curious: is this conclusion based on your recent meeting with them, or
> on past behavior?  Their recent announcement had a lot of "we know we
> haven't been great, but we promise to change" stuff in it ("reinvest in
> open source") and I'm curious to know if they enumerated concrete steps
> they planned to take, or whether even in your most recent meeting with them
> they failed to show actual interest.

"Have been inconsistent" refers to their past behaviour. "Don't intend
to cater" refers to the meeting and announcement.

According to people on the ops team who have worked with them
recently, they stopped working on the open source product altogether.
They stopped responding to bug reports. By "reinvest in open source"
they are apologising for that and promising to start reading their bug
mail again. This was discussed in the meeting.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HHVM vs. Zend divergence

2017-09-19 Thread Tim Starling
On 20/09/17 02:40, C. Scott Ananian wrote:
> For example, the top-line github stats are:
> hhvm: 504 contributors (24,192 commits)
> php-src: 496 contributors (104,566 commits)

There's a reason I've contributed loads of code to HHVM and hardly any
to PHP. Although the HHVM folks were sometimes tied up in internal
work and not available, when they were available, my interactions with
them were very pleasant. They were enthusiastic about my
contributions, and were happy to have long design discussions on IRC.
Code review sometimes had a lot of back and forth, but at least they
were positive and engaged. The people I dealt with in code review
usually had it as their job to accept community contributions. My bug
reports were respectfully treated.

It was a big contrast to my interactions with the PHP community, which
were so often negative. For example, Jani's toxic behaviour on the bug
tracker, closing serious and reproducible bugs as "bogus", usually
because he didn't understand them technically.
Even with other maintainers, I had to fight several times to keep
serious bugs open. I had no illusions that they would ever be fixed, I
just wanted them to be open for my reference and for the benefit of
anyone hitting the same issue. I filed bugs as "documentation issues",
requesting that undesired behaviour be documented in the manual, since
they were more likely to stay open that way.

My interactions with Derick Rethans were quite unpleasant; he would
not even consider accepting the code I wrote to fix a DoS
vulnerability which was affecting us constantly. He wouldn't provide a
code review, he just rejected it on principle. Instead he wrote his
own version of it a couple of years later. He seemed to think that
every line of code in the date module should be attributable to him.

Design discussions are apparently concentrated on the internals
mailing list, where there is an incredible amount of negativity for
any new idea. Developers really need a lot of energy to keep answering
negative comments, over and over for a period of months, in order to
get their RFCs accepted. Some language features which eventually made
it into PHP, such as short arrays, were shot down many times on the
mailing list before they found a champion who was sufficiently brave
and influential.

Their code review practices were quite archaic, I don't know if
they've improved. Their coding style is also dated.

Stas was great, a bright spot in a dismal field, which was why I was
so keen to hire him.

So I'm not looking forward to returning to PHP. But at least we know
what we are getting ourselves in for. Being community-driven means it
has inertia; change is slow. Facebook have been inconsistent with
HHVM, and have made it clear that they don't intend to cater to our needs.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HHVM vs. Zend divergence

2017-09-18 Thread Tim Starling
On 19/09/17 12:30, Legoktm wrote:
> Hi,
> 
> On 09/18/2017 05:13 PM, Tim Starling wrote:
>> * The plan to also drop PHP 5 compatibility, on a short timeline (1 year).
>> * Rather than "drifting away" from PHP, their top priority plans
>> include removing core language features like references and destructors.
> 
> On Reddit[1], a member of the HHVM team clarified they plan on dropping
> support for destructors *from Hack* soon. (Not that I think it really
> makes any difference in what our long-term plan should be.)

It's unclear how much difference that will make, since they are clear
about wanting to make HHVM be purely a Hack runtime. They'll carry on
supporting Composer and PHPUnit, but only until Hack has "its own
ecosystem of core frameworks".

They "will not be targeting PHP software beyond such libraries after
the 3.24 release", which presumably means that they will no longer run
MediaWiki or PHP unit tests against HHVM.

Also, they said that they want to remove destructors in order to
eliminate the performance overhead of reference counting, and I don't
think it is possible to get that performance benefit unless you remove
reference counts from the VM entirely. Maybe removing them from the
Hack language will be a first step, but we can't expect them to keep
them in the VM in the longer term.
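
For illustration, deterministic destruction is exactly what reference
counting buys you (plain PHP semantics, nothing HHVM-specific):

  <?php
  class Handle {
      public function __destruct() {
          echo "destroyed\n";
      }
  }

  $h = new Handle();
  $h = null;           // refcount hits zero: __destruct() runs right here
  echo "continuing\n";

With reference counting, "destroyed" is printed before "continuing".
A VM without reference counts may defer __destruct() to an arbitrary
later GC point, so code relying on prompt destruction (locks, file
handles, flushes) quietly breaks.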

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] HHVM vs. Zend divergence

2017-09-18 Thread Tim Starling
On 19/09/17 06:58, Max Semenik wrote:
> Today, the HHVM developers made an announcement[1] that they have plans of
> ceasing to maintain 100% PHP7 compatibility and concentrating on Hack
> instead.

The HHVM team did tell us privately that they were planning on
changing their strategy, basically as you describe it above. The
surprising things for me in this announcement were:

* The plan to also drop PHP 5 compatibility, on a short timeline (1 year).
* Rather than "drifting away" from PHP, their top priority plans
include removing core language features like references and destructors.

> While this does not mean that we need to take an action immediately,
> eventually we will have to decide something. 

Actually, I think a year is a pretty short time for ops to switch to
PHP 7. I think we need to decide on this pretty much immediately.

> 3) Revert WMF to Zend and forget about HHVM. This will result in
> performance degradation, however it will not be that dramatic: when we
> upgraded, we switched to HHVM from PHP 5.3 which was really outdated, while
> 5.6 and 7 provided nice performance improvements.
> 
> I personally think that 3) is the only viable option in the long run. What
> do you think?

Yes, I think it's the only viable option.

I'll run a benchmark, but I don't see how it could influence the
decision. It'll be more for capacity planning.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Last Call: PostgreSQL schema change for consistency with MySQL

2017-08-16 Thread Tim Starling
The Wikimedia Technical Committee is hereby issuing a last call for
comments on the RFC "PostgreSQL schema change for consistency with MySQL".

https://phabricator.wikimedia.org/T164898

If no new objections are raised, this RFC will be approved on August 30.

This RFC proposes to resolve differences between the PostgreSQL and
MySQL support in MediaWiki by reducing the use of PostgreSQL-specific
features.

Note that I'm not planning to implement this RFC in the current quarter.
I would welcome volunteer implementors.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Historical use of latin1 fields in MySQL

2017-05-02 Thread Tim Starling
On 03/05/17 03:10, Mark Clements (HappyDog) wrote:
> Can anyone confirm that MediaWiki used to behave in this manner, and
> if so why?

In MySQL 4.0, MySQL didn't really have character sets; it only had
collations. Text was stored as 8-bit clean binary, and was only
interpreted as a character sequence when compared to other text fields
for collation purposes. There was no UTF-8 collation, so we stored
UTF-8 text in text fields with the default (latin1) collation.

> If it was due to MySQL bugs, does anyone know in what version these
> were fixed?

IIRC it was fixed in MySQL 4.1 with the introduction of proper
character sets.

To migrate such a database, you need to do an ALTER TABLE to switch
the relevant fields from latin1 to the "binary" character set. If you
ALTER TABLE directly to utf8, you'll end up with "mojibake", since the
text will be incorrectly interpreted as latin1 and converted to
unicode. This is unrecoverable; you have to restore from a backup if
this happens.

I think it is possible to then do an ALTER TABLE to switch from binary
to utf8, but it's been a while since I tested that.
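
By way of illustration, the safe two-step conversion looks like this
(hypothetical table and column names -- test on a copy of the data
first):

  ALTER TABLE mytable MODIFY mycol BLOB;
  -- check that the bytes came through intact, then:
  ALTER TABLE mytable MODIFY mycol TEXT CHARACTER SET utf8;

The first step does no conversion; it just drops the bogus latin1
label. The second attaches the utf8 label to the existing bytes.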

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Update on WMF account compromises

2016-11-16 Thread Tim Starling
Since Friday, we've had a slow but steady stream of admin account
compromises on WMF projects. The hacker group OurMine has taken credit
for these compromises.

We're fairly sure now that their mode of operation involves searching
for target admins in previous user/password dumps published by other
hackers, such as the 2013 Adobe hack. They're not doing an online
brute force attack against WMF. For each target, they try one or two
passwords, and if those don't work, they go on to the next target.
Their success rate is maybe 10%.

When they compromise an account, they usually do a main page
defacement or similar, get blocked, and then move on to the next target.

Today, they compromised the account of a www.mediawiki.org admin, did
a main page defacement there, and then (presumably) used the same
password to log in to Gerrit. They took a screenshot, sent it to us,
but took no other action.

So, I don't think they are truly malicious -- I think they are doing
it for fun, fame, perhaps also for their stated goal of bringing
attention to poor password security.

Indications are that they are familiarising themselves with MediaWiki
and with our community. They probably plan on continuing to do this
for some time.

We're doing what we can to slow them down, but admins and other users
with privileged access also need to take some responsibility for the
security of their accounts. Specifically:

* If you're an admin, please enable two-factor authentication.
<https://meta.wikimedia.org/wiki/H:2FA>
* Please change your password, if you haven't already changed it in
the last week. Use a new password that is not used on any other site.
* Please do not share passwords across different WMF services, for
example, between the wikis and Gerrit.

(Cross-posted to wikitech-l and wikimedia-l, please copy/link
elsewhere as appropriate.)

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit screen size

2016-09-25 Thread Tim Starling
On 25/09/16 21:09, Bináris wrote:
> Hi,
> 
> I try to familiarize myself with Gerrit which is not a good example for
> user-friendly interface.
> I noticed a letter B in the upper right corner of the screen, and I
> suspected it could be a portion of my login name. So I looked at it in HTML
> source, and it was. I pushed my mouse on it and I got another half window
> as attached.
> 
> So did somebody perhaps wire the size of a 25" monitor into page rendering?
> My computer is a Samsung notebook.

In T38471 I complained that the old version was too wide at 1163px
(for my dashboard on a random day). Now the new version is 1520px. I'm
not sure if the Gerrit folks are serious or are trolling us. Perhaps
it is a tactic to encourage UI code contributions?

-- Tim Starling



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] historical trivia: who first picked UTC as Wikimedia time, when and why?

2016-05-09 Thread Tim Starling
UseMod had a concept of "server time", and early versions of the main
page stated that server time was US Pacific Time. The date on the main
page was heroically updated manually by Malcolm Farmer, for example:

<https://en.wikipedia.org/w/index.php?title=HomePage&diff=331652346&oldid=331652345>

This archived RecentChanges gives you an idea of what the interface
looked like, the timezone is not specified ($ScriptTZ was not set) but
it is presumably all Pacific Time:

<https://web.archive.org/web/20011015062802/http://www.wikipedia.com/wiki/Recent_Changes>

UseMod stored dates in UNIX time (integer since epoch), which is
implicitly UTC, but converted them to server time for display.

Magnus's Wikipedia script also stored dates in UTC format, and
defaulted to server time for display. It had a user preference called
"hourDiff" which was relative to server time. For example in
special_recentchangeslayout.php:

  $adjusted_time_sc = tsc ( $s->cur_timestamp ) + 3600 * $user->options["hourDiff"];
  $day = date ( "l, F d, Y" , $adjusted_time_sc);
  $time = date ( "H:i" , $adjusted_time_sc ) ;

tsc() converts the database time to UNIX time.

There were actually no references to UTC or GMT in the code base, and
it never calls gmdate(). So it seems Magnus more or less carried on
the same UI time zone policy as UseMod. If it was installed on the
same server as UseMod, then it presumably would have displayed Pacific
Time.

On the other hand, the early version of phase3 I have here does make
references to UTC, for example:

"timezonetext"  => "Enter number of hours your local time differs
from server time (UTC).",

Instead of converting database dates to server time, phase3's
Language::date() and Language::time() just takes substrings of the
database date:

if( $wgAmericanDates ) {
    $d = $this->getMonthAbbreviation( substr( $ts, 4, 2 ) ) .
        " " . (0 + substr( $ts, 6, 2 )) . ", " .
        substr( $ts, 0, 4 );
} else {
    $d = (0 + substr( $ts, 6, 2 )) . " " .
        $this->getMonthAbbreviation( substr( $ts, 4, 2 ) ) . " " .
        substr( $ts, 0, 4 );
}

So for the elevation of UTC as a standard in the UI, I think we can
safely credit Lee Daniel Crocker.

-- Tim Starling

On 10/05/16 02:15, Brion Vibber wrote:
> In 2001 when Magnus was writing the initial attempt at a custom wiki engine
> in PHP backed by MySQL, he chose to use the TIMESTAMP column type.
> 
> TIMESTAMPs in MySQL 3 were automatically filled out by the server at INSERT
> time, normalized to UTC, and exposed in the 14-digit YYYYMMDDHHMMSS format
> we still know and love today.
> 
> The first TIMESTAMP column in a row also got automatically updated when you
> changed something in a row, so we ended up switching them from TIMESTAMP
> type to text strings and just filled out the initial values on the PHP
> side. We could have used DATETIME but that would have been a slightly
> harder transition at the time, and would have introduced the fun of the
> server settings trying to give you local time half the time...
> 
> -- brion
> 
> On Mon, May 9, 2016 at 1:39 AM, David Gerard  wrote:
> 
>> Question about obscure historical detail: Who picked UTC as Wikimedia
>> time? When was this, and what was the thought process?
>>
>> (the answer is almost certainly "Brion or Jimbo, early 2001, it's the
>> obvious choice", but I'm just curious as to details.)
>>
>>
>> - d.
>>
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Automatic image colorization

2016-05-03 Thread Tim Starling
On 04/05/16 05:21, Ori Livneh wrote:
> Colorization
> <https://en.wikipedia.org/wiki/Film_colorization#Digital_colorization>
> refers to the process of adding color to black-and-white photographs. This
> work was historically done by hand. These days, colorization is usually
> done digitally, with the support of specialized tooling. But it is still
> quite labor-intensive.
> 
> A forthcoming paper
> <http://hi.cs.waseda.ac.jp/~iizuka/projects/colorization/en/> from
> researchers at Waseda University of Japan have developed a method for
> automatic image colorization using deep learning neural network. The
> results are both impressive and easy to reproduce, as the authors have
> published
> their code <https://github.com/satoshiiizuka/siggraph2016_colorization> to
> GitHub with a permissive license.

Impressive, yes, but with lots of ridiculous errors. For example, the
ground often ends up green even when it's a road:

http://colorizr.io/image.php?uuid=cdcc0b2f-dc9e-4592-938b-b1146f75ecb5
http://colorizr.io/image.php?uuid=b868719e-b59a-42ed-ae9b-27f2a52fe246

Clothing is apparently always brown:

http://colorizr.io/image.php?uuid=a94e5ff7-25a1-4e61-b1be-b54f8301708d
http://colorizr.io/image.php?uuid=3f5dadcb-912c-40fb-82fa-b52dde6d280b
http://colorizr.io/image.php?uuid=687cb8e6-0031-443c-83b4-37d403a7fd34

Red is randomly splashed around with no apparent pattern:

http://colorizr.io/image.php?uuid=9567aab8-a94d-488d-a4b1-40b746649757
http://colorizr.io/image.php?uuid=3cce1ab2-b866-4ca6-b713-d7f49e392ab2
http://colorizr.io/image.php?uuid=d6d65eed-94e0-4a86-a772-057975d1a18c
http://colorizr.io/image.php?uuid=debdc3f9-369b-494f-929d-5cf5a5b38712

Sometimes feature identification fails spectacularly:

http://colorizr.io/image.php?uuid=44d1c028-074d-4162-be65-4200569b89d2

Is it good enough for Wikipedia? Even the best examples have subtle
defects.

> Should we have a bot that can perform colorization on demand,
> the way Rotatebot <https://commons.wikimedia.org/wiki/User:Rotatebot> can
> rotate images?

Well, Rotatebot uploads images without review.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Rob Lanphier appointed to the Architecture Committee

2016-04-25 Thread Tim Starling
At the previous meeting of the MediaWiki Architecture Committee (April
20), the members present approved the appointment of Rob Lanphier to
the committee.

Rob was the main instigator in the formation of the committee in 2014.
Lately he has been taking an active role, chairing the weekly meetings
and writing the meeting agenda. In recognition of the excellent work
he has been doing, and in the interests of transparency, we decided to
formalise his membership.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Reducing the environmental impact of the Wikimedia movement

2016-03-30 Thread Tim Starling
On 31/03/16 02:55, Katherine Maher wrote:
> IIRC, we included clean energy consumption as a factor in
> evaluating in our RFC for our choice of a backup colo a few years back

Since I strongly support emissions reduction, on my own initiative I
did an analysis of expected CO2 emissions of each of the candidate
facilities during the selection process of the backup colo. That's
presumably what you're referring to.

<https://docs.google.com/spreadsheets/d/1adt45Msw2o8Ml0s8S0USm9QLkW9ER3xCPkU9d2NJS4Y/edit#gid=0>

My conclusion was that codfw (the winner) was one of the worst
candidates for CO2 emissions. However, the price they were offering
was so much lower than the other candidates that I could not make a
rational case for removing it as an option. You could buy high-quality
offsets for our total emissions for much less than the price difference.

However, this observation does require us to actually purchase said
offsets, if codfw is to be represented as an ethical choice, and that
was never done.

codfw would not tell us their PUE, apparently because it was a
near-empty facility and so it would have technically been a very large
number. I thought it would be fair to account for marginal emissions
assuming a projected higher occupancy rate and entered 2.9 for them,
following a publication which gave that figure as an industry average.
It's a new facility, but it's not likely that they achieved an
industry-leading PUE since the climate in Dallas is not suitable for
evaporative cooling or "free" cooling.

> Ops runs a tight ship, and we're a relatively small footprint in our colos,
> so we don't necessarily have the ability to drive purchasing decisions
> based on scale alone.

I think it's stretching the metaphor to call ops a "tight ship". We
could switch off spare servers in codfw for a substantial power
saving, in exchange for a ~10 minute penalty in failover time. But it
would probably cost a week or two of engineer time to set up suitable
automation for failover and periodic updates.

Or we could have avoided a hot spare colo altogether, with smarter
disaster recovery plans, as I argued at the time. My idea wasn't
popular: Leslie Carr said she would not want to work for an
organisation that adopted the relaxed DR restoration time targets that
I advocated. And of course DR improvements were touted many times as
an effective use of donor funds.

Certainly you have a point about scale. Server hardware has extremely
rudimentary power management -- for example when I checked a couple of
years ago, none of our servers supported suspend-to-RAM, and idle
power usage hardly differed from power usage at typical load. So the
only option for reducing power usage of temporarily unused servers is
powering off, and powering back on via out-of-band management. WMF
presumably has little influence with motherboard suppliers. But we
could at least include power management and efficiency as
considerations when we evaluate new hardware purchases.

> At the time the report came out, we started talking to Lukas about how we
> could improve our efforts at the WMF and across the movement, but we've had
> limited bandwidth to move this forward in the Foundation (and some
> transitions in our Finance and Operations leadership, who were acting as
> executive sponsors). However, I think it's safe to say that we'd like to
> continue to reduce our environmental impact, and look forward to the
> findings of this effort.

We could at least offset our datacentre power usage, that would be
cheap and effective.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Rebasing old long-array commits

2016-03-10 Thread Tim Starling
Legoktm wrote:
> There's no need to do it manually. Just tell people to run the phpcs
> autofixer before they rebase, and the result should be identical to
> what's already there. And we can have PHPCS run in the other direction
> for backports ([] -> array()).

Unfortunately, it's not that simple. If you run phpcbf before you
rebase, you end up with a commit which changes every instance of
array() to [], even in lines the developer didn't touch. Then if any
of those phpcbf changes appear close to a recent real change in
master, the context won't match and you'll get a conflict.

A simple one-line change in a file with many array literals and many
intervening changes in master tends to end up with many conflicts on
rebase.

So I wrote a script to do it in a different way:

<https://phabricator.wikimedia.org/diffusion/MCUT/browse/master/rebase-long-array>

This script runs phpcbf on the base, and separately on the head of the
work branch, and then creates a patch based on the difference between
the two. So the patch contains short arrays on both sides. Then the
patch is applied to master to generate the new commit. I tried this on
https://gerrit.wikimedia.org/r/#/c/236508/ and it seemed to work.
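
In outline, the script does something like this (a condensed,
illustrative sketch -- the real script linked above is more careful;
it assumes phpcbf is configured with the short-array fixer):

  base=$(git merge-base origin/master mywork)
  git checkout --detach "$base"
  phpcbf .                          # convert array() to [] on the base
  git commit -aqm 'base, short arrays'
  fixedbase=$(git rev-parse HEAD)
  git checkout mywork
  phpcbf .                          # convert the branch head too
  git commit -aqm 'head, short arrays'
  git diff "$fixedbase" HEAD > change.patch   # [] on both sides now
  git checkout --detach origin/master
  git apply --3way change.patch               # the new commit for master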

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Last call on RFC: drop PHP 5.3/5.4 support

2016-01-21 Thread Tim Starling
This is a last call for new arguments and facts related to the
proposal to drop PHP 5.3 and PHP 5.4 support in MediaWiki core git master.

If you have anything new to say about this issue, please comment on
the Phabricator ticket:

https://phabricator.wikimedia.org/T118932

The Architecture Committee plans on making a decision on this issue on
the basis of the Phabricator comments in next week's committee meeting
(January 27).

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Ifexists across wikis

2015-12-06 Thread Tim Starling
On 07/12/15 06:29, Bartosz Dziewoński wrote:
> To add to what Alex and Florian said, the simple database lookup to
> check page existence is not actually that simple. When parsing a page,
> the query to determine link color (and to mark links to non-existent,
> redirect or disambig pages) is done in batches of 1000 links, after
> the whole page has been parsed and we know all the pages it links to.
> Special pages that have lists of links use a similar method.

Also, when you make a red link, and then someone creates the page,
people expect the link to turn blue straight away. That's implemented
using the pagelinks table -- when a page is created, we use pagelinks
to find all pages with red links to that page, update all their
page_touched fields, and purge them from Varnish, so that all the
links will turn blue in under a second.
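
For example, the lookup behind that update is essentially the
following (a sketch against the 2015-era core database API, given a
Title object $title; batching and error handling omitted):

  $dbr = wfGetDB( DB_SLAVE );
  $res = $dbr->select(
      'pagelinks',
      'pl_from', // the page_id of each page linking to $title
      array(
          'pl_namespace' => $title->getNamespace(),
          'pl_title' => $title->getDBkey(),
      ),
      __METHOD__
  );
  foreach ( $res as $row ) {
      // update page_touched for $row->pl_from and purge it from Varnish
  }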

It's possible to do that for interwiki links, but it increases the
amount of time it would take to implement such a feature. We currently
don't have a way to efficiently find all interwiki links to a page, so
one would have to be added.

-- Tim Starling



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Objections to PHP 5.5 version requirement

2015-12-02 Thread Tim Starling
In last week's RFC meeting, it was proposed that we require PHP 5.5
for MediaWiki core git master (to be 1.27.0), and this was approved
without objections.

After the meeting, Mark Clements (HappyDog), a long-term valued
contributor to the project, expressed strenuous objections to this
decision on the ticket: <https://phabricator.wikimedia.org/T118932>

On the ticket, I proposed a process whereby the RFC will be reopened
for review if any existing Phabricator user will second the motion. If
you do object, please register your objection on Phabricator.

In the meantime, please do not merge any changes which require PHP 5.4+.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-11-17 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* API-driven web front-end
<https://phabricator.wikimedia.org/T111588>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 22:00
* US PST: Wednesday 14:00
* Europe CET: Wednesday 23:00
* Australia AEDT: Thursday 09:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-11-04 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Streamlining Composer usage
<https://phabricator.wikimedia.org/T105638>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 22:00
* US PST: Wednesday 14:00
* Europe CET: Wednesday 23:00
* Australia AEDT: Thursday 09:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Allowing empty list items

2015-10-29 Thread Tim Starling
Currently, it is not possible to have an empty list item, for example, in:

* A
*
* B

or in:

# A
#
# B

The middle list item will be removed. We (the parsing team) would like
to change that, since it is counter-intuitive and makes things
difficult for VisualEditor.

The challenge is that we suspect some templates are taking advantage
of the removal of empty list items, as a shortcut to omit a list item
when a parameter is empty. Such templates will have to be migrated to
explicitly omit the affected list items. So we are planning on
introducing logging and tools which will allow this problem to be
quantified and fixed.
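
For example, a template along these lines (hypothetical parameters)
currently relies on the cleanup:

* {{{first}}}
* {{{second|}}}

A call that leaves "second" empty yields a one-item list today,
because the resulting empty item is silently dropped. Once empty items
are preserved, the same call would render an empty bullet, so the
template would need to emit that line conditionally instead (e.g. with
ParserFunctions' {{#if:}}).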

A change was recently merged which causes empty list items to be
hidden with CSS, instead of removing them altogether. This change will
be deployed next week as part of the usual weekly release train. This
should not cause any visible changes. However it does enable
client-side tools to be developed which will show an article with
empty list items included, the way we imagine it will ultimately be
rendered.

For more information, see:
https://gerrit.wikimedia.org/r/#/c/246148/
https://phabricator.wikimedia.org/T49673

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-10-27 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Dependency Injection for MediaWiki core
<https://phabricator.wikimedia.org/T384>

In the event that Daniel is not able to attend, we will instead discuss:

* Implementing the reliable event bus using Kafka
<https://phabricator.wikimedia.org/T88459>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CET: Wednesday 22:00
* Australia AEDT: Thursday 08:00

-- Tim Starling



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-10-21 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Hygienic (heredoc) arguments for templates
<https://phabricator.wikimedia.org/T114432>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEDT: Thursday 08:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Architecture Committee expansion

2015-10-08 Thread Tim Starling
In a recent meeting of the MediaWiki Architecture Committee, it was
agreed that Timo Tijhof (Krinkle) would be invited to join the
committee. Timo accepted this invitation.

Timo is a talented software engineer with experience in many areas,
especially the MediaWiki core and JavaScript frontend components such
as ResourceLoader and VisualEditor. He currently works for WMF in the
performance team. I look forward to working with him on the
Architecture Committee.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-10-07 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Overhaul Interwiki map, unify with Sites and WikiMap
<https://phabricator.wikimedia.org/T113034>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEDT: Thursday 08:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-08-26 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Replace Tidy with HTML 5 parse/reserialize
<https://phabricator.wikimedia.org/T89331>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC: Replace Tidy with HTML 5 parse/reserialize

2015-08-19 Thread Tim Starling
On 20/08/15 01:21, Erwin Dokter wrote:
> I mentioned this once before:
> 
> http://www.htacg.org/tidy-html5/
> 
> While Tidy died in 2008, this fork lives on and is HTML5 aware. That
> will at least solve a lot of problems *caused* by Tidy, such as not
> allowing block elements inside inline elements (which is allowed in
> HTML5).
> 
> Can we at least evaluate if this is a suitable interim solution?

That's not a solution to the problems that we are trying to solve.

As I said in my original post, my number one problem with Tidy is that
it changes. So I am very happy that it is not in active development.
Switching to a fork that is actively maintained would be much worse.
It would be like the switch from Tidy to the proposed HTML
reserializer web service, except that the pain would be repeated every
time we upgrade our Linux distribution.

The other problem with Tidy is that it is poorly specified and has
only one implementation. Switching to a fork of it doesn't improve the
situation.

HTML 5 has not significantly relaxed the rules about block elements
inside inline elements. The terminology has changed: now instead of
inline elements we have "phrasing content" and instead of block
elements we have "flow content". You're still not allowed to put a
<div> inside a <span>, because <span> is phrasing content and <div> isn't.

The "children" column here has a summary:

http://www.w3.org/TR/html5/index.html#elements-1

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC: Replace Tidy with HTML 5 parse/reserialize

2015-08-19 Thread Tim Starling
On 13/08/15 15:43, MZMcBride wrote:
> Or could we replace Tidy with nothing? Relying on the principle of
> "garbage in, garbage out" seems reasonable in some ways. And modern
> browsers are fairly adept at handling moderately bad HTML.

The HTML 5 spec makes a distinction between valid, balanced HTML and
error recovery algorithms. Browsers are basically the only clients
able to handle moderately bad HTML, and as I've previously said in
discussions of HTML 5 output, I don't think it is acceptable to screw
over all non-browser clients by sending output that relies on obscure
details of the HTML 5 spec. I think XHTML or something close to it is
an appropriate machine-readable output format.

Have you looked at my survey on the bug? Compliant HTML 5 parsers are
10-30k source lines and are in pretty short supply.

Wikitext is not meant to be easily machine-readable, it is meant to be
easily human-writable. Unbalanced tags in HTML are errors, but in
wikitext they are allowed. This is a design choice. Most humans don't
really care about the spec, they just want the machine to figure out
what they meant.

And, as several others have noted, you can't just disable Tidy, since
the effects of unclosed tags are not confined to the content area, and
there is a large amount of existing content that depends on it. I have
seen the effects of Tidy being accidentally disabled on the English
Wikipedia; it is not pleasant.

Am I correct in saying that MZMcBride is the only person in this
thread in favour of the idea of getting rid of HTML cleanup?


By the way, you can see my work in progress on an HTML reserializer
web service in the mediawiki/services/html5depurate project on Gerrit:

<https://gerrit.wikimedia.org/r/#/q/status:open+project:mediawiki/services/html5depurate+branch:master,n,z>

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-08-19 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Multi-Content Revisions
<https://phabricator.wikimedia.org/T107595>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC: Replace Tidy with HTML 5 parse/reserialize

2015-08-11 Thread Tim Starling
Language choice. Tidy is written in C. Note that I included shelling
out to Node.js as an option in my original post. It's not really part
of Parsoid; it's a JavaScript library that Parsoid uses. We would use
the same JavaScript library with a few lines of wrapper code.

-- Tim Starling

On 12/08/15 10:24, Trevor Parscal wrote:
> Interesting. What is the cause of the slower speed?
> 
> - Trevor
> 
> On Tuesday, August 11, 2015, Gabriel Wicke  wrote:
> 
>> On Tue, Aug 11, 2015 at 5:16 PM, Trevor Parscal wrote:
>>
>>> Is it possible use part of the Parsoid code to do this?
>>>
>>
>> It is possible to do this in Parsoid (or any node service) with this line:
>>
>>  var sanerHTML = domino.createDocument(input).outerHTML;
>>
>> However, performance is about 2x worse than current tidy (116ms vs. 238ms
>> for Obama), and about 4x slower than the fastest option in our tests. The
>> task has a lot more benchmarks of various options.
>>
>> Gabriel
>>
>>
>>
>>
>>
>>>
>>> - Trevor
>>>
>>> On Tuesday, August 11, 2015, Tim Starling wrote:
>>>
>>>> I'm elevating this task of mine to RFC status:
>>>>
>>>> https://phabricator.wikimedia.org/T89331
>>>>
>>>> Running the output of the MediaWiki parser through HTML Tidy always
>>>> seemed like a nasty hack. The effects on wikitext syntax are arbitrary
>>>> and change from version to version. When we upgrade our Linux
>>>> distribution, we sometimes see changes in the HTML generated by given
>>>> wikitext, which is not ideal.
>>>>
>>>> Parsoid took a different approach. After token-level transformations,
>>>> tokens are fed into the HTML 5 parse algorithm, a complex but
>>>> well-specified algorithm which generates a DOM tree from quirky input
>>>> text.
>>>>
>>>> http://www.w3.org/TR/html5/syntax.html
>>>>
>>>> We can get nearly the same effect in MediaWiki by replacing the Tidy
>>>> transformation stage with an HTML 5 parse followed by serialization of
>>>> the DOM back to HTML. This would stabilize wikitext syntax and resolve
>>>> several important syntax differences compared to Parsoid.
>>>>
>>>> However:
>>>>
>>>> * I have not been able to find any PHP implementation of this
>>>> algorithm. Masterminds and Ressio do not even attempt it. Electrolinux
>>>> attempts it but does not implement the error recovery parts that are
>>>> of interest to us.
>>>> * Writing our own would be difficult.
>>>> * Even if we did write it, it would probably be too slow.
>>>>
>>>> So the question is: what language should we use? Since this is the
>>>> standard programmer troll question, please bring popcorn.
>>>>
>>>> The best implementation of this algorithm is in Java: the validator.nu
>>>> parser is maintained by Mozilla, and has source translation to C++,
>>>> which is used by Mozilla and could potentially be used for an HHVM
>>>> extension.
>>>>
>>>> There is also a Rust port (also written by Mozilla), and notable
>>>> implementations in JavaScript and Python.
>>>>
>>>> For WMF, a Java service would be quite easily done, and I have
>>>> prototyped it already. An HHVM extension might also be possible. A
>>>> non-service fallback for small installations might be Node.js or a
>>>> compiled binary from Rust or C++.
>>>>
>>>> -- Tim Starling
>>>>
>>>>
>>>> ___
>>>> Wikitech-l mailing list
>>>> Wikitech-l@lists.wikimedia.org  
>>>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>> ___
>>> Wikitech-l mailing list
>>> Wikitech-l@lists.wikimedia.org 
>>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>>
>>
>>
>>
>> --
>> Gabriel Wicke
>> Principal Engineer, Wikimedia Foundation
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org 
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC: Replace Tidy with HTML 5 parse/reserialize

2015-08-11 Thread Tim Starling
I'm elevating this task of mine to RFC status:

https://phabricator.wikimedia.org/T89331

Running the output of the MediaWiki parser through HTML Tidy always
seemed like a nasty hack. The effects on wikitext syntax are arbitrary
and change from version to version. When we upgrade our Linux
distribution, we sometimes see changes in the HTML generated by given
wikitext, which is not ideal.

Parsoid took a different approach. After token-level transformations,
tokens are fed into the HTML 5 parse algorithm, a complex but
well-specified algorithm which generates a DOM tree from quirky input
text.

http://www.w3.org/TR/html5/syntax.html

We can get nearly the same effect in MediaWiki by replacing the Tidy
transformation stage with an HTML 5 parse followed by serialization of
the DOM back to HTML. This would stabilize wikitext syntax and resolve
several important syntax differences compared to Parsoid.

However:

* I have not been able to find any PHP implementation of this
algorithm. Masterminds and Ressio do not even attempt it. Electrolinux
attempts it but does not implement the error recovery parts that are
of interest to us.
* Writing our own would be difficult.
* Even if we did write it, it would probably be too slow.

So the question is: what language should we use? Since this is the
standard programmer troll question, please bring popcorn.

The best implementation of this algorithm is in Java: the validator.nu
parser is maintained by Mozilla, and has source translation to C++,
which is used by Mozilla and could potentially be used for an HHVM
extension.

There is also a Rust port (also written by Mozilla), and notable
implementations in JavaScript and Python.

For WMF, a Java service would be quite easily done, and I have
prototyped it already. An HHVM extension might also be possible. A
non-service fallback for small installations might be Node.js or a
compiled binary from Rust or C++.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting rules

2015-08-05 Thread Tim Starling
I've been trying to encourage a certain structure to our weekly IRC
meetings, by means of some brief statements in the meeting and by
talking to people afterwards about what I was trying to achieve. But
this approach has led to frustration and miscommunication. I think
it's about time I wrote my thoughts out in full for everyone.

What I want is pretty modest and achievable.

I want to have a brief wrap-up period, lasting 5-10 minutes at the end
of the meeting, where the regular flow of discussion is suspended, and
we instead focus on helping the RFC author and other implementors, by
producing action items, meeting summary notes, and if possible, RFC
resolution (acceptance or rejection).

At the end of this wrap-up period, the #endmeeting command will be
given. Then you are free to continue your discussions unlogged,
without expecting all relevant parties to remain in attendance.

We are all engineers, and we love thinking about hard problems and
clever solutions. That is why it is important that discussion be
suspended. Otherwise, it is too difficult to focus on the meeting goals.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-08-05 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Streamlining Composer usage
<https://www.mediawiki.org/wiki/Requests_for_comment/Streamlining_Composer_usage>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-07-29 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Content model storage
<https://www.mediawiki.org/wiki/Requests_for_comment/Content_model_storage>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-06-24 Thread Tim Starling
In the next RFC meeting, we will discuss MediaWiki architectural focus
areas and strategic priorities:

https://www.mediawiki.org/wiki/Architecture_focus_2015

The architecture committee has developed this draft document, and we'd
like to know what people think of it.

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-06-17 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Request timeouts and retries
https://phabricator.wikimedia.org/T97204

* Re-evaluate varnish-level request-restart behavior on 5xx
https://phabricator.wikimedia.org/T97206

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-06-10 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Create a proper command-line runner for MediaWiki maintenance tasks
https://phabricator.wikimedia.org/T99268

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC meeting this week

2015-05-14 Thread Tim Starling
On 14/05/15 09:11, Matthew Flaschen wrote:
> Outcome of this was
> "just submit a patch for it and we can continue the discussion in
> gerrit".

Note that the patch has been in Gerrit for almost 24 hours with no
negative comments on the principle, despite me adding almost everyone
from the meeting as a reviewer.

https://gerrit.wikimedia.org/r/#/c/210856/

If there are still no negative comments after the [WIP] tag is
removed, I will take that as evidence that the objections raised in
the meeting have been withdrawn and I will approve the patch.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-05-12 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Improving extension management
<https://www.mediawiki.org/wiki/Requests_for_comment/Improving_extension_management>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Multimedia team?

2015-05-10 Thread Tim Starling
On 10/05/15 07:06, Brian Wolff wrote:
> People have been talking about vr for a long time. I think there is more
> pressing concerns (e.g. video). I suspect VR will stay in the video game
> realm  or gimmick realm for a while yet

Maybe VR is a gimmick, but VRML, or X3D as it is now called, could be
a useful way to present 3D diagrams embedded in pages. Like SVG, we
could use it with or without browser support.

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-05-06 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFCs:

* Request timeouts and retries
<https://phabricator.wikimedia.org/T97204>

* Re-evaluate varnish-level request-restart behavior on 5xx
<https://phabricator.wikimedia.org/T97206>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-04-29 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Integrate file revisions with description page history
<https://phabricator.wikimedia.org/T96384>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] RFC meeting this week

2015-04-22 Thread Tim Starling
In the next RFC meeting, we will discuss the following RFC:

* Business Layer Architecture on budget
<https://www.mediawiki.org/wiki/Requests_for_comment/Business_Layer_Architecture_on_budget>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 23:00
* Australia AEST: Thursday 07:00

-- Tim Starling


[Wikitech-l] RFC meeting this week

2015-04-14 Thread Tim Starling
In the next RFC meeting, we would like to discuss the following RFC:

* Watch Categorylinks
<https://www.mediawiki.org/wiki/Requests_for_comment/Watch_Categorylinks>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 22:00
* Australia AEST: Thursday 07:00

-- Tim Starling



[Wikitech-l] RFC meeting this week (new time)

2015-04-07 Thread Tim Starling
In the next RFC meeting we will discuss the following RFCs:

* Clean up URLs
<https://www.mediawiki.org/wiki/Requests_for_comment/Clean_up_URLs>

* Assert
<https://www.mediawiki.org/wiki/Requests_for_comment/Assert>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CEST: Wednesday 22:00
* Australia AEST: Thursday 07:00

-- Tim Starling




[Wikitech-l] RFC meeting this week

2015-03-18 Thread Tim Starling
In the next RFC meeting we will discuss the following RFC:

* Master & slave datacenter strategy for MediaWiki
<https://www.mediawiki.org/wiki/Requests_for_comment/Master_%26_slave_datacenter_strategy_for_MediaWiki>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CET: Wednesday 22:00
* Australia AEDT: Thursday 08:00

-- Tim Starling



[Wikitech-l] RFC meeting this week (new time)

2015-03-10 Thread Tim Starling
In the next RFC meeting we will discuss the following RFCs:

* Service split along presentation vs data manipulation line
<https://www.mediawiki.org/wiki/Requests_for_comment/Service_split_along_presentation_vs_data_manipulation_line>

* Support for user-specific page lists in core
<https://www.mediawiki.org/wiki/Requests_for_comment/Support_for_user-specific_page_lists_in_core>

The meeting time is the same as last week as measured in UTC, which
means that it will be an hour later for people in the US who observe
daylight savings time.

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PDT: Wednesday 14:00
* Europe CET: Wednesday 22:00
* Australia AEDT: Thursday 08:00

-- Tim Starling


[Wikitech-l] RFC meeting this week

2015-03-03 Thread Tim Starling
In the next RFC meeting we will discuss the following RFCs:

* AuthManager
<https://www.mediawiki.org/wiki/Requests_for_comment/AuthManager>

* Allow ContentHandler to expose structured data to the search engine
<https://phabricator.wikimedia.org/T89733>

The second one is not a proper RFC page on mediawiki.org, because
Daniel Kinzler felt like being an anti-wiki rebel this week ;)

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PST: Wednesday 13:00
* Europe CET: Wednesday 22:00
* Australia AEDT: Thursday 08:00

-- Tim Starling



Re: [Wikitech-l] Html.php line 269

2015-02-18 Thread Tim Starling
On 19/02/15 08:43, Gergo Tisza wrote:
> On Wed, Feb 18, 2015 at 1:38 PM, Petr Bena  wrote:
> 
>> (Perhaps wgWellFormedXml is true by default?)
> 
> 
> It is: https://www.mediawiki.org/wiki/Manual:$wgWellFormedXml

There was a Bugzilla report and Gerrit change requesting that it be
set to false:

https://phabricator.wikimedia.org/T52040
https://gerrit.wikimedia.org/r/#/c/70036/

I was against it, partly because of the omitted  tag.
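
For anyone who hasn't looked at Html.php recently: roughly speaking,
the setting toggles between XML-style and HTML5-style output. A quick
illustration (approximate output; the details depend on the MediaWiki
version):

  // With $wgWellFormedXml = true (the default):
  Html::element( 'br' );  // '<br/>'

  // With $wgWellFormedXml = false:
  Html::element( 'br' );  // '<br>'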

-- Tim Starling



[Wikitech-l] RFC meeting this week

2015-02-15 Thread Tim Starling
In the next RFC meeting we will discuss the following RFC:

* Improving extension management
<https://www.mediawiki.org/wiki/Requests_for_comment/Improving_extension_management>

Also, we will have a general discussion about services in the second
half hour.

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PST: Wednesday 13:00
* Europe CET: Wednesday 22:00
* Australia AEDT: Thursday 08:00

-- Tim Starling



Re: [Wikitech-l] Boil the ocean, be silly, throw the baby out with bathwater, demolish silos, have fun

2015-02-15 Thread Tim Starling
On 14/02/15 09:39, Max Semenik wrote:
> On Fri, Feb 13, 2015 at 2:23 PM, Legoktm 
> wrote:
>>
>>
>> https://phabricator.wikimedia.org/T71366
> 
> 
> Note that my proposal is explicitly different from Jon's plans about that
> bug: he wants to continue overriding special pages, etc. while I want to
> leave everything like this outside of the skin, including its custom
> JS-based wikitext editor.

So how would this work, exactly? Would Minerva's navigation drawer be
refactored so that it uses Skin::buildSidebar(),
SkinTemplate::buildContentNavigationUrls(), etc.? And how would MF
reapply its special page replacement? Hooks into those same common
functions? Would it be possible to get the mobile special pages on a
non-minerva skin?

I suppose the OutputPage::setTarget() call would also be left in MF.
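
For concreteness, here is the kind of thing I mean by reusing the
common sidebar code, as a minimal sketch. The SkinBuildSidebar hook
exists; the handler body is hypothetical:

  $wgHooks['SkinBuildSidebar'][] = function ( Skin $skin, array &$bar ) {
      // Hypothetical: reshape the standard sidebar structure into the
      // sections Minerva's navigation drawer expects, rather than the
      // skin building its drawer menu from scratch.
      $nav = isset( $bar['navigation'] ) ? $bar['navigation'] : array();
      $bar = array( 'navigation' => $nav );
      return true;
  };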

-- Tim Starling



Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-05 Thread Tim Starling
On 04/02/15 16:59, Marko Obrovac wrote:
> For v1, however, we plan to
> provide only logical separation (to a certain extent) via modules which can
> be dynamically loaded/unloaded from RESTBase. In return, RESTBase will
> provide them with routing, monitoring, caching and authorisation out of the
> box.

We already have routing, monitoring and caching in Varnish. That seems
to me like a good place to implement further service routing.

<https://git.wikimedia.org/blob/operations%2Fpuppet/4a7f5ce62d9cdd1ace20ca6c489cbdb538503750/templates%2Fvarnish%2Fmisc.inc.vcl.erb>

It's simple to configure, has excellent performance and scalability,
and already has monitoring and a distributed logging system in place.

It doesn't have authorisation, but I thought that was going to be in a
separate service from RESTBase anyway.

-- Tim Starling



[Wikitech-l] MediaWiki-schroot

2015-02-04 Thread Tim Starling
For the last year, I've been using schroot to run my local MediaWiki
test instance. After hearing some gripes about Vagrant at the Dev
Summit, I decided to share this idea by automating the setup procedure
and committing the scripts I use. You can read about it here:

https://www.mediawiki.org/wiki/MediaWiki-schroot

-- Tim Starling



Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-03 Thread Tim Starling
On 04/02/15 12:46, Dan Garry wrote:
> To address these challenges, we are considering performing some or all of
> these tasks in a service developed by the Mobile Apps Team with help from
> Services. This service will hit the APIs we currently hit on the client,
> aggregate the content we need on the server side, perform transforms we're
> currently doing on the client on the server instead, and serve the full
> response to the user via RESTBase. In addition to providing a public API
> end point, RESTBase would help with common tasks like monitoring, caching
> and authorisation.

I don't really understand why you want it to be integrated with
RESTBase. As far as I can tell (it is hard to pin these things down),
RESTBase is a revision storage backend and possibly a public API for
that backend. I thought the idea of SOA was to separate concerns.
Wouldn't monitoring, caching and authorization be best done as a
node.js library which RESTBase and other services use?

-- Tim Starling



Re: [Wikitech-l] The future of shared hosting

2015-01-18 Thread Tim Starling

On 16/01/15 17:38, Bryan Davis wrote:

The solution to these issues proposed in the RFC is to create
independent services (eg Parsoid, RESTBase) to implement features that
were previously handled by the core MediaWiki application. Thus far
Parsoid is only required if a wiki wants to use VisualEditor. There
has been discussion however of it being required in some future
version of MediaWiki where HTML is the canonical representation of
articles {{citation needed}}.


Parsoid depends on the MediaWiki parser; it calls it via api.php. It's 
not a complete, standalone implementation of wikitext-to-HTML 
transformation.


HTML storage would be a pretty simple feature, and would allow 
third-party users to use VE without Parsoid. It's not so simple to use 
Parsoid without the MediaWiki parser, especially if you want to 
support all existing extensions.
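
By "simple" I mean something on the order of the following sketch (the 
table and helper are hypothetical, purely to illustrate the shape of it):

  // Hypothetical: persist the editor's HTML next to the wikitext so
  // that VE can round-trip without calling out to a Parsoid service.
  function saveRenderedHtml( $revId, $html ) {
      $dbw = wfGetDB( DB_MASTER );
      $dbw->insert( 'html_blobs',
          array( 'hb_rev_id' => $revId, 'hb_html' => $html ) );
  }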


So, as currently proposed, HTML storage is actually a way to reduce 
the dependency on services for non-WMF wikis, not to increase it.


Based on recent comments from Gabriel and Subbu, my understanding is 
that there are no plans to drop the MediaWiki parser at the moment.



This particular future may or may not be
far off on the calendar, but there are other services that have been
proposed (storage service, REST content API) that are likely to appear
in production use at least for the Foundation projects within the next
year.


There is a proposal to move revision storage to Cassandra, possibly 
with node.js middleware. I don't think that project requires dropping 
support for revision storage in MySQL. I think MediaWiki should be a 
client for multiple revision storage backends, as we are already 
doing for file storage.
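
To sketch the shape of that abstraction (a hypothetical interface, not 
existing code; FileBackend is the analogy):

  interface RevisionTextStore {
      /** Fetch raw revision text by its storage address, or false. */
      public function getText( $address );
      /** Store text and return an opaque storage address. */
      public function storeText( $text );
  }

MySQL/External Storage and Cassandra would then just be two 
implementations selected by configuration.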


There's no reason to think Cassandra is the best storage system that 
will ever be conceived, the end of history. There will be new 
technologies in the future, and an abstract backend API for revision 
storage will help us to utilise them when they become available.



One of the bigger questions I have about the potential shift to
requiring services is the fate of shared hosting deployments of
MediaWiki.


As long as there are no actual reasons for dropping pure-PHP core 
functionality, the idea of WMF versus shared hosting is a false dichotomy.


Note that feature parity with Wikipedia has not been possible in pure 
PHP since 2003, when texvc was introduced. And now that we have 
Scribunto, you can't even copy an infobox template from Wikipedia to a 
pure-PHP hosted MediaWiki instance. The shared hosting environment has 
never been preferred, and I'm not particularly attached to it. Support 
for it is an accidental consequence of MediaWiki's simplicity and 
flexibility, and those qualities should be valued for their own reasons.


-- Tim Starling



Re: [Wikitech-l] Fwd: No more Architecture Committee?

2015-01-15 Thread Tim Starling

On 16/01/15 15:04, Rob Lanphier wrote:

Still, the uncomfortable shrugging continues.  The group is broader,
but still lacks the breadth, particularly in front end and in the
development of newer services such as Parsoid and RESTBase.


It appears that we won't be able to keep the members we have, let 
alone broaden our membership. Mark has said that it's not worth his 
time, Brion hasn't attended a committee meeting since November, Daniel 
has given hints that his involvement might not continue, and Roan has 
been deeply skeptical from the outset. I think I am the only one who 
is committed to it, and that is out of a sense of duty rather than 
rational reflection.


The problem is that the work is mostly administrative and the 
committee is not empowered. Committee members are skeptical of many of 
the ideas floating around at the moment, and have their own ideas about 
what things should be priorities, but have no expectation that those 
ideas will be considered for resourcing.


We review the technical details of design proposals, but I think most 
committee members do not find that to be engaging. We've all reviewed 
things before, and will presumably continue to do so regardless of 
whether we are on a committee. We could veto technical details as 
individuals, so what is the committee for?



I believe no one would dispute the credentials
of every member of the group.  Brion, Tim, and Mark have an extremely
long history with the project, being employees #1, #2, and #3 of the
WMF respectively, and all having contributed massively to the success
of Wikipedia and to MediaWiki as general purpose wiki software.  In
most open source projects, one of them would probably be BFDL[5].
Roan and Daniel are more "recent", but only in relative terms, and
also have very significant contributions to their name.


It's not a community open source project; it is an engineering 
organisation with a strict hierarchy. We don't have a BDFL; we have a VPE.



On the leadership front, let me throw out a hypothetical:  should we
have MediaWiki 2.0, where we start with an empty repository and build
up?  If so, who makes that decision?  If not, what is our alternative
vision?  Who is going to define it?  Is what we have good enough?


Sorry to labour the point, but the way to go about this at present is 
pretty straightforward, and it doesn't involve the architecture 
committee. You just convince the management (Damon, Erik, etc.) that 
it is a good thing to do, get yourself appointed head of the 
"MediaWiki 2.0" team, hire a bunch of people who agree with your 
outlook, and get existing engineers transferred to your team. It's not 
even hypothetical, we've seen this pattern in practice.


-- Tim Starling



[Wikitech-l] RFC meeting this week

2015-01-12 Thread Tim Starling

In the next RFC meeting we would like to discuss the following RFCs:

* Support for user-specific page lists in core
<https://www.mediawiki.org/wiki/Requests_for_comment/Support_for_user-specific_page_lists_in_core>

* Guidelines for extracting, publishing and managing libraries
<https://www.mediawiki.org/wiki/Requests_for_comment/Guidelines_for_extracting,_publishing_and_managing_libraries>

The meeting will be on the IRC channel #wikimedia-office on
chat.freenode.net at the following time:

* UTC: Wednesday 21:00
* US PST: Wednesday 13:00
* Europe CET: Wednesday 22:00
* Australia AEDT: Thursday 08:00

-- Tim Starling


Re: [Wikitech-l] Stance on Social Media

2015-01-12 Thread Tim Starling

On 12/01/15 17:11, Jay Ashworth wrote:

I personally attribute that to "we're so small, we have to cave on this point
or no one will know we're here", a problem a small journal might have, but
which Wikipedia certainly does not.


Do you suppose Physical Review (the lumbering giant of physics 
publishing) has that problem?


<http://journals.aps.org/prb/accepted/99078Yc6K231ec4c06a66cd9e85c6575ddf278adc>

Or PLOS ONE?

<http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0107794>

Or Philosophical Transactions A?

<http://rsta.royalsocietypublishing.org/content/373/2035/20140347>

All have share links.

-- Tim Starling



Re: [Wikitech-l] Stance on Social Media

2015-01-12 Thread Tim Starling

On 12/01/15 16:35, MZMcBride wrote:

What problem are we trying to solve here?


The idea is to increase the number of shares, thus increasing the 
number of people who read our content, thus educating more people, 
thus better meeting our mission.



If the answer is that we want to make it painless to submit
noise into the ether


If you think Wikipedia is "noise", compared to the usual stuff that 
gets shared on Facebook, maybe you're contributing to the wrong 
project. The idea is to make sharing more frequent, not to make it easier.


-- Tim Starling

