[Wikitech-l] Re: Renaming Gitlab repos/releng/release Default Branch

2023-03-03 Thread Adam Wight

Great to see that we're following industry best practice!

Another advantage of using "main" is that it more clearly documents that 
the main branch is one of many potential branches, whereas "master" 
could be seen as implying a specific branching strategy in which fixes 
are backported from master into release branches.


A tip for fellow developers: you'll want to configure git locally like 
this for compatibility:


  git config --global init.defaultBranch main
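
For an existing clone of a repository whose upstream default branch has 
already been renamed, something along these lines should switch you over 
(a rough sketch; adjust the remote name if yours isn't "origin"):

  git fetch origin
  git branch -m master main
  git branch -u origin/main main
  git remote set-head origin -a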

-[[mw:User:Adamw]]

On 3/2/23 23:55, Jeena Huneidi wrote:

Hi all,

I've changed the default branch for the repos/releng/release repo on 
GitLab[0] from 'master' to 'main'. Please remember to change branches 
for your local repositories. I'll be deleting the 'master' branch next 
week.


[0]: 

On Wed, Feb 22, 2023 at 2:47 PM Jeena Huneidi  
wrote:


One week from today, or sometime soon after, the
repos/releng/release default branch will be renamed from “master”
to “main” [0][1].

We will send notification when we complete the renaming.

If you use this repository, please make sure to change branches
after the renaming has been done [2].

Release Engineering will make any necessary changes to support
this rename, but if you have concerns about something that might
be affected by this change or overlooked, please do notify us.

Phab task for reference: https://phabricator.wikimedia.org/T329770

Thank you for your vigilance!

[0]: 
[1]: 
[2]: Instructions:



-- 
Jeena Huneidi

Software Engineer, Release Engineering
Wikimedia Foundation



--
Jeena Huneidi
Software Engineer, Release Engineering
Wikimedia Foundation

___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Call for projects and mentors for Google Summer of Code 2023 and Outreachy Round 26 is ongoing!

2023-02-24 Thread Adam Wight
Thank you for the note—I hadn't realized that Outreachy participation 
details can't be published until the contribution period begins on March 
6th.


Please watch this page for more information about the research project 
generally: 
https://meta.wikimedia.org/wiki/Research:Content_Translation_language_imbalances


-Adam

On 2/24/23 13:12, Zoran Dori wrote:

Hello,
the linked Phabricator task is "restricted" for me, I'm unable to see it.

Best regards,
Zoran

On Fri, 24 Feb 2023 at 12:49, Adam Wight wrote:


Research into translation imbalances

=


A short round of initial investigation by Jan Dittrich and myself
has revealed a strong pattern in how Content Translation is used:
comparing the number of articles being translated from Wikipedia
languages with more editors and articles into languages with fewer
articles, we’ve found an imbalance as large as 100:1 in favor of
translations from the larger to the smaller language.


There is much to be done to understand the source of this
imbalance, whether it’s desirable, and whether we can design an
intervention which leads to a more balanced exchange between
languages.


We kindly invite Outreachy applicants to consider helping us with
this project, and we will co-create a flexible mentoring
opportunity in whichever of these fields is most interesting to you:

 * User experience research
 * Programming
 * Data analysis


Please see the longer version of this proposal for more details
and to find “microtasks” for getting started,


https://phabricator.wikimedia.org/T328597 - Research into
translation imbalances


Kind regards,

[[mw:Adamw <https://www.mediawiki.org/wiki/User:Adamw>]] and
[[meta:Simulo <https://meta.wikimedia.org/wiki/User:Simulo>]]

___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/


___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Call for projects and mentors for Google Summer of Code 2023 and Outreachy Round 26 is ongoing!

2023-02-24 Thread Adam Wight
Research into translation imbalances

=

A short round of initial investigation by Jan Dittrich and myself has
revealed a strong pattern in how Content Translation is used: comparing the
number of articles being translated from Wikipedia languages with more
editors and articles into languages with fewer articles, we’ve found an
imbalance as large as 100:1 in favor of translations from the larger to the
smaller language.

There is much to be done to understand the source of this imbalance,
whether it’s desirable, and whether we can design an intervention which
leads to a more balanced exchange between languages.

We kindly invite Outreachy applicants to consider helping us with this
project, and we will co-create a flexible mentoring opportunity in
whichever of these fields is most interesting to you:

   - User experience research
   - Programming
   - Data analysis


Please see the longer version of this proposal for more details and to find
“microtasks” for getting started,

https://phabricator.wikimedia.org/T328597 - Research into translation
imbalances

Kind regards,

[[mw:Adamw ]] and [[meta:Simulo
]]
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Feedback wanted: PHPCS in a static types world

2022-10-31 Thread Adam Wight

+1 to moving towards more type hints

And just to expand on Amir's comment: adding a type declaration for a 
return value is usually safe, because you can see exactly which types are 
possible, but parameter type declarations can be dangerous unless you look 
at all usages of the function.  Nulls can be especially surprising: passing 
null into a "string" parameter, for example, will throw a fatal TypeError. 
The declaration would need to be "?string" to accept null, and optional 
parameters aren't always distinguished in the phpdoc comments.
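
A contrived sketch of that trap (function names here are hypothetical): the 
phpdoc claimed "string", some callers pass null, and converting the comment 
into a declaration turns those calls into fatal errors:

  // @param string $label   <-- what the phpdoc claimed
  function renderLabel( string $label ): string {
      return htmlspecialchars( $label );
  }
  renderLabel( null );  // TypeError: null given where string was declared

  // The tolerant migration keeps those callers working:
  function renderLabelSafely( ?string $label ): string {
      return htmlspecialchars( $label ?? '' );
  }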


-Adam

On 10/30/22 19:10, Amir Sarabadani wrote:

Hi,
A great initiative, thank you!

I am generally in favor of this proposal but just want to give a 
cautionary tale. It's a bit off-topic but important.
Given that there is no actual enforcing mechanism for the 
documentation typehints, some of them have actually drifted from 
reality. I caused a UBN ("unbreak now") bug once by relying on the 
documentation for the type of a variable. So my request is to avoid mass 
migration of documentation type hints to PHP type declarations.


Best

On Sun, Oct 30, 2022 at 2:02 PM, Daniel Kinzler wrote:


Thank you for suggesting this!
I agree that type declaration is preferable to type documentation,
and that type documentation is often redundant if type declaration
is present.

However, we can't always use type declarations. For instance,
union types are quite useful, since PHP doesn't have method
overloading. And union type declarations will only become
available in PHP 8. So we'll have a mix of declared and
un-declared parameters and fields for a while. I think we should
still require type documentation if there is no type declaration -
and of course, if a method has any @param tags, it needs to have
all of them.

Also, there is the notable exception of the array type. Saying that
something is an array is generally insufficient; we should say at
least whether it's a list or an associative array, and document
the type of the array elements and/or well-known keys.
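
A hypothetical illustration of the difference (parameter names invented):

/**
 * @param array $options            Unhelpful: a list or a map? of what?
 * @param string[] $titles          A list of page title strings.
 * @param array<string,int> $counts Map of username => edit count.
 */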

And we should be careful that we don't end up discouraging
documentation of the meaning of a parameter. The barrier to adding
some explanation of the meaning of a parameter is lower if there
is already a @param string $name line. If I'd first have to create
a doc block, I may just not add the documentation at all. We
should still encourage having doc blocks in all but the most
trivial cases (simple constructors, getters and setters probably
don't need one).

-- daniel

PS: I'm not sure I like constructor argument property promotion...
For very simple value objects that might be nice, but generally, I
fear it will make it harder to see all fields declared on an object.

On 28 Oct 2022 at 16:03, Lucas Werkmeister wrote:

Hi all!

In my opinion, MediaWiki’s PHPCS ruleset feels largely rooted in
an older version of PHP, where static type declarations (formerly
known as “type hints”) did not exist. As we move towards more
modern code, I think some rules should be relaxed, and others
adjusted. More specifically, I’d like to know if most people
agree with the following propositions and conclusion:

Proposition 1: *Some* code is sufficiently documented by names
and types*, and does not require additional documentation. Cases
where additional documentation is required do certainly exist,
but they can only be identified by human reviewers, not by
automated tools.

You can see this in our existing code wherever a doc comment
specifies only a type (with @var, @param, or @return), but no
additional text. For example, in CreditsAction,
nobody needs to be told that the LinkRenderer will be used to
render links, or that the UserFactory creates User objects:

class CreditsAction extends FormlessAction {

    /** @var LinkRenderer */
    private $linkRenderer;

    /** @var UserFactory */
    private $userFactory;

Likewise, it’s not necessary to explain in great detail that the
string returned by LinksTable::getTableName() is the table name, that
the $actor parameter of ActorCache::remove( UserIdentity $actor )
represents the actor to remove from the cache, or what the meaning of
the Message $m and returned MessageValue are in
Message\Converter::convertMessage():

/**
 * Convert a Message to a MessageValue
 * @param Message $m
 * @return MessageValue
 */
public function convertMessage( Message $m ) {

(I 

[Wikitech-l] Stagnant quality control process for [[mw:API:Client_code]]

2022-05-26 Thread Adam Wight
A few years ago, I wrote a "yet another" client library for the MediaWiki
action API, and went to the official list on
https://www.mediawiki.org/wiki/API:Client_code , expecting to add my
library.  Instead, I found instructions on that page asking that new
libraries be added to https://www.mediawiki.org/wiki/API:Client_code/All
and describing a review process for periodically checking that libraries
meet some basic standards, before copying them over to the main list.

As happens with wiki workflows, this excellent concept fell into disrepair;
with nobody pushing it forward, there seems to be no new review
activity.  Without active curation, I think it's unhelpful to have two
separate pages for "reviewed" and unreviewed libraries.  My suggestion is
to merge the two pages and add a column for review status, so the
information is all in one place.  I imagine this will reduce the work
needed to maintain this list.

Regards,
[[mw:User:Adamw]]
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Usecases for Low-Code Platforms within the Wikimedia Projects

2022-05-25 Thread Adam Wight

This is a brilliant idea, I hope it gets some consideration.

One more use case for such a system would be the on-wiki workflows such 
as File Upload, Articles for Creation, and Articles for Deletion.  The 
basic software support can be written as robust building blocks, and 
customized with extra screens and functionality according to the 
wishes of each wiki community. There's some research about this such as 
[1], and an ancient Request for Comment (by yours truly) proposing one 
design for a low-code workflow system [2].


I would be happy to collaborate as a volunteer, if I might be helpful.

Regards,
[[mw:User:Adamw]]

[1] * https://meta.wikimedia.org/wiki/Research:Wikipedia_Workflows
* 
https://www.mediawiki.org/wiki/Flow/Community_process_workflow_interviews_(June_2015)

* https://meta.wikimedia.org/wiki/Workflows

[2] 
https://www.mediawiki.org/wiki/Requests_for_comment/Workflows_editable_on-wiki



On 5/24/22 3:57 PM, Dan Andreescu wrote:
I think there are many possible applications; here are two that sound 
interesting to me (but my opinion really doesn't and shouldn't count):


* Abstract Wikipedia wiki functions: Snap! seems like an easy way for more people to get involved
* Lua templates: generating the Lua code could open this up to more people


On Mon, May 23, 2022 at 2:43 PM, Hogü-456 wrote:


At the Wikimedia Hackathon 2022 that ended yesterday, I showed
a program in the Showcase that can convert blocks from the
visual programming language Snap! into source code. This is the link
to the folder where the program is located:
https://public.paws.wmcloud.org/User:Hog%C3%BC-456/BlocktoCode/

The program reads an XML file with the definition of a program and
outputs the source code. The platform I used for creating the
blocks is called Snap!. It is a further development of
Scratch. Scratch is a visual programming language based on
blocks that can be combined to create a program. A block is a
small sentence with gaps for the variables. In Snap! it is
possible to create your own blocks, and it includes a feature to
directly convert blocks to code.  I don't know how far this is
developed, and since I haven't understood how to export the result
with the code, I have written my own program to do that.

What do you think are potential use cases for low-code platforms
like Snap! within the Wikimedia projects?  From my point of view,
such platforms offer a chance to make programming accessible to
more people: it is easier to write small programs with such a
platform than without that support. I am
interested in use cases where the built-in codification feature or
my program can be used to generate code that will then be useful
within the Wikimedia projects.

Have a nice day and I am interested in your thoughts.
Hogü-456
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org

To unsubscribe send an email to
wikitech-l-le...@lists.wikimedia.org

https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/




___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Requesting feedback about the future of the LocalisationUpdate extension

2022-04-27 Thread Adam Wight
I wonder if this would be a good candidate for event-based replication?  
One drawback is that current streams keep at most one month of data [1], 
but that might be extended for translations depending on the volume.  
Another workaround might be to combine regular releases with a streaming 
update, for example if a language bundle were released once per month.


This approach might also work well for Wikimedia sites, I wasn't sure 
from the final question in the email whether or not this is an 
outstanding technical gap.


Regards,
[[mw:User:Adamw]]

[1] 
https://wikitech.wikimedia.org/wiki/Event_Platform/EventStreams#Historical_Consumption


On 4/27/22 1:22 PM, Niklas Laxström wrote:
Since the beginning of the year, the Wikimedia Language team has 
enabled translation backports for MediaWiki core, extensions and skins 
hosted on Gerrit. On a weekly schedule, compatible translations from 
the master branch are backported to all the supported release branches. 
Currently supported branches are 1.35–1.38.


Translation backports partially replace the purpose of the 
LocalisationUpdate extension. Wikimedia sites no longer use the 
extension, and to our knowledge only a few other users of the 
extension exist, because it needs manual setup to use.


We, the Language team, think that maintaining the LocalisationUpdate 
extension is no longer a good use of our time. We are asking for your 
feedback about the future of this extension.


We are planning to:
* Remove LocalisationUpdate from the MediaWiki Language Extension 
Bundle starting from version 2022.07

* Remove us as maintainers of the extension

Additionally, based on the feedback, we are planning to either mark 
the extension as unmaintained, transfer maintenance to a new 
maintainer, or request the extension to be archived and removed from 
the list of extensions bundled with MediaWiki core if there is no 
indication that anyone uses this extension.


We request your feedback and welcome discussion on 
https://phabricator.wikimedia.org/T300498. Please let us know if you 
are using this extension and whether you would be interested in 
maintaining it.


*Anticipated questions*
Q: What about Wikimedia sites: does this mean they will not get 
frequent translation updates as they used to have?


A: We still think this is important, but we do not think the previous 
solution can be restored. We would like to collaborate on new 
solutions. One solution could be more frequent deployments.


  -Niklas

___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Goto for microoptimisation

2021-08-02 Thread Adam Wight

On 7/31/21 6:09 AM, Tim Starling wrote:

> Certain flow control patterns cannot be implemented efficiently in 
PHP without using "goto". The current example in Gerrit 708880 
comes down to:


If `goto` really does help, please go to it with no objections from me!

But could you split the guard logic in your example, like this? The `||` 
operator will short-circuit away the expensive test for "is 2" in the 
case of "is 1".


$state_1 = ($x == 1);
$state_2 = !($state_1 || $x != 2);

if ( $state_1 ) {
    action1();
} else {
    action_not_1();
}
if ( $state_2 ) {
    action2();
} else {
    action_not_2();
}

-Adam W.



For performance sensitive tight loops, such as parsing and HTML 
construction, to get the best performance it's necessary to think 
about what PHP is doing on an opcode by opcode basis.


Certain flow control patterns cannot be implemented efficiently in PHP 
without using "goto". The current example in Gerrit 708880 
comes down to:


if ( $x == 1 ) {
    action1();
} else {
    action_not_1();
}
if ( $x == 2 ) {
    action2();
} else {
    action_not_2();
}

If $x==1 is true, we know that the $x==2 comparison is unnecessary and 
is a waste of a couple of VM operations.


It's not feasible to just duplicate the actions, they are not as 
simple as portrayed here and splitting them out to a separate function 
would incur a function call overhead exceeding the proposed benefit.


I am proposing

if ( $x == 1 ) {
    action1();
    goto not_2; // avoid unnecessary comparison $x == 2
} else {
    action_not_1();
}
if ( $x == 2 ) {
    action2();
} else {
    not_2:
    action_not_2();
}

I'm familiar with the cultivated distaste for goto. Some people are 
just parroting the textbook or their preferred authority, and others 
are scarred by experience with other languages such as old BASIC 
dialects. But I don't think either rationale really holds up to scrutiny.


I think goto is often easier to read than workarounds for the lack of 
goto. For example, maybe you could do the current example with break:


do {
    do {
        if ( $x === 1 ) {
            action1();
            break;
        } else {
            action_not_1();
        }
        if ( $x === 2 ) {
            action2();
            break 2;
        }
    } while ( false );
    action_not_2();
} while ( false );

But I don't think that's an improvement for readability.

You can certainly use goto in a way that makes things unreadable, but 
that goes for a lot of things.


I am requesting that goto be considered acceptable for micro-optimisation.

When performance is not a concern, abstractions can be introduced 
which restructure the code so that it flows in a more conventional 
way. I understand that you might do a double-take when you see "goto" 
in a function. Unfamiliarity slows down comprehension. That's why I'm 
suggesting that it only be used when there is a performance justification.


-- Tim Starling


___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

Re: [Wikitech-l] Are ResourceLoader modules that bad?

2020-12-04 Thread Adam Wight
Hi physikerwelt, you're correct, the overhead is a very small hash table 
entry for each module in the ResourceLoader registry—but this gets sent 
on every pageview.  It's something like 12 bytes.  See 
https://phabricator.wikimedia.org/T202154 and 
https://www.mediawiki.org/wiki/Wikimedia_Performance_Team/Page_load_performance#Size_of_scripts 
for more background.


These small entries add up quickly, as you can imagine!

The one situation where you would *not* want to merge modules is if one 
module will be loaded very often, and the second module is rarely 
needed.  What you describe sounds like it might be one of these cases, 
so you'll have to estimate how much average overhead the merged module 
will cause, compared to the overhead of an additional module entry.  
Since there are a huge number of pageviews, it's usually a net benefit 
to coalesce ResourceLoader modules together even when they're used in 
slightly different workflows.
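
For illustration only (the module and file names below are invented, and I'm 
using the $wgResourceModules form for brevity; extension.json registration 
behaves the same way), the trade-off is between two registry entries and one:

  // Two modules: two registry entries shipped on every pageview, but the
  // special-page script is only downloaded where it is actually used.
  $wgResourceModules['ext.example.widget'] = [
      'scripts' => [ 'resources/widget.js' ],
  ];
  $wgResourceModules['ext.example.special'] = [
      'scripts' => [ 'resources/special.js' ],
      'dependencies' => [ 'ext.example.widget' ],
  ];

  // One merged module: a single registry entry, but every consumer of the
  // widget now also downloads the special-page script.
  $wgResourceModules['ext.example'] = [
      'scripts' => [ 'resources/widget.js', 'resources/special.js' ],
  ];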


Sorry for the inconvenience of having to make this micro-optimization.

Regards,
Adam W.

On 12/4/20 9:17 AM, Physikerwelt wrote:

Dear all,

I am trying to understand the performance impact of adding new
ResourceLoader modules. I am currently stuck in the code review of

https://gerrit.wikimedia.org/r/c/mediawiki/extensions/Math/+/638094

as I am not convinced that it is a good idea to bundle an OOUI widget
module and a special page js snippet into one module. One thing is a
general-purpose control that is planned to be used in other contexts
whereas the other is a very specific js code only executed on one
particular special page. However, since this seems to be the critical
point in the code review, I would like to better understand the impact
of the additional resource module call.

I was looking at
https://www.mediawiki.org/wiki/File:ResourceLoader_Client_lifecycle_2020.png
and according to that overview, an additional module is just one entry
in the module registry. I was hoping that minifying and caching is
something the ResourceLoader would take care of.
Any help would be appreciated.

Thank you
physikerwelt

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] The Second round of voting for mediawiki logo just started!

2020-09-28 Thread Adam Wight

Hi Amir, thank you for the patient explanation!

Yes, it makes sense now.  I've been thrown off by wiki election math 
(again [1]).


I also happily accept the community consensus even if it endorses a 
circle logo.  But I gently suggest that our vote tally is an 
unconvincing reflection of any consensus.  This isn't a new problem, so 
I don't want to suggest we drag out the logo nomination, but I think it 
gives a good example of why on-wiki democratic machinery is in need of 
reform.


There's a lot to say on the topic, but for now I can give an example of 
a single statistic: the winning logo has 63 support votes out of 
216 total support votes [2], or 29% of the total.  That means there are 153 
support votes for other proposals, and potentially that many disenfranchised 
voters; that is an analysis we should be obligated to run.  Were 63 of these 
153 also people who voted for the winning proposal?  (We know that at least 
90 did not vote for the winner.)  Did these voters perhaps have another, 
much-preferred favorite?  Should we hold a run-off between the winners?  
Would a ranked-choice tally like Round 1 have given different results?  These 
are questions we can't answer without using a better electoral system.


Kind regards,
Adam

[1] https://meta.wikimedia.org/wiki/User:Adamw/Draft/Board_Election_analysis

[2] 216 = 63 + 29+8+13+18+13+4+9+2+6+9+2+25+6+3+6

On 9/28/20 10:25 AM, Amir Sarabadani wrote:

Hey,
The first round was using the standard voting process in wikis (using 
support/oppose and the thresholds like 70%) and this is the way we 
elect admins, checkusers or other user rights, or change policies in 
Wikis. I don't recall that there has ever been anyone elected as admin 
with below 70% or we have ever changed any policies with below 70% 
(not to mention the runner up logos are 56% and 61%, basically for any 
support, they had an opposition). Our logo is similar, no logo except 
proposal six could reach seventy percent and while there were good 
designs that almost made it but clearly none of them has enough 
support (and percentage of support) to reach the next round. That's a 
pity (one of the runner ups was actually by me) but if that's what the 
community wants, I happily accept it.


The second round has always been 
<https://www.mediawiki.org/w/index.php?title=Project:Proposal_for_changing_logo_of_MediaWiki,_2020/Round_1&diff=4006263&oldid=3997205> 
about different variants of the logos that pass the first round.


HTH

On Mon, Sep 28, 2020 at 9:30 AM Adam Wight <adam.wi...@wikimedia.de> wrote:


Hi, thanks for helping coordinate this process!

I have concerns about what happened between round 1 and round 2,
it seems that we're no longer left with a real choice.  It's
unclear what method was used to tally the round 1

<https://www.mediawiki.org/wiki/Project:Proposal_for_changing_logo_of_MediaWiki,_2020/Round_1>
votes, was this a "support percentage"?  Whenever a vote is taken,
it's important to stick to democratic norms, basically "one
person, one vote".  Round 2 is entirely variations on a single
proposal, which disenfranchises everyone who didn't prefer that
design.  Is it too late to discuss?

Kind regards,
Adam

On 9/25/20 11:42 PM, Amir Sarabadani wrote:

Hello,
The subject line is self-explanatory, you can go to the voting
page

<https://www.mediawiki.org/wiki/Project:Proposal_for_changing_logo_of_MediaWiki,_2020/Round_2>
and cast your vote.

This is going to continue for a month and it's about different
variants of the top contender (different colors, different
wordmarks, etc.). You need to order logos based on your
preference (the most preferred one first, the least preferred one
the last) and then cast your vote. The final winner will be
chosen using Schulze method
<https://en.wikipedia.org/wiki/Schulze_method>.

If you have mistakenly voted in the test phase, you can just copy
your vote from the test page
<https://www.mediawiki.org/wiki/User:Ladsgroup/Round_2/votes> to
the actual voting page

<https://www.mediawiki.org/wiki/Project:Proposal_for_changing_logo_of_MediaWiki,_2020/Round_2/Votes>
(the numbers of logos haven't changed).

Special thank you to Chuck Roslof from WMF legal for doing the
preliminary clearance of the proposal.

Have a nice weekend!
-- 
Amir (he/him)



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l



--
Amir (he/him)


___
Wikitech-l mailing list
Wikitec

Re: [Wikitech-l] The Second round of voting for mediawiki logo just started!

2020-09-28 Thread Adam Wight

Hi, thanks for helping coordinate this process!

I have concerns about what happened between round 1 and round 2, it 
seems that we're no longer left with a real choice.  It's unclear what 
method was used to tally the round 1 
 
votes, was this a "support percentage"?  Whenever a vote is taken, it's 
important to stick to democratic norms, basically "one person, one 
vote".  Round 2 is entirely variations on a single proposal, which 
disenfranchises everyone who didn't prefer that design.  Is it too late 
to discuss?


Kind regards,
Adam

On 9/25/20 11:42 PM, Amir Sarabadani wrote:

Hello,
The subject line is self-explanatory, you can go to the voting page 
and cast your vote.


This is going to continue for a month and it's about different 
variants of the top contender (different colors, different wordmarks, 
etc.). You need to order logos based on your preference (the most 
preferred one first, the least preferred one the last) and then cast 
your vote. The final winner will be chosen using the Schulze method.


If you have mistakenly voted in the test phase, you can just copy your 
vote from the test page to the 
actual voting page 
(the numbers of logos haven't changed).


Special thank you to Chuck Roslof from WMF legal for doing the 
preliminary clearance of the proposal.


Have a nice weekend!
--
Amir (he/him)


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Making breaking changes without deprecation?

2020-08-28 Thread Adam Wight

On 8/28/20 11:18 AM, Daniel Kinzler wrote:


 Can we shorten or even entirely skip the deprecation process,
 if we have removed all usages of the obsolete code from public
 extensions?


I would support this, if only with the schadenfreude that MediaWiki will 
become harder for intelligence agencies and other closed-source shops to 
administer.


It seems totally reasonable that our "service level" is to guarantee 
upgradeability of our public components, with the only requirement that 
it must be performed one major version at a time.  By definition, we 
can't guarantee anything about the non-public ecosystem.


Slightly off-topic: I don't see any reason to keep our suggestion that it 
is "even better" to wait for two major revisions before removing interfaces.  
A non-binding suggestion doesn't seem useful, and I don't understand the 
use case which would be improved by waiting this extra time.


Regards,
Adam


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] CI and Code Review

2020-07-07 Thread Adam Wight
I feel positive about every aspect of this announcement.  I've enjoyed 
my own experiments with GitLab and its integrated CI. It's a huge relief 
that we'll be able to move towards a feature-branch strategy.


Even as a paid developer with 8 years of wiki experience, I still find 
Gerrit an obstacle to my work and a challenge to use "correctly".  
Similarly, tinkering with our custom CI framework has been fun and 
rewarding, but I understand it's less fun for the small circle of people 
doing the daily maintenance.


The only detail which makes me nervous is that GitLab seems to lack 
diversity on their Board and upper management, so they may be prone to 
diseases endemic to Silicon Valley such as sudden corporate takeover or 
a shift to more aggressive profiteering. Hopefully self-hosting will 
insulate us from this kind of turmoil, or at least give us a few 
years of runway to migrate away as needed.


Kind regards,
Adam W.

On 7/6/20 7:39 PM, Greg Grossmeier wrote:

First, apologies for not announcing this last week. A short work week
coupled with a new fiscal year delayed this until today.

tl;dr: Wikimedia will be moving to a self-hosted (in our datacenter(s))
GitLab Community Edition (CE) installation for both code review and
continuous integration (CI).

Longer:
We are making a change to our code review and continuous integration (CI)
systems in response to a complex set of inputs (Developer Satisfaction
Survey[0], passing comments/critiques, an evaluation of replacement
continuous integration infrastructure[1], feedback from leaders in the
Foundation, etc) and evaluation conversations with Wikimedia Technology
department leadership (eg CTO, VPs) and representatives from Wikimedia SRE,
Architecture, Core Platform, Product, Security, and Technical Engagement
teams. In those conversations with Technology department leadership,
coordinated by our CTO Grant Ingersoll, we determined that an RFC was not
needed for this decision[2].

We plan to replace Gerrit and Zuul+Jenkins (the software that powers our CI
system) with GitLab. We hope that making a move to GitLab now will address
most of the concerns and desires that are able to be addressed via software
alone.

We join a growing list of other free/open knowledge organizations using a
self-hosted GitLab installation and look forward to the added benefits of
working together with them.

The project portal page lives at:
https://www.mediawiki.org/wiki/Wikimedia_Release_Engineering_Team/GitLab
It is a living documentation portal and we will continue to update it as we
go. The Talk: page is also available for your feedback and questions (and
is already being used). We hope everyone will join in to make this
transition as successful as possible.

Notably, I would like to point people to the conversation[3] about
evaluating pull-request style workflows and finding a recommended workflow
for our projects. While pull-request workflows are much more common in the
wider software development world{{cn}} we would like to provide as much
guidance as reasonable so that we are using our tools to the best of their
ability.

Here is the list of stakeholders as we have them now. In a RACI[4] model
these would be Consulted. These are the people the Wikimedia Release
Engineering team will be consulting with most closely as they get this
going.
* SRE/Service Ops - Mark and delegate(s)
* Security - Chase P and Scott B
* Core Platform Team - TBD
* Technical Engagement - TBD
* Product - Daniel C

“TBD” means we have asked for a representative and we’re waiting to hear
back on confirmation. We also have a few other non-WMF groups that we have
already reached out to or will be shortly to include in that list; feel
free to ping me with other suggestions.

What does this mean for you as you do your work today? Right now, nothing.
But as we set up the new GitLab installation we will be looking for early
adopters to give us feedback as we work on improvements. As we start to
migrate more users and repositories we will strive to help everyone as much
as possible and reasonable. It should go without saying that this includes
completely volunteer projects using the shared infrastructure.

The full timeline and plan will be posted to the above project portal page.

For the avoidance of doubt, this does not impact issue/task management;
that will remain in Phabricator.

Thank you,

Greg

PS: Please let me know where else this announcement should be sent.

[0] https://www.mediawiki.org/wiki/Developer_Satisfaction_Survey/2020
[1]
https://www.mediawiki.org/wiki/Wikimedia_Release_Engineering_Team/CI_Futures_WG
[2] see also:
https://www.mediawiki.org/wiki/Wikimedia_Technical_Committee/Charter#Areas_within_scope
[3] https://www.mediawiki.org/wiki/Topic:Vpbwawb4lkdy89ym
[4] https://en.wikipedia.org/wiki/Responsibility_assignment_matrix
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org

Re: [Wikitech-l] First and last call: removing ApiQueryReferences

2019-12-05 Thread Adam Wight
The axe has fallen.  The Cite references API was removed in [1] and I
expect it to go live with MediaWiki 1.35.0-wmf10.

Thank you for considering this issue!
-Adam

[1] https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/Cite/+/552211/

On Thu, Nov 21, 2019 at 1:07 PM Adam Wight  wrote:

> The Cite extension implements an API `action=query&prop=references`[1],
> which was written to support a reference lazy-loading feature for mobile
> clients but never used.  This was a beta feature and the API was never
> enabled on Wikimedia sites.[2]  If you query it, you'll see nothing but an
> error response.
>
> My team is currently refactoring the Cite extension, and we would prefer
> to remove the API rather than continue maintaining dead code.  Please
> respond by the end of the week if you know of any non-Wikimedia sites where
> the API is enabled and in use, otherwise we'll continue with the plan to
> remove it without a deprecation period.
>
> Thank you,
> Adam
>
> [1]
> https://en.wikipedia.org/w/api.php?action=help&modules=query%2Breferences
> [2] https://phabricator.wikimedia.org/T222373
> --
> Adam Wight - Developer - Wikimedia Deutschland e.V. - https://wikimedia.de
>


-- 
Adam Wight - Developer - Wikimedia Deutschland e.V. - https://wikimedia.de
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] First and last call: removing ApiQueryReferences

2019-11-21 Thread Adam Wight
The Cite extension implements an API `action=query&prop=references`[1],
which was written to support a reference lazy-loading feature for mobile
clients but never used.  This was a beta feature and the API was never
enabled on Wikimedia sites.[2]  If you query it, you'll see nothing but an
error response.

My team is currently refactoring the Cite extension, and we would prefer to
remove the API rather than continue maintaining dead code.  Please respond
by the end of the week if you know of any non-Wikimedia sites where the API
is enabled and in use, otherwise we'll continue with the plan to remove it
without a deprecation period.

Thank you,
Adam

[1]
https://en.wikipedia.org/w/api.php?action=help&modules=query%2Breferences
[2] https://phabricator.wikimedia.org/T222373
-- 
Adam Wight - Developer - Wikimedia Deutschland e.V. - https://wikimedia.de
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] WebTestCase

2019-10-02 Thread Adam Wight
It depends on the test, I suppose.  OutputPage has a "getHTML" method.  For
subclasses of SpecialPage, you can "getOutput"...

Do you have draft code posted somewhere?
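
A minimal sketch of what I mean (assuming $specialPage is the SpecialPage 
subclass under test, and skipping the setup a real test case class needs):

  $context = new RequestContext();
  $specialPage->setContext( $context );
  $specialPage->execute( null );             // run with no subpage parameter
  $html = $context->getOutput()->getHTML();  // the accumulated body HTML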

On Wed, Oct 2, 2019 at 10:44 AM Jeroen De Dauw 
wrote:

> Hey,
>
> > Sort of.  Once you have the HTML, you can assert various HTML matches and
> non-matches using Hamcrest extensions
>
> That's great, but how do I get the HTML in the first place?
>
> Cheers
>
> --
> Jeroen De Dauw | www.EntropyWins.wtf <https://EntropyWins.wtf> |
> www.Professional.Wiki <https://Professional.Wiki>
> Entrepreneur | Software Crafter | Speaker | Open Souce and Wikimedia
> contributor
> ~=[,,_,,]:3
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Adam Wight - Developer - Wikimedia Deutschland e.V. - https://wikimedia.de
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] WebTestCase

2019-10-02 Thread Adam Wight
Hi!

Sort of.  Once you have the HTML, you can assert various HTML matches and
non-matches using Hamcrest extensions, for example:
https://phabricator.wikimedia.org/diffusion/EFLI/browse/master/tests/phpunit/Html/ImportPreviewPageTest.php$113

Here's the source to the pattern matching code:
https://github.com/wmde/hamcrest-html-matchers

These can slow down a test significantly, beware of visiting every DOM
subtree!
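
As a rough, untested sketch of the style (assuming the global matcher 
functions from that library are loaded; the tag and text here are invented):

  assertThat( $html, is( htmlPiece( havingChild(
      both( withTagName( 'a' ) )
          ->andAlso( havingTextContents( 'Import' ) )
  ) ) ) );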

-Adam

On Wed, Oct 2, 2019 at 7:54 AM Jeroen De Dauw 
wrote:

> Hey,
>
> Does MediaWiki have something similar to Symfony's WebTestCase? (
> https://symfony.com/doc/current/testing.html#functional-tests)
>
> I want to write some integration tests in the form of "does web page
> /wiki/MyWikiPage contain HTML snippet XYZ".
>
> Cheers
>
> --
> Jeroen De Dauw | www.EntropyWins.wtf <https://EntropyWins.wtf> |
> www.Professional.Wiki <https://Professional.Wiki>
> Entrepreneur | Software Crafter | Speaker | Open Souce and Wikimedia
> contributor
> ~=[,,_,,]:3
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Adam Wight - Developer - Wikimedia Deutschland e.V. - https://wikimedia.de
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Book referencing proposal

2019-10-01 Thread Adam Wight
On Tue, Oct 1, 2019 at 5:05 PM Ryan Kaldari  wrote:

> The bug I linked to, T7265, seems
> like a relatively simple, straightforward solution to a common use case
> that still doesn't have a real solution: creating footnotes that have
> references.


Hi Ryan, thanks for bringing up this request.  I see that the discussion of
such a tag (T7265) goes back even further than the task, for example on the
footnotes talk page [1], as early as 2005 [2].  Un-merging the task was a
good call; maybe the next step would be to promote it during the next
community wishlist survey, or to elaborate it enough that it would make a
good mentorship project.

There might be some beneficial overlap with improvements to be made to the
marker number sequence logic?  We noticed that the VisualEditor module [3]
and backend [4] each do their own numbering, and neither codebase is open to
extension.

-Adam

[1] https://en.wikipedia.org/wiki/Help_talk:Footnotes
[2]
https://en.wikipedia.org/wiki/Wikipedia_talk:Manual_of_Style_(footnotes)/Archive_1
[3] https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/Cite/+/531900/
[4] https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/Cite/+/530399/
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Book referencing proposal

2019-09-30 Thread Adam Wight
The German Technical Wishes team is planning to implement a feature we're
calling "book referencing", which supports one level of nested references.
This makes it possible to reference the same book several times in an
article, pointing to various pages, without repeating the full citation.

A more complete description of the feature and a screenshot is available
below, please feel free to comment in Phabricator or on this thread.
https://phabricator.wikimedia.org/T234030

There has already been some community discussion, but since we're proposing
a small change to wikitext (the <ref> tag will accept a new attribute), I
thought it would be appropriate to wait for a round of technical feedback
before we begin coding.

Regards,
Adam

--
Adam Wight - Developer - Wikimedia Deutschland e.V. - https://wikimedia.de
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Declaring methods final in classes

2019-08-29 Thread Adam Wight
On Wed, Aug 28, 2019 at 4:29 PM Daniel Kinzler 
wrote:

> Subclassing should be very limited anyway, and even more limited across
> module
> boundaries [...]


This is a solution I'd love to see explored.  Subclassing is now considered
harmful, and it would be possible to refactor existing cases to use
interfaces, traits, composition, etc.

-Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Category aliasing proposals

2019-08-06 Thread Adam Wight
On Tue, Aug 6, 2019 at 8:32 AM Brian Wolff  wrote:

> Re other attempts - have you seen
> https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/65176/ ?
>

Thanks!  I had read the bug https://phabricator.wikimedia.org/T5311 but
somehow missed that 2013 patch—the code review comments are really helpful.

One detail I should emphasize about my new proposal to use hard redirected
categories as aliases is that it's a terrible workaround at heart.  It
would be nicer if there were a formal way to alias categories, for example
a directive "#ALIAS_TO[[]]" that could be used in category
pages.  What I'm proposing is that we introduce new behaviors for hard
redirects, and then agree that it's correct for articles to be categorized
permanently under these aliases, so that the alternative labels are
applied.  In previous discussion of hard redirects, an article categorized
under the redirect would still be "incorrect" and in a temporary state, to
be eventually recategorized under the alias's main category by bots like
RussBot.

-Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Category aliasing proposals

2019-08-05 Thread Adam Wight
Friends,

As part of an investigation into category aliasing (think “theater
directors” vs. “theater directors"), we’ve identified two potential
technical implementations and I’m hoping to get feedback to help us choose
between the proposals, or change course entirely.

For a summary of the problem and solutions, please see
https://meta.wikimedia.org/wiki/WMDE_Technical_Wishes/Gendered_Categories
and join the talk page if you wish.

Some questions on my mind are:
* Will the proposed hard-redirect category behavior break any MediaWiki or
third-party software assumptions about hard redirects or categories?
* Is “non-page” category aliasing really the mountain of tech debt I
imagine it to be?
* Have there been other attempts to solve this problem?

Kind regards,
Adam
from the Technical Wishes Team at Wikimedia Germany
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Announcing zuul-status plugin for gerrit

2019-01-30 Thread Adam Wight
Yes!  That's awesome, and thank you Paladox for your tireless pushing
towards modernity!  I'm looking forward to enjoying the plugin…

-Adam

On Wed, Jan 30, 2019 at 9:58 AM Paladox via Wikitech-l <
wikitech-l@lists.wikimedia.org> wrote:

> Hi, i am pleased to announce that the zuul-status plugin is now available
> for gerrit 2.16+.
> This plugin integrates the Zuul status JSON response into Gerrit,
> displaying the jobs lined up and running. It's similar to what you would see
> on https://integration.wikimedia.org/zuul/, except that it only
> shows your own change. This will improve the experience, as you won't
> need to leave your change to get a live feed of it as it
> progresses through checks.
> You can see the demo at https://imgur.com/a/uBk2oxQ
> Plugin at
> https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-status
> First commit:
> https://gerrit-review.googlesource.com/c/plugins/zuul-status/+/212103
> The plugin uses PolyGerrit's UI rather than GWTUI, as GWTUI has been
> removed upstream (from Gerrit 3.0) and because PolyGerrit provides XSS
> protection.
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikimedia-l] Security Notification: Malware creating fake Wikipedia donation banner

2019-01-24 Thread Adam Wight
Horrifying!

Is there anything we can do from our side, e.g. include some Javascript
which can detect and disable the malware banner?

[[mw:Adamw]]

On Thu, Jan 24, 2019 at 10:11 AM Paulo Santos Perneta <
paulospern...@gmail.com> wrote:

> Hi,
>
> I seem to recall some OTRS tickets recently sent warning about it. Should
> they be forward to any address in particular, in case they keep coming in?
>
> Paulo
>
> John Bennett  escreveu no dia quinta, 24/01/2019
> à(s) 14:02:
>
> > Hello,
> >
> > In order to keep the community informed of threats against Wikimedia
> > projects and users, the Wikimedia Security team has some information to
> > share.
> >
> > Malware installed via pirated content downloaded from sites such as the
> > Pirate Bay can cause web browsers compromised by the malware to create a
> > fake donation banner for Wikipedia users. While the actual malware is not
> > installed or distributed via Wikipedia, unaware visitors may be confused or
> > tricked by its activities.
> >
> > The malware seeks to trick visitors to Wikipedia by looking like a
> > legitimate Wikipedia banner asking for donations. Once the user clicks on
> > the banner, they are then taken to a portal that leads them to transfer
> > money to a fraudulent bitcoin account that is not controlled by the
> > Foundation.
> >
> > The current version of this malware is only infecting Microsoft Windows
> > users at the time of this notification. To date, the number of people
> > affected is small. The fraudulent accounts have taken approximately $700
> > from infected users. However, we strongly encourage all users to use and
> > update their antivirus software.
> >
> >
> > Additional details and a screenshot of the fake donation banner on can be
> > found at Bleepingcomputer.com. [0]
> >
> > [0]
> >
> >
> https://www.bleepingcomputer.com/news/security/fake-movie-file-infects-pc-to-steal-cryptocurrency-poison-google-results/
> >
> > Thanks,
> >
> > John Bennett
> > ___
> > Wikimedia-l mailing list, guidelines at:
> > https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
> > https://meta.wikimedia.org/wiki/Wikimedia-l
> > New messages to: wikimedi...@lists.wikimedia.org
> > Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> > 
> ___
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
> https://meta.wikimedia.org/wiki/Wikimedia-l
> New messages to: wikimedi...@lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> 
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] problematic use of "Declined" in Phabricator

2018-10-02 Thread Adam Wight
+1 that we shouldn't close valid bugs.

Assuming nobody brings up objections, here's a nice place to document new
consensus:
https://www.mediawiki.org/wiki/Bug_management/Bug_report_life_cycle

FWIW, that page is discoverable from:
https://www.mediawiki.org/wiki/Phabricator/Project_management#Closing_a_task

-Adam

On Tue, Oct 2, 2018 at 9:51 AM Joe Matazzoni 
wrote:

> I agree with Amir’s understanding. "Declined” is basically for ideas whose
> proper timing is never.  Valid ideas that we just aren’t going to work on
> any time soon should go in a backlog or freezer or some such, where they
> can wait until some future project or other development makes them
> relevant (at least theoretically).
>
> All of which does raise a slightly different question: I am much less
> clear on what the exact difference is between “Invalid” and “Declined.”
> Thoughts?
>
> Best,
> Joe
> _
>
> Joe Matazzoni
> Product Manager, Collaboration
> Wikimedia Foundation, San Francisco
> mobile 202.744.7910 <(202)%20744-7910>
> jmatazz...@wikimedia.org
>
> "Imagine a world in which every single human being can freely share in the
> sum of all knowledge."
>
>
>
>
> > On Oct 2, 2018, at 9:31 AM, Amir E. Aharoni <
> amir.ahar...@mail.huji.ac.il> wrote:
> >
> > Hi,
> >
> > I sometimes see WMF developers and product managers marking tasks as
> > "Declined" with comments such as these:
> > * "No resources for it in (team name)"
> > * "We won't have the resources to work on this anytime soon."
> > * "I do not plan to work on this any time soon."
> >
> > Can we perhaps agree that the "Declined" status shouldn't be used like
> this?
> >
> > "Declined" should be valid when:
> > * The component is no longer maintained (this is often done as
> > mass-declining).
> > * A product manager, a developer, or any other sensible stakeholder
> thinks
> > that doing the task as proposed is a bad idea. There are also variants of
> > this:
> > * The person who filed the tasks misunderstood what the software
> component
> > is supposed to do and had wrong expectations.
> > * The person who filed the tasks identified a real problem, but another
> > task proposes a better solution.
> >
> > It's quite possible that some people will disagree with the decision to
> > mark a particular task as "Declined", but the reasons above are
> legitimate
> > explanations.
> >
> > However, if the task suggests a valid idea, but the reason for declining
> is
> > that a team or a person doesn't plan to work on it because of lack of
> > resources or different near-term priorities, it's quite problematic to
> mark
> > it as Declined.
> >
> > It's possible to reopen tasks, of course, but nevertheless "Declined"
> gives
> > a somewhat permanent feeling, and may cause good ideas to get lost.
> >
> > So can we perhaps decide that such tasks should just remain Open? Maybe
> > with a Lowest priority, maybe in something like a "Freezer" or "Long
> term"
> > or "Volunteer needed" column on a project workboard, but nevertheless
> Open?
> >
> > --
> > Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
> > http://aharoni.wordpress.com
> > ‪“We're living in pieces,
> > I want to live in peace.” – T. Moore‬
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Breaking change in ORES API: "wp10" models are renamed to "articlequality"

2018-08-27 Thread Adam Wight
Hi, we'll have to deprecate both usages due to how the software is
configured.

If migrating the clients turns out to be a pain for some reason, we can
negotiate the deprecation date, of course…

-Adam

On Mon, Aug 27, 2018 at 1:57 PM Ryan Kaldari  wrote:

> Thanks for the update (and the switch to a less confusing score name)! Just
> to clarify, when you say "we might pull the plug after four weeks after
> this announcement," does that refer to wp10 just in the generic scores
> request (e.g. https://ores.wikimedia.org/v3/scores/enwiki/855137823) or
> also in the specific scores request (e.g.
> https://ores.wikimedia.org/v3/scores/enwiki/855137823/wp10)? In other
> words, will https://ores.wikimedia.org/v3/scores/enwiki/855137823/wp10
> still work a month from now, or does that also need to be migrated to
> "articlequality"?
>
> On Mon, Aug 27, 2018 at 1:50 PM Amir Sarabadani <
> amir.sarabad...@wikimedia.de> wrote:
>
> > Hello,
> > If you don't use ORES API, please ignore this email.
> >
> > If you are using wp10 models in your tool, gadget, or research, please
> note
> > that these models are now renamed to "articlequality" to better reflect
> > what they are (in comparison to "editquality"). articlequality models are
> > deployed on English, Russian, French, Persian, Turkish, and Basque
> > Wikipedia languages.
> >
> > So URLs like this:
> > https://ores.wikimedia.org/v3/scores/enwiki/855137823/wp10
> > Need to be changed to something like this:
> > https://ores.wikimedia.org/v3/scores/enwiki/855137823/articlequality
> >
> > Same goes with parsing the results.
> >
> > The "wp10" still exists as an alias and if you don't determine models
> > (meaning you want scores for all models) we respond with wp10 and
> > articlequality data duplicated [1] but we might pull the plug after four
> > weeks after this announcement.
> >
> > [1]: For example see:
> > https://ores.wikimedia.org/v3/scores/enwiki/855137823
> >
> > For more information see: https://phabricator.wikimedia.org/T196240
> >
> > Best
> > --
> > Amir Sarabadani
> > Software Engineer
> >
> > Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
> > Tel. (030) 219 158 26-0
> > http://wikimedia.de
> >
> > Stellen Sie sich eine Welt vor, in der jeder Mensch an der Menge allen
> > Wissens frei teilhaben kann. Helfen Sie uns dabei!
> > http://spenden.wikimedia.de/
> >
> > Wikimedia Deutschland – Gesellschaft zur Förderung Freien Wissens e. V.
> > Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
> unter
> > der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
> > Körperschaften I Berlin, Steuernummer 27/029/42207.
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC Discussion Wednesday - new namespace for collaborative judgments about wiki entities

2018-08-22 Thread Adam Wight
On Wed, Aug 22, 2018 at 12:54 PM Ryan Kaldari 
wrote:

> It should also be noted that there are some existing stop-gap
> implementations for specific use-cases that could effectively be replaced
> by JADE such as the PageAssessments extension
> , the page tags
> system in the PageTriage extension
> 


Hi, thanks for pointing this out!  Here are the workflows we've identified
so far, and how JADE might affect them in the long-term:

* Huggle: JADE as a communication backend to indicate which pages have been
patrolled, what the damaging/not-damaging conclusion was, and any comments
the patrollers might leave.
* Recent changes patrol: Similar to Huggle.
* New pages patrol: Storage for sharing draftquality and draft topic data.
* Articles for creation: Similar to NPP.
* en:WP:RATER: Shared storage for articlequality data.
* FlaggedRevs: Similar to patrolling.
* PageTriage: Similar to patrolling.
* Wiki Labels: "blind", write-only store for labelers
* ORES training: high-quality data source for human-labeled observations.


> and ORES' existing database storage.
>

This last one is not a good fit, actually.  The ores_* tables and service
are optimized for bot requirements; for example, we'll need to mass-purge
all scores produced by an old model when an update is deployed.  These
scores should all be regenerated using the new model.  We're planning to
leave the ORES runtime architecture almost untouched, with one large
exception: JADE data will be provided in parallel, so a request for "all
scores on revision 123456" will give ORES scores and JADE data, and we'll
recommend that the client prefer JADE data since we expect it to be higher
quality.
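
To make that recommendation concrete, here is a hedged sketch of the
client-side preference logic; the "jade" key and its layout are hypothetical
placeholders, since the combined payload format isn't finalized:

    def preferred_judgment(revision_scores, model="damaging"):
        # `revision_scores` is one revision's entry from the combined
        # response.  If a human JADE judgment exists for this model,
        # prefer it; otherwise fall back to the ORES prediction.
        jade = revision_scores.get("jade", {}).get(model)
        if jade is not None:
            return {"source": "jade", "value": jade}
        ores = revision_scores[model]["score"]["prediction"]
        return {"source": "ores", "value": ores}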

-Adam

>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] My Phabricator account has been disabled

2018-08-14 Thread Adam Wight
Hi Petr,

Nobody is language policing, this is about preventing abusive behavior and
creating an inviting environment where volunteers and staff don't have to
waste time with emotional processing of traumatic interactions.

I think we're after the same thing, that we want to keep our community
friendly and productive, so it's just a matter of agreeing on the means to
accomplish this.  I see the Code of Conduct Committee standing up to the
nonsense and you see them as being hostile, so our perspectives diverge at
that point.  I also see lots of people on this list standing up for what
they think is right, and I'd love if that energy could be organized better
so that we're not sniping at each other, but instead refining our shared
statements of social values and finding a way to encourage the good while
more effectively addressing the worst in us.

This isn't coherent enough to share yet, but I'll try anyway—I've been
thinking about how our high proportion of anarchic- and
libertarian-oriented individuals helped shape a culture which doesn't
handle "negative laws" [1] well.  For example, the Code of Conduct is
mostly focused on "unacceptable behaviors", but perhaps we could rewrite it
in the positive sense, as a set of shared responsibilities to support each
other and the less powerful person in any conflict.  We have a duty to
speak up, a duty to keep abusers from their target, we own this social
space and have to maintain it together.  If you see where I'm headed?
Rewriting the CoC in a positive rights framework is a daunting project, but
it might be fun.

Regards,
Adam

[1] https://en.wikipedia.org/wiki/Negative_and_positive_rights

On Mon, Aug 13, 2018 at 9:36 AM Petr Bena  wrote:

> I am a bit late to the party, but do we seriously spend days
> discussing someone being banned from a bug tracker just for saying
> "WTF", having their original comment completely censored, so that the
> community can't even make a decision how bad it really was? Is that
> what we turned into? From highly skilled developers and some of best
> experts in the field to a bunch of language nazis?
>
> We have tens of thousands of open tasks to work on and instead of
> doing something useful we are wasting our time here. Really? Oh, come
> on...
>
> We are open source developers. If you make Phabricator too hostile to
> use it by setting up some absolutely useless and annoying rules,
> people will just move to some other bug tracker, or decide to spend
> their free time on a different open source project. Most of us are
> volunteers, we don't get money for this.
>
> P.S. if all the effort we put into this gigantic thread was put into
> solving the original bug instead (yes it's a bug, not a feature) it
> would be already resolved. Instead we are mocking someone who was so
> desperate with the situation to use some swear words.
>
> On Mon, Aug 13, 2018 at 12:06 AM, Yaron Koren  wrote:
> >  Nuria Ruiz  wrote:
> >> The CoC will prioritize the safety of the minority over the comfort of
> the
> >> majority.
> >
> > This is an odd thing to say, in this context. I don't believe anyone's
> > safety is endangered by hearing the phrase in question, so it seems like
> > just an issue of comfort on both sides. And who are the minority and
> > majority here?
> >
> >> The way the bug was closed might be incorrect (I personally as an
> engineer
> >> agree that closing it shows little understanding of how technical teams
> do
> >> track bugs in phab, some improvements are in order here for sure) but
> the
> >> harsh interaction is just one out of many that have been out of line for
> >> while.
> >
> > This seems like the current argument - that it's not really about the use
> > of a phrase, it's about an alleged pattern of behavior by MZMcBride. What
> > this pattern is I don't know - the one example that was brought up was a
> > blog post he wrote six years ago, which caused someone else to say
> > something mean in the comments. (!) As others have pointed out, there's a
> > lack of transparency here.
> >
> > -Yaron
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Making two factor auth less annoying

2018-08-14 Thread Adam Wight
Apologies, "lack of session persistence" was a bad way to summarize what
I've been seeing.  My session persistence is usually fine, and lasts a
while regardless of whether 2FA is enabled.

What I was complaining about is that 2FA has to be used every time I log
in.  There doesn't seem to be an industry standard yet, for example gmail
asks for 2FA only every 30 days if you've previously authenticated on the
same machine, but GitHub asks for 2FA on every login.  Asking only once a
month seems like a great compromise to consider.
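
As a concept sketch only (not how MediaWiki's OATHAuth is implemented), the
"ask once a month" compromise amounts to a signed trusted-device token with
a 30-day lifetime:

    import hashlib
    import hmac
    import time

    SERVER_SECRET = b"change-me"       # placeholder server-side secret
    TRUST_WINDOW = 30 * 24 * 3600      # re-prompt for 2FA after 30 days

    def issue_device_token(user_id):
        # Issued once, right after a successful 2FA check.
        issued = str(int(time.time()))
        msg = "{}:{}".format(user_id, issued).encode()
        mac = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
        return "{}:{}:{}".format(user_id, issued, mac)

    def device_still_trusted(token, user_id):
        # Skip the 2FA prompt while the token verifies and is young enough.
        try:
            tok_user, issued, mac = token.split(":")
        except ValueError:
            return False
        msg = "{}:{}".format(tok_user, issued).encode()
        expected = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
        return (tok_user == user_id
                and hmac.compare_digest(mac, expected)
                and time.time() - int(issued) < TRUST_WINDOW)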

-Adam

On Mon, Aug 13, 2018 at 10:21 AM Nick Wilson (Quiddity) <
nwil...@wikimedia.org> wrote:

> On Mon, Aug 13, 2018 at 5:13 AM Amir E. Aharoni
>  wrote:
> > Most of the time my session doesn't work across projects. If I log in to
> > the English Wikipedia, I have to log in again to mediawiki.org, Hebrew
> > Wikisource, and Wikidata [...]
>
> This (old, erratic, hard to reproduce) bug can usually be fixed by
> logging out, and then clearing your cookies for all Wikimedia domains.
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Making two factor auth less annoying

2018-08-12 Thread Adam Wight
Hi Petr,

Thank you for thinking about improvements to 2FA; the lack of session
persistence makes me want to buy a paper encyclopedia.

Another issue to add to your list is that a lost 2FA device (plus lost
scratch codes) requires admin help or someone with DB access, because the
self-serve option asks for a 2FA code in order to disable.  Most industry
implementations allow a 2FA reset via primary email account as well as
scratch codes.  There are many bugs about this, and I can't tell if the
design is a feature or bug.  Here's an interesting suggestion for how to
fix: https://phabricator.wikimedia.org/T180896

Regards,
Adam

On Sun, Aug 12, 2018 at 9:48 AM Petr Bena  wrote:

> Oh and I totally forgot to include link to phab task:
> https://phabricator.wikimedia.org/T201784
>
> On Sun, Aug 12, 2018 at 6:47 PM, Petr Bena  wrote:
> > Hello,
> >
> > I would like to do some major changes to two factor auth. I am cross
> > posting this on phabricator and the mailing list to give it some more
> > attention and to start some proper discussion before anyone starts
> > working on this:
> >
> > Right now there are only two options for two factor authentication:
> >
> > * Don't use two-factor authentication (insecure)
> > * Use two factor authentication (annoying as hell)
> >
> > With two factor authentication it doesn't seem to be possible to make
> > session persistent and it really is extremely annoying to look for
> > your mobile phone, open the app and fill in the code everytime you
> > want to do some simple wiki action. I am very lazy and even found
> > myself to rather decide not to do a minor change (be it fix of typo
> > correction etc. in article on English Wikipedia etc) rather than going
> > through the hassle of using the google authenticator.
> >
> > I think it would be really cool to have an option (or maybe even more
> > of them?) that would help to specify when two factor auth is really
> > desired, so that for example users could decide that for simple
> > actions like wiki editing normal login would be sufficient, but for
> > changes like:
> >
> > * Change of password
> > * Change of (some) preferences
> > * Admin actions (block, delete etc.)
> >
> > P.S. Unfortunately I no longer have so much free time to track every
> > single thread in this mailing list, so maybe this is a duplicate of
> > some older idea by someone else, if that's the case, please merge the
> > phab task with whatever the other identical proposal is.
> >
> > Thank you
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] My Phabricator account has been disabled

2018-08-08 Thread Adam Wight
On Wed, Aug 8, 2018 at 4:13 PM MZMcBride  wrote:

> I think the Code of Conduct Committee _is_ arguing that it's the use of
> the word "fuck" that was problematic here.
>

This is disingenuous, MZMcBride.  In the "New Wikimedia Foundation has soft
launched!" thread, you also wrote:
> I think this type of behavior by the communications department is really
inappropriate, unbecoming, and inconsistent with Wikimedia's values. [...]
> Ah, I see now. This is just some cruel waste of staff and volunteer time [...]
> You ask for people to point out issues, even providing a link to
Phabricator Maniphest, and then gaslight them by closing the tasks and
telling them that the very obvious bug is intentional.

Apparently, that went on and was even escalated in the bug tracker, in
response to what looks like otherwise normal and harmless back-and-forth.

MZ, hopefully you recognize this is an abusive way to treat other people.
Silencing anyone is rarely appropriate, but your behavior in this earlier
thread was gross enough that I decided against participating.  In fact, I
had my own concerns about the new WMF site but you had already created a
toxic dynamic, effectively losing me (and undoubtedly others) as an ally in
that discussion.

That seems like exactly the sort of thing the Code of Conduct exists to
prevent, so I agree with their actions in silencing you in order to make
space for other voices.

Thank you for your energy and insights, and I hope we can work together to
root out the bad decisions and corruption, without this nonsense of having
to bail you out of Phabricator jail every few months.

-Adam Wight
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Help remove unused wmf-config code

2018-07-08 Thread Adam Wight
Strange--the "UseContributionTracking" variable is still a thing, although
it's reported as unused by the script.  See
https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/CommonSettings.php#L2098

I get correct results when querying manually:
https://codesearch.wmflabs.org/search/api/v1/search?repos=*=:20=nope==(\%27|%22|wg)UseContributionTracking

Donno why it's showing up as unused.
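
If anyone wants to double-check the script's output by hand, here is a small
sketch of the query it should be making; the parameter and field names
(q, repos, i, files, "Results") are assumptions based on the Hound search
API that codesearch wraps:

    import requests

    CODESEARCH = "https://codesearch.wmflabs.org/search/api/v1/search"

    def is_still_used(pattern):
        # True if any indexed repository matches the regex.
        resp = requests.get(CODESEARCH, params={
            "q": pattern, "repos": "*", "i": "nope", "files": "",
        }, timeout=30)
        resp.raise_for_status()
        return any(resp.json().get("Results", {}).values())

    print(is_still_used("('|\"|wg)UseContributionTracking"))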

-Adam



On Sun, Jul 8, 2018 at 7:07 PM Gergo Tisza  wrote:

> On Sun, Jul 8, 2018 at 5:41 AM Stas Malyshev 
> wrote:
>
> > > Open for review:
> > >
> >
> https://gerrit.wikimedia.org/r/#/q/project:operations/mediawiki-config+topic:cleanup+is:open
> >
> > This one produces 404.
> >
>
> Unfortunately search URLs are not compatible between the old and new Gerrit
> interface. The Polygerrit equivalent for this one is
>
> https://gerrit.wikimedia.org/r/q/project:operations%252Fmediawiki-config+topic:cleanup+is:open
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Scoring platform status update

2018-05-02 Thread Adam Wight
We've summarized our technical work over the last three months, please see
the blog post for details:
https://phabricator.wikimedia.org/phame/post/view/104/status_update_may_2_2018/

Our novel, collaborative auditing system "judgment and dialogue engine"
(JADE) is deployed to the beta cluster and is now available for tool
developers to explore and integrate with.  Read more here,
https://mediawiki.org/wiki/JADE

A dynamically updated table of ORES support is now available for anyone
tracking our progress:
https://tools.wmflabs.org/ores-support-checklist/

Regards,
Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Scoring platform team status update

2018-01-30 Thread Adam Wight
Friends,

Anyone interested in ORES and Scoring Platform’s recent progress can read an 
exhaustive list on our engineering blog,
https://phabricator.wikimedia.org/phame/post/view/84/status_update_january_30_2018/
 


Aside from supporting several new languages, the biggest changes that might 
affect other developers are: we now supply draft quality predictions for new 
enwiki articles, already loaded into the MediaWiki database via 
Extension:ORES; and all of our models now include statistics that can be used 
to calculate appropriate cutoff points.

Enjoy!

-Amir, Aaron, Adam, and Sumit


signature.asc
Description: Message signed with OpenPGP
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Simple overview image about how MW loads resources in clients

2017-11-07 Thread Adam Wight
Where has this drawing been all my life!

-Adam

> On Nov 6, 2017, at 1:39 PM, Joaquin Oltra Hernandez 
>  wrote:
> 
> Hi,
> 
> We were having a session where we talked about resource loading, code entry
> points for the front-end, and how things work on MediaWiki, and we came up
> with a small pic to explain the lifecycle for people newer to MediaWiki.
> 
> Maybe it could help some people get a better grasp about where files are
> coming from and what why the load.php urls are as they are.
> 
> Please, forgive any missing details, and if there is something very wrong
> I'd love to correct it, please let me know.
> 
> Also to clarify, "Magic" is used as "Dynamic, runtime based, dependent on
> the state of your code/client cache/server state & extensions" to shorten
> things and in a humorous key.
> 
> Links:
> 
>   - Phab: https://phabricator.wikimedia.org/M232
>   - Imgur: https://i.imgur.com/DYLqtQf.png
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] New scoring models deployed

2017-07-18 Thread Adam Wight
Friends,

This is a Scoring Platform technical update for the last couple of weeks of
work.


*Deployments*
We completed a biweekly deployment today, adding models for quite a few new
wikis.  Congratulations, and a big thank you to everyone who helped us
coordinate and gather the data to accomplish this!

Epic: https://phabricator.wikimedia.org/T170485

*New models*

   - Albanian Wikipedia: new models for reverted, damaging, and goodfaith.


   - Bengali Wikipedia: new model for reverted.


   - Greek Wikipedia: new model for reverted.


   - Tamil Wikipedia: new model for reverted.


   - Romanian Wikipedia: new models for damaging and goodfaith.


   - Turkish Wikipedia: fixes to the article quality model ("wp10") (
   https://phabricator.wikimedia.org/T170838 )
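
The new models are requested with the same URL pattern as before; a quick
sketch for the Albanian Wikipedia models (12345 is a placeholder revision
ID):

    import requests

    url = "https://ores.wikimedia.org/v3/scores/sqwiki/12345"
    resp = requests.get(url,
                        params={"models": "reverted|damaging|goodfaith"},
                        timeout=30)
    resp.raise_for_status()
    # Assumed v3 shape: {wiki: {"scores": {rev_id: {model: {...}}}}}
    print(resp.json()["sqwiki"]["scores"]["12345"])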

*Service updates*

   - Switched precaching from RCStream (deprecated) to new EventStreams,
   https://phabricator.wikimedia.org/T166046


   - Better error handling for bad API params,
   https://phabricator.wikimedia.org/T168920


   - Patched a DoS caused by a combination of bad regex and inadequate
   timeout code.  Incident:
   https://wikitech.wikimedia.org/wiki/Incident_documentation/20170623-ORES


Note that Albanian and Romanian Wikipedias will soon support ORES in their
Recent Changes feeds.  Subscribe to
https://phabricator.wikimedia.org/T170723 for updates.

Regards,
Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Changes in colors of user interface

2016-12-14 Thread Adam Wight
Hi Pine,

That's a really interesting question you bring up: people develop
associations between colors and exact positions on the screen, so moving or
discoloring a button will jar them out of a pleasant, easy-to-anticipate
experience and invalidate some of what they learned, possibly making the
videos less effective teaching tools as the interface changes.

I'm not sure what we (wearing my WMF hat) can do to help with the problem,
however.  Change control at the level you're talking about would mean a
radical reworking of our deployment strategy.  As I understand it, the
alternative is to stockpile code changes and do big software releases on a
longer timeline, but that's a somewhat discredited approach.  If we stick
to the lighter, weekly deployments then it's inevitable that the interface
will be evolving rapidly.

Also, let me thank you for your work on the instructional videos!  I'm sure
these will make a huge difference for newbies and might even attract new
editors.  In an ideal world, we could write scripts to automate the UI
segments you'll be filming, and could even replay them later and replace
segments to publish new editions of the videos.  Short of that, however,
maybe you could distribute a companion quick reference card, which would be
easier to keep up-to-date and would illustrate the placement and coloration
of major components.

I'm happy to see so many of my colleagues in this thread, and feeling
immensely proud that such a potentially explosive issue (the bigger issue
of WMF deployments in the context of broader community consensus and
engagement in the process) was discussed at length, yet there was nothing
but an outpouring of generosity and assumption of good faith.  This is when
it feels good to be a Wikimedian!

I do think we should start new threads for potential ideas to improve WMF
deployment communication.  Amir's announcement was a wonderful model of
developer citizenship, and I think the palette change itself is beyond
reproach.  Continuing here makes me uncomfortable really, because our focus
should not be on Amir's patch or even his announcement, but on the bigger
issues of two-way communication.  Making sure we're targeting policy for
criticism rather than people is essential to this healthy
communication--otherwise some of us will feel obliged to defend the person
being targeted and will struggle to be receptive to the constructive
content.

Warm regards,
Adam

On Wed, Dec 14, 2016 at 1:30 AM, Pine W  wrote:

> Hi Peachy,
>
> As an example of a potential high-impact color change that would
> result in a need to change documentation, I recall a proposal to
> change all red links to a different color. I don't recall the user's
> reasoning for the proposed change, but that is one situation where I
> believe that color alone signifies important information to the end
> user.
>
> I understand that in the example that came up in this thread we're
> talking about UI color harmonization rather than a move that would be
> as obvious a color change to users as changing red links to (for
> example) green links.
>
> Does that answer your question? I was thinking about UI changes in
> general, particularly across millions of pages, rather than about a
> specific scenario of color being used intentionally to convey a
> certain kind of information to the user.
>
> Pine
>
>
>
> On Wed, Dec 14, 2016 at 1:12 AM, K. Peachey  wrote:
> > Hi Pine,
> >
> > Any chance to provide information or examples of these documents that
> > would need to be replaced if/when colours are changed?
> >
> > To my knowledge, There is no where in MediaWiki core that relies on
> > colour only to convey information to the clients/end users. The colour
> > is used to enhance and/or supplement the information provided.
> >
> > I know from personal experience, I have many times used documentation
> > where colouring and/or other user experience elements (examples:
> > icons, system dialog environments) have changed weather it's from
> > in-application/services changes and redesigns or from external changes
> > such as system user interface provided mechanisms without major
> > impacts to the documentation that i've used.
> >
> > On 14 December 2016 at 18:39, Pine W  wrote:
> >> I have delayed responding to this thread until I felt that I could do
> >> with some degree of calmness.
> >>
> >> I view UI changes that affect millions of pages as a big deal. I
> >> realize that from a developer's perspective it may seem trivial to
> >> change a color setting. Let me try to illustrate a different
> >> perspective that might help to explain how seemingly small changes,
> >> when implemented at large scale, can have significant effects. I am
> >> going to ask for the collective patience of the people in this thread
> >> as I explain a perspective that appears to be different from a number
> >> of theirs.
> >>
> >> Marketers spend significant 

Re: [Wikitech-l] Setting up a new Tomcat servlet in production?

2016-10-18 Thread Adam Wight
Hi Shoichi,

On Mon, Oct 17, 2016 at 5:25 AM, 魔法設計師  wrote:

> 1. Actually speaking it can be launched as a self-server by lightweight
> Jetty (the server is Jetty embedded); it will be launched as a normal java
> application. Why I use Tomcat now and not "java -jar .jar"
> is because on tools.wmflabs.org, as I remembered,
>
> I had took the way :
> "webservice generic start /data/project/toolname/code/myserver.bash "
>
> Then, the port binding always failed, because the command "port" seemed
> to always take away parameters that belong to
> the java app itself.
>
> I couldn't resolve it,so I changed to Tomcat.
>

Great point--we can serve this using whatever Java web container is most
practical!  Question for wikitech-l: Which would that be?  It looks like
we're already running Jetty for Gerrit, is that the best choice for a new
service?


> 2. About Chinese variable and function names in the server code, after
> forking from the upstream, I think they can be translated to English
> (actually speaking, I have already got some of this work done.)
>

I'd be happy to help with some of that; maybe we can coordinate the work at
some point.  Forking just for the sake of translation sounds problematic,
though.  Maybe I'm wrong about this being a review requirement--or maybe
the upstream author would be open to switching to English variable names?
Sorry again about the anglocentrism...

3. There are another way : deploying it to an independent server. For
> example
>
>a. use the upstream server like this
> 
>b.use a server maintained by  we, Taiwan Chapter, providing service for
> C.J.K.V wiki_  and etc.
>

I don't think we have the option of using a 3rd-party server, nor relying
on a WMF Labs service for production use.  We want to have the same uptime
and security guarantees as for the wiki itself.  The production
configuration will probably need to look like a dedicated node or cluster
for serving the IDS requests, and we would cache its PNG or SVG responses.
Someone here might be able to correct me, though?

Thanks again for integrating this tool for us!

-Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Setting up a new Tomcat servlet in production?

2016-10-17 Thread Adam Wight
Friends,

I'm helping review a tool 
that I understand Wikimedia Taiwan is eager to use, which uses a parser
hook to render ideographic description characters into PNG glyphs in order
to display historic or rare characters which
aren't covered by Unicode.  It's very cool.

The challenges are first that it's based on a Tomcat backend,
which I'm not sure is precedented in our current ecosystem, and second that
the code uses Chinese variable and function names, which should
unfortunately be Anglicized by convention, AIUI.  Finally, there might be
security issues around the rendered text itself, if it were misused to mask
content.

I'm mostly asking this list for help with the question of using Tomcat in
production.

Thanks,
Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] tags are a usability nightmare for editing on mediawiki.org

2016-04-03 Thread Adam Wight
On Apr 3, 2016 1:30 AM, "Jon Robson"  wrote:
> The Translate tag has always seemed like a hack that I've never quite
understood.

+1. Couldn't we use Parsoid data tags to identify paragraphs? It seems like
that would lend itself to an incremental migration.

-Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "Try the free Wikipedia app" banners

2015-09-09 Thread Adam Wight
Brandon, great to hear from you!  I think you're working off of old
information--like Matt said, you can edit via the app now.  It's cool that
you're inspired to bring up more questions, though, and I'm glad you're
focusing on the design phase of the next experiment.  Are you busy for the
next ten years?

Another misconception or oversight I want to bring up is that Fundraising
is the team pioneering the 2/3-page or full-page banners.  We're driving
readers from the website to completely closed and somewhat evil payments
platforms.  If there's any relevant or even irrelevant research about
interstitials, please apply it to our work, cos we're about to have a huge
impact on the English-speaking community in December.  Any complaints about
the Finnish mobile experiment and dogpiling on the awesome apps developers
seem incredibly misplaced while I'm walking around with this "Kick Me" sign
on my backside.

Thanks,
Adam

On Fri, Sep 4, 2015 at 12:12 PM, Brandon Harris  wrote:

>
> > On Sep 2, 2015, at 3:53 PM, Toby Negrin  wrote:
>
> > 1. We're moving people from an open platform to a closed platform: I
> think
> > this is an oversimplification of the situation -- as has been noted
> before,
> > the android app is 100% open source and while the data is not, in my
> > opinion, comprehensive, it's inarguable that a large percentage of mobile
> > traffic on the internet is from apps. It's not possible to fulfill our
> > mission[4] if Wikipedia and sister project content is not available in
> > widely used channels.
>
> I'm not sure this makes a lot of sense.  The widest, most-open
> content channel that the projects have is through the web interfaces:  all
> phones, all devices, all computers can access the same content in the same
> manner.  That is to say: 100% of our readers have the ability to use the
> web versions (either desktop or mobile web) where as only a subset can use
> the Android app, which is a different subset that can use iOS.  (They also
> end up having fragmented experiences, which is sub-optimal.)
>
> So it seems to me that the apps are not required to fulfill the
> mission.  They feel like distractions, and - quite possibly - negatives to
> the mission (in that we can't convert Readers into Editors through the app).
>
> (Which, by the way, this whole "focus entirely on readers" shift
> seems counter-intuitive to me.  Having a billion readers doesn't mean
> anything if there aren't any editors anymore. It's a complete failure at
> that point.)
>
> > 2. The campaign was not publicized before launch: We notified the Finnish
> > community on their Village pump before the campaign began[5] and the
> > campaign is detailed on the central notice page[6]. We felt this was
> > appropriate considering the scope of the test.
>
> Restricting the conversation to two very small, almost
> impenetrable discussion areas seems unwise.  It seems obvious to me that
> this idea and action would cause friction with the community.  I don't
> think there's any bad-faith going on here, but this definitely feels like
> an oversight.
>
> > 3. Banners/Interstitials don't work/suck/etc: There's a difference
> between
> > a forced install and letting users know that an app exists and our
> > designers have worked hard to make the banners effective without being
> > excessively intrusive. You can see the designs on the Phab ticket above.
> I
> > don't generally place a great deal of faith in blog posts or other
> > company's data -- the google study showing the ineffectiveness of
> > interstitials has already been challenged by other similarly reputable
> > sources [7,8]. For this and other reasons, I believe that we need to
> gather
> > our own data.
>
> Is "our own data" more important than the goodwill of our users or
> developers?  I think that's a big part of why people might be upset about
> this: it's a step away from what had classically been the principles
> underlying the movement's activities.
>
> Even that said, though:  this is the first anyone is saying "yes,
> we did some research about interstitials".  It seems to me that the Google
> study was something that could have been discussed ahead of time.   I also
> don't understand why we can't do the whole Open Source thing and make use
> of other people's research, unless this indicates a further shift into "not
> invented here" territory.
>
> > 4. We don't understand what success looks like: We are planning a meeting
> > with our Research team[9] to assess the statistical validity of our
> > results, but the basic question is if users read more content using the
> app
> > than the mobile web. This information will help guide us on future
> product
> > decisions and will be shared with the community.
>
> An experiment without a box isn't an experiment.
>
> "We would like to determine if people read more through
> the apps than through the web 

Re: [Wikitech-l] Lists as first class citizens

2015-04-08 Thread Adam Wight
I think the explicit schema will be brilliant when applied to collections;
it will facilitate linking tools and more.  But would it make sense to
represent lists as wikidata statements, as a compromise between native
SQL and wiki pages?  We would gain the standard onwiki tools, a data
structure that makes lists queryable and richly linkable, and it also
becomes easy to add properties for higher-level projects such as
distinguishing between Education Program's list of articles to review and
list of articles I'm editing.

-Adam

On Wed, Apr 8, 2015 at 10:58 AM, Jon Robson jdlrob...@gmail.com wrote:

 The main motivation for lists as not being wikipages is so that they
 can be combined with the recent changes feed and other things stored
in the database. We're also hoping to support the filtering of
 collections via tags which becomes much easier if stored in a
 database. A watchlist is not a wikipage, so that in my eyes sets a
 precedent.

 We have plenty of options to surface edits to collections as items in
 the recent changes if necessary.
 It would be most helpful to articulate what the problems are, rather
 than say wikipages are the solution! This might prove to be true but
 without understanding the inadequacies of the current approach we
 won't be able to pass that judgement.. so please test and provide that
 feedback and we'll find the right solutions.

 Thanks for your feedback thus far.



 On Wed, Apr 8, 2015 at 8:52 AM, Federico Leva (Nemo) nemow...@gmail.com
 wrote:
  I hope no 60 storey building is in the making. The bazaar is horizontal,
 a
  vertical suk is too similar to a cathedral.
 
  Nemo
 
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l



 --
 Jon Robson
 * http://jonrobson.me.uk
 * https://www.facebook.com/jonrobson
 * @rakugojon

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Global user pages deployed to all wikis

2015-02-20 Thread Adam Wight
Hi Legoktm,

Thanks for doing this great work!  I tried to follow the instructions at
[1], and discovered that GlobalPreferences hasn't been deployed yet, so I
don't think it's possible to enable a global user page yet.  Please keep us
posted when that happens, I'm looking forward to abusing this new feature :D

-Adam

[1] https://www.mediawiki.org/wiki/Help:Extension:GlobalUserPage

On Fri, Feb 20, 2015 at 8:24 AM, Erwin Dokter er...@darcoury.nl wrote:

 On 20-02-2015 13:07, Gerard Meijssen wrote:

 Hoi,
 Babel templates are replaced by the #Babel functionality... The only
 problem I have with the Babel functionality on Meta is that they decided
 to
 have everything in Green..


 The content is transcluded, but it's the local CSS that will style that
 content. So #babel does look different on other projects.

 Regards,
 --
 Erwin Dokter



 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] New feature in development: collections

2015-01-16 Thread Adam Wight
Great to hear you're taking on this work!

From my wishlist, I'd like to see support for cross-wiki collections (T19168
https://phabricator.wikimedia.org/T19168); this could be used to make
really cool multimodal or bilingual materials...

Thanks,
Adam

On Fri, Jan 16, 2015 at 12:16 PM, Federico Leva (Nemo) nemow...@gmail.com
wrote:

 Hi and welcome. Thanks for sharing. Do you already have a mediawiki.org
 page for this idea?

 Please choose another name. Collection is taken.
 https://www.mediawiki.org/wiki/Extension:Collection

 Nemo


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Fwd: [Ops] [Gerrit] Emit alert when Ori commits on a weekend - change (operations/puppet)

2014-12-04 Thread Adam Wight
Thanks for all your outstanding work, Ori!  This one was so fantastic I had
to forward a bit...  at least you can admit you have a problem :p

-Adam

-- Forwarded message --
From: Ori.livneh (Code Review) ger...@wikimedia.org
Date: Thu, Dec 4, 2014 at 4:55 AM
Subject: [Ops] [Gerrit] Emit alert when Ori commits on a weekend - change
(operations/puppet)
To:


Ori.livneh has uploaded a new change for review.

  https://gerrit.wikimedia.org/r/177521

Change subject: Emit alert when Ori commits on a weekend
..

Emit alert when Ori commits on a weekend

Provisions an Icinga check on palladium that issues an alert if the time is
between Friday 21:00 and Monday 01:00 and there exists a commit from me in
the
last hour in operations/puppet.

Change-Id: I2c4fd0d6ef907e4afe91f0248e7039f31d70696c
---
A files/icinga/check-ori-commits
M manifests/misc/monitoring.pp
M manifests/site.pp
3 files changed, 37 insertions(+), 0 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet
refs/changes/21/177521/1

diff --git a/files/icinga/check-ori-commits b/files/icinga/check-ori-commits
new file mode 100755
index 000..d2d9552
--- /dev/null
+++ b/files/icinga/check-ori-commits
@@ -0,0 +1,20 @@
+#!/bin/bash
+# Icinga alert script for Ori weekend commits
+#
+# Alerts if the time is between 21:00 on Friday and 01:00 on Monday
+# (my time zone) and there exists a commit from me in the last hour.
+
+TZ=America/Los_Angeles
+  /usr/bin/git \
+  --git-dir=/var/lib/git/operations/puppet/.git log \
+  --author=o...@wikimedia.org \
+  --since=1hour \
+  --format=%cd | /bin/grep -Pq '(Fri .* 2.:|Sat|Sun)'
+
+if [ $? -eq 0 ]; then
+  echo CRITICAL: Ori committed a change on a weekend
+  exit 2
+else
+  echo OK: Ori is behaving himself
+  exit 0
+fi
diff --git a/manifests/misc/monitoring.pp b/manifests/misc/monitoring.pp
index b333799..baf49ad 100644
--- a/manifests/misc/monitoring.pp
+++ b/manifests/misc/monitoring.pp
@@ -613,3 +613,19 @@
 ],
 }
 }
+
+
+class misc::monitoring::ori_weekend_commits {
+file { '/usr/local/lib/nagios/plugins/check-ori-weekend-commits':
+source => 'puppet:///files/icinga/check-ori-weekend-commits',
+owner  => 'root',
+group  => 'root',
+mode   => '0555',
+}
+
+nrpe::monitor_service { 'ori_weekend_commits':
+description  => 'Ori committing changes on the weekend',
+nrpe_command => '/usr/local/lib/nagios/plugins/check-ori-weekend-commits',
+require      => File['/usr/local/lib/nagios/plugins/check-ori-weekend-commits'],
+}
+}
diff --git a/manifests/site.pp b/manifests/site.pp
index 688fa18..f3bf770 100644
--- a/manifests/site.pp
+++ b/manifests/site.pp
@@ -2162,6 +2162,7 @@
 include role::access_new_install
 include role::puppetmaster::frontend
 include role::pybal_config
+include misc::monitoring::ori_weekend_commits

 $domain_search = [
 'wikimedia.org',

--
To view, visit https://gerrit.wikimedia.org/r/177521
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I2c4fd0d6ef907e4afe91f0248e7039f31d70696c
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: Ori.livneh o...@wikimedia.org

___
Ops mailing list
o...@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/ops
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Anonymous editors IP addresses

2014-07-29 Thread Adam Wight
++the EFF for more ideas; they are actively doing great work on so-called
perfect forward secrecy.

There are simple things we could do to achieve a better balance between
privacy and sockpantsing, such as cryptolog [1], in which IP addresses are
hashed using a salt that changes every day.  In theory, nobody can reverse
the function to reveal the IP, but you can still correlate all of an
address's edits for the day, week, or whatever, making CheckUser possible.
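
A minimal sketch of the idea (not cryptolog itself): HMAC each address with
a random salt that only lives for the day, so edits correlate within a day
but the raw IP is never written down:

    import datetime
    import hashlib
    import hmac
    import os

    _salts = {}

    def daily_salt():
        # Random salt per UTC day, kept only in memory; once it's thrown
        # away, the day's mapping is gone (that's cryptolog's trick).
        today = datetime.datetime.utcnow().strftime("%Y-%m-%d")
        if today not in _salts:
            _salts[today] = os.urandom(16)
        return _salts[today]

    def mask_ip(ip):
        # Stable per-day token that stands in for the raw IP address.
        return hmac.new(daily_salt(), ip.encode(),
                        hashlib.sha256).hexdigest()[:16]

    print(mask_ip("192.0.2.42"))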

IP range blocking obviously needs to happen up-front, before the IP is
mangled.  I have no suggestions, but maybe browser and preferences
fingerprinting would be more effective anyway, since: tor.

-Adam

[1] https://git.eff.org/?p=cryptolog.git;a=summary


On Fri, Jul 11, 2014 at 8:45 AM, Chris Steipp cste...@wikimedia.org wrote:

 On Friday, July 11, 2014, Daniel Kinzler dan...@brightbyte.de wrote:

  Am 11.07.2014 17:19, schrieb Tyler Romeo:
   Most likely, we would encrypt the IP with AES or something using a
   configuration-based secret key. That way checkusers can still reverse
 the
   hash back into normal IP addresses without having to store the mapping
  in the
   database.
 
  There are two problems with this, I think.
 
  1) No forward secrecy. If that key is ever leaked, all IPs become
 plain.
  And
  it will be, sooner or later. This would probably not be obvious, so this
  feature
  would instill a false sense of security.
 

 This is probably the biggest issue. Even if we hmac it, it's trivial to
 brute force the entire ipv4 (and with intelligent assumptions about
 generation, most of the ipv6) range in seconds, if the key was ever known.


 
  2) No range blocks. It's often quite useful to be able to block a range
 of
  IPs.
  This is an important tool in the fight against spammers, taking it away
  would be
  a problem.
 

 Range blocks, I imagine, would continue working the same way they do.
 Someone would have to identify the correct range (which is very difficult
 when administrators can't see IP's), but on submission, we have the IP
 address to check against the blocks. (Unless someone proposes to store
 block ranges as hashes, that would definitely get rid of range blocks).


 
  -- daniel
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org javascript:;
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MW-Vagrant improvements at the Zürich Hackathon

2014-07-07 Thread Adam Wight
Bryan,

I think I need to take you up on the offer to help.  I can do the coding,
but I need some borrowed insight to get started.  I don't think the
wikimania_scholarships model is a good one to follow; I'd much rather add
configurability to the mediawiki::wiki or multiwiki::wiki classes.
Unfortunately, I see a lot of cascading changes being necessary, which
makes me think I'm on the wrong track.

-Adam


On Fri, Jun 13, 2014 at 5:29 PM, Bryan Davis bd...@wikimedia.org wrote:

 Adam,

 I wanted to avoid the complexity of a full multi version setup if we
 could, but if there are more than a couple of roles that would benefit from
 such features it would be possible. The easiest thing for your payment wiki
 may be to follow the pattern of the wikimania_scholarships role and add
 your own apache vhost and git checkout management. I'd be glad to help you
 work on this if you'd like some assistance. We might be able to add a bit
 more configurability to some of the existing puppet classes and defines to
 make working with multiple checkouts of mw-core easier.

 Bryan

 On Friday, June 13, 2014, Adam Wight awi...@wikimedia.org wrote:

 Bryan and Chris,
 The multiwiki work is fantastic, a big thank you for pursuing this!  I
 tried to use your new module to provide a vagrant development environment
 for Fundraising's payments wiki [1], and I ran up against a large and
 very solid-looking wall that I think is worth mentioning.  We maintain a
 special release branch of MediaWiki for payments, with a bit of security
 hardening.  We cannot follow trunk development without carefully reading
 over the new features, and we need to develop against this target so that
 we catch version incompatibilities before deployment.

 I see that multiwiki encapsulates the various wikis by configuration only,
 and they all share the main codebase.  Do you have multiple checkouts of
 MediaWiki-core on your roadmap, or are we a fringe case?  I'd like to help
 support our development under vagrant, but this issue is a bit of a
 blocker.  Any advice would be appreciated.

 Thanks,
 Adam

 [1] https://gerrit.wikimedia.org/r/135326, production is
 https://payments.wikimedia.org


 On Wed, May 21, 2014 at 9:55 AM, Bryan Davis bd...@wikimedia.org wrote:

  On Fri, May 16, 2014 at 2:40 PM, Arthur Richards
  aricha...@wikimedia.org wrote:
  
   CentralAuth/Multiwiki:
   Bryan Davis, Chris Steipp, and Reedy spent a lot of time hacking on
 this,
   and we now have support for multiwiki/CentralAuth in Vagrant! There is
   still some cleanup work being done for the role to remove
  kludge/hacks/etc
   (see https://gerrit.wikimedia.org/r/#/c/132691/).
 
  The CentralAuth role and the associated puppet config that allows
  creation of multiple wikis as Apache virtual hosts on a single
  MediaWiki-Vagrant virtual machine have been merged! Go forth and
  debug/extend CentralAuth. :)
 
  I'd love to see additional roles created that use the multwiki::wiki
  Puppet define to add interesting things for testing/debugging like RTL
  wikis or other complex features such as WikiData that use a
  collaboration between multiple wikis in the WMF production cluster.
  If you're interested in working on something like this and get stuck
  with the Puppet code needed or find shortcomings in the setup that
  Chris and I developed I'd be glad to try and help work through the
  issues.
 
  Bryan
  --
  Bryan Davis  Wikimedia Foundationbd...@wikimedia.org
  [[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
  irc: bd808v:415.839.6885 x6855
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



 --
 Bryan Davis  Wikimedia Foundationbd...@wikimedia.org
 [[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
 irc: bd808v:415.839.6885 x6855

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Old page versions and historical templates

2014-06-30 Thread Adam Wight
Hi,
Fundraising would like the ability to snapshot or retrieve a CentralNotice
banner exactly as it was rendered at an earlier time, including the old
revisions of templates, which is tricky and AFAICT not supported by
MediaWiki-core.

I know this issue has come up before but it looks like development is
external and has died out.  The most relevant code seems to be
Extension:BackwardsTimeTravel, which adds some expensive-looking hook
callbacks, and Extension:Memento, which is too far into the future and
amends how HTTP operates.

If ours is an unsupported usage, we'll have to implement a horrible kludge
such as taking a recursive snapshot of all transcluded pages.
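
For reference, the kludge would look roughly like this, using the standard
action API (prop=templates plus prop=revisions); the host and function name
are placeholders, and continuation/limits handling is omitted:

    import requests

    API = "https://meta.wikimedia.org/w/api.php"  # assumed banner wiki

    def snapshot(title):
        # Record the current revision ID of a page and of everything it
        # transcludes.  prop=templates lists transclusions from the links
        # table, so one query per banner is enough for a small page.
        tpl = requests.get(API, params={
            "action": "query", "prop": "templates", "titles": title,
            "tllimit": "max", "format": "json",
        }, timeout=30).json()
        titles = [title]
        for page in tpl["query"]["pages"].values():
            titles += [t["title"] for t in page.get("templates", [])]

        revs = requests.get(API, params={
            "action": "query", "prop": "revisions", "rvprop": "ids",
            "titles": "|".join(titles), "format": "json",
        }, timeout=30).json()
        return {p["title"]: p["revisions"][0]["revid"]
                for p in revs["query"]["pages"].values() if "revisions" in p}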

Thanks,
Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Logging edit lifecycle events

2014-06-27 Thread Adam Wight
Hi VE team,
I heard a rumor that there is a new focus on logging edit events to track
the overall session trajectory and other stuff.  If it's helpful, I've been
working on that as well, here are my notes and attempts to implement,
hopefully it is complementary to whatever you've done so far:

https://meta.wikimedia.org/wiki/Schema:EditLifecycle
https://wikitech.wikimedia.org/wiki/User:Awight/Edit_logging
https://gerrit.wikimedia.org/r/#/c/141097/
https://gerrit.wikimedia.org/r/#/c/141113/
https://gerrit.wikimedia.org/r/#/c/141114/

Please loop me in on the conversation!

Thanks,
-Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unclear Meaning of $baseRevId in WikiPage::doEditContent

2014-06-23 Thread Adam Wight
On Fri, Jun 6, 2014 at 4:08 PM, Aaron Schulz aschulz4...@gmail.com wrote:

 I suppose that naming scheme is reasonable.

 $contentsRevId sounds awkward, maybe $sourceRevId or $originRevId is
 better.


What about rollbackRevId?  I want the variable name to make its purpose
very clear.

-Adam




 --
 View this message in context:
 http://wikimedia.7.x6.nabble.com/Unclear-Meaning-of-baseRevId-in-WikiPage-doEditContent-tp5028661p5029674.html
 Sent from the Wikipedia Developers mailing list archive at Nabble.com.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MW-Vagrant improvements at the Zürich Hackathon

2014-06-13 Thread Adam Wight
Bryan and Chris,
The multiwiki work is fantastic, a big thank you for pursuing this!  I
tried to use your new module to provide a vagrant development environment
for Fundraising's payments wiki [1], and I ran up against a large and
very solid-looking wall that I think is worth mentioning.  We maintain a
special release branch of MediaWiki for payments, with a bit of security
hardening.  We cannot follow trunk development without carefully reading
over the new features, and we need to develop against this target so that
we catch version incompatibilities before deployment.

I see that multiwiki encapsulates the various wikis by configuration only,
and they all share the main codebase.  Do you have multiple checkouts of
MediaWiki-core on your roadmap, or are we a fringe case?  I'd like to help
support our development under vagrant, but this issue is a bit of a
blocker.  Any advice would be appreciated.

Thanks,
Adam

[1] https://gerrit.wikimedia.org/r/135326, production is
https://payments.wikimedia.org


On Wed, May 21, 2014 at 9:55 AM, Bryan Davis bd...@wikimedia.org wrote:

 On Fri, May 16, 2014 at 2:40 PM, Arthur Richards
 aricha...@wikimedia.org wrote:
 
  CentralAuth/Multiwiki:
  Bryan Davis, Chris Steipp, and Reedy spent a lot of time hacking on this,
  and we now have support for multiwiki/CentralAuth in Vagrant! There is
  still some cleanup work being done for the role to remove
 kludge/hacks/etc
  (see https://gerrit.wikimedia.org/r/#/c/132691/).

 The CentralAuth role and the associated puppet config that allows
 creation of multiple wikis as Apache virtual hosts on a single
 MediaWiki-Vagrant virtual machine have been merged! Go forth and
 debug/extend CentralAuth. :)

 I'd love to see additional roles created that use the multwiki::wiki
 Puppet define to add interesting things for testing/debugging like RTL
 wikis or other complex features such as WikiData that use a
 collaboration between multiple wikis in the WMF production cluster.
 If you're interested in working on something like this and get stuck
 with the Puppet code needed or find shortcomings in the setup that
 Chris and I developed I'd be glad to try and help work through the
 issues.

 Bryan
 --
 Bryan Davis  Wikimedia Foundationbd...@wikimedia.org
 [[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
 irc: bd808v:415.839.6885 x6855

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Unclear Meaning of $baseRevId in WikiPage::doEditContent

2014-06-03 Thread Adam Wight
It looks like we should leave the existing hook parameters values alone for
the moment, but it would improve the situation if we renamed variables
which seem to be overloaded or unclear, in MediaWiki core and in
FlaggedRevs.  What do you think of the following conventions,

'oldid' (index.php parameter) -- keep this name only to preserve interface
compatibility.  This refers to a historical revision when used in the
action=view case, and to the latest revision ID of the page at the time an
edit session begins.

$oldid -- keep as-is in the action=view codepath, rename to $parentRevId in
action=edit

$parentRevId -- latest available revision ID at the time an edit session
begins.  Used to detect conflicts, and identify the parent revision record
upon save.  This is updated during successful automatic rebase.  I don't
see a good use case for preserving what Daniel calls the reference
revision, the parentRevId before rebase.

$baseRevId and $baseId -- rename everywhere to $contentsRevId, but examine
the code contexts for the smell of confounding with $parentRevId.

$contentsRevId -- revision ID of the source text to copy when performing
undo or rollback.  We will probably want to supplement hooks that only
passed $contentsRevId, such as NewRevisionFromEditComplete, with
$parentRevId as an additional parameter.

A refactor along these lines would keep me from losing already scant
marbles as I attempt to fix related issues in core:
https://gerrit.wikimedia.org/r/#/c/94584/ .  I see now that I've already
begun to introduce mistakes caused by the difficult common-sense
interpretation of current variable naming.

-Adam


On Mon, Jun 2, 2014 at 1:22 AM, Daniel Kinzler dan...@brightbyte.de wrote:

 Am 30.05.2014 15:38, schrieb Brad Jorsch (Anomie):
  I think you need to look again into how FlaggedRevs uses it, without the
  preconceptions you're bringing in from the way you first interpreted the
  name of the variable. The current behavior makes perfect sense for that
  specific use case. Neither of your proposals would work for FlaggedRevs.

 As far as I understand the rather complex FlaggedRevs.hooks.php code, it
 assumes
 that

 a) if $newRev === $baseRevId, it's a null edit. As far as I can see, this
 does
 not work, since $baseRevId will be null for a null edit (and all other
 regular
 edits).

 b) if $newRev !== $baseRevId but the new rev's hash is the same as the base
 rev's hash, it's a rollback. This works with the current implementation of
 commitRollback(), but does not for manual reverts or trivial undos.

 So, FlaggedRevs assumes that EditPage resp WikiPage set $baseRevId to the
 edits
 logical parent (basically, the revision the user loaded when starting to
 edit).
 That's what I described as option (3) in my earlier mail, except for the
 rollback case; It would be fined with me to use the target rev as the base
 for
 rollbacks, as is currently done.

 FlaggedRevs.hooks.php also injects a baseRevId form field and uses it in
 some
 cases, adding to the confusion.

 In order to handle manual reverts and null edits consistently, EditPage
 should
 probably have a base revision as a form field, and pass it on to
 doEditContent.
 As far as I can tell, this would work with the current code in FlaggedRevs.

  As for the EditPage code path, note that it has already done edit
 conflict
  resolution so base revision = current revision of the page. Which is
  probably the intended meaning of false.

 Right. If that's the case though, WikiPage::doEditContent should probably
 set
 $baseRevId = $oldid, before passing it to the hooks.

 Without changing core, it seems that there is no way to implement a
 late/strict
 conflict check based on the base rev id. That would need an additional
 anchor
 revision for checking.

 The easiest solution for the current situation is to simply drop the strict
 conflict check in Wikibase and accept a race condition that may cause a
 revision
 to be silently overwritten, as is currently the case in core.

 -- daniel


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Non-Violent Communication

2014-02-17 Thread Adam Wight
Interesting...

I have very little authority to stand on, but in my exposure to so-called
NVC, it seems more appropriate for diplomatic negotiations than for any
real-life human situation.  IMO this approach boils down to getting your
way without looking like a dick.  Creeps me out.

That said, yes it's important to always deal generously with others.
Unless you're pissed :p

love,
Adam


On Mon, Feb 17, 2014 at 3:14 PM, Derk-Jan Hartman 
d.j.hartman+wmf...@gmail.com wrote:

 On 17 feb. 2014, at 21:45, Monte Hurd mh...@wikimedia.org wrote:

  +1
 
  When I read certain threads on this list, I feel like the assume good
 faith principle is often forgotten.
 
  Because this behavior makes me not want to participate in discussions
 about issues I actually care about, I wonder how many other voices, like
 mine, aren't heard, and to what degree this undermines any eventual
 perceived consensus?
 
  To be sure, if you don't assume good faith, your opinion still matters,
 but you unnecessarily weaken both your argument and the discussion.

 +many

 Yes on this list we have some strong opinions and we aren't always
 particularly careful about how we express them, but assume good faith[1]
 does indeed go a long way and that should be the default mode for reading.
 The default mode for writing should of course be don't be a dick [2].

 We have to remember that although many people are well versed in English
 here, it is often not their mother tongue, making it more difficult to
 understand the subtleties of the opinions of others and/or to express
 theirs, which might lead to frustration for both sides. And some people are
 simply terse where others are blunt and some people have more time than
 others to create replies or to wait for someone's attempts to explain
 something properly.
 Being inclusive for this reason is usually regarded as a good thing and is
 thus a natural part of assume good faith. It is why 'civility' is often so
 difficult to map directly to community standards, because it is too
 tightly coupled with one's own norms, values and skills to be inclusive.

 I'm personally good with almost anything that keeps a good distance from
 both Linus Torvalds-style and NVC. We shouldn't be afraid to point out
 errors or have hefty discussions and we need to keep it inside the lines
 where people will want to participate. But this is no kindergarten either
 and some of the more abrasive postings have made a positive difference.
 It's difficult to strike the right balance but it's good to ask people once
 in a while to pay attention to how we communicate.

 DJ

 [1] https://meta.wikimedia.org/wiki/Assume_good_faith
 [2] https://meta.wikimedia.org/wiki/Don%27t_be_a_dick

 PS.

  Because this behavior makes me not want to participate in discussions
 about issues I actually care about, I wonder how many other voices, like
 mine, aren't heard, and to what degree this undermines any eventual
 perceived consensus?

 If that's what you think of wikitech-l, I assume it is easy to guess what
 you think about the talk page of Jimmy Wales, en.wp's Request for adminship
 and en.wp's Administrator noticeboard ? :)

 PPS.
 I'm quite sure Linus would burn NVC to the ground if he had the chance :)
 For those who haven't followed it and who have a bit of time on their
 hands: There was a very 'interesting' flamewar about being more
 professional in communication on the Linux kernel mailinglist last July.

 http://arstechnica.com/information-technology/2013/07/linus-torvalds-defends-his-right-to-shame-linux-kernel-developers/
 If you distance yourself a bit and just read everything, you'll find that
 there is some basic truth to both sides of the spectrum and it basically
 once again sums up to: we often forget how potty trained we are, even more
 so that there are different styles of potty around the world and whether or
 not a human/animal actually needs training to go potty to begin with. That
 doesn't give an answer, but it's an interesting/lively discussion every
 single time :D
 Slightly related fun:
 https://twitter.com/wyshynski/statuses/430734034113536000


  On Feb 17, 2014, at 11:45 AM, Derric Atzrott 
 datzr...@alizeepathology.com wrote:
 
  Hoy all,
 
  I've been meaning to start a thread about this for a while, but just
  hadn't gotten around to it.  Things have been rather heated the past few
  days, so I figured now would be as good a time as any to go about starting
  this thread.

  Have any of you ever heard of Non-Violent Communication (NVC)?  It's a
  method of communicating, well really more a method of thinking, that aims
  to reduce and resolve conflicts between people.  NVC has sometimes also
  been called Empathetic Communication or Needs Based Communication.  The
  idea of NVC is to frame the discussion in terms of needs and feelings,
  followed up by requests.  Nonviolent Communication holds that most
  conflicts between individuals or groups arise from miscommunication about
  their human 

[Wikitech-l] Proposed autoloader improvement

2013-09-18 Thread Adam Wight
I'd appreciate some feedback on this enhancement to the MediaWiki
autoloader:

https://bugzilla.wikimedia.org/show_bug.cgi?id=53835
https://gerrit.wikimedia.org/r/59804

The PSR-4 recommendation has not been ratified, but I think it's the most
convenient and rational namespace-based autoloading proposal, and I doubt
it will be modified before adoption.
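
As a rough illustration of the lookup rule PSR-4 describes, here is a sketch
in Python (the namespace prefix and directory below are hypothetical, and the
real resolution would of course be implemented in PHP):

# Sketch of the PSR-4 lookup rule; the prefix-to-directory map is made up.
PREFIXES = {
    "MediaWiki\\Extensions\\FancyBox\\": "extensions/FancyBox/includes/",
}

def psr4_path(fqcn):
    """Map a fully qualified class name to the file expected to define it."""
    for prefix, base_dir in PREFIXES.items():
        if fqcn.startswith(prefix):
            relative = fqcn[len(prefix):].replace("\\", "/")
            return base_dir + relative + ".php"
    return None  # not handled by this autoloader

print(psr4_path("MediaWiki\\Extensions\\FancyBox\\Special\\Gallery"))
# -> extensions/FancyBox/includes/Special/Gallery.php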

Putting this logic into core gives extension authors a standardized way of
namespacing their classes, and also has the potential to deprecate or at
least greatly reduce the redundant wgAutoloadClasses lists.

-Adam
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Anonymous user id on wikipedia?

2012-12-18 Thread Adam Wight
I've been digging around in our cookie jar, as part of my work with 
Fundraising, and I have a few questions about the cookies we set on 
anonymous users.


First, I am deeply impressed with the care we have taken to respond to 
the community's privacy concerns, and after first-hand experience 
negotiating with our lawyers to implement an additional cookie, I think 
that WMF deserves its place as a model to the rest of the internet.  I 
would like to help clean up or at least explain the few oversights I 
identify below, so that we can be fully confident that we are doing 
everything we can to prevent abuse of our visitors' privacy.


1) Anonymous users are given a 1-year cookie which uniquely identifies 
them.  After logging out and clearing all cookies from my browser, I 
visited en.wikipedia.org and received this cookie.  Why would an 
anonymous user be given an identifying token?
mediaWiki.user.id=oDNtHcMSeGMSZyRehhuC7ypQRuPEGk3a; expires=Wed, 18 
Dec 2013 18:25:38 GMT; path=/; domain=en.wikipedia.org


2) Anonymous users are enrolled in clicktracking.  I was surprised 
because the extension page at 
http://www.mediawiki.org/wiki/Extension:ClickTracking specifies that it 
affects users, and I think it should very explicitly state that it 
affects logged-in users and anonymous visitors if that is really the 
intention.
clicktracking-session=0orJJTU79otWR6x1m8ykUAyasVpZJBn2x; path=/; 
domain=en.wikipedia.org


3) Registered user's cookies are not cleared at logout.  This seems like 
a pretty basic fix.
enwikiUserName=Adamw; expires=Sun, 16 Jun 2013 18:43:51 GMT; path=/; 
domain=en.wikipedia.org; Secure; HttpOnly
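
(The usual fix is simply to re-send the cookie with an expiry in the past when
the user logs out.  A rough sketch of the idea in Python -- illustrative only,
not MediaWiki's actual logout code:)

# Sketch: expiring a cookie at logout by re-sending it with a past expiry date.
def logout_headers(cookie_name="enwikiUserName"):
    expired = "Thu, 01 Jan 1970 00:00:00 GMT"
    return [("Set-Cookie",
             f"{cookie_name}=deleted; expires={expired}; path=/; "
             "domain=en.wikipedia.org; Secure; HttpOnly")]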


Ideally, an anonymous user, whether or not they have ever been logged in 
as a registered user, will not transmit any personally identifying 
information in their requests.  All three of these cookies violate that 
principle.  I have not found any public debate on the issue, hopefully 
others are interested in this topic.


Regards,
Adam Wight

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] mail archive permalinks [was Re: Mailman archives broken?]

2012-08-17 Thread Adam Wight

Tilman, thanks for those links.

I think the base-32 encoded hash of the Message-Id discussed in 
[http://wiki.list.org/display/DEV/Stable+URLs] gives us a 
straightforward and effective solution to the problem.  Ten characters 
or so should be plenty.  This would produce URLs like 
[http://lists.wikimedia.org/pipermail/wikitech-l/2012-August/OHRDQGOX35.html]


We could prefix these with a parent directory that serves as a 
versioning scheme for our hash, allowing us to create forwarding rules 
if the permalink rules change in the future.  For example (and I have no 
experience, this might not work), we can generate an .htaccess at the 
root of old archive directories, which redirects each of the old 
sequential URLs to the new, hashed location.
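
To make the hashing scheme concrete, here is a minimal sketch in Python
(assuming SHA-1 and the standard base-32 alphabet as in the Stable URLs
proposal; the ten-character truncation is just my suggestion, and this is not
necessarily Mailman's exact algorithm):

import base64
import hashlib

def permalink_slug(message_id, length=10):
    # Base-32 encode the SHA-1 of the Message-Id, keep the first few characters.
    digest = hashlib.sha1(message_id.encode("utf-8")).digest()
    return base64.b32encode(digest).decode("ascii")[:length]

# Usable as .../pipermail/wikitech-l/2012-August/<slug>.html
print(permalink_slug("<example-message-id@example.org>"))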


-Adam

On 08/17/2012 08:00 AM, Tilman Bayer wrote:

On Fri, Aug 17, 2012 at 4:26 AM, MZMcBride z...@mzmcbride.com wrote:

Guillaume Paumier wrote:

I was told yesterday that the mailman/pipermail archives were broken,
in that permalinks were no longer linking to the messages they used to
link to (therefore not being permalinks at all).

This is pretty devastating. It's difficult to overstate the importance of
Mailman archives in documenting Wikimedia's history (or even history before
Wikimedia was a concept). I've come across links such as the one at
https://en.wikipedia.org/wiki/Wikipedia:Tim_Starling_Day that I can't even
find anywhere in the Mailman archives any longer. :-(

MZMcBride


Many historical Signpost articles are affected as well:
https://en.wikipedia.org/w/index.php?title=Special%3ASearch&search=pipermail+wikitech+prefix%3AWikipedia%3AWikipedia+Signpost%2F2

BTW, here's Brion dreaming about a stable archiving system in 2007 ...
http://article.gmane.org/gmane.science.linguistics.wikipedia.technical/28993

In the same year, the lead developer of Mailman said that fixing this
problem of breaking URLs was absolutely critical
(http://mail.python.org/pipermail/mailman-developers/2007-July/019632.html
) and  some ideas were thrown around
(http://wiki.list.org/display/DEV/Stable+URLs ), but apparently this
huge data integrity problem still hasn't been solved.




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] mail archive permalinks [was Re: Mailman archives broken?]

2012-08-17 Thread Adam Wight
Mailman 3 already has code to add this X-Message-ID-Hash header, and 
integrate with mail archiving tools.


-Adam

On 08/17/2012 11:32 AM, Adam Wight wrote:

Tilman, thanks for those links.

I think the base-32 encoded hash of the Message-Id discussed in 
[http://wiki.list.org/display/DEV/Stable+URLs] gives us a 
straightforward and effective solution to the problem.  Ten characters 
or so should be plenty.  This would produce URLs like 
[http://lists.wikimedia.org/pipermail/wikitech-l/2012-August/OHRDQGOX35.html]



We could prefix these with a parent directory that serves as a 
versioning scheme for our hash, allowing us to create forwarding rules 
if the permalink rules change in the future.  For example (and I have 
no experience, this might not work), we can generate an .htaccess at 
the root of old archive directories, which redirects each of the old 
sequential URLs to the new, hashed location.


-Adam

On 08/17/2012 08:00 AM, Tilman Bayer wrote:

On Fri, Aug 17, 2012 at 4:26 AM, MZMcBride z...@mzmcbride.com wrote:

Guillaume Paumier wrote:

I was told yesterday that the mailman/pipermail archives were broken,
in that permalinks were no longer linking to the messages they used to
link to (therefore not being permalinks at all).
This is pretty devastating. It's difficult to overstate the 
importance of
Mailman archives in documenting Wikimedia's history (or even history 
before

Wikimedia was a concept). I've come across links such as the one at
https://en.wikipedia.org/wiki/Wikipedia:Tim_Starling_Day that I 
can't even

find anywhere in the Mailman archives any longer. :-(

MZMcBride


Many historical Signpost articles are affected as well:
https://en.wikipedia.org/w/index.php?title=Special%3ASearch&search=pipermail+wikitech+prefix%3AWikipedia%3AWikipedia+Signpost%2F2



BTW, here's Brion dreaming about a stable archiving system in 2007 ...
http://article.gmane.org/gmane.science.linguistics.wikipedia.technical/28993 



In the same year, the lead developer of Mailman said that fixing this
problem of breaking URLs was absolutely critical
(http://mail.python.org/pipermail/mailman-developers/2007-July/019632.html 


) and  some ideas were thrown around
(http://wiki.list.org/display/DEV/Stable+URLs ), but apparently this
huge data integrity problem still hasn't been solved.




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] gerrit tab width

2012-08-08 Thread Adam Wight
  It should be 4 spaces...  I found where the default is set, here's a 
quick untested patch with no configurability.
  You can also change your personal preferences under Differences → Preferences.


diff --git a/gerrit-reviewdb/src/main/java/com/google/gerrit/reviewdb/client/AccountDiffPreference.java b/gerrit-reviewdb/src/main/java/com/google/gerrit/reviewdb/client/AccountDiffPreference.java
index 3b04725..5c689aa 100644
--- a/gerrit-reviewdb/src/main/java/com/google/gerrit/reviewdb/client/AccountDiffPreference.java
+++ b/gerrit-reviewdb/src/main/java/com/google/gerrit/reviewdb/client/AccountDiffPreference.java
@@ -58,7 +58,7 @@ public class AccountDiffPreference {
   public static AccountDiffPreference createDefault(Account.Id accountId) {
     AccountDiffPreference p = new AccountDiffPreference(accountId);
     p.setIgnoreWhitespace(Whitespace.IGNORE_NONE);
-    p.setTabSize(8);
+    p.setTabSize(4);
     p.setLineLength(100);
     p.setSyntaxHighlighting(true);
     p.setShowWhitespaceErrors(true);


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] gerrit tab width

2012-08-08 Thread Adam Wight

On 08/08/2012 11:07 AM, Mark Holmquist wrote:

 Also, "it should be 4 spaces" is a matter of opinion -- everyone likes
 different tab widths depending on their preferences and monitor size.


http://www.mediawiki.org/wiki/CC#Tab_size

"you should make no assumptions" appears to support Chad's statement. 
However, I'm pretty sure that in reality, many people assume a width 
of 4. I've definitely seen funky tab-plus-space indentations that 
support that theory.




I officially redact my bogus claim that it should be 4 spaces. 
However, we certainly shouldn't default to 8!



(moral authority provided by:
Cleric: And the Lord spake, saying, First shalt thou take out the Holy 
Pin. Then shalt thou count to three, no more, no less. Three shall be 
the number thou shalt count, and the number of the counting shall be 
three. Four shalt thou not count, neither count thou two, excepting that 
thou then proceed to three. *Five is right out.* Once the number three, 
being the third number, be reached, then lobbest thou thy Holy Hand 
Grenade of Antioch towards thy foe, who, being naughty in my sight, 
shall snuff it.)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] branched article history extension [Re: Article revision numbers]

2012-07-25 Thread Adam Wight

Hi,
I've started working on an extension to manage branching history, 
calling it Nonlinear.  Here's the crude code, 
https://github.com/adamwight/Nonlinear


Screenshot of the effect on revision history:
[mediawiki screenshot]

On 07/16/2012 04:10 PM, Platonides wrote:

On 17/07/12 00:22, Adam Wight wrote:

Hello comrades,
I've run into a challenge too interesting to keep to myself ;)  My
immediate goal is to prototype an offline wikipedia, similar to Kiwix,
which allows the end-user to make edits and synchronize them back to a
central repository like enwiki.

The catch is, how to insert these changes without edit conflicts? With
linear revision numbering, I can't imagine a natural representation of
the data, only some kind of ad-hoc sandbox solution.

Extending the article revision numbering to represent a branching
history would be the natural way to handle optimistic replication.

Non-linear revisioning might also facilitate simpler models for page
protection, and would allow the formation of multiple, independent
consensuses.

-Adam Wight

 Actually, the revision table allows for non-linear development (it
 stores from which version you edited the article). You could even make
 a version other than the one with the latest timestamp win (by changing
 page_rev).
 You will need to change the way of viewing history, however, and add a
 system to keep track of heads and merges.
 There may be some assumptions across the codebase about the latest
 revision being the active one, too.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Article revision numbers

2012-07-23 Thread Adam Wight
 It's really not. Things that are (relatively) simple in the
 database tend to require walking the entire revision
 tree in Git in order to figure the same data out.
 
 Git is awesome for software development, but trying
 to use it as an article development tool is really a bad 
 solution in search of a problem. We could've had the 
 same argument years ago and said why use a database, 
 SVN stores information in a linear history that's useful 
 for articles. Having diverging articles may be cool/
 desired, but using Git is not the answer.
 
 -Chad
 
 Fair enough.  I learn something new every day.  I definitely think that
 distributed article editing is a great idea, even if a git-like system is
 not the answer to it.
 
 Thank you,
 Derric Atzrott

Git is almost never used in a truly decentralized fashion, so it isn't
optimized for that type of use.  See GitHub, for example.
Actual peer-to-peer is infinitely more scalable ;) because you don't
have one poor enterprise Java server getting hit by everyone in the
world; instead, individuals distribute the load among themselves.

That would be a difficult model for Wikipedia however, because
maintaining an authoritative edition would require centralized
cryptography, at the least.

Allowing articles on our central server to diverge temporarily is
easily achievable, with very little overhead.  In fact, when you
consider the savings in revert wars, maybe there is a net gain.

I'm interested in writing a mediawiki extension to allow us to
experiment with this idea.

-Adam

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Article revision numbers

2012-07-20 Thread Adam Wight
wi...@wikidev.net:
 On 07/16/2012 04:49 PM, Adam Wight wrote:
  Cool!  That's a nice solution because it's transparent to the end-user's
  system.  However, if we use the current schema as you're describing, we
  would have to reconcile rev_id conflicts during the merge.  This seems
  like a nasty problem if the merge is asynchronous, for example a batched
  changeset sent in email.
 
 And that would be the core problem of asynchronous optimistic
 replication ;) Simple last-write-wins or union (for shopping carts..)
 strategies are still manageable, but merging textual changes is harder.
 Manual intervention will often be needed.
 
 The editor rather than some unsuspecting reader should be best equipped
 to resolve these conflicts, so some degree of synchrony in the 'push'
 stage might make sense to provide an opportunity for editor-guided merging.
 
 Gabriel

Although it might be simpler for the original editor to merge their
own changes, that's not always what we want.  The most flexible
arrangement would be to separate the process into three workflows:
edit, synchronize, and merge.  Different people could perform each
stage, or they can be folded together when appropriate.

On protected pages, for example, we specifically want some amount of
peer review before deciding to merge.  This could be seen as positive
feedback also, if each successfully merged change comes with a bit of
validation by the community.

Even a simple branching model will offer some delicious low-hanging
fruit, for example, editors could Save Draft for any article and
resume editing later.

-adam

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Article revision numbers

2012-07-16 Thread Adam Wight

Hello comrades,
I've run into a challenge too interesting to keep to myself ;)  My 
immediate goal is to prototype an offline wikipedia, similar to Kiwix, 
which allows the end-user to make edits and synchronize them back to a 
central repository like enwiki.


The catch is, how to insert these changes without edit conflicts? With 
linear revision numbering, I can't imagine a natural representation of 
the data, only some kind of ad-hoc sandbox solution.


Extending the article revision numbering to represent a branching 
history would be the natural way to handle optimistic replication.


Non-linear revisioning might also facilitate simpler models for page 
protection, and would allow the formation of multiple, independent 
consensuses.


-Adam Wight

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Article revision numbers

2012-07-16 Thread Adam Wight

On 07/16/2012 04:10 PM, Platonides wrote:

On 17/07/12 00:22, Adam Wight wrote:

Hello comrades,
I've run into a challenge too interesting to keep to myself ;)  My
immediate goal is to prototype an offline wikipedia, similar to Kiwix,
which allows the end-user to make edits and synchronize them back to a
central repository like enwiki.

The catch is, how to insert these changes without edit conflicts? With
linear revision numbering, I can't imagine a natural representation of
the data, only some kind of ad-hoc sandbox solution.

Extending the article revision numbering to represent a branching
history would be the natural way to handle optimistic replication.

Non-linear revisioning might also facilitate simpler models for page
protection, and would allow the formation of multiple, independent
consensuses.

-Adam Wight

 Actually, the revision table allows for non-linear development (it
 stores from which version you edited the article). You could even make
 a version other than the one with the latest timestamp win (by changing
 page_rev).
 You will need to change the way of viewing history, however, and add a
 system to keep track of heads and merges.
 There may be some assumptions across the codebase about the latest
 revision being the active one, too.

Cool!  That's a nice solution because it's transparent to the end-user's 
system.  However, if we use the current schema as you're describing, we 
would have to reconcile rev_id conflicts during the merge.  This seems 
like a nasty problem if the merge is asynchronous, for example a batched 
changeset sent in email.
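
To make the bookkeeping concrete, here is a toy sketch in Python (made-up
revision IDs and field names, not the actual MediaWiki schema) of finding the
heads of a branched history from (rev_id, parent_id) pairs:

# Toy model of a branched page history; the IDs and parent links are made up.
revisions = {
    1: None,  # rev_id: parent rev_id
    2: 1,
    3: 1,     # revisions 2 and 3 both branch from 1
    4: 3,
}

def heads(revs):
    """A head is any revision that no other revision lists as its parent."""
    parents = {p for p in revs.values() if p is not None}
    return sorted(set(revs) - parents)

print(heads(revisions))  # -> [2, 4]: two divergent heads awaiting a merge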

-adam

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Hard tabs in vim: suffering and smiling

2012-07-13 Thread Adam Wight

Hello new developer comrades,

The following line in my .vimrc file has given me back some form of 
happiness that I thought I had lost after several weeks of switching tab 
settings every time I might be editing a mw sourcefile.  I am once again 
free to work on multiple open-source projects without heaping 
disapprobation and calumny on myself and others.


With love,
Adam
=

" surprise! ascii 9 is back from the dead ... what, are we writing Makefiles??
autocmd BufNewFile,BufRead */mediawiki-*/** set tabstop=4 noexpandtab


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] How to mount a local copy of the English Wikipedia for researchers?

2012-06-12 Thread Adam Wight
I ran into this problem recently.  A Python script is available at
https://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/Offline/mwimport.py
which will convert .xml.bz2 dumps into flat fast-import files that can be
loaded into most databases.  Sorry, this tool is still alpha quality.
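
If you only need the current article text in the meantime, a minimal sketch
along these lines may be enough (Python standard library only; tag handling
follows the export schema loosely and namespace filtering is omitted, so treat
it as a starting point rather than a finished tool):

import bz2
import xml.etree.ElementTree as ET

def iter_pages(path):
    """Yield (title, wikitext) for each <page> in a pages-articles dump."""
    title, text = None, None
    with bz2.open(path, "rb") as f:
        for _, elem in ET.iterparse(f):
            tag = elem.tag.rsplit("}", 1)[-1]  # drop the export-0.x namespace
            if tag == "title":
                title = elem.text
            elif tag == "text":
                text = elem.text or ""
            elif tag == "page":
                yield title, text
                elem.clear()  # free the finished subtree on a multi-GB dump

for title, text in iter_pages("enwiki-latest-pages-articles.xml.bz2"):
    print(title, len(text))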

Feel free to contact me with problems.

-Adam Wight

j...@sahnwaldt.de:
 mwdumper seems to work for recent dumps:
 http://lists.wikimedia.org/pipermail/mediawiki-l/2012-May/039347.html
 
 On Tue, Jun 12, 2012 at 11:19 PM, Steve Bennett stevag...@gmail.com wrote:
  Hi all,
   I've been tasked with setting up a local copy of the English
  Wikipedia for researchers - sort of like another Toolserver. I'm not
  having much luck, and wondered if anyone has done this recently, and
  what approach they used? We only really need the current article text
  - history and meta pages aren't needed.
 
  Things I have tried:
  1) Downloading and mounting the SQL dumps
 
  No good because they don't contain article text
 
  2) Downloading and mounting other SQL research dumps (eg
  ftp://ftp.rediris.es/mirror/WKP_research)
 
  No good because they're years out of date
 
  3) Using WikiXRay on the enwiki-latest-pages-meta-history?.xml-.xml 
  files
 
  No good because they decompress to astronomically large. I got about
  halfway through decompressing them and was over 7Tb.
 
  Also, WikiXRay appears to be old and out of date (although
  interestingly its author Felipe Ortega has just committed to the
  gitorious repository[1] on Monday for the first time in over a year)
 
  4) Using MWDumper (http://www.mediawiki.org/wiki/Manual:MWDumper)
 
  No good because it's old and out of date: it only supports export
  version 0.3, and the current dumps are 0.6
 
  5) Using importDump.php on a latest-pages-articles.xml dump [2]
 
  No good because it just spews out 7.6Gb of this output:
 
  PHP Warning:  xml_parse(): Unable to call handler in_() in
  /usr/share/mediawiki/includes/Import.php on line 437
  PHP Warning:  xml_parse(): Unable to call handler out_() in
  /usr/share/mediawiki/includes/Import.php on line 437
  PHP Warning:  xml_parse(): Unable to call handler in_() in
  /usr/share/mediawiki/includes/Import.php on line 437
  PHP Warning:  xml_parse(): Unable to call handler in_() in
  /usr/share/mediawiki/includes/Import.php on line 437
  ...
 
 
  So, any suggestions for approaches that might work? Or suggestions for
  fixing the errors in step 5?
 
  Steve
 
 
  [1] http://gitorious.org/wikixray
  [2] 
  http://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l