Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread K. Peachey
On Thu, Dec 6, 2012 at 9:45 AM, Platonides  wrote:
> Why do you need a UID?
> The autoincrement id we use in most tables can (should) serve as UID.
>
> It needs a little care when sending the inserts, but it's
> straightforward. It can easily be done by a layer on top of our db
> classes (transparent to the application).
>
> What table were you planning to partition (and how) that you can't use
> the id for the partitioning?

Can we please split further discussions about this off to a new topic?
It has very little to do with the original discussion of this list
topic (the merging of your own code in our VCSes).

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Spam filters for wikidata.org

2012-12-05 Thread Matthew Flaschen
On 12/05/2012 07:55 PM, Chris Steipp wrote:
> If Wikibase wants to define another hook, and can
> present the data in a generic way (like Daniel did for content
> handler) we can probably add it into AbuseFilter.

It should be presented in a suitable way (not obscure Wikibase-internal
structures) that still includes the necessary information.

> But if the processing is specific to Wikibase (you pass an Entity into the hook,
> for example), then AbuseFilter shouldn't be hooking into something
> like that, since it would basically make Wikibase a dependency, and I
> do think that more independent wikis are likely to have AbuseFilter
> installed without Wikibase than with it.

AbuseFilter would not depend on Wikibase if AbuseFilter only hooks into it.

It's fine for you to register a hook that is never called:

$wgHooks[ 'WikibaseEditFilterMerged' ][] =
'AbuseFilter::onWikibaseEditFilterMerged';

will not cause an error if Wikibase is not installed.
onWikibaseEditFilterMerged would then transform the data and call
internal AbuseFilter functions/methods.
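
Matthew's point can be illustrated outside MediaWiki with a minimal hook runner. The `$hooks` array and `runHooks()` below are simplified stand-ins for MediaWiki's `$wgHooks` and `wfRunHooks()` (the class name registered is deliberately nonexistent): a string callback for a hook that never fires is harmless, because the callback is only resolved when the hook actually runs.

```php
<?php
// Minimal stand-in for MediaWiki's hook machinery.
$hooks = [];

// Register a handler for a hook that may never fire -- e.g. because
// Wikibase is not installed. The class named here does not even exist.
$hooks['WikibaseEditFilterMerged'][] = 'AbuseFilterStub::onWikibaseEditFilterMerged';

function runHooks( array $hooks, string $name, array $args ): bool {
    foreach ( $hooks[$name] ?? [] as $callback ) {
        if ( !call_user_func_array( $callback, $args ) ) {
            return false; // a handler vetoed the action
        }
    }
    return true; // no handlers registered, or all of them passed
}

// 'WikibaseEditFilterMerged' is never run, so the missing class causes no error.
var_dump( runHooks( $hooks, 'SomeOtherHook', [] ) ); // bool(true)
```

Only when the hook actually fires would PHP try to resolve the callback string, which is exactly why the registration alone creates no dependency.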

>> I don't think it necessarily needs one.  A spam filter with a different
>> approach (which may not have a rule UI at all) can register its own
>> hooks, just as AbuseFilter does.
> 
> I can definitely appreciate that, but that is also why we currently
> have so many extensions for spam / bot handling, using the existing
> hooks. I would hate to see yet another spam extension that does really
> great spam detection, but has a dependency on Wikibase.

I think inevitably different people are going to address the spam
challenge differently.  By using hooks, though, that great extension
does not need a hard dependency on Wikibase.

Matt Flaschen



Re: [Wikitech-l] Spam filters for wikidata.org

2012-12-05 Thread Chris Steipp
On Wed, Dec 5, 2012 at 3:53 PM, Matthew Flaschen
 wrote:
> No, we disagree on this.

I was afraid that might be the case, so I'm glad we clarified.

> The same general idea should apply for Wikibase.  The only difference is
> that the core functionality of data editing is in Wikibase.

Correct, and I would say that Wikibase should be calling the same
hooks that core does, so that AbuseFilter can be used to filter all
incoming data. If Wikibase wants to define another hook, and can
present the data in a generic way (like Daniel did for content
handler) we can probably add it into AbuseFilter. But if the
processing is specific to Wikibase (you pass an Entity into the hook,
for example), then AbuseFilter shouldn't be hooking into something
like that, since it would basically make Wikibase a dependency, and I
do think that more independent wikis are likely to have AbuseFilter
installed without Wikibase than with it.

> I don't think it necessarily needs one.  A spam filter with a different
> approach (which may not have a rule UI at all) can register its own
> hooks, just as AbuseFilter does.

I can definitely appreciate that, but that is also why we currently
have so many extensions for spam / bot handling, using the existing
hooks. I would hate to see yet another spam extension that does really
great spam detection, but has a dependency on Wikibase.

But that's just my preference.



Re: [Wikitech-l] Refactor of mediawiki/extensions/ArticleFeedbackv5 backend

2012-12-05 Thread Terry Chay

On Dec 5, 2012, at 12:43 PM, Patrick Reilly  wrote:

> Fellow Wikimedia Developers,
> 
> Matthias Mullie has been working hard to refactor the backend of
> mediawiki/extensions/ArticleFeedbackv5 to add proper sharding support.
> 
> The original approach that he took was to rely on RDBStore that was
> first introduced in Change-Id:
> Ic1e38db3d325d52ded6d2596af2b6bd3e9b870fe
> https://gerrit.wikimedia.org/r/#/c/16696 by Aaron Schulz.
> 
> Asher Feldman, Tim Starling and myself reviewed the new class RDBStore
> and determined that it wasn't really the best approach for our current
> technical architecture and database environment. Aaron Schulz had a
> lot of really good ideas included in RDBStore, but it just seemed like
> it wasn't a great fit right now. We decided collectively to abandon
> the RDBStore work for now.

:-( I'm going through all the stages of grief right now. In a few 
moments, I'll hit "acceptance"

> So, we're now left with the need to provide Matthias Mullie with some
> direction on what is the best solution for the ArticleFeedbackv5
> refactor.
> 
> One possible solution would be to create a new database cluster for
> this type of data. This cluster would be solely for data that is
> similar to Article Feedback's and that has the potential of being
> spammy in nature. The MediaWiki database abstraction layer could be
> used directly via a call to the wfGetDB() function to retrieve a
> Database object. A read limitation with this approach will be
> particularly evident when we require a complex join. We will need to
> eliminate any cross-shard joins.

This seems like the only reasonable solution that can be done in a 
timely manner at the moment.

I caution against making this sort of vertical partitioning a long-term 
solution. Knowing which data lives on which machine is exactly the kind of human 
knowledge that heterogeneous systems create, and it is prone to failure. All 
"social"-style data tends to proliferate fast, and ArticleFeedback is just one 
such piece. Tying ourselves to a vertical partition that grows at Moore's-law 
rates is going to bite us when we hit things like Flow (messages), LQT3, etc.

Cross-shard JOINs are already eliminated (or should be) in the AFTv5 patch, 
since it assumes RDBStore, so there shouldn't be a call in the code that 
requires wfGetDB() to return the same database object for AFT-related tables 
and non-AFT-related ones.

> The reality is that Database Sharding is a very useful technology, but
> like other approaches, there are many factors to consider that ensure
> a successful implementation. Further, there are some limitations and
> Database Sharding will not work well for every type of application.

Most of this is alleviated by increased dependence on memcached for 
caching intermediate values and rollups. Since this isn't handled at the object 
level in MediaWiki, I assume this is a problem for the AFTv5 patch and not 
RDBStore.

> So, to this point: when we truly implement sharding in the future, it
> will more than likely be beneficial to focus on places in core
> MediaWiki where it will have the greatest impact, such as the
> pagelinks and revision tables.

Yes, it'd have the greatest impact here, but these are tables with some 
of the most indexes, rollups, and joins.

To do this properly we'd need a core object base class that would allow 
us to use BagOStuff/memcached as a passthrough on anything going to the shard, 
so that joins are done in PHP and reads come only from memcache (unless the 
data requested has been LRU'd out).

That's a much larger undertaking.
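
Terry's "joins are done in PHP" idea can be sketched roughly as follows. All table and column names are invented for illustration, and the plain arrays stand in for what would really be a per-shard SELECT and a memcached batch lookup:

```php
<?php
// Sketch of a cross-shard "join" done in application code rather than SQL:
// fetch rows from one store, then stitch in related rows looked up by key
// from another store (in practice a memcached multi-get or a second shard).
$feedback = [
    [ 'af_id' => 1, 'af_page_id' => 10, 'af_comment' => 'Great article' ],
    [ 'af_id' => 2, 'af_page_id' => 20, 'af_comment' => 'Needs sources' ],
];
// Stand-in for a batch lookup keyed by page id (e.g. from memcached).
$pageTitles = [ 10 => 'PHP', 20 => 'MySQL' ];

$joined = [];
foreach ( $feedback as $row ) {
    // The in-PHP equivalent of "LEFT JOIN page ON af_page_id = page_id".
    $row['page_title'] = $pageTitles[ $row['af_page_id'] ] ?? null;
    $joined[] = $row;
}
```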

Take care,

terry


Re: [Wikitech-l] LabeledSectionTransclusion performance problems

2012-12-05 Thread Platonides
This is obvious, but if the "new lst" allows infinite loops, it MUST NOT
be deployed. Not even if it rendered the Pagina Principale acceptably.




Re: [Wikitech-l] Spam filters for wikidata.org

2012-12-05 Thread Matthew Flaschen
On 12/05/2012 05:54 PM, Chris Steipp wrote:
> On Wed, Dec 5, 2012 at 1:11 PM, Matthew Flaschen
>  wrote:
>> It makes sense for AbuseFilter and Wikidata to work in conjunction.  But
>> it seems Wikidata should provide a hook that AbuseFilter calls.
> 
> I think we agree on this point, although I'll clarify and say I think
> AbuseFilter should be calling wfRunHooks, and Wikibase should provide
> the functions.

No, we disagree on this.

Wikibase should call wfRunHooks.  This is analogous to the way it is now
for regular wikitext.

For example, AbuseFilter has:

$wgHooks['EditFilterMerged'][] = 'AbuseFilterHooks::onEditFilterMerged';

Then, core MediaWiki calls:

if ( !wfRunHooks( 'EditFilterMerged', array( $this, $this->textbox1,
&$this->hookError, $this->summary ) ) ) {

The same general idea should apply for Wikibase.  The only difference is
that the core functionality of data editing is in Wikibase.

Thus, Wikibase should call wfRunHooks for this.

>> What if someone wants to make a spam filter that works differently than
>> AbuseFilter?  For example, it uses its own programmatic rules rather
>> than ones that can be expressed in the Special:AbuseFilter language.
> 
> You are correct, AbuseFilter doesn't currently have hooks to let an
> extension run its own logic, but that wouldn't be too difficult to
> implement.

I don't think it necessarily needs one.  A spam filter with a different
approach (which may not have a rule UI at all) can register its own
hooks, just as AbuseFilter does.

> Although I would be interested to know what kind of rules you have in
> mind, since it's certainly possible that we would want to implement it
> as an AbuseFilter operation.

I don't have an immediate practical suggestion.  But I do know that
modern spam filters use a variety of approaches, including Bayesian
filtering.

Matt Flaschen



Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Platonides
On 05/12/12 21:01, Aaron Schulz wrote:
> RDBStore is shelved as a reference for now. The idea was to partition SQL
> tables across multiple DB servers using a consistent hash of some column.
> There no longer would be the convenience of autoincrement columns so UIDs
> are a way to make unique ids without a central table or counter.

Why do you need a UID?
The autoincrement id we use in most tables can (should) serve as UID.

It needs a little care when sending the inserts, but it's
straightforward. It can easily be done by a layer on top of our db
classes (transparent to the application).
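
As one concrete example of the "little care" involved (an illustration, not necessarily the scheme Platonides has in mind; MySQL exposes the same idea natively via auto_increment_increment and auto_increment_offset): give each of N masters a distinct 1-based offset and advance ids in steps of N, so the id spaces can never collide.

```php
<?php
// Illustrative only: master $serverIndex (1-based) of $numServers allocates
// ids $serverIndex, $serverIndex + $numServers, $serverIndex + 2*$numServers, ...
// Two different masters can therefore never hand out the same id.
function nextId( int $lastId, int $serverIndex, int $numServers ): int {
    return $lastId === 0 ? $serverIndex : $lastId + $numServers;
}

echo nextId( 0, 2, 4 ), "\n"; // server 2 of 4 starts at 2
echo nextId( 2, 2, 4 ), "\n"; // then 6, 10, 14, ...
```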

What table were you planning to partition (and how) that you can't use
the id for the partitioning?





Re: [Wikitech-l] Refactor of mediawiki/extensions/ArticleFeedbackv5 backend

2012-12-05 Thread Aaron Schulz
I'm seconding that recommendation to be clear. More specifically, I'd suggest
that the AFT classes have two new protected methods:
* getSlaveDB() - wrapper for wfGetLBFactory()->getExternalLB(
$wgArticleFeedBackCluster )->getConnection( DB_SLAVE, array(), $wikiId )
* getMasterDB() - wrapper for wfGetLBFactory()->getExternalLB(
$wgArticleFeedBackCluster )->getConnection( DB_MASTER, array(), $wikiId )
The wrappers could also handle the case where the cluster is the usual wiki
cluster (e.g. good old wfGetDB()).

You could then swap out the current wfGetDB() calls with these methods. It
might be easiest to start with the current AFT, do this, and fix up the
excessive write queries, rather than try to convert the AFT5 code
that used sharding. The name of the cluster would be an AFT configuration
variable (e.g. $wgArticleFeedBackCluster = 'external-aft' ).

This works by adding the new 'external-aft' cluster to the 'externalLoads'
portion of the load balancer configuration. It may make sense to give the
cluster a non-AFT-specific name though (like 'external-1'), since I assume
other extensions would use it. Maybe the clusters could be named after
philosophers to be more interesting...

One could instead use wfGetDB( $index, array(), 'extension-aft' ), though
this would be a bit hacky since:
a) A wiki ID would be used as an external cluster name where there is no
wiki
b) The actual wiki IDs would have to go into table names or a column
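
A rough standalone sketch of the suggested wrappers follows. The closures stand in for wfGetLBFactory()->getExternalLB(...)->getConnection(...) and wfGetDB(), which only exist inside MediaWiki, and DB_SLAVE/DB_MASTER are modelled as the integers 0 and 1; beyond the two method names Aaron proposes, everything here is guesswork.

```php
<?php
// Hedged sketch: AFT code asks one method for a connection, and that
// method decides between a dedicated external cluster and the wiki's
// normal cluster, so call sites never care which one is configured.
class AFTStore {
    public function __construct(
        private ?string $cluster,        // e.g. 'external-aft', or null for the wiki's own cluster
        private \Closure $getExternalDB, // fn( string $cluster, int $index ): connection
        private \Closure $getDefaultDB   // fn( int $index ): connection
    ) {
    }

    protected function getDB( int $index ) {
        // Dedicated external cluster if configured, otherwise the usual one.
        return $this->cluster !== null
            ? ( $this->getExternalDB )( $this->cluster, $index )
            : ( $this->getDefaultDB )( $index );
    }

    public function getSlaveDB() {
        return $this->getDB( 0 ); // DB_SLAVE
    }

    public function getMasterDB() {
        return $this->getDB( 1 ); // DB_MASTER
    }
}
```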



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Refactor-of-mediawiki-extensions-ArticleFeedbackv5-backend-tp4990937p4990952.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.



Re: [Wikitech-l] Spam filters for wikidata.org

2012-12-05 Thread Chris Steipp
On Wed, Dec 5, 2012 at 1:11 PM, Matthew Flaschen
 wrote:
> It makes sense for AbuseFilter and Wikidata to work in conjunction.  But
> it seems Wikidata should provide a hook that AbuseFilter calls.

I think we agree on this point, although I'll clarify and say I think
AbuseFilter should be calling wfRunHooks, and Wikibase should provide
the functions. I think more 3rd-party wikis will run AbuseFilter than
Wikibase, but that could be my prejudice based on what I work on.

> What if someone wants to make a spam filter that works differently than
> AbuseFilter?  For example, it uses its own programmatic rules rather
> than ones that can be expressed in the Special:AbuseFilter language.

You are correct, AbuseFilter doesn't currently have hooks to let an
extension run its own logic, but that wouldn't be too difficult to
implement. Maybe run a new hook from AbuseFilter::checkConditions?
Although I would be interested to know what kind of rules you have in
mind, since it's certainly possible that we would want to implement it
as an AbuseFilter operation.



[Wikitech-l] November community metrics report

2012-12-05 Thread Quim Gil

Second issue of the MediaWiki community metrics monthly report!

We have added a bunch of bug tracking data in order to highlight some of 
the QA and testing activities. Hopefully next month we will show 
mediawiki.org data to reflect the documentation work.


http://www.mediawiki.org/wiki/Community_metrics/November_2012

The monthly community metrics reports are still very much a work in 
progress. Your feedback and help are welcome!


--
Quim Gil
Technical Contributor Coordinator
Wikimedia Foundation



Re: [Wikitech-l] Bugzilla upgrade [was: bugzilla.wikimedia.org downtime: Now.]

2012-12-05 Thread Jon Robson
This is long overdue and kudos to all involved.
I'm already noticing the more useful side effects, such as the URL
change, the performance, and the better-formatted emails.

On Wed, Dec 5, 2012 at 12:56 PM, Sumana Harihareswara
 wrote:
> On 12/04/2012 11:44 AM, Andre Klapper wrote:
>> bugzilla.wikimedia.org is operational again and is now running the
>> latest stable version (4.2.4, before was 4.0.9).
>>
>> Big thanks to Daniel Zahn from the ops team for upgrading!
>> All fame belongs to him!
>
> Much appreciation to Andre and to Daniel for their many hours of work on
> this, and on their previous security upgrade of Bugzilla.
>
>> I've done some quick testing, and to my surprise stuff like "Weekly bug
>> summary" did not break.
>> However, if you see new issues and problems please file a ticket:
>> http://bugzilla.wikimedia.org/enter_bug.cgi?product=Wikimedia&component=Bugzilla
>>
>> The only (potential) regression is that we did not apply previous
>> changes to Bugmail.pm, described as "Wikimedia Hack! Pretend global
>> watchers are CCs so we can use their prefs to for instance ignore
>> CC-only mails."
>>
>> New features and improvements of this Bugzilla version:
>> http://www.bugzilla.org/releases/4.2.4/release-notes.html#v42_feat
>>
>> Happy bug reporting!
>>
>> andre
>
> I want to emphasize a few of the improvements that I especially love,
> from those release notes:
> * Displaying a bug with many dependencies is now much faster.
> * After you edit a bug, the URL is automatically changed to show_bug.cgi
> instead of process_bug.cgi or the like.
> * User autocompletion is faster (like when you add a cc).
> * Most changes made by BZ admins are now logged to the database, in the
> audit_log table.
> * We can disable older components, versions and milestones.
>
> And we get more customizability to generally improve the look, feel, and
> workflow of Bugzilla.  So, thanks for pushing this, Daniel and Andre!
>
> --
> Sumana Harihareswara
> Engineering Community Manager
> Wikimedia Foundation
>



-- 
Jon Robson
http://jonrobson.me.uk
@rakugojon



Re: [Wikitech-l] Bugzilla upgrade [was: bugzilla.wikimedia.org downtime: Now.]

2012-12-05 Thread Antoine Musso
Le 05/12/12 21:56, Sumana Harihareswara a écrit :
> * After you edit a bug, the URL is automatically changed to show_bug.cgi
> instead of process_bug.cgi or the like.

I have been hit by that one almost daily for a few years now.
Congratulations on the upgrade :)

-- 
Antoine "hashar" Musso




Re: [Wikitech-l] Bugzilla upgrade [was: bugzilla.wikimedia.org downtime: Now.]

2012-12-05 Thread Antoine Musso
Le 05/12/12 20:53, Chad a écrit :
> There's been some idle grumbling (myself included) that the new
> default of HTML e-mails is kind of yucky and people preferred the
> plaintext. Do others have thoughts? Should we turn the default
> back?

I would leave HTML mails as the default.  I hate them myself, but I'll
give them a try for a few days and see how my brain adapts :-]

As others said, one can revert to plain text in one's user preferences.

-- 
Antoine "hashar" Musso




Re: [Wikitech-l] Spam filters for wikidata.org

2012-12-05 Thread Matthew Flaschen
On 12/05/2012 12:28 PM, Chris Steipp wrote:
> On Wed, Dec 5, 2012 at 3:34 AM, Daniel Kinzler  wrote:
>> You really want the spam filter extensions to have internal knowledge of
>> Wikibase? That seems like a nasty cross-dependency, and goes directly against
>> the idea of modularization and separation of concerns...
>>
>> We are running into the "glue code problem" here. We need code that knows 
>> about
>> the spam filters and about wikibase. Should it be in the spam filter, in
>> Wikibase, or in a separate, third extension? That would be cleanest, but a
>> hassle to maintain... Which way would you prefer?
> 
> I think Daniel has correctly stated the problem.
> 
> My perspective:
> 
> One of the directions of the Admin Tools project is to combine some of
> the various tools into AbuseFilter, so I think it's safe to assume
> that AbuseFilter will be around and maintained for some time, and
> Wikidata could easily use the hooks it provides to do a lot of the
> work providing the interface.

It makes sense for AbuseFilter and Wikidata to work in conjunction.  But
it seems Wikidata should provide a hook that AbuseFilter calls.

What if someone wants to make a spam filter that works differently than
AbuseFilter?  For example, it uses its own programmatic rules rather
than ones that can be expressed in the Special:AbuseFilter language.

If Wikidata exposes an API, AbuseFilter and other extensions can use it.

Matt Flaschen



Re: [Wikitech-l] Spam filters for wikidata.org

2012-12-05 Thread Matthew Flaschen
On 12/05/2012 06:34 AM, Daniel Kinzler wrote:
>> I think that makes sense.  The spam filters will work best if they are
>> aware of how wikidata works, and have access to the full JSON
>> information of the change.
> 
> You really want the spam filter extensions to have internal knowledge of
> Wikibase? That seems like a nasty cross-dependency, and goes directly against
> the idea of modularization and separation of concerns...

I agree it should not have internal implementation knowledge.  I meant
how it works in a different sense.

More specifically, what if Wikidata exposed a JSON object representing
an external version of each change (essentially a data API).

It could allow hooks to register for this (I think this is similar to the
EditEntity idea).

Matt Flaschen



Re: [Wikitech-l] Bugzilla upgrade [was: bugzilla.wikimedia.org downtime: Now.]

2012-12-05 Thread Sumana Harihareswara
On 12/04/2012 11:44 AM, Andre Klapper wrote:
> bugzilla.wikimedia.org is operational again and is now running the
> latest stable version (4.2.4, before was 4.0.9).
> 
> Big thanks to Daniel Zahn from the ops team for upgrading! 
> All fame belongs to him!

Much appreciation to Andre and to Daniel for their many hours of work on
this, and on their previous security upgrade of Bugzilla.

> I've done some quick testing, and to my surprise stuff like "Weekly bug
> summary" did not break.
> However, if you see new issues and problems please file a ticket:
> http://bugzilla.wikimedia.org/enter_bug.cgi?product=Wikimedia&component=Bugzilla
> 
> The only (potential) regression is that we did not apply previous
> changes to Bugmail.pm, described as "Wikimedia Hack! Pretend global
> watchers are CCs so we can use their prefs to for instance ignore
> CC-only mails."
> 
> New features and improvements of this Bugzilla version:
> http://www.bugzilla.org/releases/4.2.4/release-notes.html#v42_feat
> 
> Happy bug reporting!
> 
> andre

I want to emphasize a few of the improvements that I especially love,
from those release notes:
* Displaying a bug with many dependencies is now much faster.
* After you edit a bug, the URL is automatically changed to show_bug.cgi
instead of process_bug.cgi or the like.
* User autocompletion is faster (like when you add a cc).
* Most changes made by BZ admins are now logged to the database, in the
audit_log table.
* We can disable older components, versions and milestones.

And we get more customizability to generally improve the look, feel, and
workflow of Bugzilla.  So, thanks for pushing this, Daniel and Andre!

-- 
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation



Re: [Wikitech-l] Bugzilla upgrade [was: bugzilla.wikimedia.org downtime: Now.]

2012-12-05 Thread Željko Filipin
On Wed, Dec 5, 2012 at 8:53 PM, Chad  wrote:

> There's been some idle grumbling (myself included) that the new
> default of HTML e-mails is kind of yucky and people preferred the
> plaintext. Do others have thoughts? Should we turn the default
> back?
>

I prefer HTML mails.

Željko

[Wikitech-l] Refactor of mediawiki/extensions/ArticleFeedbackv5 backend

2012-12-05 Thread Patrick Reilly
Fellow Wikimedia Developers,

Matthias Mullie has been working hard to refactor the backend of
mediawiki/extensions/ArticleFeedbackv5 to add proper sharding support.

The original approach that he took was to rely on RDBStore that was
first introduced in Change-Id:
Ic1e38db3d325d52ded6d2596af2b6bd3e9b870fe
https://gerrit.wikimedia.org/r/#/c/16696 by Aaron Schulz.

Asher Feldman, Tim Starling and myself reviewed the new class RDBStore
and determined that it wasn't really the best approach for our current
technical architecture and database environment. Aaron Schulz had a
lot of really good ideas included in RDBStore, but it just seemed like
it wasn't a great fit right now. We decided collectively to abandon
the RDBStore work for now.

So, we're now left with the need to provide Matthias Mullie with some
direction on what is the best solution for the ArticleFeedbackv5
refactor.

One possible solution would be to create a new database cluster for
this type of data. This cluster would be solely for data that is
similar to Article Feedback's and that has the potential of being
spammy in nature. The MediaWiki database abstraction layer could be
used directly via a call to the wfGetDB() function to retrieve a
Database object. A read limitation with this approach will be
particularly evident when we require a complex join. We will need to
eliminate any cross-shard joins.

The reality is that database sharding is a very useful technology but,
like other approaches, there are many factors to consider to ensure
a successful implementation. Further, there are some limitations, and
database sharding will not work well for every type of application.

So, to this point: when we truly implement sharding in the future, it
will more than likely be beneficial to focus on places in core
MediaWiki where it will have the greatest impact, such as the
pagelinks and revision tables.

— Patrick



Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Aaron Schulz
RDBStore is shelved as a reference for now. The idea was to partition SQL
tables across multiple DB servers using a consistent hash of some column.
There no longer would be the convenience of autoincrement columns so UIDs
are a way to make unique ids without a central table or counter.

In some cases, like when the primary key is the uid column, duplicate
detection can be enforced by the DB, since duplicate values would map to the
same partition table and that table would have a unique index, causing a
duplicate key error. This could allow for slightly smaller uids to be used
with the comfort of knowing that in the unlikely event of a rare collision,
it will be detected. This is why it had several uid functions. It might be
nice to add a standard UUID1 and UUID4 function, though they were not useful
for RDB store for B-TREE reasons.
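
Aaron's routing-plus-duplicate-detection idea, in toy form. Simple hash-mod routing is shown for brevity (RDBStore proper used a consistent hash, which avoids remapping most rows when the partition count changes), and the table-name scheme is invented:

```php
<?php
// Route a row to one of $numPartitions partition tables by hashing the
// key column. Because equal uids always hash to the same partition, a
// UNIQUE index on each partition table suffices to catch a uid collision:
// the colliding insert lands on the same table and raises a
// duplicate-key error there.
function partitionFor( string $uid, int $numPartitions ): int {
    // 32 bits of sha1 gives a stable, well-distributed bucket number.
    return hexdec( substr( sha1( $uid ), 0, 8 ) ) % $numPartitions;
}

function tableFor( string $uid, int $numPartitions ): string {
    return sprintf( 'aft_feedback_p%02d', partitionFor( $uid, $numPartitions ) );
}
```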



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990931.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.



Re: [Wikitech-l] Bugzilla upgrade [was: bugzilla.wikimedia.org downtime: Now.]

2012-12-05 Thread Brion Vibber
On Wed, Dec 5, 2012 at 11:53 AM, Chad  wrote:

> On Tue, Dec 4, 2012 at 2:44 PM, Andre Klapper 
> wrote:
> > bugzilla.wikimedia.org is operational again and is now running the
> > latest stable version (4.2.4, before was 4.0.9).
> >
>
> There's been some idle grumbling (myself included) that the new
> default of HTML e-mails is kind of yucky and people preferred the
> plaintext. Do others have thoughts? Should we turn the default
> back?
>

I say leave it, let's join the 21st century with links in our emails. :)

Looks like folks can switch their own preferences on/off for this if they
want it changed:
https://bugzilla.wikimedia.org/userprefs.cgi?tab=settings "Preferred email
format"

-- brion


Re: [Wikitech-l] Bugzilla upgrade [was: bugzilla.wikimedia.org downtime: Now.]

2012-12-05 Thread Siebrand Mazeland (WMF)
On Wed, Dec 5, 2012 at 8:53 PM, Chad  wrote:

> On Tue, Dec 4, 2012 at 2:44 PM, Andre Klapper 
> wrote:
> > bugzilla.wikimedia.org is operational again and is now running the
> > latest stable version (4.2.4, before was 4.0.9).
> >
>
> There's been some idle grumbling (myself included) that the new
> default of HTML e-mails is kind of yucky and people preferred the
> plaintext. Do others have thoughts? Should we turn the default
> back?


I like that I can finally see what changed. I couldn't see that on my iOS
mail apps. Now I can. So I like that part.

I'd like to see the order of the fields optimized, but that's nothing
critical.

Cheers!

-- 
Siebrand Mazeland
Product Manager Language Engineering
Wikimedia Foundation

M: +31 6 50 69 1239
Skype: siebrand

Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate


Re: [Wikitech-l] Bugzilla upgrade [was: bugzilla.wikimedia.org downtime: Now.]

2012-12-05 Thread Chad
On Tue, Dec 4, 2012 at 2:44 PM, Andre Klapper  wrote:
> bugzilla.wikimedia.org is operational again and is now running the
> latest stable version (4.2.4, before was 4.0.9).
>

There's been some idle grumbling (myself included) that the new
default of HTML e-mails is kind of yucky and people preferred the
plaintext. Do others have thoughts? Should we turn the default
back?

-Chad



Re: [Wikitech-l] Jenkins now lints javascript!

2012-12-05 Thread Jon Robson
On Wed, Dec 5, 2012 at 11:18 AM, Ryan Kaldari  wrote:
> If the 'browser' option is set to true, it's supposed to automatically
> accept globals like window and document without having to explicitly exempt
> them. Not sure why that isn't working on Jenkins, though.

And it does… at least it works for me.
Show me your .jshintrc configuration on IRC; I suspect there is something funky.
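
For reference, a minimal .jshintrc along these lines might look like the following. Option names are as documented for JSHint of this era; which project globals to predefine is guesswork, and JSHint strips JS-style comments from .jshintrc, so the annotations are legal:

```json
{
    // Predefine browser globals: window, document, navigator, ...
    "browser": true,
    // Project globals that are neither browser- nor JSHint-known.
    "predef": [ "mediaWiki", "jQuery" ],
    // Flag any other undeclared variable.
    "undef": true
}
```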



Re: [Wikitech-l] Jenkins now lints javascript!

2012-12-05 Thread Antoine Musso
Le 20/11/12 23:27, Krinkle a écrit :
> TL;DR: jshint is now running from Jenkins on mediawiki/core
> (joining the linting sequence for php and puppet files).

I have also enabled it on a few extensions, will add more of them over
the next days.

The linted extensions are listed in mediawiki-extensions.yaml of
integration/jenkins-job-builder-config.git

https://gerrit.wikimedia.org/r/gitweb?p=integration/jenkins-job-builder-config.git;a=blob;f=mediawiki-extensions.yaml

Look at the bottom for the -project key.  Right now the list is:

 - cldr
 - DataValues
 - Diff
 - Echo
 - EtherEditor
 - EventLogging
 - GeoData
 - LabeledSectionTransclusion
 - LiquidThreads
 - MobileFrontend
 - Renameuser
 - Score
 - SVGEdit
 - TimedMediaHandler
 - TitleBlacklist
 - Translate
 - TranslationNotifications
 - UniversalLanguageSelector
 - Validator
 - VisualEditor


-- 
Antoine "hashar" Musso




Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Aaron Schulz
I share some blame for the existence of this thread. I spotted the git author
issue after that commit was merged and was too lazy to revert and fix it. I
personally tend to dislike reverting stuff way more than I should (like a
prior schema change that was merged without the site being updated). I
should have just reverted that immediately and left a new patch waiting for
+2.

Patches by person A that just split out a class or function made by person B
should still be looked at by someone other than person A. I think it's a
border case, but leaning on the side of caution is the best bet. It sucks to
have the code break due to something that was accidentally not copied. I
don't think it's worth reverting something like that just for being
self-merged (which is why I didn't), but it's good practice to avoid. If a
string of basic follow-ups is needed and people complain, it might be worth
reverting though (like what happened here). We can always add patches back
into master after giving it a second look, so reverting isn't always a huge
deal and need not be stigmatizing. I need to get used to the revert button
more; lesson learned. :)



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990923.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.



Re: [Wikitech-l] Jenkins now lints javascript!

2012-12-05 Thread Antoine Musso
Le 05/12/12 19:16, Jon Robson a écrit :
> Thanks Antoine!
> Currently JSHint doesn't get a vote on MobileFrontend
> (http://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/10/console
> : SUCCESS (non-voting)).
>
> Is it possible to make it vote and -1 anything which disobeys jshint?
> This would be extremely useful.

That is entirely possible, though jshint is non-voting by default, to
avoid any trouble and to let people know that jshint is watching them.

Only VisualEditor has voting enabled for now.


I would say we wait a bit until people learn to use jshint locally, then
we can start making it vote.  I don't really want to disturb everyone's
workflow :-)


> I'm really excited by this, and I'm looking forward to qunit integration next 
> ;)

That is the next step: Timo has been looking at phantom.js!

-- 
Antoine "hashar" Musso




Re: [Wikitech-l] Jenkins now lints javascript!

2012-12-05 Thread Ryan Kaldari
If the 'browser' option is set to true, it's supposed to automatically 
accept globals like window and document without having to explicitly 
exempt them. Not sure why that isn't working on Jenkins, though.


Ryan Kaldari

On 12/5/12 10:57 AM, Jon Robson wrote:

Are you setting this as a global variable?
I had the same issue and realised it needed to be defined as an option

See https://gerrit.wikimedia.org/r/#/c/36919/3/.jshintrc for reference.

On Wed, Dec 5, 2012 at 10:51 AM, Ryan Kaldari  wrote:

That's weird, it looks like browser is already set to true in .jshintrc, but
I still get errors for window being undefined from Jenkins. Is Jenkins using
the .jshintrc file?

Ryan Kaldari


On 12/5/12 10:48 AM, Ryan Kaldari wrote:

Can we set 'browser: true' in the jshint config so that it won't complain
about window being undefined?

Ryan Kaldari

On 12/5/12 10:16 AM, Jon Robson wrote:

Thanks Antoine!
Currently JSHint doesn't get a vote on MobileFrontend

(http://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/10/console
: SUCCESS (non-voting)). Is it possible to make it vote and -1
anything which disobeys jshint? This would be extremely useful.

I'm really excited by this, and I'm looking forward to qunit integration
next ;)

On Wed, Dec 5, 2012 at 2:52 AM, Antoine Musso  wrote:

Le 04/12/12 22:22, Jon Robson a écrit :

This is now running on MobileFrontend [1] but needs some tweaking!
It's awesome! Kudos to whoever enabled that.



[1]
https://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/6/console

Hello Jon,

I did enable it but most of the credits come to Timo who packaged JSHint
so it can be used by Jenkins :-]

That is still a bit a work in progress though, JSHint results are not
being shown in Jenkins yet beside the console output.

--
Antoine "hashar" Musso


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l













Re: [Wikitech-l] Jenkins now lints javascript!

2012-12-05 Thread Jon Robson
Are you setting this as a global variable?
I had the same issue and realised it needed to be defined as an option

See https://gerrit.wikimedia.org/r/#/c/36919/3/.jshintrc for reference.
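For reference, a minimal .jshintrc along these lines might look as follows. This is a sketch, not the exact file from the linked change; `browser` and `predef` are documented JSHint options, but the `predef` entries here are illustrative MediaWiki globals:

```json
{
    "browser": true,
    "predef": [ "mediaWiki", "jQuery" ]
}
```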

On Wed, Dec 5, 2012 at 10:51 AM, Ryan Kaldari  wrote:
> That's weird, it looks like browser is already set to true in .jshintrc, but
> I still get errors for window being undefined from Jenkins. Is Jenkins using
> the .jshintrc file?
>
> Ryan Kaldari
>
>
> On 12/5/12 10:48 AM, Ryan Kaldari wrote:
>>
>> Can we set 'browser: true' in the jshint config so that it won't complain
>> about window being undefined?
>>
>> Ryan Kaldari
>>
>> On 12/5/12 10:16 AM, Jon Robson wrote:
>>>
>>> Thanks Antoine!
>>> Currently JSHint doesn't get a vote on MobileFrontend
>>>
>>> (http://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/10/console
>>> : SUCCESS (non-voting)). Is it possible to make it vote and -1
>>> anything which disobeys jshint? This would be extremely useful.
>>>
>>> I'm really excited by this, and I'm looking forward to qunit integration
>>> next ;)
>>>
>>> On Wed, Dec 5, 2012 at 2:52 AM, Antoine Musso  wrote:

 Le 04/12/12 22:22, Jon Robson a écrit :
>
> This is now running on MobileFrontend [1] but needs some tweaking!
> It's awesome! Kudos to whoever enabled that.

 
>
> [1]
> https://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/6/console

 Hello Jon,

 I did enable it but most of the credits come to Timo who packaged JSHint
 so it can be used by Jenkins :-]

 That is still a bit a work in progress though, JSHint results are not
 being shown in Jenkins yet beside the console output.

 --
 Antoine "hashar" Musso


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>>
>>>
>>>
>>
>
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Jon Robson
http://jonrobson.me.uk
@rakugojon



Re: [Wikitech-l] Jenkins now lints javascript!

2012-12-05 Thread Ryan Kaldari
That's weird, it looks like browser is already set to true in .jshintrc, 
but I still get errors for window being undefined from Jenkins. Is 
Jenkins using the .jshintrc file?


Ryan Kaldari

On 12/5/12 10:48 AM, Ryan Kaldari wrote:
Can we set 'browser: true' in the jshint config so that it won't 
complain about window being undefined?


Ryan Kaldari

On 12/5/12 10:16 AM, Jon Robson wrote:

Thanks Antoine!
Currently JSHint doesn't get a vote on MobileFrontend
(http://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/10/console 


: SUCCESS (non-voting)). Is it possible to make it vote and -1
anything which disobeys jshint? This would be extremely useful.

I'm really excited by this, and I'm looking forward to qunit 
integration next ;)


On Wed, Dec 5, 2012 at 2:52 AM, Antoine Musso  
wrote:

Le 04/12/12 22:22, Jon Robson a écrit :

This is now running on MobileFrontend [1] but needs some tweaking!
It's awesome! Kudos to whoever enabled that.


[1] 
https://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/6/console

Hello Jon,

I did enable it but most of the credits come to Timo who packaged 
JSHint

so it can be used by Jenkins :-]

That is still a bit a work in progress though, JSHint results are not
being shown in Jenkins yet beside the console output.

--
Antoine "hashar" Musso


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l










Re: [Wikitech-l] Jenkins now lints javascript!

2012-12-05 Thread Ryan Kaldari
Can we set 'browser: true' in the jshint config so that it won't 
complain about window being undefined?


Ryan Kaldari

On 12/5/12 10:16 AM, Jon Robson wrote:

Thanks Antoine!
Currently JSHint doesn't get a vote on MobileFrontend
(http://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/10/console
: SUCCESS (non-voting)). Is it possible to make it vote and -1
anything which disobeys jshint? This would be extremely useful.

I'm really excited by this, and I'm looking forward to qunit integration next ;)

On Wed, Dec 5, 2012 at 2:52 AM, Antoine Musso  wrote:

Le 04/12/12 22:22, Jon Robson a écrit :

This is now running on MobileFrontend [1] but needs some tweaking!
It's awesome! Kudos to whoever enabled that.



[1] 
https://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/6/console

Hello Jon,

I did enable it but most of the credits come to Timo who packaged JSHint
so it can be used by Jenkins :-]

That is still a bit a work in progress though, JSHint results are not
being shown in Jenkins yet beside the console output.

--
Antoine "hashar" Musso


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l








Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Tyler Romeo
Ah OK. That's my fault, then. I must have missed the initial upload of the
change. By the way, what exactly is the purpose of the RDBStore and
UIDGenerator classes? It looks interesting, but I'm just wondering what the
core or extensions will use it for.

*--*
*Tyler Romeo*
Stevens Institute of Technology, Class of 2015
Major in Computer Science
www.whizkidztech.com | tylerro...@gmail.com



On Wed, Dec 5, 2012 at 1:29 PM, Patrick Reilly wrote:

> Tyler,
>
> It was uploaded originally in the following commit:
> https://gerrit.wikimedia.org/r/#/c/16696/ dated Jul 25, 2012 4:11 PM
> by Aaron Schulz.
>
> The only thing that I did was to break it off into a separate commit:
> https://gerrit.wikimedia.org/r/#/c/36801/
>
> So, the point that I was attempting to make was that it in unaltered
> form was available for review for;
> 132 days or 4 months, 9 days.
>
> The mistake that I made was that I didn't use Forge Author and Forge
> Committer access control rights in Gerrit. As, well as NOT adding it
> to the auto loader initially.
>
> — Patrick
>
> On Wed, Dec 5, 2012 at 10:21 AM, Tyler Romeo  wrote:
> > 132 days? It was uploaded onto Gerrit just recently. Many of the people
> > here (including myself) only get notice of changes if it's discussed on
> the
> > mailing list or if a change is uploaded to Gerrit.
> >
> > *--*
> > *Tyler Romeo*
> > Stevens Institute of Technology, Class of 2015
> > Major in Computer Science
> > www.whizkidztech.com | tylerro...@gmail.com
> >
> >
> >
> > On Wed, Dec 5, 2012 at 1:13 PM, Patrick Reilly  >wrote:
> >
> >> There were 132 days for anybody to review and comment on the technical
> >> approach in the UID class.
> >>
> >> — Patrick
> >>
> >> On Wed, Dec 5, 2012 at 10:09 AM, Aaron Schulz 
> >> wrote:
> >> > Some notes (copied from private email):
> >> > * It only creates the lock file the first time.
> >> > * The functions with different bits are not just the same thing with
> more
> >> > bits. Trying to abstract more just made it more confusing.
> >> > * The point is to also have something with better properties than
> uniqid.
> >> > Also I ran large for loops calling those functions and timed it on my
> >> laptop
> >> > back when I was working on that and found it reasonable (if you
> needed to
> >> > insert faster you'd probably have DB overload anyway).
> >> > * hostid seems pretty common and is on the random wmf servers I
> tested a
> >> > while back. If there is some optimization there for third parties that
> >> don't
> >> > have it, of course it would be welcomed.
> >> > 
> >> > At any rate, I changed the revert summary though Timo beat me to
> actually
> >> > merging the revert. My main issue is the authorship breakage and the
> fact
> >> > that the "split of" change wasn't +2'd by a different person. I was
>  also
> >> > later asked to add tests (36816), which should ideally would have been
> >> > required in the first patch rather than as a second one; not a big
> deal
> >> but
> >> > it's a plus to consolidating the changes after a revert.
> >> >
> >> > That said, the change was actually a class split off verbatim from
> >> > https://gerrit.wikimedia.org/r/#/c/16696/ (which was pending for
> ages),
> >> so
> >> > it's not like the change was in gerrit for a split-second and then
> >> merged. I
> >> > think the process should have been better here though it's not a huge
> >> deal
> >> > as it may seem at first glance.
> >> >
> >> >
> >> >
> >> > --
> >> > View this message in context:
> >>
> http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990911.html
> >> > Sent from the Wikipedia Developers mailing list archive at Nabble.com.
> >> >
> >> > ___
> >> > Wikitech-l mailing list
> >> > Wikitech-l@lists.wikimedia.org
> >> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >>
> >> ___
> >> Wikitech-l mailing list
> >> Wikitech-l@lists.wikimedia.org
> >> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >>
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>

Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Patrick Reilly
Tyler,

It was uploaded originally in the following commit:
https://gerrit.wikimedia.org/r/#/c/16696/ dated Jul 25, 2012 4:11 PM
by Aaron Schulz.

The only thing that I did was to break it off into a separate commit:
https://gerrit.wikimedia.org/r/#/c/36801/

So, the point that I was attempting to make was that it was available for
review, in unaltered form, for 132 days (4 months, 9 days).

The mistake that I made was that I didn't use the Forge Author and Forge
Committer access control rights in Gerrit, as well as not adding it to the
autoloader initially.

— Patrick

On Wed, Dec 5, 2012 at 10:21 AM, Tyler Romeo  wrote:
> 132 days? It was uploaded onto Gerrit just recently. Many of the people
> here (including myself) only get notice of changes if it's discussed on the
> mailing list or if a change is uploaded to Gerrit.
>
> *--*
> *Tyler Romeo*
> Stevens Institute of Technology, Class of 2015
> Major in Computer Science
> www.whizkidztech.com | tylerro...@gmail.com
>
>
>
> On Wed, Dec 5, 2012 at 1:13 PM, Patrick Reilly wrote:
>
>> There were 132 days for anybody to review and comment on the technical
>> approach in the UID class.
>>
>> — Patrick
>>
>> On Wed, Dec 5, 2012 at 10:09 AM, Aaron Schulz 
>> wrote:
>> > Some notes (copied from private email):
>> > * It only creates the lock file the first time.
>> > * The functions with different bits are not just the same thing with more
>> > bits. Trying to abstract more just made it more confusing.
>> > * The point is to also have something with better properties than uniqid.
>> > Also I ran large for loops calling those functions and timed it on my
>> laptop
>> > back when I was working on that and found it reasonable (if you needed to
>> > insert faster you'd probably have DB overload anyway).
>> > * hostid seems pretty common and is on the random wmf servers I tested a
>> > while back. If there is some optimization there for third parties that
>> don't
>> > have it, of course it would be welcomed.
>> > 
>> > At any rate, I changed the revert summary though Timo beat me to actually
>> > merging the revert. My main issue is the authorship breakage and the fact
>> > that the "split of" change wasn't +2'd by a different person. I was  also
>> > later asked to add tests (36816), which should ideally would have been
>> > required in the first patch rather than as a second one; not a big deal
>> but
>> > it's a plus to consolidating the changes after a revert.
>> >
>> > That said, the change was actually a class split off verbatim from
>> > https://gerrit.wikimedia.org/r/#/c/16696/ (which was pending for ages),
>> so
>> > it's not like the change was in gerrit for a split-second and then
>> merged. I
>> > think the process should have been better here though it's not a huge
>> deal
>> > as it may seem at first glance.
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990911.html
>> > Sent from the Wikipedia Developers mailing list archive at Nabble.com.
>> >
>> > ___
>> > Wikitech-l mailing list
>> > Wikitech-l@lists.wikimedia.org
>> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l



Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Krinkle
On Dec 5, 2012, at 7:13 PM, Patrick Reilly  wrote:

> There were 132 days for anybody to review and comment on the technical
> approach in the UID class.
> 
> — Patrick
> 

Even if all the people involved had seen it a hundred times, the rule against 
self-merging is a social one, separate from that. That was the reason it was 
brought up, and the reason it was subsequently reverted again. Nothing personal, 
and not (directly) related to the contents of the commit itself.

-- Krinkle




Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Tyler Romeo
132 days? It was uploaded onto Gerrit just recently. Many of the people
here (including myself) only get notice of changes if it's discussed on the
mailing list or if a change is uploaded to Gerrit.

*--*
*Tyler Romeo*
Stevens Institute of Technology, Class of 2015
Major in Computer Science
www.whizkidztech.com | tylerro...@gmail.com



On Wed, Dec 5, 2012 at 1:13 PM, Patrick Reilly wrote:

> There were 132 days for anybody to review and comment on the technical
> approach in the UID class.
>
> — Patrick
>
> On Wed, Dec 5, 2012 at 10:09 AM, Aaron Schulz 
> wrote:
> > Some notes (copied from private email):
> > * It only creates the lock file the first time.
> > * The functions with different bits are not just the same thing with more
> > bits. Trying to abstract more just made it more confusing.
> > * The point is to also have something with better properties than uniqid.
> > Also I ran large for loops calling those functions and timed it on my
> laptop
> > back when I was working on that and found it reasonable (if you needed to
> > insert faster you'd probably have DB overload anyway).
> > * hostid seems pretty common and is on the random wmf servers I tested a
> > while back. If there is some optimization there for third parties that
> don't
> > have it, of course it would be welcomed.
> > 
> > At any rate, I changed the revert summary though Timo beat me to actually
> > merging the revert. My main issue is the authorship breakage and the fact
> > that the "split of" change wasn't +2'd by a different person. I was  also
> > later asked to add tests (36816), which should ideally would have been
> > required in the first patch rather than as a second one; not a big deal
> but
> > it's a plus to consolidating the changes after a revert.
> >
> > That said, the change was actually a class split off verbatim from
> > https://gerrit.wikimedia.org/r/#/c/16696/ (which was pending for ages),
> so
> > it's not like the change was in gerrit for a split-second and then
> merged. I
> > think the process should have been better here though it's not a huge
> deal
> > as it may seem at first glance.
> >
> >
> >
> > --
> > View this message in context:
> http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990911.html
> > Sent from the Wikipedia Developers mailing list archive at Nabble.com.
> >
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>

Re: [Wikitech-l] Jenkins now lints javascript!

2012-12-05 Thread Jon Robson
Thanks Antoine!
Currently JSHint doesn't get a vote on MobileFrontend
(http://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/10/console
: SUCCESS (non-voting)). Is it possible to make it vote and -1
anything which disobeys jshint? This would be extremely useful.

I'm really excited by this, and I'm looking forward to qunit integration next ;)

On Wed, Dec 5, 2012 at 2:52 AM, Antoine Musso  wrote:
> Le 04/12/12 22:22, Jon Robson a écrit :
>> This is now running on MobileFrontend [1] but needs some tweaking!
>> It's awesome! Kudos to whoever enabled that.
> 
>> [1] 
>> https://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/6/console
>
> Hello Jon,
>
> I did enable it but most of the credits come to Timo who packaged JSHint
> so it can be used by Jenkins :-]
>
> That is still a bit a work in progress though, JSHint results are not
> being shown in Jenkins yet beside the console output.
>
> --
> Antoine "hashar" Musso
>
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Jon Robson
http://jonrobson.me.uk
@rakugojon



Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Patrick Reilly
There were 132 days for anybody to review and comment on the technical
approach in the UID class.

— Patrick

On Wed, Dec 5, 2012 at 10:09 AM, Aaron Schulz  wrote:
> Some notes (copied from private email):
> * It only creates the lock file the first time.
> * The functions with different bits are not just the same thing with more
> bits. Trying to abstract more just made it more confusing.
> * The point is to also have something with better properties than uniqid.
> Also I ran large for loops calling those functions and timed it on my laptop
> back when I was working on that and found it reasonable (if you needed to
> insert faster you'd probably have DB overload anyway).
> * hostid seems pretty common and is on the random wmf servers I tested a
> while back. If there is some optimization there for third parties that don't
> have it, of course it would be welcomed.
> 
> At any rate, I changed the revert summary though Timo beat me to actually
> merging the revert. My main issue is the authorship breakage and the fact
> that the "split of" change wasn't +2'd by a different person. I was  also
> later asked to add tests (36816), which should ideally would have been
> required in the first patch rather than as a second one; not a big deal but
> it's a plus to consolidating the changes after a revert.
>
> That said, the change was actually a class split off verbatim from
> https://gerrit.wikimedia.org/r/#/c/16696/ (which was pending for ages), so
> it's not like the change was in gerrit for a split-second and then merged. I
> think the process should have been better here though it's not a huge deal
> as it may seem at first glance.
>
>
>
> --
> View this message in context: 
> http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990911.html
> Sent from the Wikipedia Developers mailing list archive at Nabble.com.
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l



Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Aaron Schulz
Some notes (copied from private email):
* It only creates the lock file the first time.
* The functions with different bits are not just the same thing with more
bits. Trying to abstract more just made it more confusing.
* The point is to also have something with better properties than uniqid.
Also I ran large for loops calling those functions and timed it on my laptop
back when I was working on that and found it reasonable (if you needed to
insert faster you'd probably have DB overload anyway).
* hostid seems pretty common and is on the random wmf servers I tested a
while back. If there is some optimization there for third parties that don't
have it, of course it would be welcomed.

At any rate, I changed the revert summary, though Timo beat me to actually
merging the revert. My main issue is the authorship breakage and the fact
that the "split off" change wasn't +2'd by a different person. I was also
later asked to add tests (36816), which ideally would have been required in
the first patch rather than as a second one; not a big deal, but a plus of
consolidating the changes after a revert.

That said, the change was actually a class split off verbatim from
https://gerrit.wikimedia.org/r/#/c/16696/ (which was pending for ages), so
it's not like the change was in gerrit for a split-second and then merged. I
think the process should have been better here, though it's not as huge a
deal as it may seem at first glance.



--
View this message in context: 
http://wikimedia.7.n6.nabble.com/Really-Fast-Merges-tp4990838p4990911.html
Sent from the Wikipedia Developers mailing list archive at Nabble.com.



Re: [Wikitech-l] Spam filters for wikidata.org

2012-12-05 Thread Chris Steipp
On Wed, Dec 5, 2012 at 3:34 AM, Daniel Kinzler  wrote:
> You really want the spam filter extensions to have internal knowledge of
> Wikibase? That seems like a nasty cross-dependency, and goes directly against
> the idea of modularization and separation of concerns...
>
> We are running into the "glue code problem" here. We need code that knows 
> about
> the spam filters and about wikibase. Should it be in the spam filter, in
> Wikibase, or in a separate, third extension? That would be cleanest, but a
> hassle to maintain... Which way would you prefer?

I think Daniel has correctly stated the problem.

My perspective:

One of the directions of the Admin Tools project is to combine some of
the various tools into AbuseFilter, so I think it's safe to assume
that AbuseFilter will be around and maintained for some time, and
Wikidata could easily use the hooks it provides to do a lot of the
work providing the interface. That being said, expanding AbuseFilter
to work on non-article data has already been requested a few times, so
I think we can make AbuseFilter much easier for Wikidata and AFT to
plug into.

Maybe to start with, we can find out what AbuseFilter functionality is
common to AFT and Wikibase, and try to build most of the overlapping
pieces into AbuseFilter. Then each can also use the AbuseFilter hooks to
complete the functionality?



Re: [Wikitech-l] Clone a specific extension version

2012-12-05 Thread Daniel Kinzler
On 05.12.2012 14:39, Aran Dunkley wrote:
> Hi Guys,
> How do I get a specific version of an extension using git?
> I want to get Validator 0.4.1.4 and Maps 1.0.5, but I can't figure out
> how to use git to do this...

git always clones the entire repository, including all versions. So you clone,
and then use git checkout to get whatever branch or tag you want.
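A quick sketch of that workflow, using a throwaway local repository so the commands run anywhere. With the real extension you would clone its URL instead; the tag name below mirrors the Validator 0.4.1.4 release asked about, but check the repository's actual `git tag` output for the published tag names:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A small "upstream" repository with one tagged release and later work.
git init -q upstream
cd upstream
git -c user.name=demo -c user.email=demo@example.org commit -q --allow-empty -m "release"
git tag 0.4.1.4
git -c user.name=demo -c user.email=demo@example.org commit -q --allow-empty -m "later work"
cd ..

# Cloning fetches the whole history, tags included.
git clone -q upstream extension
cd extension

# Check out the tag; this leaves you on a detached HEAD, which is fine
# for simply using that release.
git checkout -q 0.4.1.4
git describe --tags   # prints: 0.4.1.4
```

`git tag` (or `git tag -l`) inside the clone lists all available release tags, so you can see what versions exist before checking one out.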

-- daniel




[Wikitech-l] Clone a specific extension version

2012-12-05 Thread Aran Dunkley
Hi Guys,
How do I get a specific version of an extension using git?
I want to get Validator 0.4.1.4 and Maps 1.0.5, but I can't figure out
how to use git to do this...
Thanks,
Aran



Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Chad
On Wed, Dec 5, 2012 at 5:05 AM, Krinkle  wrote:
> On Dec 4, 2012, at 9:46 PM, Daniel Friesen  wrote:
>
>> On Tue, 04 Dec 2012 12:37:02 -0800, Chad  wrote:
>>
>>> On Tue, Dec 4, 2012 at 3:27 PM, Chad  wrote:
 On Tue, Dec 4, 2012 at 3:24 PM, Tyler Romeo  wrote:
> Don't we have some sort of policy about an individual merging commits that
> he/she uploaded?
>

 Yes. We've been over this a dozen times--if you're on a repository
 that has multiple maintainers (ie: you're not the only one, so you're
 always self-merging), you should almost never merge your own
 code unless you're fixing an immediate problem (site outage, sytax
 errors).

>>>
>>> In fact, I'm tired of repeating this problem, so I started a change to
>>> actually enforce this policy[0]. We'll probably need to tweak it further
>>> to allow for the exceptions we actually want. Review welcome.
>>>
>>> -Chad
>>>
>>> [0] https://gerrit.wikimedia.org/r/#/c/36815/
>>
>> Doesn't TWN's bot self-review? Might need to add an exception for that 
>> before merging.
>
> I'm not sure in which part of the flow rules.pl is applied but maybe it can 
> be enforced the other way around?
>
> Instead of restricting Submit, restrict CR scores. Submission in turn only 
> has to be restricted to CR+2.
>
> But yeah, we need to either whitelist L10n-bot from this restriction or make 
> those commits auto-merge in a different way.
>

And behold, there are docs:

https://gerrit-review.googlesource.com/Documentation/prolog-cookbook.html

https://gerrit-review.googlesource.com/Documentation/prolog-change-facts.html

-Chad



Re: [Wikitech-l] Spam filters for wikidata.org

2012-12-05 Thread Daniel Kinzler
On 04.12.2012 18:20, Matthew Flaschen wrote:
> On 12/04/2012 04:52 AM, Daniel Kinzler wrote:
>> 4) just add another hook, similar to EditFilterMergedContent, but more 
>> generic,
>> and call it in EditEntity (and perhaps also in EditPage!). If we want a spam
>> filter extension to work with non-text content, it will have to implement 
>> that
>> new hook.
> 
> I think that makes sense.  The spam filters will work best if they are
> aware of how wikidata works, and have access to the full JSON
> information of the change.

You really want the spam filter extensions to have internal knowledge of
Wikibase? That seems like a nasty cross-dependency, and goes directly against
the idea of modularization and separation of concerns...

We are running into the "glue code problem" here. We need code that knows about
the spam filters and about wikibase. Should it be in the spam filter, in
Wikibase, or in a separate, third extension? That would be cleanest, but a
hassle to maintain... Which way would you prefer?

-- daniel




Re: [Wikitech-l] Merging a branch and then pushing it

2012-12-05 Thread Antoine Musso
Le 04/12/12 23:20, Jeroen De Dauw a écrit :
> I have a feature branch with two dozen commits which I merged into master
> locally and now want to push directly to git. All commits have been
> reviewed, so going via gerrit makes no sense. (In fact it complains about
> the stuff already being closed if I try that.)

Message:
 ! [remote rejected] master -> master (can not update the reference as a
fast forward)

That means your master is apparently not based on Gerrit master. I would
try merging again:

Start out using a clean version of latest origin master:

 git remote update
 git checkout -b featuremerge -t origin/master

Then merge in your feature branch:

  git merge featurebranch

Your local featuremerge branch should now be a merge commit with
origin/master as its first parent.  Push it to Gerrit for review and
submit the resulting change:

 git push origin featuremerge:refs/for/master

If the commits already got reviewed, I guess they are in a branch known
to Gerrit?


-- 
Antoine "hashar" Musso




Re: [Wikitech-l] Jenkins now lints javascript!

2012-12-05 Thread Antoine Musso
On 04/12/12 22:22, Jon Robson wrote:
> This is now running on MobileFrontend [1] but needs some tweaking!
> It's awesome! Kudos to whoever enabled that.

> [1] 
> https://integration.mediawiki.org/ci/job/mwext-MobileFrontend-jslint/6/console

Hello Jon,

I did enable it, but most of the credit goes to Timo, who packaged JSHint
so it can be used by Jenkins :-]

That is still a bit of a work in progress though; JSHint results are not
yet shown anywhere in Jenkins besides the console output.

-- 
Antoine "hashar" Musso




Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Antoine Musso
On 04/12/12 21:24, Tyler Romeo wrote:
> Don't we have some sort of policy about an individual merging commits that
> he/she uploaded? Because these three changes:
> https://gerrit.wikimedia.org/r/36801
> 
> https://gerrit.wikimedia.org/r/36812
> 
> https://gerrit.wikimedia.org/r/36813
> 
> Were all uploaded and submitted in a matter of minutes by the same person,
> and each is a fix for errors in the commit before it. It kind of defeats
> the point of having code review in the first place.

Overall there are a lot of issues in this class, such as:

 - shelling out to a possibly non-existent command (slow)
 - using file locks: every time you want a new UID it writes a file, locks
it, generates the UID, unlocks the file and deletes it. That seems slow to
me and probably not going to scale.
 - there is a ton of code duplication; the number of bits we want
should be a parameter to a generic function.

uniqid() will probably give you what you want without having to shell
out.  It is based on microtime(), but you can add more entropy by
passing a string prefix as the first argument and true as the second,
which appends a pseudo-random value.


Anyway, that looks like work in progress; I have submitted a change to
revert the commits from master:

 https://gerrit.wikimedia.org/r/36961


-- 
Antoine "hashar" Musso




Re: [Wikitech-l] Really Fast Merges

2012-12-05 Thread Krinkle
On Dec 4, 2012, at 9:46 PM, Daniel Friesen  wrote:

> On Tue, 04 Dec 2012 12:37:02 -0800, Chad  wrote:
> 
>> On Tue, Dec 4, 2012 at 3:27 PM, Chad  wrote:
>>> On Tue, Dec 4, 2012 at 3:24 PM, Tyler Romeo  wrote:
 Don't we have some sort of policy about an individual merging commits that
 he/she uploaded?
 
>>> 
>>> Yes. We've been over this a dozen times--if you're on a repository
>>> that has multiple maintainers (i.e. you're not the only one, so you're
>>> not forced to always self-merge), you should almost never merge your own
>>> code unless you're fixing an immediate problem (site outage, syntax
>>> errors).
>>> 
>> 
>> In fact, I'm tired of repeating this problem, so I started a change to
>> actually enforce this policy[0]. We'll probably need to tweak it further
>> to allow for the exceptions we actually want. Review welcome.
>> 
>> -Chad
>> 
>> [0] https://gerrit.wikimedia.org/r/#/c/36815/
> 
> Doesn't TWN's bot self-review? Might need to add an exception for that before 
> merging.

I'm not sure at which point in the flow rules.pl is applied, but maybe it can be 
enforced the other way around?

Instead of restricting Submit, restrict CR scores. Submission in turn only has 
to be restricted to CR+2.

But yeah, we need to either whitelist L10n-bot from this restriction or make 
those commits auto-merge in a different way.

-- Krinkle




Re: [Wikitech-l] The rest of the SMWCon conference is on YouTube

2012-12-05 Thread Stephan Gambke
Hi Yury,

great news. This must have been a lot of work. Thanks for the effort!

One issue: I included the youtube link, but there seems to be a
problem with the template and properties. Could you have a look?
(http://semantic-mediawiki.org/wiki/SMWCon_Fall_2012/Filtered_result_format)

Cheers,
Stephan
