Re: [Wikitech-l] Code Review tools

2011-03-29 Thread Platonides
Tim Starling wrote:
 In the last week, I've been reviewing extensions that were written
 years ago, and were never properly looked at. I don't think it's
 appropriate to measure success in code review solely by the number of
 new revisions after the last branch point.
 
 Code review of self-contained projects becomes easier the longer you
 leave it. This is because you can avoid reading code which was
 superseded, and because it becomes possible to read whole files
 instead of diffs. So maintaining some amount of review backlog means
 that you can make more efficient use of reviewer time.

I agree. But that only works for extensions since:
* They are self-contained
* They are relatively small
* They are not deployment blockers

And still, they are harder to fix months later when the author has moved
on (think of the poolcounterd bug).

I don't think that would work as well for core MediaWiki, although it may
be feasible for not-so-big features with a kill switch. Large commits
changing many files would need a branch so they can be reviewed as a set.
However, our problem with branches is that they remove almost all peer
review and testing, and merges are likely to introduce subtle bugs.
The drawbacks of late review apply there as well.


 Our current system links version control with review. After a
 developer has done a substantial amount of work, they commit it. That
 doesn't necessarily mean they want their code looked at at that point,
 they may just want to make a backup.

How do you propose to fix that? Having the committer defer their own revision?
It may be worth keeping a list of review requests on mediawiki.org.




Re: [Wikitech-l] Code Review tools

2011-03-28 Thread Tim Starling
On 26/03/11 14:56, Mark A. Hershberger wrote:
 So, other than switching to the mythical GIT, where all is rainbows and
 roses, what can we do to improve code review now?

It's no mystery. After the 1.17 deployment, the team that was doing
code review was disassembled. If you want code review to happen
faster, then getting people to work on it would be a good start.

 If code is to survive past a week in the repository, it has to be
 reviewed.
 
 If you want to make a commit that depends on un-reviewed code, you have
 to find someone to review it.  Otherwise, your commit will break trunk
 when that code is reverted.

Find someone to review it? If the experienced developers on the WMF
payroll aren't assigned to code review, then under your proposal, the
only option for avoiding a revert will be to get someone with no clue
about anything to rubber-stamp the code.

However, volunteer developers aren't always the most capable people at
navigating bureaucracy. In practice, a lot of people would commit
code, have it reverted, and leave.

If the code review manpower is there, we can be friendly and
encouraging to our developers, not threaten them with a revert unless
they can make at least one developer be their friend within seven days.

The WMF really is central in this, because we have a policy of hiring
as many experienced developers as possible from the volunteer
community. So that is where the expertise is concentrated.

 FIXMEs would disappear.  FIXMEs would be up for reversion almost
 immediately.  Give the committer a day to fix the code, but if it
 survives 24 hours as a FIXME, it gets reverted.

By definition, our volunteer developers have lives outside of
MediaWiki. We have to fit in with their schedules. I don't think we
should give them a kick in the teeth just because they committed
something on Sunday and have to go to school on Monday.

If a commit is insecure, or changes interfaces in a way that will be
disruptive to other developers, or breaks key functionality, then
sure, we should revert it right away. There's no need to wait 24
hours. But I don't think we need to be issuing death sentences for
typos in comments.

A fixme status just means that there is something wrong with the
commit, however minor; it doesn't mean that any urgent action is required.

Your proposal seems to be based on the idea that review under Git is
many times better than review with CodeReview and Subversion. I don't
think that's true, I think it's very slightly better. Whether you use
Git or Subversion, you still need people with brains reading code.

-- Tim Starling




Re: [Wikitech-l] Code Review tools

2011-03-28 Thread Roan Kattouw
2011/3/28 Tim Starling tstarl...@wikimedia.org:
 By definition, our volunteer developers have lives outside of
 MediaWiki. We have to fit in with their schedules. I don't think we
 should give them a kick in the teeth just because they committed
 something on Sunday and have to go to school on Monday.

 If a commit is insecure, or changes interfaces in a way that will be
 disruptive to other developers, or breaks key functionality, then
 sure, we should revert it right away. There's no need to wait 24
 hours. But I don't think we need to be issuing death sentences for
 typos in comments.

+1

Reverting is a blunt instrument and should only be used when
appropriate. I think it's perhaps a bit underused currently, but that
doesn't mean we should swing to the other end of the spectrum.
Reverting a revision is appropriate if it breaks things or if its
presence in the repository causes other problems, like Tim said. Also,
if a revision is problematic and can't be fixed quickly, it should be
reverted, not left in a fixme state for two weeks. OTOH reverting
things for minor issues is needlessly disruptive (not to mention
demotivating), and reverting a *volunteer* developer's revision simply
because *paid* reviewers (most of them are paid anyway) didn't get
around to reviewing it is the kind of dickish behavior that will scare
off volunteers very effectively.

Roan Kattouw (Catrope)



Re: [Wikitech-l] Code Review tools

2011-03-28 Thread Aryeh Gregor
On Fri, Mar 25, 2011 at 11:56 PM, Mark A. Hershberger
mhershber...@wikimedia.org wrote:
 As far as I can see, the main reason that people think code review
 works better under GIT is because the committer is responsible for
 getting xyr[2] code reviewed *before* it is merged.  The committer is
 motivated to get xyr code reviewed because if xe doesn't, the code
 won't be used, and others will not experience its beauty.

I don't think that's the right way to put it.  In a
properly-functioning review-then-commit system, it should be easy to
get code reviewed.  The advantage of reviewing the code first is that
psychologically, it's much easier to say "Fix these minor things and
then I'll approve it" than to say "Fix these minor things or else I'll
revert it."  The first gives positive incentives, while the second
gives negative incentives, and people appreciate positive incentives a
lot more.  In a review-first system, you're going to routinely have
reviewers asking that the patch author write better comments or
conform to style guidelines or simplify the logic a bit before they
give approval -- or even restructure the changes entirely.  In a
commit-first system, reviewers are going to be reluctant to revert
code that works, even if it has some minor deficiencies, so committers
have little incentive to fix minor code issues.  Code quality suffers
as a result.

 And just to be clear, there would be a not-too-distant “later”.  I
 propose a week.

 If code is to survive past a week in the repository, it has to be
 reviewed.

 If you want to make a commit that depends on un-reviewed code, you have
 to find someone to review it.  Otherwise, your commit will break trunk
 when that code is reverted.

This is a terrible idea.  Review needs to be something that everyone
is guaranteed to get without effort on their part.  You cannot design
a code review system on the theory that code authors are supposed to
somehow get their code reviewed when no one is formally required or
expected to review it.  I'm all for giving people incentives to do the
right thing, but incentives are pointless if the person being
incentivized has no way to do what you're trying to get them to do.
Incentives have to be placed on the code *reviewers*, because they're
the only ones who can decide to review a given patch.  Conveniently,
almost all the code reviewers happen to be employed by Wikimedia, so
the incentive can be the good old conventional "your boss told you to."

If, as Tim says, Wikimedia developers were un-assigned from code
review after the 1.17 deployment, *that* is the problem that needs to
be fixed.  We need a managerial decision that all relatively
experienced developers employed by Wikimedia need to set aside their
other work to do as much code review as necessary to keep current.  If
commits are not, as a general rule, consistently reviewed within two
or three days, the system is broken.  I don't know why this isn't
clear to everyone yet.


Re: [Wikitech-l] Code Review tools

2011-03-28 Thread MZMcBride
Tim Starling wrote:
 If the code review manpower is there, we can be friendly and
 encouraging to our developers, not threaten them with a revert unless
 they can make at least one developer be their friend within seven days.
 
 The WMF really is central in this, because we have a policy of hiring
 as many experienced developers as possible from the volunteer
 community. So that is where the expertise is concentrated.

You're one of the most senior developers and you're a Wikimedia employee,
yet some of your writing comes off as though you're on the outside. Yes,
Wikimedia needs to devote more manpower to code review. This has been pretty
apparent for quite some time. The central question at this point seems to
be: what's the hold-up?

Long ago I lost track of who's in charge of what, but I'm told there is some
sort of hierarchy in place in the tech department. Who's empowered to start
assigning people to review code in a reasonable timeframe? Like Aryeh, I
find this entire thread a bit baffling.

MZMcBride





Re: [Wikitech-l] Code Review tools

2011-03-28 Thread Tim Starling
On 29/03/11 11:26, MZMcBride wrote:
 Long ago I lost track of who's in charge of what, but I'm told there is some
 sort of hierarchy in place in the tech department. Who's empowered to start
 assigning people to review code in a reasonable timeframe? Like Aryeh, I
 find this entire thread a bit baffling.

The hierarchy is CTO -> EPMs -> regular plebs like me. The EPMs are
Rob Lanphier, CT Woo, Mark Bergsma and Alolita Sharma. General
MediaWiki work is mostly Rob Lanphier's responsibility, which is why
he's been so active in this thread.

Rob doesn't know as much about MediaWiki as me, but he knows the
people who work on it and how to manage them. I think his response
with the subject "The priority of code review" was entirely reasonable.

I'm not saying that I think MediaWiki code review should be the
highest priority task for the Foundation, or that it's important to
review all commits within a few days, as Aryeh contends:

Aryeh wrote:
 If commits are not, as a general rule, consistently reviewed within two
 or three days, the system is broken.  I don't know why this isn't
 clear to everyone yet.

I'm saying that the Git/Subversion debate is peripheral, and that
human factors like assignment of labour and level of motivation are
almost entirely responsible for the rate of code review.

In the last week, I've been reviewing extensions that were written
years ago, and were never properly looked at. I don't think it's
appropriate to measure success in code review solely by the number of
new revisions after the last branch point.

Code review of self-contained projects becomes easier the longer you
leave it. This is because you can avoid reading code which was
superseded, and because it becomes possible to read whole files
instead of diffs. So maintaining some amount of review backlog means
that you can make more efficient use of reviewer time.

Our current system links version control with review. After a
developer has done a substantial amount of work, they commit it. That
doesn't necessarily mean they want their code looked at at that point,
they may just want to make a backup.

It's useful to look at such intermediate code for the purposes of
mentoring, but that's not the same sort of task as a review prior to a
tarball release or deployment, and it shouldn't have the same priority.

-- Tim Starling




Re: [Wikitech-l] Code Review tools

2011-03-28 Thread Mark A. Hershberger
Tim Starling tstarl...@wikimedia.org writes:

 On 26/03/11 14:56, Mark A. Hershberger wrote:
 If code is to survive past a week in the repository, it has to be
 reviewed.

 If you want to make a commit that depends on un-reviewed code, you have
 to find someone to review it.  Otherwise, your commit will break trunk
 when that code is reverted.

 Find someone to review it? If the experienced developers on the WMF
 payroll aren't assigned to code review, then under your proposal, the
 only option for avoiding a revert will be to get someone with no clue
 about anything to rubber-stamp the code.

Thanks for pointing out the things I hadn't considered in my
suggestions.  I was focused on making junior developers motivated to
find reviewers, but neglected to thoroughly consider the results of my
suggestion.

 Your proposal seems to be based on the idea that review under Git is
 many times better than review with CodeReview and Subversion. I don't
 think that's true, I think it's very slightly better. Whether you use
 Git or Subversion, you still need people with brains reading code.

To be clear, I don't think Git is vastly superior to any other VCS for
getting code review done.  I do think that because Gerrit, for example,
appears to be more widely used and supported than MediaWiki's
CodeReview extension, many people come to this conclusion.

I could be wrong.  I probably am.  But I'd like to fix the Code Review
problem so that it can no longer be used as an excuse to change our
VCS.

Using Subversion has its pluses and minuses.  Code review should not be
one of them.

Mark.



Re: [Wikitech-l] Code Review tools

2011-03-27 Thread Bryan Tong Minh
On Sun, Mar 27, 2011 at 3:33 AM, Platonides platoni...@gmail.com wrote:
 Daniel Friesen wrote:
 http://www.mediawiki.org/wiki/Special:Code/MediaWiki/80248
 Comment gives a Tesla link saying something broke. However the Tesla
 link does not identify that commit as the guaranteed commit that
 actually broke code. The commit was followed up with several fixmes
 already and it's unknown if the breakage is still present. The commit is
 potentially perfectly functional, hit by Tesla catching a completely
 unrelated commit, or marking a bug that's already fixed.

 Come on. It is easy enough to check if your revision is the culprit.

 svn up -r r80247
 cd tests/phpunit/
 make noparser

Which takes approximately one hour to run.
We should fix this, because otherwise nobody is going to run the unit
tests before committing something.


Bryan



Re: [Wikitech-l] Code Review tools

2011-03-27 Thread Platonides
Bryan Tong Minh wrote:
 On Sun, Mar 27, 2011 at 3:33 AM, Platonides wrote:
 Come on. It is easy enough to check if your revision is the culprit.

 svn up -r r80247
 cd tests/phpunit/
 make noparser

 Which takes approximately one hour to run.
 We should fix this, because otherwise nobody is going to run the unit
 tests before committing something.
 
 
 Bryan

$ time make noparser
Tests: 823, Assertions: 9512, Failures: 8, Incomplete: 42, Skipped: 3.
make: *** [noparser] Error 1

real    0m45.697s
user    0m10.389s
sys     0m1.523s

I have MySQL's tmpdir set to a tmpfs filesystem (MySQL doesn't support
in-memory tables with BLOBs).
With different hardware, a cold cache and the temporary tables created
on disk, it may take a few minutes, but not an hour.

On the other hand, running the parser tests through phpunit can take that
long, whereas the good old parserTests.php also takes ~44s. All the extra
time is DB overhead: dropping and duplicating tables, inserting articles
and waiting for the DB to answer.
I tested opening a new MySQL connection instead of dropping each table
separately, but it was slower. A change that could improve performance
would be to insert everything into one main template table and clone
that, with its content, for each parser test.
Or we could try to remove the db dependency altogether for parser tests.
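
In MySQL terms, that cloning idea would look roughly like the sketch below.
This is only an illustration: the table names and prefixes are invented,
only one table is shown, and the real harness would have to do this for
every table it sets up.

$db = wfGetDB( DB_MASTER );

// One-off setup: build a fully populated template copy of each table the
// parser tests need ('page' is shown here; the real suite touches many more).
$db->query( 'CREATE TABLE parsertest_template_page LIKE page' );
$db->query( 'INSERT INTO parsertest_template_page SELECT * FROM page' );

// Per parser test: clone the template together with its content instead of
// dropping, recreating and re-populating the working tables each time.
$db->query( 'CREATE TEMPORARY TABLE unittest_page LIKE parsertest_template_page' );
$db->query( 'INSERT INTO unittest_page SELECT * FROM parsertest_template_page' );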




Re: [Wikitech-l] Code Review tools

2011-03-27 Thread Brion Vibber
On Fri, Mar 25, 2011 at 8:56 PM, Mark A. Hershberger 
mhershber...@wikimedia.org wrote:

 Platonides platoni...@gmail.com writes:

  And no, nobody wants our review paradigm to be "let's spend several
  months on the backlog every time we want to release." It was just the
  best we managed to afford.

 We've been doing a little better for the past month, but Robla's
 chart[1] is still looking ugly.

 So, other than switching to the mythical GIT, where all is rainbows and
 roses, what can we do to improve code review now?


tl;dr summary: The biggest single improvement that can be made is to *ship
code faster*.

When new code comes in, there's basically a few things it can do:
* break in an obvious and visible way
* break in some circumstances, which might not be obvious on first look
* mostly work, but be inefficient or have other negative side effects that
need fixing
* mostly work, but cause HORRIBLE DATA CORRUPTION that's not noticed for
some time
* work pretty well

Because we're afraid of letting hard-to-find bugs go through, we're holding
*everything* back for too long in the hopes that we'll somehow develop the
ability to find hard-to-find bugs easily. As a result, the code paths don't
get exercised until a giant last-minute review-and-push comes through months
later, and finding the actual source of the bugs becomes even *more*
difficult because you have 6 months of changes to search all at once instead
of a few days' worth.

Ship sooner - fail faster - fix quickly. Update the live site no less
frequently than weekly; update test sites more frequently than that. Make
sure those test sites include things that developers are actually
dogfooding. Encourage other testers to run on trunk and report issues.


A smaller, but still relevant issue is to see if we can change how we think
about review priorities: something that changes the core of the parser or
how pages get saved might well cause HORRIBLE DATA CORRUPTION, but changes
in UI code probably won't. Changes in UI code might cause an XSS
vulnerability, however... so when thinking about how much attention code
needs, we should be considering the module rather than laying down blanket
policies.

Some more explicit reviewer module 'ownership' could indeed be helpful --
especially if we have a more explicit review process for 'big changes', but
even for less formal review.

As far as I can see, the main reason that people think code review
 works better under GIT is because the committer is responsible for
 getting xyr[2] code reviewed *before* it is merged.  The committer is
 motivated to get xyr code reviewed because if xe doesn't, the code
 won't be used, and others will not experience its beauty.


That isn't specific to git; the same methodology works in SVN or CVS or
whatever where you're reviewing patches submitted through email, bug tracker
systems, etc. The advantage git has here is that your intermediate work is
easier to keep and share within the revision control system, as opposed to
having to keep your work *outside* the version control system until it's
been approved by someone else.

IMO that's a big advantage, but you can still do review-first with SVN, and
we always have for patches submitted through bugzilla or the mailing list by
non-committers.

If review and application of submitted patches can be made consistent and
reasonably speedy, that would again be a big improvement without requiring a
toolset change: getting more good stuff through, with no danger of it
breaking things _before_ approval & merging.

So, while Subversion doesn't support review-first, it can incorporate
 revert-later.  We can even use our current CodeReview tool.  We just
 need to be more aggressive reverting unreviewed code.

 And just to be clear, there would be a not-too-distant “later”.  I
 propose a week.

 If code is to survive past a week in the repository, it has to be
 reviewed.


 If you want to make a commit that depends on un-reviewed code, you have
 to find someone to review it.  Otherwise, your commit will break trunk
 when that code is reverted.


This is actually a lot harder than it might sound; even in only a week,
trimming out dependency on dependency on dependency can be extremely
difficult, especially if some change involved lots of giant whitespace
cleanup or variable renames or other things that play hell with patch
resolution.

Reverting generically questionable code should probably happen a lot faster
than after a week.

-- brion

Re: [Wikitech-l] Code Review tools

2011-03-27 Thread Mark A. Hershberger
Brion Vibber br...@pobox.com writes:

 On Fri, Mar 25, 2011 at 8:56 PM, Mark A. Hershberger 
 If you want to make a commit that depends on un-reviewed code, you have
 to find someone to review it.  Otherwise, your commit will break trunk
 when that code is reverted.

 This is actually a lot harder than it might sound; even in only a week,
 trimming out dependency on dependency on dependency can be extremely
 difficult, especially if some change involved lots of giant whitespace
 cleanup or variable renames or other things that play hell with patch
 resolution.

 Reverting generically questionable code should probably happen a lot faster
 than after a week.

I did suggest that we revert it within 24 hours of it being marked
FIXME.  I'd even be fine with immediate reversion.

You suggest putting up test servers and deploying code more quickly, which
I'm all in favor of.  TranslateWiki does this somewhat for us, but I
think setting up a prototype where this would happen more regularly, and
with a configuration more like the WMF wikis, would be a good idea.

Mark.



Re: [Wikitech-l] Code Review tools

2011-03-26 Thread Ashar Voultoiz
On 26/03/11 05:48, Daniel Friesen wrote:
 What about the fixmes left open since it's not clear if anything is even
 still broken currently.

If it is unclear, it either needs clarification or deserves a reversion. 
We already have enough lines hiding in the fog, ready to jump at you when 
you stray off the path.

 The fixmes where extra things, like new tests, should be added,
  but the actual commit in question isn't broken in any way.
 
  The fixmes for things which are perfectly functional, but need
 a minor bit of tweaking since they work perfectly fine, but don't use
 the best practice methods to do it.

Do we even have fixmes for the last two cases?  Anyway for tests, they 
might be required just to make sure other developers using the feature 
will use it as intended. There are always funny corner cases to handle, 
especially with PHP.

-- 
Ashar Voultoiz




Re: [Wikitech-l] Code Review tools

2011-03-26 Thread Roan Kattouw
2011/3/26 Mark A. Hershberger mhershber...@wikimedia.org:
 If code is to survive past a week in the repository, it has to be
 reviewed.

This is basically what I suggested in the other thread, except I added
a few other conditions that have to be satisfied before we can start
using such a paradigm (relating to reviewer allocation, discipline and
assignment).

Roan Kattouw (Catrope)



Re: [Wikitech-l] Code Review tools

2011-03-26 Thread Daniel Friesen
On 11-03-25 11:57 PM, Ashar Voultoiz wrote:
 On 26/03/11 05:48, Daniel Friesen wrote:
 What about the fixmes left open since it's not clear if anything is even
 still broken currently.
 If it is unclear, it either needs clarification or deserves a reversion.
 We already have enough lines hiding in the fog, ready to jump at you when
 you stray off the path.

 The fixmes where extra things, like new tests, should be added,
 but the actual commit in question isn't broken in any way.
   
 The fixmes for things which are perfectly functional, but need
 a minor bit of tweaking since they work perfectly fine, but don't use
 the best practice methods to do it.
 Do we even have fixmes for the last two cases?  Anyway for tests, they
 might be required just to make sure other developers using the feature
 will use it as intended. There are always funny corner cases to handle,
 specially with PHP.
That pretty much describes all of my commits that have a fixme tagged on them:

http://www.mediawiki.org/wiki/Special:Code/MediaWiki/81928
Waiting for me to have some time to turn uses of echo into $this->output() 
so that the built-in --quiet will work, instead of my own custom 
implementation of --quiet (I didn't know ->output() existed).
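
(For context, a minimal sketch of the difference, assuming a script that
extends MediaWiki's Maintenance base class; the class name and message are
invented, and the closing boilerplate varies a bit between MediaWiki versions:)

<?php
require_once( dirname( __FILE__ ) . '/Maintenance.php' );

class PurgeSomething extends Maintenance { // hypothetical script name
	public function execute() {
		// echo "Purging...\n";           // bypasses the built-in --quiet
		$this->output( "Purging...\n" );  // suppressed when --quiet is passed
	}
}

$maintClass = 'PurgeSomething';
require_once( RUN_MAINTENANCE_IF_MAIN );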

http://www.mediawiki.org/wiki/Special:Code/MediaWiki/80248
Comment gives a Tesla link saying something broke. However the Tesla 
link does not identify that commit as the guaranteed commit that 
actually broke code. The commit was followed up with several fixmes 
already and it's unknown if the breakage is still present. The commit is 
potentially perfectly functional, hit by Tesla catching a completely 
unrelated commit, or marking a bug that's already fixed.

http://www.mediawiki.org/wiki/Special:Code/MediaWiki/79639
Perfectly functional, just waiting for me to have time to add a small 
parser test for the behavior.

http://www.mediawiki.org/wiki/Special:Code/MediaWiki/79433
Of all my fixmes this one is the most bug-like... that being said, it's 
an if() anyone could add; I just haven't had time to do it.

http://www.mediawiki.org/wiki/Special:Code/MediaWiki/79383
The commit is perfectly functional: SkinTemplateNavigation and 
SkinTemplateTabs existed both before and after the commit; I just replaced 
SkinTemplateTabs with SkinTemplateNavigation. The fixme is for the fact 
that legacy skins are still using a hack based on SkinTemplateTabs which 
also needs to be updated... which, to be honest, isn't a good reason to 
revert a commit; it's pretty much orthogonal to the functionality of the 
commit.
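
(For anyone who hasn't touched those hooks, here is a rough sketch of what
that migration looks like in an extension. The function names are invented,
and the exact sections available in $links, such as 'namespaces', 'views',
'actions' and 'variants', depend on the skin and MediaWiki version:)

// Old style: SkinTemplateTabs hands the hook the flat content_actions array.
$wgHooks['SkinTemplateTabs'][] = 'efMyExtensionTabs';
function efMyExtensionTabs( $skin, &$content_actions ) {
	$content_actions['mytab'] = array( 'text' => 'My tab', 'href' => '/wiki/Foo' );
	return true;
}

// New style: SkinTemplateNavigation groups the links into named sections.
$wgHooks['SkinTemplateNavigation'][] = 'efMyExtensionNavigation';
function efMyExtensionNavigation( &$sktemplate, &$links ) {
	$links['views']['mytab'] = array( 'text' => 'My tab', 'href' => '/wiki/Foo' );
	return true;
}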

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]




Re: [Wikitech-l] Code Review tools

2011-03-26 Thread Platonides
Daniel Friesen wrote:
 http://www.mediawiki.org/wiki/Special:Code/MediaWiki/80248
 Comment gives a Tesla link saying something broke. However the Tesla 
 link does not identify that commit as the guaranteed commit that 
 actually broke code. The commit was followed up with several fixmes 
 already and it's unknown if the breakage is still present. The commit is 
 potentially perfectly functional, hit by Tesla catching a completely 
 unrelated commit, or marking a bug that's already fixed.

Come on. It is easy enough to check if your revision is the culprit.

svn up -r r80247
cd tests/phpunit/
make noparser


There was 1 failure:

1) NewParserTest::testParserTests
Bad images - basic functionality (failed: Bad images - basic functionality

There were 9 incomplete tests:

1) ApiUploadTest::testUpload
RandomImageGenerator: dictionary file not found or not specified properly

2) ApiUploadTest::testUploadSameFileName
RandomImageGenerator: dictionary file not found or not specified properly

3) ApiUploadTest::testUploadSameContent
RandomImageGenerator: dictionary file not found or not specified properly

4) ApiUploadTest::testUploadStash
RandomImageGenerator: dictionary file not found or not specified properly

5) ApiTest::testApiGotCookie
The server can't do external HTTP requests, and the internal one won't
give cookies

6) ApiWatchTest::testWatchEdit
Broken

7) ApiWatchTest::testWatchProtect
Broken

8) ApiWatchTest::testWatchRollback
Only one author to 'UTPage', cannot test rollback

9) ApiWatchTest::testWatchDelete
Broken

There were 2 skipped tests:

1) ApiTest::testApiListPages
This test depends on ApiTest::testApiGotCookie to pass.

2) ApiWatchTest::testWatchClear
This test depends on ApiWatchTest::testWatchEdit to pass.


cd ../..
svn up -r r80248
cd tests/phpunit/
make noparser

There were 2 errors:

1) ApiBlockTest::testMakeNormalBlock
htmlspecialchars() expects parameter 1 to be string, object given


2) NewParserTest::testFuzzTests
MWException: Out of memory:

--


There were 3 failures:

1) TitlePermissionTest::testQuickPermissions
Failed asserting that two arrays are equal.
--- Expected
+++ Actual
@@ @@
 Array
 (
 [0] => Array
 (
 [0] => badaccess-groups
-[1] => *, [[Local:Users|Users]]
+[1] => *, Users
 [2] => 2
 )

 )

2) TitlePermissionTest::testPageRestrictions
Failed asserting that two arrays are equal.
--- Expected
+++ Actual
@@ @@
 Array
 (
 [0] => Array
 (
 [0] => badaccess-groups
-[1] => *, [[Local:Users|Users]]
+[1] => *, Users
 [2] => 2
 )

 [1] => Array
 (
 [0] => protectedpagetext
 [1] => bogus
 )

 [2] => Array
 (
 [0] => protectedpagetext
 [1] => protect
 )

 [3] => Array
 (
 [0] => protectedpagetext
 [1] => protect
 )

 )

3) NewParserTest::testParserTests
Bad images - basic functionality (failed: Bad images - basic functionality
Failed asserting that text is equal to string:.
Bad images - basic functionality)
Failed asserting that boolean:false is true.


So r80248 did break three tests.

cd ../..
svn up
cd tests/phpunit

php phpunit.php includes/api/ApiBlockTest.php
OK (1 test, 4 assertions)

php phpunit.php includes/TitlePermissionTest.php
There was 1 failure:

1) TitlePermissionTest::testUserBlock
Failed asserting that two arrays are equal.

This is a different test, which expects "infinite" and now receives a
Message object.

The problem was fixed in trunk.




Re: [Wikitech-l] Code Review tools

2011-03-26 Thread Platonides
Roan Kattouw wrote:
 2011/3/26 Mark A. Hershberger mhershber...@wikimedia.org:
 If code is to survive past a week in the repository, it has to be
 reviewed.

 This is basically what I suggested in the other thread, except I added
 a few other conditions that have to be satisfied before we can start
 using such a paradigm (relating to reviewer allocation, discipline and
 assignment).
 
 Roan Kattouw (Catrope)

You mentioned reverting broken code.

Mark proposes reverting *unreviewed* code.

We are generally polite, marking other people's code as fixme and
avoiding reverts as much as possible. I agree with the proposal of
reverting after a few days if there is an important fixme. But reverting
new revisions just because no one has reviewed them seems to go too far
(at least at this moment).

It would make much more sense to draft some process where you have to
review the previous revision of the files you are changing. However,
that would forbid quick fixes (e.g. fixing the whitespace of the previous
commit) without fully reviewing it, which is also undesirable (the
revision stays unreviewed, and with the wrong whitespace).




Re: [Wikitech-l] Code Review tools

2011-03-26 Thread MZMcBride
Roan Kattouw wrote:
 2011/3/26 Mark A. Hershberger mhershber...@wikimedia.org:
 If code is to survive past a week in the repository, it has to be
 reviewed.
 
 This is basically what I suggested in the other thread, except I added
 a few other conditions that have to be satisfied before we can start
 using such a paradigm (relating to reviewer allocation, discipline and
 assignment).

A number of people, for quite some time, have been urging MediaWiki code
development to get back to the Brion/Tim style of "revert if broken." I'm
certainly among them, so I'm thrilled to see this discussion finally
happening. Next step is action. :-)

In addition to the other benefits, more regular reverts will (hopefully)
reduce the stigma of being reverted. The wiki model has always encouraged
boldness, but it has also equally encouraged the ability to pull back
changes as necessary. The tendency not to revert nearly as much has made a
reversion a much bigger deal, from what I've seen. Even more so (or perhaps
exclusively so) when it has involved paid work (i.e., work done by
Wikimedia Foundation employees/contractors). A move toward more reverts, as
long as it doesn't discourage new or old contributors, is definitely the way
forward, I think.

MZMcBride





Re: [Wikitech-l] Code Review tools

2011-03-25 Thread Mark A. Hershberger
Platonides platoni...@gmail.com writes:

 And no, nobody wants our review paradigm to be "let's spend several
 months on the backlog every time we want to release." It was just the
 best we managed to afford.

We've been doing a little better for the past month, but Robla's
chart[1] is still looking ugly.

So, other than switching to the mythical GIT, where all is rainbows and
roses, what can we do to improve code review now?

As far as I can see, the main reason that people think code review
works better under GIT is because the committer is responsible for
getting xyr[2] code reviewed *before* it is merged.  The committer is
motivated to get xyr code reviewed because if xe doesn't, the code
won't be used, and others will not experience its beauty.

Subversion doesn't support this review-first model.  Not unless we set
up another branch that only took reviewed code.  But then, we're back in
the same boat.  Most people would still run from trunk, and the committer
knows that xyr code will be used by a lot of people, that extensions will
probably be developed that depend upon it, etc.

So, while Subversion doesn't support review-first, it can incorporate
revert-later.  We can even use our current CodeReview tool.  We just
need to be more aggressive reverting unreviewed code.

And just to be clear, there would be a not-too-distant “later”.  I
propose a week.

If code is to survive past a week in the repository, it has to be
reviewed.

If you want to make a commit that depends on un-reviewed code, you have
to find someone to review it.  Otherwise, your commit will break trunk
when that code is reverted.

FIXMEs would disappear.  FIXMEs would be up for reversion almost
immediately.  Give the committer a day to fix the code, but if it
survives 24 hours as a FIXME, it gets reverted.

I suggest we implement this ASAP.  If we start this policy on April 4th,
we would be doing the first round of reverts April 11th.  We should
grandfather in the current code, of course.  It would be exempt from
the grim reversion reaper, but it should still be reviewed.

This solution would mean pain, but I think it would be manageable pain.
And it would be more workable than changing the VCS that the twn
(translatewiki.net) people have to work with.

Thoughts?

Mark.


Footnotes:
[1] http://toolserver.org/~robla/crstats/crstats.html — the problem is
easiest to see if you unclick “ok”.  Then you'll see the red “new”
line is creeping up again.

[2] http://en.wikipedia.org/wiki/Gender-neutral_pronoun, equivalent to
“his or her” but only jarring (till you get used to it) and not
cumbersome.


Re: [Wikitech-l] Code Review tools

2011-03-25 Thread Chad
On Fri, Mar 25, 2011 at 11:56 PM, Mark A. Hershberger
mhershber...@wikimedia.org wrote:
 I suggest we implement this ASAP.  If we start this policy on April 4th,
 we would be doing the first round of reverts April 11th.  We should
 grandfather in the current code, of course.  It would be exempt from
 the grim reversion reaper, but it should still be reviewed.


I see no reason to grandfather in current code. TBH, the list of
fixmes is appalling and we should make a sprint at saying "fix
this or it'll be reverted in 24 hours" for every last one of them.

-Chad


Re: [Wikitech-l] Code Review tools

2011-03-25 Thread Daniel Friesen
On 11-03-25 09:23 PM, Chad wrote:
 On Fri, Mar 25, 2011 at 11:56 PM, Mark A. Hershberger
 mhershber...@wikimedia.org  wrote:
 I suggest we implement this ASAP.  If we start this policy on April 4th,
 we would be doing the first round of reverts April 11th.  We should
 grandfather in the current code, of course.  It would be exempt from
 the grim reversion reaper, but it should still be reviewed.

 I see no reason to grandfather in current code. TBH, the list of
  fixmes is appalling and we should make a sprint at saying "fix
  this or it'll be reverted in 24 hours" for every last one of them.

 -Chad
All the fixmes?

What about the fixmes left open since it's not clear if anything is even 
still broken currently? The fixmes where extra things, like new tests, 
should be added but the actual commit in question isn't broken in 
any way? The fixmes for things which are perfectly functional, but need 
a minor bit of tweaking since they work perfectly fine, but don't use 
the best-practice methods to do it?


-- 
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]

