[Wikitech-l] Defining a configuration for regression testing

2009-07-08 Thread dan nessett

I am setting up a testing environment for MediaWiki, and the first thing that 
came to mind is testing new extensions against a "regression test 
configuration". That raises the question of what should constitute such a 
configuration. One issue is which extensions should be loaded.

There are over 2000 extensions in the MediaWiki extension matrix and 512 
stable extensions. It would be impractical to run a configuration with all of 
either class. So I asked around and received a suggestion that, at the very 
least, the extensions running on the Wikimedia servers should be loaded. I went to 
http://noc.wikimedia.org/conf/ and copied CommonSettings.php. From it I 
extracted 75 extensions that are used on Wikimedia's servers. I list these 
below.

A question for readers of this list: should a regression test configuration 
load only these extensions, or should it load others? Another question: what 
other settings should define a regression test configuration?

Wikimedia installed extensions:

Timeline, wikihiero, SiteMatrix, CharInsert, CheckUser,
SpecialMakesysop, Makebot, ParserFunctions, Cite, InputBox,
ExpandTemplates, ImageMap, SyntaxHighlight_GeSHi, DoubleWiki, Poem,
PovWatch, AjaxTest, UnicodeConverter, CategoryTree, ProofreadPage, lst,
SpamBlacklist, UploadBlacklist, TitleBlacklist, Quiz, Gadgets,
OggHandler, AssertEdit, FormPreloadPostCache, SkinPerPage, Schulenburg,
Tomas, ContributionReporting, ContributionTracking, ContactPage,
ExtensionDistributor, GlobalBlocking, TrustedXFF, ContactPage,
SecurePoll, OAIRepo, DynamicPageList, Nogomatch,
SpecialCrossNamespaceLinks, SpecialRenameuser, SpecialNuke, AntiBot,
TorBlock, CookieBlock, ScanSet, SpecialCite, FixedImage, UserThrottle,
ConfirmEdit, FancyCaptcha, HideRevision, AntiSpoof, CentralAuth,
DismissableSiteNotice, UsernameBlacklist, MiniDonation, CentralNotice,
TitleKey, WikimediaMessages, SimpleAntiSpam, Collection, NewUserMessage,
CodeReview, Drafts, Configure, AbuseFilter, ClientSide, CommunityVoice,
PdfHandler, UsabilityInitiative
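
For concreteness, loading such a subset in a regression-test LocalSettings.php 
would just use the usual require_once lines. A minimal sketch, not a complete 
configuration; only a few of the 75 are shown, and the entry-point file names 
are the conventional ones for these extensions:

# Hypothetical fragment of a regression-test LocalSettings.php.
require_once( "$IP/extensions/ParserFunctions/ParserFunctions.php" );
require_once( "$IP/extensions/Cite/Cite.php" );
require_once( "$IP/extensions/CheckUser/CheckUser.php" );
require_once( "$IP/extensions/Gadgets/Gadgets.php" );
require_once( "$IP/extensions/CategoryTree/CategoryTree.php" );
# ...and so on for the rest of the regression test set.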

Regards,

Dan Nessett



  



Re: [Wikitech-l] Defining a configuration for regression testing

2009-07-09 Thread dan nessett

Hi Gerard,

I am very interested in the tool you mention. Let's keep the discussion on 
list, since I suspect there are others who might want to set up a regression 
test environment either now or later.

Can you provide some pointers on how to use this tool? Is it described in the 
standard SVN documentation, or is its documentation located somewhere else? What 
is its name? Would its use allow testing against the most recent version in trunk?

My initial thoughts for such a regression test installation are:

I think some extensions involve database schema changes. My initial idea is to 
create a new installation, make all of the schema changes necessary for any 
extensions in the regression test set and then dump the database. This dump 
could then be used to set up new regression test installations (or reinitialize 
existing test installations). Perhaps the dump could be placed in Subversion in 
a "test" area.

Just loading a set of extensions in an installation doesn't really provide much 
in the way of verifying they work together. The test needs to use them 
together. One way to do this would be to create a set of pages that 
concurrently access the extensions when the pages are rendered. Knowing which 
extensions to use together is a key to this approach. Perhaps there is 
information in Bugzilla that would help determine that. Also, extension authors 
might provide some tests that could be incorporated into the test set. If you 
or others have some ideas on how to test extensions that would be very helpful.
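
As a sketch of what "using them together" could look like in an automated way, 
something along these lines might work (a hypothetical maintenance-style script; 
it assumes the same commandLine.inc bootstrap the existing maintenance scripts 
use, and the wikitext only exercises a few extension tags as an example):

<?php
# testExtensionMix.php (hypothetical) -- render wikitext that uses several
# extensions at once and report whether parsing survives.
require_once( 'commandLine.inc' );

$text  = "{{#expr: 1 + 1 }}\n";                      # ParserFunctions
$text .= "<ref>a citation</ref><references/>\n";     # Cite
$text .= "<categorytree>Help</categorytree>\n";      # CategoryTree

$title = Title::newFromText( 'Extension mix test' );
$out   = $wgParser->parse( $text, $title, new ParserOptions() );
echo strlen( $out->getText() ) . " bytes of HTML produced\n";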

When I worked on Solaris at Sun in the mid-90s, developers were required to 
regression test their changes before submitting them (through a gatekeeper) for 
inclusion in the nightly build. Those who failed to do so and broke the build 
had their hands slapped. Perhaps something similar might be established for the 
MediaWiki development process. Extension authors might be required to: 1) 
provide some extension tests that could be included in the regression test set (if 
their extensions ever become important enough to warrant that), and 2) run their 
extension tests and the standard tests against a standard regression test 
installation and provide evidence that there are no problems before their 
extensions are included in the MediaWiki extension matrix.

Dan

--- On Wed, 7/8/09, Gerard Meijssen  wrote:

> From: Gerard Meijssen 
> Subject: Re: [Wikitech-l] Defining a configuration for regression testing
> To: "Wikimedia developers" 
> Cc: "Kim Bruning" 
> Date: Wednesday, July 8, 2009, 10:28 PM
> Hoi.
> In Subversion there is a tool created for the setup of
> environments. What it
> does well is setup an environment with a specific
> configuration. This
> configuration for a specific environment can be found in a
> file. In this way
> it is possible to define a specific revision or tag for
> either MediaWiki
> itself or for an extension. The software is such that you
> can specify
> multiple languages for an environment.. As there are
> important differences
> because of the language, the script you want to be able to
> test for a key
> subset of wikis. Duplication of an initial created wiki
> works well.
> 
> When you look at ALL the extensions used wherever on WMF
> projects, you will
> find that they are not a homogeneous bunch; they are not
> used together. This
> means that you may want to have multiple environments
> configured. At this
> time there is a Wikipedia configuration and a Usability
> Initiative
> configuration. Given that the configuration is in a file,
> there is room to
> indicate a specific revision.
> 
> As you can imagine, there are scripts to install particular
> extensions that
> can not be installed in a default way.
> 
> When you have an interest, contact me or ask on this list.
> Thanks,
>       GerardM
> 
> 2009/7/9 dan nessett 
> 
> >
> > I am setting up a testing environment for mediawiki
> and the first thing
> > that came to mind is testing new extensions against a
> "regression test
> > configuration". That raises the question of what
> should constitute such a
> > configuration. One issue is which extensions should be
> loaded.
> >
> > There are over 2000 extensions in the mediawiki
> extensions matrix and 512
> > stable extensions. It would be impractical to run a
> configuration with all
> > of either class. So, I asked around and received a
> suggestion that at the
> > very least the extensions on the wikimedia servers
> should be loaded. I went
> > to  http://noc.wikimedia.org/conf/ and copied
> CommonSettings.php. From it
> > I extracted 75 extensions that are used on wikimedia's
> servers. I list these
> > below.
> >
> > A question for readers of this l

Re: [Wikitech-l] Defining a configuration for regression testing

2009-07-09 Thread dan nessett

--- On Thu, 7/9/09, Chad  wrote:

> 
> Hell, we barely have unit tests for Mediawiki itself, much
> less the many many
> extensions in SVN. I can't think of a single one, offhand.
> 
> FWIW, handling updates between versions is a mess. There
> are two accepted
> and documented ways to apply an extension's schema updates.
> There needs
> to be one, period. There also needs to be a cleaner Update
> interface so things
> like this can be handled more cleanly.
> 
> It's nice and great to talk about automated regression
> testing of the software,
> but in reality there is no clean way to do it right now. I
> really admire Gerard
> and Kim's work on this, but it's really a hack on top of a
> system that should
> support this stuff natively.
> 
> Regression testing should be automatic, the test cases
> should be standardized,
> and extensions should have an easy way to add their own
> tests to the core
> set of them. None of these are currently the case. There's
> a bug open about
> running parserTests and/or test cases in CodeReview so we
> can easily and
> verifiably track regressions in the software. Can't do that
> until the system
> makes some sense to begin with :)
> 
> -Chad

Hmm. Not the perfect situation :D . But, as a manager once told me: baby steps, 
Dan, baby steps. So I think an informal plan to incrementally improve testing 
of MediaWiki would be useful. One idea is to broadcast an appeal for testing 
engineers to help rectify the situation. I am retired myself, and I suspect 
there are a bunch of retired testing engineers out there who might be willing to 
help. Of course, figuring out how to reach them is the main problem.

Dan


  



[Wikitech-l] How do you run the parserTests?

2009-07-10 Thread dan nessett

I have been struggling to figure out how to run the parser tests. From the very 
limited documentation in the code, it appears you are supposed to run them from 
a terminal. However, when I cd to the maintenance directory and type "php 
parserTests.php" I get the following error message.

Parse error: parse error, expecting `T_OLD_FUNCTION' or `T_FUNCTION' or `T_VAR' 
or `'}'' in /Users/dnessett/Sites/Mediawiki/maintenance/parserTests.inc on line 
43

Either there is some setup necessary that I haven't done, parserTests.php is 
not the appropriate "top-level" target for execution, you are not supposed 
to run these tests from the terminal, or there is something else I am doing 
wrong.

I tried to find documentation on how to run the tests without success. If I 
simply haven't looked in the right place, a quick pointer to the appropriate 
instructions would be great. Otherwise, I wonder if someone could instruct me 
how to run them.

Thanks,

Dan


  



Re: [Wikitech-l] How do you run the parserTests?

2009-07-10 Thread dan nessett

--- On Fri, 7/10/09, Aryeh Gregor  wrote:

> From: Aryeh Gregor 
> Subject: Re: [Wikitech-l] How do you run the parserTests?
> To: "Wikimedia developers" 
> Date: Friday, July 10, 2009, 1:51 PM
> On Fri, Jul 10, 2009 at 3:59 PM, dan
> nessett
> wrote:
> >
> > I have been struggling to figure out how to run the
> parser tests. From the very limited documentation in the
> code, it appears you are supposed to run them from a
> terminal. However, when I cd to the maintenance directory
> and type "php parserTests.php" I get the following error
> message.
> >
> > Parse error: parse error, expecting `T_OLD_FUNCTION'
> or `T_FUNCTION' or `T_VAR' or `'}'' in
> /Users/dnessett/Sites/Mediawiki/maintenance/parserTests.inc
> on line 43
> >
> > Either there is some setup necessary that I haven't
> done; parserTests.php is not the appropriate "top-level"
> target for the execution; you are not supposed to run these
> tests from the terminal; or there is something else I am
> doing wrong.
> 
> You're running them correctly.  It sounds like your
> installation is
> broken.  Please say exactly what version of MediaWiki
> you're using
> (from Special:Version), and make sure that parserTests.inc
> was
> correctly copied from the download.  Paste line 43 and
> the surrounding
> lines here.

MediaWiki   1.14.0
PHP 5.2.6 (apache2handler)
MySQL   5.0.41

I am running this on my Macintosh using MAMP 
(http://www.mamp.info/en/index.html). parserTests.inc looks OK to me (but I 
don't know what to look for). Lines 28-48 are:

=

$options = array( 'quick', 'color', 'quiet', 'help', 'show-output', 'record' );
$optionsWithArgs = array( 'regex', 'seed' );

require_once( 'commandLine.inc' );
require_once( "$IP/maintenance/parserTestsParserHook.php" );
require_once( "$IP/maintenance/parserTestsStaticParserHook.php" );
require_once( "$IP/maintenance/parserTestsParserTime.php" );

/**
 * @ingroup Maintenance
 */
class ParserTest {
	/**
	 * boolean $color whereas output should be colorized
	 */
	private $color;

	/**
	 * boolean $showOutput Show test output
	 */
	private $showOutput;

=

Dan


  



Re: [Wikitech-l] How do you run the parserTests?

2009-07-10 Thread dan nessett

It occurred to me that you perhaps meant line 43 of parserTests.php. Here are 
lines 28-53 of it (as in the previous message, the blank line after the = 
separator doesn't count in the line numbering):

if( isset( $options['help'] ) ) {
	echo <<<ENDS
...
  --file=<testfile> Run test cases from a custom file instead of parserTests.txt
  --record         Record tests in database
  --compare        Compare with recorded results, without updating the database.
  --setversion     When using --record, set the version string to use (useful
                   with git-svn so that you can get the exact revision)
  --keep-uploads   Re-use the same upload directory for each test, don't delete it
  --fuzz           Do a fuzz test instead of a normal test
  --seed <n>       Start the fuzz test from the specified seed
  --help           Show this help message


ENDS;
	exit( 0 );
}


  



Re: [Wikitech-l] How do you run the parserTests?

2009-07-10 Thread dan nessett

--- On Fri, 7/10/09, Aryeh Gregor  wrote:

> From: Aryeh Gregor 
> Subject: Re: [Wikitech-l] How do you run the parserTests?
> To: "Wikimedia developers" 
> Date: Friday, July 10, 2009, 3:01 PM
> On Fri, Jul 10, 2009 at 5:46 PM, dan
> nessett
> wrote:
> > MediaWiki       1.14.0
> > PHP     5.2.6 (apache2handler)
> > MySQL   5.0.41
> >
> > ...
> >        private $color;
> 
> I'm going to bet that php on your command line is actually
> PHP 4, not
> PHP 5.  Try php -v to check this.  Using "php5
> parserTests.php" might
> work.
> 

Good guess. It appears I have both php4 and php5 installed.
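
For anyone else who hits this: PHP 4 chokes on the "private" members in 
parserTests.inc, which is exactly the parse error above. A tiny script run 
through each binary makes the difference obvious (the file name is just an 
example):

<?php
# whichphp.php (hypothetical) -- print the version of the interpreter running it.
# Run it as "php whichphp.php" and as "php5 whichphp.php" and compare.
echo PHP_VERSION . ' (' . PHP_SAPI . ")\n";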

Thanks.

Dan


  



[Wikitech-l] Is this the right list to ask questions about parserTests

2009-07-10 Thread dan nessett

I don't want to irritate people by asking inappropriate questions on this list. 
So please direct me to the right list if this is the wrong one for this 
question.

I ran parserTests and 45 tests failed. The result was:

Passed 559 of 604 tests (92.55%)... 45 tests failed!

I expect this indicates a problem, but sometimes test suites are set up so 
certain tests fail. Is this result good or bad?

Dan


  



Re: [Wikitech-l] Is this the right list to ask questions about parserTests

2009-07-14 Thread dan nessett

Can anyone tell me which of the parser tests are supposed to fail? Also, is 
there a trunk version for which only these tests fail?

--- On Fri, 7/10/09, Aryeh Gregor  wrote:

> From: Aryeh Gregor 
> Subject: Re: [Wikitech-l] Is this the right list to ask questions about 
> parserTests
> To: "Wikimedia developers" 
> Date: Friday, July 10, 2009, 3:49 PM
> On Fri, Jul 10, 2009 at 6:35 PM, dan
> nessett
> wrote:
> > I don't want to irritate people by asking
> inappropriate questions on this list. So please direct me to
> the right list if this is the wrong one for this question.
> >
> > I ran parserTests and 45 tests failed. The result
> was:
> >
> > Passed 559 of 604 tests (92.55%)... 45 tests failed!
> >
> > I expect this indicates a problem, but sometimes test
> suites are set up so certain tests fail. Is this result good
> or bad?
> 
> We usually have about 14 failures.  We should really
> be able to mark
> them as expected, but our testing framework doesn't support
> that at
> the moment.  The current workaround is to use --record
> and --compare,
> but that's a pain for a few reasons.
> 
> I get 49 test failures.  It looks like someone broke a
> lot of stuff.
> It happens; frankly, we don't take testing too seriously
> right now.
> There are no real automated warnings.  Brion used to
> have a bot post
> parser test results daily to wikitech-l, but that was
> discontinued.
> So people tend to break parser tests without noticing.
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  



Re: [Wikitech-l] Is this the right list to ask questions about parserTests

2009-07-14 Thread dan nessett

Thanks. From your response I'm not sure whether these tests are "supposed" to fail 
(there are test suites that have tests like that) or whether they are supposed to 
succeed but bugs in the parser or other code cause them to fail. 
Can you clarify?

--- On Tue, 7/14/09, Aryeh Gregor  wrote:

> From: Aryeh Gregor 
> Subject: Re: [Wikitech-l] Is this the right list to ask questions about 
> parserTests
> To: "Wikimedia developers" 
> Date: Tuesday, July 14, 2009, 3:40 PM
> On Tue, Jul 14, 2009 at 5:16 PM, dan
> nessett
> wrote:
> > Can anyone tell me which of the parser tests are
> supposed to fail? Also, is there a trunk version for which
> only these tests fail?
> 
> These are the perpetual failures:
> 
>   13 still FAILING test(s) :(
>     * Table security: embedded pipes
>       (http://lists.wikimedia.org/mailman/htdig/wikitech-l/2006-April/022293.html)  [Has never passed]
>     * Link containing double-single-quotes '' (bug 4598)  [Has never passed]
>     * HTML bullet list, unclosed tags (bug 5497)  [Has never passed]
>     * HTML ordered list, unclosed tags (bug 5497)  [Has never passed]
>     * HTML nested bullet list, open tags (bug 5497)  [Has never passed]
>     * HTML nested ordered list, open tags (bug 5497)  [Has never passed]
>     * Inline HTML vs wiki block nesting  [Has never passed]
>     * dt/dd/dl test  [Has never passed]
>     * Images with the "|" character in the comment  [Has never passed]
>     * Bug 6200: paragraphs inside blockquotes (no extra line breaks)  [Has never passed]
>     * Bug 6200: paragraphs inside blockquotes (extra line break on open)  [Has never passed]
>     * Bug 6200: paragraphs inside blockquotes (extra line break on close)  [Has never passed]
>     * Bug 6200: paragraphs inside blockquotes (extra line break on open and close)  [Has never passed]
> 
> r51509 is a revision on which they're the only failures,
> but it's
> pretty old (there's probably a somewhat more recent
> one).  The
> breakage looks like it occurred in r52213 and r52726,
> according to
> 
> git bisect start trunk `git svn find-rev r51509` &&
> git bisect run php
> phase3/maintenance/parserTests.php --regex 'Section
> headings with TOC'
> git bisect start trunk `git svn find-rev r51509` &&
> git bisect run php
> phase3/maintenance/parserTests.php --regex
> ' after
> '
> 
> (yay git!).
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  



Re: [Wikitech-l] Is this the right list to ask questions about parserTests

2009-07-14 Thread dan nessett

Hm. Sounds like an opportunity. How about MediaWiki issuing a grand challenge: 
create a well-documented, well-structured (open source) parser that produces the 
same results as the current parser on 98% of Wikipedia pages. The prize is bragging 
rights and a letter of commendation from someone or other. I suspect there are 
a bunch of graduate students out there who would find the challenge 
interesting.

Rationalizing the parser would help the development process. For the 2% of 
pages that fail, challenge others to fix them. The key is not getting stuck in 
the "we need a formal syntax" debate. If the challengers want to create a 
formal syntax, that is up to them. MediaWiki should only be interested in the 
final results.

--- On Tue, 7/14/09, Aryeh Gregor  wrote:
 
> They're supposed to pass, in theory, but never have. 
> Someone wrote
> the tests and the expected output at some point as a sort
> of to-do
> list.  I don't know why we keep them, since they just
> confuse
> everything and make life difficult.  (Using the
> --record and --compare
> options helps, but they're not that convenient.)  All
> of them would
> require monkeying around with the parser that nobody's
> willing to do,
> since the parser is a hideous mess that no one understands
> or wants to
> deal with unless absolutely necessary.
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  



Re: [Wikitech-l] Is this the right list to ask questions about parserTests

2009-07-14 Thread dan nessett

Well, it's just an idea. I'm not going to bet my house on its acceptance. But 
here are some thoughts on why it might work.

MediaWiki powers an awful lot of wikis, some used by businesses that cannot 
afford instability in their operations. It is in their interest to ensure the 
software remains maintainable, so they might be willing to provide some funding. In 
addition, I'm sure MediaWiki is used by some parts of government (both US 
and other countries), so there might be some funding available through those 
channels.

As to whether it is an interesting challenge, I agree that writing a new parser in 
and of itself isn't. But reengineering a heavily used software product that 
has to keep working during the process is a significant software reengineering 
headache. I once worked on a system that attempted to do that and we failed. It 
took us 10 years to transition (we actually got it into production for a while) 
and by that time everything had changed. They ultimately threw it away. The 
grand challenge is to do "rapid" software reengineering.

In regards to the 2%, you could stipulate that the solution must provide tools 
to automatically convert the 2% (or the vast majority of them).

Anyway, it's only an idea. I think the biggest impediment is that it requires someone 
with both a commitment to it and significant juice to spearhead it. That is 
probably why it wouldn't work.

--- On Tue, 7/14/09, Aryeh Gregor  wrote:

> From: Aryeh Gregor 
> Subject: Re: [Wikitech-l] Is this the right list to ask questions about 
> parserTests
> I suspect nobody's going to stand a chance without
> funding.
> 
> $ cat includes/parser/*.php | wc -l
> 11064
> 
> That's not the kind of thing most people write for an
> interesting challenge.
> 
> Also, you realize that 2% of pages would mean 350,000 pages
> on the
> English Wikipedia alone?  Probably a million pages
> across all
> Wikimedia wikis?  And who knows how many if you
> include third-party
> wikis?
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  



[Wikitech-l] MW QA

2009-07-16 Thread dan nessett

I have never been a QA engineer. However, it doesn't require great experience 
to see that the MW software development process is broken. I provide the 
following comments not in a destructive spirit. The success of the MW software 
is obvious. However, in my view unless the development process introduces some 
QA procedures, the code eventually will collapse and its reputation will 
degrade.

My interest in MW (the software, not the organization) is driven by a desire to 
provide an enhancement in the form of an extension. So, I began by building a 
small development environment on my machine (a work in progress). Having 
developed software for other organizations, I intuitively sought out what I 
needed in terms of testing in order to provide a good quality extension. This 
meant I needed to develop unit tests for my extension and also to perform 
regression testing on the main code base after installing it. Hence some of my 
previous questions to this email list.

It soon became apparent that the MW development process has few or no 
testing procedures. Sure, there are the parser tests, but I couldn't find any 
requirement that developers run them before submitting patches.

Out of curiosity, I decided to download 1.16a (r52088), use the LocalSettings 
file from my local installation (1.14), and run some parser tests. This is not a 
scientific experiment, since the only justification for using these particular 
extensions in the tests is that I had them installed in my personal wiki. However, 
there is at least one thing to learn from them. The results are:

Mediawiki 52088 Parser Tests

Extensions : 1) Nuke, 2) Renameuser, 3) Cite, 4) ParserFunctions, 5) CSS Style 
Sheets, 6) ExpandTemplates, 7) Gadgets, 8) Dynamic Page List, 9) Labeled 
Section Transclusion. The last extension has 3 require_once files: a) lst.php, 
b) lsth.php, and c) compat.php.

Test  Extensions           Parser test failures

1     1,2,3,4,5,6,7,8,9    19
2     1                    14
3     2                    14
4     3                    14
5     4                    14
6     5                    14
7     6                    14
8     7                    14
9     8                    14
10    9 (abc)              19
11    9 (a)                18
12    9 (ab)               19
13    1,2,3,4,6,7          14

Note that the extension that introduces all of the unexpected parser test 
failures is Labeled Section Transclusion. According to its documentation, it is 
installed on *.wikisource.org, test.wikipedia.org, and en.wiktionary.org.

I am new to this development community, but my guess is since there are no 
testing requirements for extensions, its author did not run parser tests before 
publishing it. (I don't mean to slander him and I am open to the correction 
that it ran without unexpected errors on the MW version he tested against.)

This rather long preamble leads me to the point of this email. The MW software 
development process needs at least some rudimentary QA procedures. Here are 
some thoughts on this. I offer these to initiate debate on this issue, not as 
hard positions.

* Before a developer commits a patch to the code base, he must run parser tests 
against the change. The patch should not be committed if it increases the 
number of parser test failures. He should document the results in the bugzilla 
bug report.

* If a developer commits a patch without running parser tests, or commits a 
patch that increases the number of parser test failures, he should be warned. 
If he does this again within some time interval (say 6 months), his commit 
privileges are revoked for some period of time (say 6 months). The second time 
he becomes a candidate for commit privilege revocation, they are revoked 
permanently.

* An extension developer also should run parser tests against a MW version with 
the extension installed. The results of this should be provided in the 
extension documentation. An extension should not be added to the extension 
matrix unless it provides this information.

* A test harness that performs regression tests (currently only parser tests) 
against every trunk version committed in the last 24 hours should be run 
nightly; a rough sketch of such a harness follows this list. The installed 
extensions should be those used on the WMF machines. The results should be 
published on some page on the MediaWiki site. If any version increases the 
number of parser test failures, the procedure described above for developers 
is initiated.

* A group of developers should have the responsibility of reviewing the nightly 
test results to implement this QA process.
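
To make the harness idea concrete, here is a very rough sketch (assumptions: it 
runs from the root of an SVN working copy with the chosen extensions configured, 
parserTests.php exits non-zero when tests fail, and the revision list and result 
publication are stubbed out with placeholders):

<?php
# nightlyParserTests.php (hypothetical) -- run parser tests against each trunk
# revision committed since the last run and record the outcome of each run.
$revisions = array( 53551, 53560, 53568 );   # placeholder: revisions from the last 24 hours
foreach ( $revisions as $rev ) {
	passthru( "svn update -r $rev > /dev/null" );
	passthru( 'php maintenance/parserTests.php --quiet', $status );
	echo "r$rev: parserTests exit status $status\n";   # publish this somewhere visible
}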

I am sure there are a whole bunch of other things that might be done to improve 
MW QA. The point of this message is to initiate a discussion on what those 
might be.


  


Re: [Wikitech-l] MW QA

2009-07-16 Thread dan nessett

I understand. This was pointed out in a previous thread (see "Is this the right 
list to ask questions about parserTests").

--- On Thu, 7/16/09, Michael Rosenthal  wrote:

> From: Michael Rosenthal 
> Subject: Re: [Wikitech-l] MW QA
> To: "Wikimedia developers" 
> Date: Thursday, July 16, 2009, 8:59 AM
> Please note that there are some
> parser tests which in theory should
> pass but never did in any version (thus they were not
> implemented in
> the software).
> 
> On Thu, Jul 16, 2009 at 5:55 PM, dan nessett
> wrote:
> >
> > I have never been a QA engineer. However, it doesn't
> require great experience to see that the MW software
> development process is broken. I provide the following
> comments not in a destructive spirit. The success of the MW
> software is obvious. However, in my view unless the
> development process introduces some QA procedures, the code
> eventually will collapse and its reputation will degrade.
> >
> > My interest in MW (the software, not the organization)
> is driven by a desire to provide an enhancement in the form
> of an extension. So, I began by building a small development
> environment on my machine (a work in progress). Having
> developed software for other organizations, I intuitively
> sought out what I needed in terms of testing in order to
> provide a good quality extension. This meant I needed to
> develop unit tests for my extension and also to perform
> regression testing on the main code base after installing
> it. Hence some of my previous questions to this email list.
> >
> > It soon became apparent that the MW development
> process has little or no testing procedures. Sure, there are
> the parser tests, but I couldn't find any requirement that
> developers had to run them before submitting patches.
> >
> > Out of curiosity, I decided to download 1.16a
> (r52088), use the LocalSettings file from my local
> installation (1.14) and run some parser tests. This is not a
> scientific experiment, since the only justification for
> using these extensions in the tests is I had them installed
> in my personal wiki. However, there is at least one thing to
> learn from them. The results are:
> >
> > Mediawiki 52088 Parser Tests
> >
> > Extensions : 1) Nuke, 2) Renameuser, 3) Cite, 4)
> ParserFunctions, 5) CSS Style Sheets, 6) ExpandTemplates, 7)
> Gadgets, 8) Dynamic Page List, 9) Labeled Section
> Transclusion. The last extension has 3 require_once files:
> a) lst.php, b) lsth.php, and c) compat.php.
> >
> > Test  Extensions           Parser test failures
> >
> > 1     1,2,3,4,5,6,7,8,9    19
> > 2     1                    14
> > 3     2                    14
> > 4     3                    14
> > 5     4                    14
> > 6     5                    14
> > 7     6                    14
> > 8     7                    14
> > 9     8                    14
> > 10    9 (abc)              19
> > 11    9 (a)                18
> > 12    9 (ab)               19
> > 13    1,2,3,4,6,7          14
> >
> > Note that the extension that introduces all of the
> unexpected parser test failures is Labeled Section
> Transclusion. According to its documentation, it is
> installed on *.wikisource.org, test.wikipedia.org, and
> en.wiktionary.org.
> >
> > I am new to this development community, but my guess
> is since there are no testing requirements for extensions,
> its author did not run parser tests before publishing it. (I
> don't mean to slander him and I am open to the correction
> that it ran without unexpected errors on the MW version he
> tested against.)
> >
> > This rather long preamble leads me to the point of
> this email. The MW software development process needs at
> least some rudimentary QA procedures. Here are some thoughts
> on this. I offer these to initiate debate on this issue, not
> as hard positions.
> >
> > * Before a developer commits a patch to the code base,
> he must run parser tests against the change. The patch
> should not be committed if it increases the number of parser
> test failures. He should document the results in the
> bugzilla bug report.
> >
> > * If a developer commits a patch without running
> parser tests or commits a patch that increases the number of
> parser test failures, he should be warned. If he does this
> another time with some time interval (say

Re: [Wikitech-l] MW QA

2009-07-16 Thread dan nessett

Sure. I'll give it a try. It would be a good way to learn more about the code 
base.

--- On Thu, 7/16/09, Brion Vibber  wrote:

> From: Brion Vibber 
> Subject: Re: [Wikitech-l] MW QA
> To: "Wikimedia developers" 
> Date: Thursday, July 16, 2009, 9:40 AM
> dan nessett wrote:
> > I understand. This was pointed out in a previous
> thread (see "Is this
> > the right list to ask questions about parserTests").
> 
> Dan, I certainly can say it'd be great to extend the parser
> test suite 
> with a "known to fail" switch we can add to the tests which
> are known to 
> fail. This would be helpful to folks trying to sort out
> their changes 
> from longstanding minor issues without just losing the
> information in 
> those tests [eg, that we know there are cases where we
> produce bad tag 
> nesting, etc].
> 
> The test suite can definitely use some love; interested in
> poking at it?
> 
> -- brion
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  



[Wikitech-l] database schema error

2009-07-16 Thread dan nessett

I tried to run parserTests on the latest version in Trunk (r53382). However, I 
get the following error:

A database error has occurred
Query: SELECT  lc_value  FROM `mw_l10n_cache`  WHERE lc_lang = 'en' AND lc_key 
= 'deps'  LIMIT 1  
Function: LCStore_DB::get
Error: 1146 Table 'wikidb.mw_l10n_cache' doesn't exist (localhost)

Since I can run the tests on 1.16a (r52088), this suggests a database schema 
change was introduced somewhere between r52088 and the current trunk. Is this 
correct? If so, how do I update my database to conform to the new schema?


  



Re: [Wikitech-l] database schema error

2009-07-16 Thread dan nessett

Thanks. Just so I completely understand, update.php will not affect the use of 
the underlying database by my personal wiki installation, which uses 1.14 (as 
opposed to the test wikis I will work on during development), right?

--- On Thu, 7/16/09, Aryeh Gregor  wrote:

> From: Aryeh Gregor 
> Subject: Re: [Wikitech-l] database schema error
> To: "Wikimedia developers" 
> Date: Thursday, July 16, 2009, 5:20 PM
> On Thu, Jul 16, 2009 at 6:25 PM, dan
> nessett
> wrote:
> > I tried to run parserTests on the latest version in
> Trunk (r53382). However, I get the following error:
> >
> > A database error has occurred
> > Query: SELECT  lc_value  FROM `mw_l10n_cache`
>  WHERE lc_lang = 'en' AND lc_key = 'deps'  LIMIT 1
> > Function: LCStore_DB::get
> > Error: 1146 Table 'wikidb.mw_l10n_cache' doesn't exist
> (localhost)
> >
> > Since I can run the tests on 1.16a (r52088), this
> suggests there is a database schema change contemplated
> between 1.16a and REL1_16. Is this correct? If so, how do I
> update my database to conform to the new schema?
> 
> You need to run maintenance/update.php every time you
> upgrade the
> software.  This will apply schema changes, mostly, as
> well as anything
> else that happens to be necessary.  Running it when
> you don't need to
> is harmless, so if you want to be on the safe side you
> could run it on
> every svn up.
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l


  



[Wikitech-l] "known to fail" switch added to parserTests

2009-07-20 Thread dan nessett

I have modified parserTests to take a "known to fail" switch so those tests 
that have always failed now pass. It was pretty simple. It only required 3 
changes to parserTests.inc and some editing on parserTests.txt. I added an 
option for each test called flipresult. When this option is specified, the test 
succeeds when it fails and vice versa.

I have tested the modified parserTest on 1.16a running over a 1.14 schema 
database. However, I have run into a problem attempting to install the latest 
trunk revision so I can test against it. Specifically, I added a database 
called wikitestdb so I would have a production and a test wiki. However, when I 
checked out the latest trunk revision, ran the install script and update.php, 
and then accessed http:///index.php, the installation gets into an 
infinite redirect loop. When I attempted to debug this (using NetBeans 6.7 and 
Xdebug) the redirect doesn't appear; that is, Main_Page is rendered and 
displayed. The only difference between the two URLs is that the first uses 
http:///index.php (which redirects to http:///index.php/Main_Page), while the debug session specifies 
http://localhost/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php?XDEBUG_SESSION_START=netbeans-xdebug.

I need some help figuring out what is happening. I imagine using this list for 
that purpose would be inappropriate. So, if someone would volunteer to help me 
(email me at the from address in this email), I can get the parserTest changes 
tested against the newest revision. I can then open a bug (or use an already 
open bug) and attach the patch and edited parserTests.txt file to it.


  



Re: [Wikitech-l] "known to fail" switch added to parserTests

2009-07-20 Thread dan nessett

Correction: when I tried to create a patch for parserTests.inc and 
parserTests.txt, I discovered the directory I was working from was not under SVN 
control. So I created a new working copy of r52088, and when I ran the tests, 
they reported a database selection error. So I cannot claim the tests work 
for 1.16a under a 1.14 schema. I am now checking out REL1_14 and will run the 
modified parserTests under it.

--- On Mon, 7/20/09, dan nessett  wrote:

> From: dan nessett 
> Subject: [Wikitech-l] "known to fail" switch added to parserTests
> To: wikitech-l@lists.wikimedia.org
> Date: Monday, July 20, 2009, 8:09 AM
> 
> I have modified parserTests to take a "known to fail"
> switch so those tests that have always failed now pass. It
> was pretty simple. It only required 3 changes to
> parserTests.inc and some editing on parserTests.txt. I added
> an option for each test called flipresult. When this option
> is specified, the test succeeds when it fails and vice
> versa.
> 
> I have tested the modified parserTest on 1.16a running over
> a 1.14 schema database. However, I have run into a problem
> attempting to install the latest trunk revision so I can
> test against it. Specifically, I added a database called
> wikitestdb so I would have a production and test wiki.
> However, when I checked out the latest trunk revision, ran
> the install script and update.php, and then accessed
> http:///index.php the installation gets
> into a infinite redirect loop. When I attempted to debug
> this (using netbeans 6.7 and Xdebug) the redirect doesn't
> appear. That is, Main_Page is rendered and displayed. The
> only difference between the two URLs are the first uses
> http:///index.php (which redirects to
> http:///index.php/Main_Page), while the
> debug session specifies 
> http://localhost/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php?XDEBUG_SESSION_START=netbeans-xdebug.
> 
> I need some help figuring out what is happening. I imagine
> using this list for that purpose would be inappropriate. So,
> if someone would volunteer to help me (email me at the from
> address in this email), I can get the parserTest changes
> tested against the newest revision. I can then open a bug
> (or use an already open bug) and attach the patch and edited
> parserTests.txt file to it.
> 
> 
>       
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  



Re: [Wikitech-l] "known to fail" switch added to parserTests

2009-07-20 Thread dan nessett

OK. I think I am now in sync. I checked out 1.14.1 (from the branches arm of 
r53568) and merged the local changes to parserTests.inc and parserTests.txt. 
Out of the box, 1.14.1 generates 40 parser test failures. With the code 
modifications, it generates 24 (i.e., 14 fewer). I have the patches for 1.14.1 
and can attach them to an appropriate bug or create a new bug and attach them 
there.

I also have the patches to r53551, which I can't test because, as I stated 
previously, my testwiki gets into a redirect loop. No one has stepped forward 
to help, so I can either provide these for someone else to test or wander 
around trying to figure out why the redirect loop occurs on my own. One further 
bit of info on this - when I get to the main page using the debugger, any local 
link I attempt to navigate to leads me back to the main page, which dies 
because of the redirect loop. This suggests something is stuck in the redirect 
logic and everything (including the main page) is redirecting back to the main 
page.

If someone could let me know how to proceed with the patches, that would help.

--- On Mon, 7/20/09, dan nessett  wrote:

> From: dan nessett 
> Subject: Re: [Wikitech-l] "known to fail" switch added to parserTests
> To: "Wikimedia developers" 
> Date: Monday, July 20, 2009, 11:01 AM
> 
> Correction: When I tried to create a patch for
> parserTests.inc and parserTest.txt I discovered the
> directory I was working from was not under SVN control. So,
> I created a new working copy of 52088 and when I ran the
> tests, they reported a databased selection error. So, I
> cannot claim the tests work for 1.16a under a 1.14 schema. I
> am now checking out REL1_14 and will run the modified
> parserTest under it.
> 
> --- On Mon, 7/20/09, dan nessett 
> wrote:
> 
> > From: dan nessett 
> > Subject: [Wikitech-l] "known to fail" switch added to
> parserTests
> > To: wikitech-l@lists.wikimedia.org
> > Date: Monday, July 20, 2009, 8:09 AM
> > 
> > I have modified parserTests to take a "known to fail"
> > switch so those tests that have always failed now
> pass. It
> > was pretty simple. It only required 3 changes to
> > parserTests.inc and some editing on parserTests.txt. I
> added
> > an option for each test called flipresult. When this
> option
> > is specified, the test succeeds when it fails and
> vice
> > versa.
> > 
> > I have tested the modified parserTest on 1.16a running
> over
> > a 1.14 schema database. However, I have run into a
> problem
> > attempting to install the latest trunk revision so I
> can
> > test against it. Specifically, I added a database
> called
> > wikitestdb so I would have a production and test
> wiki.
> > However, when I checked out the latest trunk revision,
> ran
> > the install script and update.php, and then accessed
> > http:///index.php the installation
> gets
> > into a infinite redirect loop. When I attempted to
> debug
> > this (using netbeans 6.7 and Xdebug) the redirect
> doesn't
> > appear. That is, Main_Page is rendered and displayed.
> The
> > only difference between the two URLs are the first
> uses
> > http:///index.php (which redirects
> to
> > http:///index.php/Main_Page), while
> the
> > debug session specifies 
> > http://localhost/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php?XDEBUG_SESSION_START=netbeans-xdebug.
> > 
> > I need some help figuring out what is happening. I
> imagine
> > using this list for that purpose would be
> inappropriate. So,
> > if someone would volunteer to help me (email me at the
> from
> > address in this email), I can get the parserTest
> changes
> > tested against the newest revision. I can then open a
> bug
> > (or use an already open bug) and attach the patch and
> edited
> > parserTests.txt file to it.
> > 
> > 
> >       
> > 
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> > 
> 
> 
>       
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  



Re: [Wikitech-l] "known to fail" switch added to parserTests

2009-07-21 Thread dan nessett

The change I made was to add a "flipresult" option that simply turns a success 
into a failure and a failure into a success. This is what I understood I was 
asked to do. On the plus side, this approach also allows the addition of parser 
tests that are supposed to fail (not just have always failed). On the negative 
side it does hide problems that perhaps should remain in the open.

I just looked at the code and it shouldn't be hard to add a knowntofail option 
that acts like flipresult and then add a new category of test result that 
specifies how many known to fail results occurred. However, one issue is 
whether known to fail should count against success/failure (when computing the 
percentage of tests that failed) or whether these results should not count 
toward either.
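
The reporting question in numbers, using made-up figures close to the earlier run 
(559 passes, 45 failures of which 14 are known to fail), just to show the two 
ways of counting:

<?php
# Should known-to-fail tests count in the pass percentage or not?
$passed = 559; $failed = 31; $knownToFail = 14;
printf( "Counting known-to-fail as failures: %d/%d (%.2f%%)\n",
	$passed, $passed + $failed + $knownToFail,
	100 * $passed / ( $passed + $failed + $knownToFail ) );
printf( "Ignoring known-to-fail entirely:    %d/%d (%.2f%%)\n",
	$passed, $passed + $failed,
	100 * $passed / ( $passed + $failed ) );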

Would someone tell me where the redirect from index.php to index.php/Main_Page 
occurs in the page processing flow?

--- On Tue, 7/21/09, Kwan Ting Chan  wrote:

> From: Kwan Ting Chan 
> Subject: Re: [Wikitech-l] "known to fail" switch added to parserTests
> To: "Wikimedia developers" 
> Date: Tuesday, July 21, 2009, 4:43 AM
> Aryeh Gregor wrote:
> > On Tue, Jul 21, 2009 at 9:56 AM, Gerard
> > Meijssen
> wrote:
> >> There is no point having a perfect score when it
> is actually a lie. It seems
> >> to me that Brion is against the removal of these
> tests because he wants them
> >> to pass. Having a third state of "known to fail"
> makes sense, just changing
> >> them to pass makes it necessary to add a "citation
> needed" because it is
> >> just not true.
> > 
> > Nobody's changing them to pass.
> 
> I can understand where Gerard got the impression:
> 
> "I have modified parserTests to take a "known to fail"
> switch so those tests that have always failed now pass." -
> dan nessett 2009-07-20 16:09
> 
> KTC
> 
> -- Experience is a good school but the fees are high.
>     - Heinrich Heine
> 
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l


  



[Wikitech-l] Continually falling through InitializeSpecialCases else-if

2009-07-21 Thread dan nessett

Can someone please help? I have only been working on this code for 5 days and I 
do not yet understand it. It turns out that the redirect happens in 
InitializeSpecialCases. The reason I get into a redirect loop is that the code 
continually falls into the else-if statement with the following condition:

else if( $action == 'view' && !$request->wasPosted() &&
	( !isset( $this->GET['title'] ) ||
		$title->getPrefixedDBKey() != $this->GET['title'] ) &&
	!count( array_diff( array_keys( $this->GET ), array( 'action', 'title' ) ) ) )

This is a sufficiently complex expression that I have no idea what each term 
represents. There must be someone out there who understands it. I just need 
someone to explain it so I can figure out what is going wrong.


  



Re: [Wikitech-l] Continually falling through InitializeSpecialCases else-if

2009-07-21 Thread dan nessett

Thanks! Here are the values that cause entry into the else-if statement:

$targetUrl === 'http://localhost/MediawikiTest/Latest Trunk 
Version/phase3/index.php/Main_Page'

$action === 'view'

$request->data === 

$this->GET === 

$title->mDbkeyform === 'Main_Page'

_SERVER[REQUEST_METHOD] === GET

I can see why the redirect else-if is entered (no title= parameter, $action === 
view), but the targetUrl looks OK to me. I'm not sure why the logic should 
analyze this case as a redirect.

P.S. When I previously dumped the GET request using HttpFox, it showed the 
following (I haven't figured out how to configure NetBeans to use Firefox, so 
this dump is from a separate run not using the debugger):

(Request-Line)    GET 
/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page HTTP/1.1
Host    localhost
User-Agent    Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10.4; en-US; rv:1.9.0.11) 
Gecko/2009060214 Firefox/3.0.11
Accept    text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language    en-us,en;q=0.5
Accept-Encoding    gzip,deflate
Accept-Charset    ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive    300
Connection    keep-alive




--- On Tue, 7/21/09, Aryeh Gregor  wrote:

> From: Aryeh Gregor 
> Subject: Re: [Wikitech-l] Continually falling through InitializeSpecialCases 
> else-if
> To: "Wikimedia developers" 
> Date: Tuesday, July 21, 2009, 10:09 AM
> On Tue, Jul 21, 2009 at 12:34 PM, dan
> nessett
> wrote:
> > else if( $action == 'view' &&
> !$request->wasPosted() &&
> >                        (
> !isset($this->GET['title']) ||
> $title->getPrefixedDBKey() != $this->GET['title'] )
> &&
> >                        !count( array_diff(
> array_keys( $this->GET ), array( 'action', 'title' ) ) )
> )
> 
> $action == 'view': This is a normal page view (not edit,
> history, etc.)
> 
> !$request->wasPosted(): This is a GET request, not
> POST.
> 
> !isset($this->GET['title']) ||
> $title->getPrefixedDBKey() !=
> $this->GET['title']: Either the title= parameter in the
> URL is unset,
> or it's set but not to the same thing as $title.
> 
> !count( array_diff( array_keys( $this->GET ), array(
> 'action', 'title'
> ) ) ) ): There is no URL query parameter other than "title"
> and
> "action" (e.g., no oldid=, diff=, . . .).
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l





  



Re: [Wikitech-l] "known to fail" switch added to parserTests

2009-07-21 Thread dan nessett

I'm not sure the post-commit hook running the parser tests is a good idea. The 
software could have been broken by a previous committer. From that point on, 
parserTests will report errors until the problem is fixed, so committers will just 
learn to ignore the message.

--- On Tue, 7/21/09, Brian  wrote:

> From: Brian 
> Subject: Re: [Wikitech-l] "known to fail" switch added to parserTests
> To: "Wikimedia developers" 
> Date: Tuesday, July 21, 2009, 9:57 AM
> On Tue, Jul 21, 2009 at 10:51 AM,
> Aryeh Gregor <
> simetrical+wikil...@gmail.com
> >
> wrote:

... And ideally there would be a post-commit hook that
> runs the parser
> tests and e-mails the committer letting them know they have
> just broken the
> software!
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  



Re: [Wikitech-l] "known to fail" switch added to parserTests

2009-07-21 Thread dan nessett

Better ideas. Another possibility is to run the parser tests (and any other 
regression tests that might exist) every 24 hours against all revisions committed 
to trunk since the last run. Post the results, and keep track of the number of bugs 
each committer has introduced into the code base over a running six-month 
period. Post the names of committers and the number of bugs they have 
introduced on a "hall of shame" page, ordering the list by number of bugs.

Sometimes social pressure can be a very effective behavior modifier.

--- On Tue, 7/21/09, Brian  wrote:

> From: Brian 
> Subject: Re: [Wikitech-l] "known to fail" switch added to parserTests
> To: "Wikimedia developers" 
> Date: Tuesday, July 21, 2009, 11:10 AM
> 
> Right, well, a pre-commit hook that rejects all commits
> which break the
> software. Or a memory of what commits broke which tests and
> a conditional.
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  



[Wikitech-l] There appears to be a bug in WebRequest::extractTitle()

2009-07-21 Thread dan nessett

I think I have found the problem causing the continuous redirect on my test 
wiki. However, since I am new at this, I want to run this past someone with a 
better understanding of the code to make sure I have it right.

At line 145 of WebRequest::extractTitle() [r53551] is the following test:

if( substr( $path, 0, $baseLen ) == $base )

This checks that the string in $base is a prefix of $path. However, the code in 
this method does not take into account that the pathname in the URL might have 
spaces encoded as '%20' escapes.

The URL to my testwiki is:

'/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page'

This is the value in $path. However, the value in $base is:

'/MediawikiTest/Latest Trunk Version/phase3/index.php/'

So the comparison fails and the code that sets 'title' in $matches never 
executes (which means $_GET never gets a 'title' entry). The solution is either 
to convert the '%20' escapes in $path to spaces or to convert the spaces in 
$base to '%20' escapes. This could be fixed in extractTitle(), or, since 
$path is an argument of this method and $base is derived from an argument, 
perhaps it should be fixed elsewhere.
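
A quick standalone demonstration of the mismatch, using the two values above, 
plus one possible direction for a fix (decoding the path before comparing; this 
is just a sketch, not a tested solution):

<?php
$path = '/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page';
$base = '/MediawikiTest/Latest Trunk Version/phase3/index.php/';
$baseLen = strlen( $base );

var_dump( substr( $path, 0, $baseLen ) == $base );                   # bool(false) -- the current check
var_dump( substr( rawurldecode( $path ), 0, $baseLen ) == $base );   # bool(true)  -- after decoding the %20s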

If someone confirms this is a bug, I will open a bug report in Bugzilla.




  



Re: [Wikitech-l] Continually falling through InitializeSpecialCaseselse-if

2009-07-21 Thread dan nessett

--HM--

Thanks. I have never really gotten around to setting up short URLs.

But see my latest message. I think there is a bug in 
WebRequest::extractTitle(). If this turns out to be correct, I will either need 
to locally patch my installations or change the location of the test phase3 
directories to a path that doesn't include blanks in its path elements. 
The latter is probably best, since with the former strategy, each time I 
updated the test code to the latest version I would have to reapply a 
non-standard patch.

Cheers,

Dan

--- On Tue, 7/21/09, Happy-melon  wrote:

> From: Happy-melon 
> Subject: Re: [Wikitech-l] Continually falling through 
> InitializeSpecialCaseselse-if
> To: wikitech-l@lists.wikimedia.org
> Date: Tuesday, July 21, 2009, 2:35 PM
> At a guess, you haven't set up short
> URLs properly.  If you have no active 
> apache rewrite rules, then that is a URL that will trigger
> the internal 
> redirect.  However, if your MW installation *thinks*
> that it's supposed to 
> serve short URLs in that format, then it will happily form
> the redirect URL 
> *in that format*; and there's your redirect loop.
> 
> --HM
> 
> "dan nessett" 
> wrote in message 
> news:790738.46469...@web32507.mail.mud.yahoo.com...
> >
> > Thanks! Here are the values that cause entry into the
> else-if statement:
> >
> > $targetUrl === 'http://localhost/MediawikiTest/Latest Trunk 
> > Version/phase3/index.php/Main_Page'
> >
> > $action === 'view'
> >
> > $request->data === 
> >
> > $this->GET === 
> >
> > $title->mDbkeyform === 'Main_Page'
> >
> > _SERVER[REQUEST_METHOD] === GET
> >
> > I can see why the redirect else-if is entered (no
> title= parameter, 
> > $action === view), but the targetUrl looks OK to me.
> I'm not sure why the 
> > logic should analyze this case as a redirect.
> >
> > P.S. When I previously dumped the get request using
> httpfox it showed (I 
> > haven't figured out how to configure netbeans to use
> FF, so this dump is 
> > from a separate run not using the debugger):
> >
> > (Request-Line)    GET 
> >
> /MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page
> 
> > HTTP/1.1
> > Host    localhost
> > User-Agent    Mozilla/5.0 (Macintosh; U; PPC
> Mac OS X 10.4; en-US; 
> > rv:1.9.0.11) Gecko/2009060214 Firefox/3.0.11
> > Accept   
> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> > Accept-Language    en-us,en;q=0.5
> > Accept-Encoding    gzip,deflate
> > Accept-Charset   
> ISO-8859-1,utf-8;q=0.7,*;q=0.7
> > Keep-Alive    300
> > Connection    keep-alive
> >
> >
> >
> >
> > --- On Tue, 7/21/09, Aryeh Gregor 
> wrote:
> >
> >> From: Aryeh Gregor 
> >> Subject: Re: [Wikitech-l] Continually falling
> through 
> >> InitializeSpecialCases else-if
> >> To: "Wikimedia developers" 
> >> Date: Tuesday, July 21, 2009, 10:09 AM
> >> On Tue, Jul 21, 2009 at 12:34 PM, dan
> >> nessett
> >> wrote:
> >> > else if( $action == 'view' &&
> >> !$request->wasPosted() &&
> >> >           
>             (
> >> !isset($this->GET['title']) ||
> >> $title->getPrefixedDBKey() !=
> $this->GET['title'] )
> >> &&
> >> >           
>             !count(
> array_diff(
> >> array_keys( $this->GET ), array( 'action',
> 'title' ) ) )
> >> )
> >>
> >> $action == 'view': This is a normal page view (not
> edit,
> >> history, etc.)
> >>
> >> !$request->wasPosted(): This is a GET request,
> not
> >> POST.
> >>
> >> !isset($this->GET['title']) ||
> >> $title->getPrefixedDBKey() !=
> >> $this->GET['title']: Either the title=
> parameter in the
> >> URL is unset,
> >> or it's set but not to the same thing as $title.
> >>
> >> !count( array_diff( array_keys( $this->GET ),
> array(
> >> 'action', 'title'
> >> ) ) ) ): There is no URL query parameter other
> than "title"
> >> and
> >> "action" (e.g., no oldid=, diff=, . . .).
> >>
> >> ___
> >> Wikitech-l mailing list
> >> Wikitech-l@lists.wikimedia.org
> >> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> >
> >
> >
> >
> >      = 
> 
> 
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] There appears to be a bug in WebRequest::extractTitle()

2009-07-21 Thread dan nessett

If I am reading the report correctly, fixing this bug is going to require much 
more than a change here or there. It may be better to simply require that URLs 
to wikis contain no blanks. In any case, the quickest way for me to fix the 
problem is to change the name of the directory where I store the latest trunk 
version.

--- On Tue, 7/21/09, Brion Vibber  wrote:

> From: Brion Vibber 
> Subject: Re: [Wikitech-l] There appears to be a bug in 
> WebRequest::extractTitle()
> To: "Wikimedia developers" 
> Date: Tuesday, July 21, 2009, 3:05 PM
> dan nessett wrote:
> > The URL to my testwiki is:
> > 
> >
> '/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page'
> > 
> > This is the value in $path. However, the value in
> $base is:
> > 
> > '/MediawikiTest/Latest Trunk
> Version/phase3/index.php/'
> > 
> > So, the call to substr fails and the code that sets
> 'title' in $matches never executes (which means $_GET never
> gets a 'title' entry). The solution is to either convert the
> '%20' escapes in $path to blanks or convert the blanks in
> $base to '%20' escapes. This bug could be fixed in
> extractTitle() or since $path is an argument of this method
> and $base is extracted from an argument, perhaps it should
> be fixed elsewhere.
> > 
> > If someone confirms this is a bug, I will open a bug
> report in Bugzilla.
> 
> See https://bugzilla.wikimedia.org/show_bug.cgi?id=10787
> 
> -- brion
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] There appears to be a bug in WebRequest::extractTitle()

2009-07-21 Thread dan nessett

This raises the issue that got this discussion going in the first place. Suppose 
I make the necessary modifications so "$wgScriptPath gets properly escaped when 
initialized." There could be all kinds of dependencies on this global scattered 
all over the place. Just running parserTests and getting them to pass isn't a 
sufficient regression test of the bug fix. Not only could parts of MW unrelated 
to the parser have built-in assumptions about the way $wgScriptPath is 
formatted, but a whole bunch of extensions might have similar assumptions as 
well.

MW really needs a better set of regression tests.

--- On Tue, 7/21/09, Brion Vibber  wrote:

> From: Brion Vibber 
> Subject: Re: [Wikitech-l] There appears to be a bug in 
> WebRequest::extractTitle()
> To: "Wikimedia developers" 
> Date: Tuesday, July 21, 2009, 3:35 PM
> dan nessett wrote:
> > If I am reading the report correctly, fixing this bug
> is going to
> > require much more than a change here or there. It may
> be better to
> > simply require that URLs to wikis contain no blanks.
> In any case, the
> > quickest way for me to fix the problem is to change
> the name of the
> > directory where I store the latest trunk version.
> 
> I *think* it should just take making sure $wgScriptPath
> gets properly 
> escaped when initialized... but there may be exciting
> snags. :)
> 
> -- brion
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Known to fail interactions with compare and record

2009-07-22 Thread dan nessett

I have added an application option, ktf-to-fail, which when specified counts 
tests with known-to-fail status as failures. When this option is missing, 
failure statistics do not include known-to-fail results, and a summary at the 
end of the parserTests run reports how many known-to-fail tests were run 
(unless that number is zero). I have also modified parserTests to indicate the 
known-to-fail status when that option is specified.
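
In rough terms, the accumulation logic amounts to something like this (a sketch 
of the intent, not the actual patch; the flag and counter names are 
placeholders):

    // Decide where a test result is counted, given its known-to-fail annotation
    // and whether the ktf-to-fail application option was supplied.
    function recordResult( $passed, $knownToFail, $ktfToFail,
        &$successCount, &$failureCount, &$ktfCount
    ) {
        if ( $passed ) {
            $successCount++;
        } elseif ( $knownToFail && !$ktfToFail ) {
            $ktfCount++;        // reported in a separate known-to-fail summary
        } else {
            $failureCount++;    // ordinary failure, or ktf-to-fail was given
        }
    }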

But, there is still an issue. How should the per-test known-to-fail option 
interact with the compare and record application options? Should parserTests be 
modified to record and compare known-to-fail results? Or, should these results 
be silent for recording purposes and treated as failures only if the 
ktf-to-fail application option is specified?


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Known to fail interactions with compare and record

2009-07-22 Thread dan nessett

Right. The option is a bit cryptic. I first thought of knowntofail-to-failures, 
but that was way too long.

--- On Wed, 7/22/09, Happy-melon  wrote:

> From: Happy-melon 
> Subject: Re: [Wikitech-l] Known to fail interactions with compare and record
> To: wikitech-l@lists.wikimedia.org
> Date: Wednesday, July 22, 2009, 3:44 PM
> I assume "ktf" is short for Known To
> Fail?  You've got a bit of RAS syndrome 
> going on there... :-D
> 
> --HM
> 
> "dan nessett" 
> wrote in message 
> news:842292.50373...@web32504.mail.mud.yahoo.com...
> >
> > I have added an application option ktf-to-fail that
> when specified 
> > accumulates tests with known-to-fail status as if they
> failed. When this 
> > option is missing, failure statistics do not include
> known-to-fail results 
> > and there is a summary at the end of parserTests that
> specifies how many 
> > known-to-fail tests were run (unless that number is
> zero). I also have 
> > modified parserTests to indicate the known-to-fail
> status when that option 
> > is specified.
> >
> > But, there is still an issue. How should the per-test
> known-to-fail option 
> > interact with the compare and record application
> options? Should 
> > parserTests be modified to record and compare
> known-to-fail results? Or, 
> > should these results be silent for recording purposes
> and treated as 
> > failures only if the ktf-to-fail application options
> is specified? 
> 
> 
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Known to fail interactions with compare and record

2009-07-22 Thread dan nessett

If anyone can come up with a better option name, I would be happy to replace 
ktf-to-fail. I generally don't like cryptic abbreviations. However, 
"with-known-to-fail" doesn't really get at the underlying meaning of the 
option. It specifies that known-to-fail test results are accumulated as 
failures for the purpose of parserTest statistics.

--- On Wed, 7/22/09, Chad  wrote:

> From: Chad 
> Subject: Re: [Wikitech-l] Known to fail interactions with compare and record
> To: "Wikimedia developers" 
> Date: Wednesday, July 22, 2009, 4:15 PM
> Why not just call it
> --with-known-to-fail? Easy.
> 
> -Chad
> 
> On Jul 22, 2009 7:11 PM, "dan nessett" 
> wrote:
> 
> 
> Right. The option is a bit cryptic. I first thought of
> knowntofail-to-failures, but that was way too long.
> 
> --- On Wed, 7/22/09, Happy-melon 
> wrote:
> 
> > From: Happy-melon 
> > Subject: Re: [Wikitech-l] Known to fail interactions
> with compare and
> record
> > To: wikitech-l@lists.wikimedia.org
> > Date: Wednesday, July 22, 2009, 3:44 PM
> 
> > I assume "ktf" is short for Known To > Fail? 
> You've got a bit of RAS
> syndrome > going on there.
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Known to fail interactions with compare and record

2009-07-22 Thread dan nessett

Well, it isn't all that clear to me, but I really don't care. I'll change it to 
whatever people want. "Call me anything you like, but don't call me late for 
dinner."

Can someone tell me how the --fuzz option is supposed to behave? I am 
cross-testing the new parserTests parameter in conjunction with its other 
parameters. I have tested --quick and --quiet. They seem to work fine with 
ktf-to-fail. When I test --fuzz, parserTests seems to go on walkabout in the 
Great Australian desert periodically spewing out stuff like:

100: 100/100 (mem: 36%)
200: 200/200 (mem: 37%)
300: 300/300 (mem: 37%)
400: 400/400 (mem: 37%)
500: 500/500 (mem: 38%)
600: 600/600 (mem: 38%)


Is this expected behavior? Is parserTests supposed to finish when you use 
--fuzz, or is this some kind of stress test that never finishes?

--- On Wed, 7/22/09, Chad  wrote:

> From: Chad 
> Subject: Re: [Wikitech-l] Known to fail interactions with compare and record
> To: "Wikimedia developers" 
> Date: Wednesday, July 22, 2009, 4:25 PM
> Which is exactly what my param means.
> Its expected
> that failures will be reported, and --with-known-to-fail
> would indicate that known failures will be added.
> 
> --knowntofail-to-fail and --ktf-to-fail certainly aren't
> any
> clearer.
> 
> -Chad
> 
> On Jul 22, 2009 7:22 PM, "dan nessett" 
> wrote:
> 
> 
> If anyone can come up with a better option name, I would be
> happy to replace
> ktf-to-fail. I generally don't like cryptic abbreviations.
> However,
> "with-known-to-fail" doesn't really get at the underlying
> meaning of the
> option. It specifies that known-to-fail test results are
> accumulated as
> failures for the purpose of parserTest statistics.
> 
> --- On Wed, 7/22/09, Chad 
> wrote:
> 
> > From: Chad 
> 
> > Subject: Re: [Wikitech-l] Known to fail interactions
> with compare and
> record
> > To: "Wikimedia developers" 
> > Date: Wednesday, July 22, 2009, 4:15 PM
> 
> > Why not just call it > --with-known-to-fail? Easy.
> > > -Chad > > On Jul
> 22, 2009 7:11 PM, "dan n...
> 
> > ___ >
> Wikitech-l mailing list
> > wikitec...@lists.wikim...
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Known to fail interactions with compare and record

2009-07-23 Thread dan nessett

Thanks. Just to clarify, I am not changing --fuzz. I am testing --ktf-to-fail 
in conjunction with other parserTests options to ensure there is no 
interference. The chances of such interference are very small, but since I have 
been preaching the importance of regression testing, I thought I should eat my 
own dog-food.

--- On Wed, 7/22/09, Tim Starling  wrote:

> From: Tim Starling 
> Subject: Re: [Wikitech-l] Known to fail interactions with compare and record
> To: wikitech-l@lists.wikimedia.org
> Date: Wednesday, July 22, 2009, 10:17 PM
> dan nessett wrote:
> > Well, it isn't all that clear to me, but I really
> don't care. I'll
> > change it to whatever people want. "Call me anything
> you like, but
> > don't call me late for dinner."
> > 
> > Can someone tell me how the --fuzz option is supposed
> to behave? I
> > am cross-testing the new parserTests parameter in
> conjunction with
> > its other parameters. I have tested --quick and
> --quiet. They seem
> > to work fine with ktf-to-fail. When I test --fuzz,
> parserTests
> > seems to go on walkabout in the Great Australian
> desert
> > periodically spewing out stuff like:
> > 
> > 100: 100/100 (mem: 36%) 200: 200/200 (mem: 37%) 300:
> 300/300 (mem:
> > 37%) 400: 400/400 (mem: 37%) 500: 500/500 (mem: 38%)
> 600: 600/600
> > (mem: 38%) 
> > 
> > Is this expected behavior? Is parserTests supposed to
> finish when
> > you use --fuzz or is this some kind of stress test
> that the never
> > finishes?
> 
> It runs forever, unless it runs out of memory or hits a
> fatal PHP
> error. It's not a stress test, it's a fuzz test, hence the
> name. It
> logs exceptions generated by the parser for random input.
> 
> Maybe if there's an undocumented option that you don't
> understand, you
> should leave it alone. Otherwise some day your wiki will
> end up with
> all its articles deleted, or with all the text converted
> to
> ISO-2022-JP or something.
> 
> -- Tim Starling
> 
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Do no harm

2009-07-23 Thread dan nessett

A fundamental principle of medicine is "do no harm." It has a long history and 
you can find it in the Hippocratic oath with a slightly different wording.

This is also an important principle of software development. If you add a new 
feature or fix a bug, make sure the resulting code isn't worse off than before. 
Do no harm is the basic motivation behind regression testing.

I have been thinking about Brion's suggestion of fixing the bug in 
WebRequest::extractTitle(). It is a reasonable point. Don't just whine about a 
problem. Fix it. He even provided the best strategy for accomplishing this. 
Make "sure $wgScriptPath gets properly escaped when initialized." I am sure 
doing this would not require a significant amount of coding. But, how would 
changing the way $wgScriptPath is formatted affect the rest of the code base?
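
For reference, the change itself is probably only a line or two, something like 
the following sketch (illustrative only; where it belongs in the initialization 
code, and whether rawurlencode() is the right encoder for MediaWiki's purposes, 
are exactly the kinds of things that would need checking):

    // Percent-encode each path segment of $wgScriptPath while leaving the '/'
    // separators intact, so 'Latest Trunk Version' becomes 'Latest%20Trunk%20Version'.
    $wgScriptPath = implode( '/',
        array_map( 'rawurlencode', explode( '/', $wgScriptPath ) ) );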

I decided to do a multi-file search for $wgScriptPath in phase3 and extensions 
[r53650]. There are 439 references to it in phase3 and extensions combined. In 
phase3 alone, there are 47 references. Roughly 1/3 of these are in global 
declarations, so phase3 has about 30 "active" references and in phase3 and 
extensions combined there are roughly 300. [By "active" I mean references in 
which the value of $wgScriptPath affects the code's logic.]

So, if I were to change the formatting of $wgScriptPath, there potentially are 
30 places in the main code and 300 places in extensions where problems might 
occur. To ensure the change does no harm, I would have to observe the effect of 
the change on at least 30 and up to 330 places in the distribution. This is a 
pretty onerous requirement. My guess is very few developers would take the time 
to do it.

On the other hand, if there were regression tests for the main code and for the 
most important extensions, I could make the change, run the regression tests 
and see if any break. If some do, I could focus my attention on those problems. 
I would not have to find every place the global is referenced and see if the 
change adversely affects the logic.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Do no harm

2009-07-23 Thread dan nessett

True. Regression tests do not guarantee that bugs are not introduced by changes. 
However, they are a fundamental piece of the QA puzzle.

--- On Thu, 7/23/09, Gregory Maxwell  wrote:

> From: Gregory Maxwell 
> Subject: Re: [Wikitech-l] Do no harm
> To: "Wikimedia developers" 
> Date: Thursday, July 23, 2009, 9:50 AM
> On Thu, Jul 23, 2009 at 11:07 AM, dan
> nessett
> wrote:
> [snip]
> > On the other hand, if there were regression tests for
> the main code and for the most important extensions, I could
> make the change, run the regression tests and see if any
> break. If some do, I could focus my attention on those
> problems. I would not have to find every place the global is
> referenced and see if the change adversely affects the
> logic.
> 
> This only holds if the regression test would fail as a
> result of the
> change. This is far from a given for many changes and many
> common
> tests.
> 
> Not to mention the practical complications— many
> extensions have
> complicated configuration and/or external
> dependencies.  "make
> test_all_extensions" is not especially realistic.
> 
> Automated tests are good, necessary even, but they don't
> relieve you
> of the burden of directly evaluating the impact of a broad
> change.
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Do no harm

2009-07-23 Thread dan nessett

The reason I started this conversation is I want to write an extension. I also 
want to be a good citizen and do this in a way that doesn't break things (this 
would also have the desirable effect of making it more likely that some MW 
installation would use the extension).

So, since, as you point out, everyone agrees that regression tests are 
beneficial and since, except for parserTests, there don't seem to be any 
substantive regression tests available, what are some practical steps that 
would improve the situation?

--- On Thu, 7/23/09, Aryeh Gregor  wrote:

> From: Aryeh Gregor 
> Subject: Re: [Wikitech-l] Do no harm
> To: "Wikimedia developers" 
> Date: Thursday, July 23, 2009, 9:51 AM
> On Thu, Jul 23, 2009 at 11:07 AM, dan
> nessett
> wrote:
> > On the other hand, if there were regression tests for
> the main code and for the most important extensions, I could
> make the change, run the regression tests and see if any
> break. If some do, I could focus my attention on those
> problems. I would not have to find every place the global is
> referenced and see if the change adversely affects the
> logic.
> 
> We are all aware of the benefits of regression tests.
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Do no harm

2009-07-23 Thread dan nessett

Sounds like a plan. Be my guest.

--- On Thu, 7/23/09, William Allen Simpson  
wrote:

> From: William Allen Simpson 
> Subject: Re: [Wikitech-l] Do no harm
> To: "Wikimedia developers" 
> Date: Thursday, July 23, 2009, 10:21 AM
> Here's what I do in similar
> circumstances.  Create another variable,
> $wgScriptPathEscaped.  Then, gradually make the
> changes.  Wait for tests.
> Change some more.  Eventually, most of the old ones
> will be gone.
> 
> By inspection, many of the uses will be terminal, not
> passed to other
> routines, with no side effects.  Those should be done
> first.
> 
> Sure, it might take a month or three.  But wishing for
> some universal
> regression suite is going to be about the same as waiting
> for the
> single pass parser.
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] What to do about --compare and --record. Second request for comments

2009-07-23 Thread dan nessett

So far no one has responded to my question about how --ktf-to-fail should 
interact with --compare and --record. Also, at least one commenter has 
suggested a different name for --ktf-to-fail. Before I open a bug and attach 
the patches, I would like to resolve these issues. Since Brion suggested this 
task, would he comment?


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] What to do about --compare and --record. Second request for comments

2009-07-23 Thread dan nessett

parserTests is a tool that helps developers stay informed about potential 
problems in a release. Consequently, its customer base is the developer 
community. I am working on the assumption that user requirements for changes to 
it should come from this community (hence the use of this list for discussions 
about them).

Originally, I changed parserTests by adding a flipresult option that changed a 
success to a failure and a failure to a success. This was universally panned, 
and a requirement for a known-to-fail status was indicated instead (see emails 
from Meijssen 7/21 12:19am and 2:56am and Gregor 7/21 4:09am and 9:51am). So, I 
added this status. In addition, Stephen Bain suggested a command line option 
that controls whether known-to-fail test results are accumulated in the failure 
or success statistics. So, I added --ktf-to-fail (there is controversy over 
whether this is the best name; I am open to changing it to whatever people 
want, or to getting rid of it if that is the consensus).

Your suggestions are to use the disabled option to turn off those tests known 
to fail and to add a --run-disabled command line switch that, when set, runs 
the disabled tests. I can change parserTests to do this if that is what people 
want, but I see at least one issue. You may want to disable specific tests for 
reasons other than that they are known to fail (e.g., a test exercises 
functionality undergoing modification and currently not working). So, you lose 
the option of separating the known-to-fail cases from the others.

The interactions with --compare and --record are as follows. If we keep the 
status of known-to-fail, then should its statistics accumulate with success, 
failure, or neither? The motivation of the --ktf-to-fail switch is to specify 
that known-to-fail tests accumulate with failure statistics. If it is missing, 
do you want the records to indicate that all of a sudden you have 14 more 
successes? If that is acceptable, then the issue is resolved. Or should 
known-to-fail results accumulate against neither? In either case, what 
information do you want to record and compare against in the testrun and 
testitem tables? Do you want to add a column to testrun that indicates whether 
the --ktf-to-fail or --run-disabled flags were set? Do you want to add a column 
to testitem that records known-to-fail status? Do you want to leave these 
tables as they are?

In any case, as I stated previously, I can do pretty much whatever the 
community thinks is right. I just need a concrete set of user requirements.

--- On Thu, 7/23/09, Brion Vibber  wrote:

> From: Brion Vibber 
> Subject: Re: [Wikitech-l] What to do about --compare and --record. Second 
> request for comments
> To: wikitech-l@lists.wikimedia.org
> Date: Thursday, July 23, 2009, 4:09 PM
> On 07/23/2009 11:00 AM, dan nessett
> wrote:
> >
> > So far no one has responded to my question about how
> --ktf-to-fail should interact with --compare and --record.
> Also, at least one commenter has suggested a different name
> for --ktf-to-fail. Before I open a bug and attach the
> patches, I would like to resolve these issues. Since Brion
> suggested this task, would he comment?
> 
> Offhand I'm not sure I see a need for a switch
> specifically.
> 
> Couple thoughts offhand:
> 
> * There appears to already be a "disabled" option which can
> be added to 
> test cases. Since this already exists, it doesn't need to
> be developed 
> and could simply be added to the tests we know don't
> currently work.
> 
> * If there's a desire to run those tests anyway, I'd
> probably call the 
> option --run-disabled. This should be easy to add.
> 
> * Not sure there's any need for specific handling w/
> compare and record; 
> we can just record whatever we run.
> 
> 
> If on the other hand we want to run and record these tests,
> but not 
> whinge at the user about them, then we'd want another
> option on them. 
> Probably just having another completion state for the
> output would do it 
> (grouping known-to-fail tests separately from others that
> fail). I'm not 
> sure how important that is, though.
> 
> -- brion
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 



  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Speak up people

2009-07-24 Thread dan nessett

So far no one has responded to Brion's comments about how parserTests should be 
modified. You are the customers. What do you want?

Should I just modify parserTests.txt to disable those tests that always have 
failed? Should I do that and add an option to run disabled tests?

Or should I retain the known-to-fail status? If so, should I retain the option 
that known-to-fail results accumulate as fails? If so, what should the option 
be called? Right now it is --ktf-to-fail. There is one proposal to change it to 
--with-known-to-fail. Any others? Which do people prefer?

Should we note anything new in the testrun and testitem tables? Specifically, 
should we add a column to testrun that records whether --ktf-to-fail (or 
whatever we call it) or --run-disabled was set (obviously the table would be 
modified only to note one of these depending on which implementation option 
people choose). If we go with the known-to-fail status, should we add a column 
to testitem that indicates that a test returned a known-to-fail status? Should 
we leave these tables alone?

I do not have enough experience to determine which of these are the best 
options. You do, so you need to choose. [Of course, you can remain silent, 
which I will interpret to mean you really don't care]


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] parserTests patches for known to fail problem attached to bug 19957

2009-07-27 Thread dan nessett

It appears there is no clear consensus in the developer community on how to 
resolve the problem of the parserTests known-to-fail tests. Consequently, the 
most useful thing I can do is provide the patches I have developed and explain 
how they can be used to implement one of a number of options. These are neither 
the only options nor is one of them necessarily the best. They simply 
correspond to suggestions made on the wikitech-l email list (the second option 
is additional). There is, of course, the possibility of developing an option 
not specified here and implementing it.

[Mark Clements added another option this morning - 7/27/09 - that requires 
additional development. If that is what people want, I (or others) can do the 
necessary implementation. However, I thought it best to get out what I have 
done so far in case it meets consensus needs.]

I have attached the patches to bug 19957 (I couldn't find an appropriate 
existing bug for this issue, so I created one. Also, I attached a tar file with 
7 patches. When I looked at the bug after submitting it, it wasn't clear the 
tar file made it. Any advice?). These patches can be used to implement the 
options specified below. Details of the patches are found in text I have added 
to the bug report.

+ Install none of these patches and do nothing else:

  + Pros: parserTests behavior remains as currently defined.
  + Cons: parserTests behavior remains as currently defined.
  
+ Install none of these patches and define successful output as that which the 
Parser currently returns (this would require changing parserTests.txt using an 
unprovided patch):

  + Pros: parserTests is brought into compliance with the test set.
  + Cons: It is generally bad practice to define success without understanding 
the logic behind it.
  
+ Install none of these patches, file bugs against the 14 failing tests and 
then fix the bugs:

  + Pros: parserTests is brought into compliance with the test set.
  + Cons: It has been suggested that the parserTests logic is sufficiently 
complex that it may be difficult to modify it to pass the known to fail tests.

+ File bugs against the 14 failing tests and patch parserTests.txt to comment 
out the known to fail tests:

  + Pros: Testing the Parser using parserTests does not return confusing test 
results. Developers can concentrate on those tests that fail due to the 
addition of new functionality or interfering bug fixes.
  + Cons: Since the known to fail tests no longer report their presence, it may 
be easy for the community to forget they are still a problem.
  
+ Patch parserTests.txt so the known to fail tests are disabled. Optionally 
file bugs against these tests:

  + Pros: Testing the Parser using parserTests does not return confusing test 
results. Developers can concentrate on those tests that fail due to the 
addition of new functionality or interfering bug fixes.
  + Cons: There are other disabled tests in parserTests.txt. These could become 
confused with the known to fail tests.
  
+ Patch parserTests|.inc|.php to provide a --run-disabled option:

  + Pros: This would allow disabled known to fail tests to run without editing 
parserTests.txt
  + Cons: Specification of this option runs all disabled tests, not just those 
known to fail. Test results using this option might confuse developers.
  
+ Patch parserTests|.inc|.txt to provide a knowntofail option:

  + Pros: parserTests output and summary statistics indicate which tests 
succeeded, failed and returned known-to-fail status.
  + Cons: Creates a new test status for what should be a temporary problem. 
This is a questionable software architecture decision.
  
+ Patch parserTests.php to support a ktf-in-fail option:

  + Pros: those of the previous option. Adds the ability to accumulate ktf 
failure status in failure statistics.
  + Cons: those of the previous option. Raises the question of what is the 
proper statistics strategy when the known-to-fail option is missing (on runs 
with this option missing, known-to-fail tests accumulate against success 
statistics).
  
The provided patches do not address whether the 'testrun' and 'testitem' tables 
should be modified to note the specification of statistics-changing application 
options (i.e., either --ktf-in-fail or --run-disabled) or the use of the 
knowntofail option for a particular test. These tables are not modified by the 
patches.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] parserTests code coverage statistics

2009-07-29 Thread dan nessett

I decided to investigate how well parserTests exercises the MW code. So, I 
threw together a couple of MacGyver tools that use xdebug's code coverage 
capability and analyzed the results. The results are very, very preliminary, 
but I thought I would get them out so others can look them over. In the next 
couple of days I hope to post more detailed results and the tools themselves on 
the Mediawiki wiki. (If someone could tell me the appropriate page to use that 
would be useful. Otherwise, I will just create a page in my own namespace).

The statistics (again very preliminary) are:

Number of files exercised: 141  Number of lines in those files: 85606
Lines covered: 59489  Lines not covered: 26117  Percentage covered:  
0.694916244188

So, parserTests is getting (at best) about 70% code coverage. This is better 
than I expected, but still it means parserTests does not test 26117 lines of 
code. What I mean by "at best" is xdebug just notes whether a line of code is 
visited. It doesn't do any logic analysis on which branches are taken. 
Furthermore, parserTests may not visit some files that are critical to the 
operation of the MW software. Obviously, xdebug can only gather statistics on 
visited files.

I want to emphasize that there may be errors in these results due to bad 
assumptions on my part or bad coding. However, it is a place to start.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] parserTests code coverage statistics

2009-07-29 Thread dan nessett

I failed to mention that xdebug ignores non-executable lines of code. So, the 
statistics are for executable lines of code and do not include lines like 
comments (in either the covered or uncovered counts).

--- On Wed, 7/29/09, dan nessett  wrote:

> From: dan nessett 
> Subject: [Wikitech-l] parserTests code coverage statistics
> To: wikitech-l@lists.wikimedia.org
> Date: Wednesday, July 29, 2009, 4:36 PM
> 
> I decided to investigate how well parserTests exercises the
> MW code. So, I threw together a couple of MacGyver tools
> that use xdebug's code coverage capability and analyzed the
> results. The results are very, very preliminary, but I
> thought I would get them out so others can look them over.
> In the next couple of days I hope to post more detailed
> results and the tools themselves on the Mediawiki wiki. (If
> someone could tell me the appropriate page to use that would
> be useful. Otherwise, I will just create a page in my own
> namespace).
> 
> The statistics (again very preliminary) are:
> 
> Number of files exercised: 141  Number of lines in
> those files: 85606
> Lines covered: 59489  Lines not covered: 26117 
> Percentage covered:  0.694916244188
> 
> So, parserTests is getting (at best) about 70% code
> coverage. This is better than I expected, but still it means
> parserTests does not test 26117 lines of code. What I mean
> by "at best" is xdebug just notes whether a line of code is
> visited. It doesn't do any logic analysis on which branches
> are taken. Furthermore, parserTests may not visit some files
> that are critical to the operation of the MW software.
> Obviously, xdebug can only gather statistics on visited
> files.
> 
> I want to emphasize that there may be errors in these
> results due to bad assumptions on my part or bad coding.
> However, it is a place to start.
> 
> 
>       
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] parserTests code coverage statistics

2009-07-30 Thread dan nessett

True. However, knowing the coverage of parserTests and knowing which code isn't 
even being visited by it is the first step in understanding where the holes are 
in testing. Code coverage is a primitive metric. But, it's a place to start.

--- On Thu, 7/30/09, Victor Vasiliev  wrote:

> From: Victor Vasiliev 
> Subject: Re: [Wikitech-l] parserTests code coverage statistics
> To: "Wikimedia developers" 
> Date: Thursday, July 30, 2009, 1:28 AM
> dan nessett wrote:
> > I decided to investigate how well parserTests
> exercises the MW code. So, I threw together a couple of
> MacGyver tools that use xdebug's code coverage capability
> and analyzed the results. The results are very, very
> preliminary, but I thought I would get them out so others
> can look them over. In the next couple of days I hope to
> post more detailed results and the tools themselves on the
> Mediawiki wiki. (If someone could tell me the appropriate
> page to use that would be useful. Otherwise, I will just
> create a page in my own namespace).
> > 
> > The statistics (again very preliminary) are:
> > 
> > Number of files exercised: 141  Number of lines
> in those files: 85606
> > Lines covered: 59489  Lines not covered:
> 26117  Percentage covered:  0.694916244188
> > 
> > So, parserTests is getting (at best) about 70% code
> coverage. This is better than I expected, but still it means
> parserTests does not test 26117 lines of code. What I mean
> by "at best" is xdebug just notes whether a line of code is
> visited. It doesn't do any logic analysis on which branches
> are taken. Furthermore, parserTests may not visit some files
> that are critical to the operation of the MW software.
> Obviously, xdebug can only gather statistics on visited
> files.
> > 
> > I want to emphasize that there may be errors in these
> results due to bad assumptions on my part or bad coding.
> However, it is a place to start.
> > 
> 
> Well, they are *parser* tests, they are not intended to
> cover
> Special:Version or something else.
> 
> --vvv
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] parserTests code coverage statistics

2009-07-30 Thread dan nessett

I have no argument with your points. However, figuring out the code coverage of 
parserTests is low-hanging fruit, i.e., relatively easy to determine and at 
least somewhat valuable. Knowing which files are touched by parserTests and 
how much of their code is covered is a first step in figuring out what needs to 
be done (e.g., identifying those files that parserTests doesn't even visit). 
After all, parserTests is all we have at the moment.

--- On Thu, 7/30/09, Happy-melon  wrote:

> From: Happy-melon 
> Subject: Re: [Wikitech-l] parserTests code coverage statistics
> To: wikitech-l@lists.wikimedia.org
> Date: Thursday, July 30, 2009, 4:26 AM
> I suspect that much of this has to do
> with the way Parser.php is eleven 
> thousand lines of programmatic sewage, and the way the
> ParserTests 
> infrastructure requires a lot of the rest of MediaWiki to
> be initialised in 
> order to run the tests.  As long as the rest of the
> system 'works' well 
> enough to allow the parser to parse, ParserTests is happy:
> I wouldn't be 
> confident to say that it "tests" anything outside of
> Parser.php; as you say, 
> it only marks lines 'visited'.
> 
> --HM
> 
> "dan nessett" 
> wrote in message 
> news:922893.25678...@web32507.mail.mud.yahoo.com...
> >
> > I failed to mention that xdebug ignores non-executable
> lines of code. So, 
> > the statistics are for executable lines of code and do
> not include lines 
> > like comments (in either the covered or uncovered
> counts).
> >
> > --- On Wed, 7/29/09, dan nessett 
> wrote:
> >
> >> From: dan nessett 
> >> Subject: [Wikitech-l] parserTests code coverage
> statistics
> >> To: wikitech-l@lists.wikimedia.org
> >> Date: Wednesday, July 29, 2009, 4:36 PM
> >>
> >> I decided to investigate how well parserTests
> exercises the
> >> MW code. So, I threw together a couple of MacGyver
> tools
> >> that use xdebug's code coverage capability and
> analyzed the
> >> results. The results are very, very preliminary,
> but I
> >> thought I would get them out so others can look
> them over.
> >> In the next couple of days I hope to post more
> detailed
> >> results and the tools themselves on the Mediawiki
> wiki. (If
> >> someone could tell me the appropriate page to use
> that would
> >> be useful. Otherwise, I will just create a page in
> my own
> >> namespace).
> >>
> >> The statistics (again very preliminary) are:
> >>
> >> Number of files exercised: 141  Number of
> lines in
> >> those files: 85606
> >> Lines covered: 59489  Lines not covered:
> 26117
> >> Percentage covered:  0.694916244188
> >>
> >> So, parserTests is getting (at best) about 70%
> code
> >> coverage. This is better than I expected, but
> still it means
> >> parserTests does not test 26117 lines of code.
> What I mean
> >> by "at best" is xdebug just notes whether a line
> of code is
> >> visited. It doesn't do any logic analysis on which
> branches
> >> are taken. Furthermore, parserTests may not visit
> some files
> >> that are critical to the operation of the MW
> software.
> >> Obviously, xdebug can only gather statistics on
> visited
> >> files.
> >>
> >> I want to emphasize that there may be errors in
> these
> >> results due to bad assumptions on my part or bad
> coding.
> >> However, it is a place to start.
> >>
> >>
> >>
> >>
> >> ___
> >> Wikitech-l mailing list
> >> Wikitech-l@lists.wikimedia.org
> >> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >>
> >
> >
> >      = 
> 
> 
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] parserTests code coverage statistics

2009-07-30 Thread dan nessett

Right. I have looked at both t/ and tests/ and agree that they could use some 
work. But when starting on a trip, it's best to walk in one direction first; 
otherwise you end up going around in circles.

--- On Thu, 7/30/09, Chad  wrote:

> From: Chad 
> Subject: Re: [Wikitech-l] parserTests code coverage statistics
> To: "Wikimedia developers" 
> Date: Thursday, July 30, 2009, 8:18 AM
> On Thu, Jul 30, 2009 at 10:53 AM, dan
> nessett
> wrote:
> >
> > True. However, knowing the coverage of parserTests and
> knowing which code isn't even being visited by it is the
> first step in understanding where the holes are in testing.
> Code coverage is a primitive metric. But, it's a place to
> start.
> >
> > --- On Thu, 7/30/09, Victor Vasiliev 
> wrote:
> >
> >> From: Victor Vasiliev 
> >> Subject: Re: [Wikitech-l] parserTests code
> coverage statistics
> >> To: "Wikimedia developers" 
> >> Date: Thursday, July 30, 2009, 1:28 AM
> >> dan nessett wrote:
> >> > I decided to investigate how well
> parserTests
> >> exercises the MW code. So, I threw together a
> couple of
> >> MacGyver tools that use xdebug's code coverage
> capability
> >> and analyzed the results. The results are very,
> very
> >> preliminary, but I thought I would get them out so
> others
> >> can look them over. In the next couple of days I
> hope to
> >> post more detailed results and the tools
> themselves on the
> >> Mediawiki wiki. (If someone could tell me the
> appropriate
> >> page to use that would be useful. Otherwise, I
> will just
> >> create a page in my own namespace).
> >> >
> >> > The statistics (again very preliminary) are:
> >> >
> >> > Number of files exercised: 141  Number of
> lines
> >> in those files: 85606
> >> > Lines covered: 59489  Lines not covered:
> >> 26117  Percentage covered:  0.694916244188
> >> >
> >> > So, parserTests is getting (at best) about
> 70% code
> >> coverage. This is better than I expected, but
> still it means
> >> parserTests does not test 26117 lines of code.
> What I mean
> >> by "at best" is xdebug just notes whether a line
> of code is
> >> visited. It doesn't do any logic analysis on which
> branches
> >> are taken. Furthermore, parserTests may not visit
> some files
> >> that are critical to the operation of the MW
> software.
> >> Obviously, xdebug can only gather statistics on
> visited
> >> files.
> >> >
> >> > I want to emphasize that there may be errors
> in these
> >> results due to bad assumptions on my part or bad
> coding.
> >> However, it is a place to start.
> >> >
> >>
> >> Well, they are *parser* tests, they are not
> intended to
> >> cover
> >> Special:Version or something else.
> >>
> >> --vvv
> >>
> >> ___
> >> Wikitech-l mailing list
> >> Wikitech-l@lists.wikimedia.org
> >> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >>
> >
> >
> >
> >
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> 
> For more generic unit tests, check out the stuff in /t/ and
> /tests/
> Those could probably use improvement.
> 
> -Chad
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Statistics now on MediaWiki page

2009-07-30 Thread dan nessett

I am not finished with the analysis (MacGyver) tool, but I thought I would put 
up what I have so far on the MediaWiki site. I have created a web page in my 
user space for the Parser Test code coverage analysis -

http://www.mediawiki.org/wiki/User:Dnessett/Parser_Tests/Code_Coverage

I would appreciate it if someone familiar with the parser would at least glance 
at the per file statistics for a sanity check. Some things that worry me are:

* parserTests seems to visit Special:Nuke. Does this make sense?

* Only about 72% of Parser.php is exercised. Is this reasonable?

* Xdebug is reporting that the Parser only has 2975 lines of executable code. 
This contrasts with the report by Happy-melon that there are 11,000 lines of 
code in Parser.php. Are there really that many non-executable lines of code in 
the parser, or is Xdebug missing a whole bunch?


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Code Coverage MacGyver tools now on Mediawiki wiki

2009-07-31 Thread dan nessett

I have just posted the two MacGyver tools I used to generate the parserTests 
code coverage statistics. They are found at:

http://www.mediawiki.org/wiki/User:Dnessett/Parser_Tests/MacGyver_Tests


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Parser test result integration for CodeReview coming

2009-07-31 Thread dan nessett

Great news!

--- On Fri, 7/31/09, Brion Vibber  wrote:

> From: Brion Vibber 
> Subject: [Wikitech-l] Parser test result integration for CodeReview coming
> To: "Wikimedia developers" 
> Date: Friday, July 31, 2009, 11:04 AM
> Just a quick heads-up:
> 
> I spent a little time this week hacking up integration of
> test data for 
> the CodeReview extension, and the ability for ParserTests
> to upload its 
> invocation data there.
> 
> Once configured live, parser tests will be automatically
> run on all 
> checkins to MediaWiki trunk and the results will be visible
> on the list 
> & detail views at http://www.mediawiki.org/wiki/Special:Code/MediaWiki
> 
> This should help in identifying regressions. :)
> 
> The system allows for data on multiple test suites to be
> sent in, so we 
> can rig up similar tests for the unit test suites (if they
> get fixed up 
> to actually work right!) and client-side Selenium-based
> tests we've been 
> talking about but have yet to implement.
> 
> If we don't have time to set it up today, the parser test
> integration 
> should be ready by early next week.
> 
> -- brion
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Statistics now on MediaWiki page

2009-08-01 Thread dan nessett

Thanks. I must admit Xdebug has a pretty strange definition of executable code. 
For example, variable initializations are not counted.

--- On Sat, 8/1/09, Happy-melon  wrote:

> From: Happy-melon 
> Subject: Re: [Wikitech-l] Statistics now on MediaWiki page
> To: wikitech-l@lists.wikimedia.org
> Date: Saturday, August 1, 2009, 3:44 PM
> This is something of a hyperbole,
> it's true; my apologies.  Parser.php 
> itself has ~5,200 lines of code (total, including
> comments); combined with 
> the preprocessors (Preprocessor_DOM.php and
> Preprocessor_Hash.php, ~1,500 
> and ~1,600 lines), CoreParserFunctions.php (~650 lines),
> and the rest of the 
> 'parser' related files in the /parser directory (~300 lines
> each), you get 
> around 11,000.  This is total lines, including
> comments.  ~3,000 executable 
> lines in Parser.php sounds plausible.
> 
> --HM
> 
> "dan nessett" 
> wrote in message 
> news:459215.97119...@web32507.mail.mud.yahoo.com...
> >
> > I am not finished with the analysis (MacGyver) tool,
> but I thought I would 
> > put up what I have so far on the MediaWiki site. I
> have created a web page 
> > in my user space for the Parser Test code coverage
> analysis -
> >
> > http://www.mediawiki.org/wiki/User:Dnessett/Parser_Tests/Code_Coverage
> >
> > I would appreciate it if someone familiar with the
> parser would at least 
> > glance at the per file statistics for a sanity check.
> Some things that 
> > worry me are:
> >
> > * parserTests seems to visit Special:Nuke. Does this
> make sense?
> >
> > * Only about 72% of Parser.php is exercised. Is this
> reasonable?
> >
> > * Xdebug is reporting that the Parser only has 2975
> lines of executable 
> > code. This contrasts to the report by Happy-Mellon
> that there is 11,000 
> > lines of code in Parser.php. Are there really that
> many non-executable 
> > lines of code in the parser or is Xdebug missing a
> whole bunch? 
> 
> 
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Is there any evidence that the tests in phase3/maintenance/Tests ever worked?

2009-08-03 Thread dan nessett

I am working on the tests in ../phase3/maintenance/tests. I have found 2 
problems (one of which may be only locally relevant). When I follow the logic 
initiated in run-tests.php (which according to the target in the Makefile seems 
to be the initiator of the tests) there are a lot of includes, but nothing in 
these appears to lead to anything that might start the tests.

I am beginning to suspect that these tests never worked. However, I am open to 
correction. Has anyone ever run these tests successfully?


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Is there any evidence that the tests in phase3/maintenance/Tests ever worked? (errata)

2009-08-03 Thread dan nessett

That is, the tests in ../phase3/tests.

--- On Mon, 8/3/09, dan nessett  wrote:

> From: dan nessett 
> Subject: [Wikitech-l] Is there any evidence that the tests in 
> phase3/maintenance/Tests ever worked?
> To: wikitech-l@lists.wikimedia.org
> Date: Monday, August 3, 2009, 11:02 AM
> 
> I am working on the tests in ../phase3/maintenance/tests. I
> have found 2 problems (one of which may be only locally
> relevant). When I follow the logic initiated in
> run-tests.php (which according to the target in the Makefile
> seems to be the initiator of the tests) there are a lot of
> includes, but nothing in these appears to lead to anything
> that might start the tests.
> 
> I am beginning to suspect that these tests never worked.
> However, I am open to correction. Has anyone ever run these
> tests successfully?
> 
> 
>       
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Is there any evidence that the tests in phase3/maintenance/Tests ever worked?

2009-08-03 Thread dan nessett

"make test" just uses run-test.php as a command for the all target. However, I 
have them running (although some fail).

--- On Mon, 8/3/09, Brion Vibber  wrote:

> From: Brion Vibber 
> Subject: Re: [Wikitech-l] Is there any evidence that the tests in 
> phase3/maintenance/Tests ever worked?
> To: "Wikimedia developers" 
> Date: Monday, August 3, 2009, 1:13 PM
> On 8/3/09 11:02 AM, dan nessett
> wrote:
> >
> > I am working on the tests in
> ../phase3/maintenance/tests. I have
> > found 2 problems (one of which may be only locally
> relevant). When I
> > follow the logic initiated in run-tests.php (which
> according to the
> > target in the Makefile seems to be the initiator of
> the tests) there
> > are a lot of includes, but nothing in these appears to
> lead to
> > anything that might start the tests.
> >
> > I am beginning to suspect that these tests never
> worked. However, I
> > am open to correction. Has anyone ever run these tests
> successfully?
> 
> make test
> 
> (Note that they may not work _currently_ as there's been a
> lot of 
> bitrot, and there's an external dependency on PHPUnit which
> may have 
> undergone a lot of development since it was last touched.)
> 
> -- brion
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Is there any evidence that the tests in phase3/maintenance/Tests ever worked? (errata)

2009-08-03 Thread dan nessett

Thanks. It turns out (something I discovered through a series of odd 
coincidences) that TextUI/Command.php used to execute 
PHPUnit_TextUI_Command::main at the bottom. This was removed some time ago so 
the PHPUnit_TextUI_Command class could be extended (by including the file for 
that purpose).
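
The practical consequence for run-tests.php is that requiring the file no 
longer runs anything by itself; the entry point has to be invoked explicitly, 
roughly like this (a sketch; the real script's argument handling may differ):

    require_once 'PHPUnit/TextUI/Command.php';

    // Newer PHPUnit releases no longer call main() at the bottom of Command.php,
    // so the test runner has to invoke the entry point itself.
    PHPUnit_TextUI_Command::main();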

--- On Mon, 8/3/09, Alexandre Emsenhuber  wrote:

> From: Alexandre Emsenhuber 
> Subject: Re: [Wikitech-l] Is there any evidence that the tests in 
> phase3/maintenance/Tests ever worked? (errata)
> To: "Wikimedia developers" 
> Date: Monday, August 3, 2009, 1:42 PM
> The code that starts the tests is in
> PHPUnit (which is required to run  
> these tests) and is triggered by the line "require(
> 'PHPUnit/TextUI/ 
> Command.php' );" in run-tests.php.
> 
> Cheers!
> Alexandre Emsenhuber
> 
> Le 3 août 09 à 20:06, dan nessett a écrit :
> 
> >
> > That is, the tests in ../phase3/tests.
> >
> > --- On Mon, 8/3/09, dan nessett 
> wrote:
> >
> >> From: dan nessett 
> >> Subject: [Wikitech-l] Is there any evidence that
> the tests in  
> >> phase3/maintenance/Tests ever worked?
> >> To: wikitech-l@lists.wikimedia.org
> >> Date: Monday, August 3, 2009, 11:02 AM
> >>
> >> I am working on the tests in
> ../phase3/maintenance/tests. I
> >> have found 2 problems (one of which may be only
> locally
> >> relevant). When I follow the logic initiated in
> >> run-tests.php (which according to the target in
> the Makefile
> >> seems to be the initiator of the tests) there are
> a lot of
> >> includes, but nothing in these appears to lead to
> anything
> >> that might start the tests.
> >>
> >> I am beginning to suspect that these tests never
> worked.
> >> However, I am open to correction. Has anyone ever
> run these
> >> tests successfully?
> >>
> >>
> >>
> >>
> >> ___
> >> Wikitech-l mailing list
> >> Wikitech-l@lists.wikimedia.org
> >> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >>
> >
> >
> >
> >
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] getThumbVirtualUrl() behavior

2009-08-04 Thread dan nessett

After fixing some bugs in /tests/run-tests.php, it now works. There are 7 PHPUnit 
tests in /tests/: 4 pass and 3 have various failures. I am working on 
LocalFileTest.php. It returns:

There was 1 failure:

1) testGetThumbVirtualUrl(LocalFileTest)
Failed asserting that two strings are equal.
expected string 
difference  <  x???>
got string  

This problem arises in the function getThumbVirtualUrl() in 
/phase3/includes/filerepo/File.php in the statement:

$path = $this->repo->getVirtualUrl() . '/thumb/' . $this->getUrlRel();

Obviously, the function is coded to return something different from what the test 
expects. The problem is, I don't know if this is a bug in the test or in the 
code.

I have read the README file in the /filerepo/ directory and searched Bugzilla 
for getThumbVirtualUrl(). The former didn't help with this problem and the 
latter returned nothing.

Correcting this problem is easy. Either change what the test expects or change 
the function to return what the test expects. I strongly suspect the correct 
action is the former, but I have no idea what the code in /filerepo/ does. So, 
I wonder if someone might either confirm or deny my strong suspicion? [Since 
Tim Starling wrote the README file, perhaps he might comment.]


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] getThumbVirtualUrl() behavior

2009-08-04 Thread dan nessett

Great and thanks! After I fix this I will have all of the tests in /tests/ 
working (it turns out there are 6, not 7, tests - one of the .php files only held 
an abstract class).

--- On Tue, 8/4/09, Brion Vibber  wrote:

> From: Brion Vibber 
> Subject: Re: [Wikitech-l] getThumbVirtualUrl() behavior
> To: "Wikimedia developers" 
> Date: Tuesday, August 4, 2009, 4:51 PM
> On 8/4/09 11:22 AM, dan nessett
> wrote:
> > 1) testGetThumbVirtualUrl(LocalFileTest) Failed
> asserting that two
> > strings are equal. expected
> > string
> difference<
> > x???> got
> string
> [snip]
> > Obviously, the function is coded to return something
> different than
> > the test expects. The problem is, I don't know if this
> is a bug in
> > the test or in the code.
> 
> Looks like an old test that needs to be updated. IIRC Tim
> recently 
> changed the system to use a separate virtual path for
> thumbs so we can 
> easily configure the system to store thumbs in a different
> path (and 
> hence different file server) from the primary media files.
> 
> -- brion
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] PHPUnit tests now fixed

2009-08-06 Thread dan nessett
I have fixed 5 bugs in /tests/ and added one feature to run-tests.php (a 
--runall option so testers can run the PHPUnit tests without using make - 
although make test still works). With these changes all of the tests in /tests/ 
now work. A unified diff patch is attached to bug ticket 20077.

I had to make an architectural decision when enhancing run-tests.php with the 
--runall option. The bug ticket describes this decision and suggests two other 
ways to achieve the same objective. I chose the approach implemented by the 
patch because it required no changes to the directory structure of /tests/. 
However, I actually prefer the second possibility. So, if senior developers 
could look at the bug ticket description and give me some feedback (especially 
if they also think the second option is better), that would be great.

I also would appreciate some feedback on the following question. One of the 
tests referenced the global variables $wgDBadminname and $wgDBadminuser. When I 
ran the configuration script during MediaWiki installation on my machine, the 
generated LocalSettings.php file defined the globals $wgDBname and $wgDBuser. So, 
I changed the test to use these variables rather than the 'admin' versions. 
However, I don't remember if the script gave me a choice to use the 'admin' 
versions or not. Also, if the configuration script has changed, then some 
installations may use the 'admin' versions and some may not. In either case, I 
would have to modify the bug fix to accept both types of global variable. If 
someone would fill me in, I can make any required changes to the bug fix.
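
One way the fix could accept both forms is a simple fallback; this is only a 
sketch of the idea, not the patch attached to the ticket:

$testDBuser = isset( $wgDBadminuser ) ? $wgDBadminuser : $wgDBuser;
$testDBname = isset( $wgDBadminname ) ? $wgDBadminname : $wgDBname;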


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] How do I get $wgLocalisationCacheConf in scope for getLocalisationCache()?

2009-08-06 Thread dan nessett
I am working on the tests in /t/. One, Revision.t, attempts to create a 
Language object and croaks in getLocalisationCache(), which is called by the 
Language class constructor. The problem is $wgLocalisationCacheConf is 
undefined, but referenced. When I Googled $wgLocalisationCacheConf I got the 
page:

http://www.mediawiki.org/wiki/Manual:$wgLocalisationCacheConf

It states that this global was introduced in 1.16.0. It isn't in my 
LocalSettings.php file (understandable, since I ran the install script on a 
version of MW prior to r52503, where the variable was introduced). I'm not 
sure how the localization cache works, so would someone let me know what I have 
to do to get this variable in the scope of getLocalisationCache()?

Thanks.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] How do I get $wgLocalisationCacheConf in scope for getLocalisationCache()?

2009-08-06 Thread dan nessett
Never mind. I found it. It is in DefaultSettings.php.

--- On Thu, 8/6/09, dan nessett  wrote:

> From: dan nessett 
> Subject: [Wikitech-l] How do I get $wgLocalisationCacheConf in scope for 
> getLocalisationCache()?
> To: wikitech-l@lists.wikimedia.org
> Date: Thursday, August 6, 2009, 2:46 PM
> I am working on the tests in /t/.
> One, Revision.t, attempts to create a Language object and
> croaks in getLocalisationCache(), which is called by the
> Language class constructor. The problem is
> $wgLocalisationCacheConf is undefined, but referenced. When
> I Googled $wgLocalisationCacheConf I got the page:
> 
> http://www.mediawiki.org/wiki/Manual:$wgLocalisationCacheConf
> 
> It states that this global is introduced in 1.16.0. It
> isn't in my LocalSettings.php file (understandable, since I
> ran the install script on a version of MW prior to r52503,
> in which the variable is introduced). I'm not sure how the
> localization cache works, so would someone let me know what
> I have to do to get this variable in the scope of
> getLocalisationCache()?
> 
> Thanks.
> 
> 
>       
> 
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] tests in /phase3/t/inc/ now work. Patch attached to bug ticket 20112

2009-08-07 Thread dan nessett
I have fixed 8 bugs (some fairly trivial) in /phase3/t/ so all of the tests in 
/t/inc/ now work. I have opened a bug ticket (20112) and attached a patch that 
fixes the bugs. I have not fixed problems with the tests in /t/maint/ because: 
1) they seem to test OS functionality, not the MW software, and 2) they are 
written in Perl, not PHP, and my IDE (NetBeans 6.7.1) doesn't support Perl. 
Perhaps someone who has an IDE with Perl support will develop an interest in 
fixing them.

This completes the task Brion asked me to take on. What happens next is up to 
the developer community. I have some ideas about test harness architecture (not 
radical ideas, but more than just fixing dings in the software) that I will run 
up the flagpole in subsequent posts. So far, only a few have responded to my 
posts (admittedly, some of them were a bit silly, such as the one asking where 
$wgLocalisationCacheConf is defined), but it seems few developers are 
interested in MW QA. Well, that isn't exactly unusual. Features are always more 
interesting than code stability. But the history of software is littered with 
products possessing really neat features that eventually failed because 
developers did not take care of quality.

Enough preaching.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] A potential land mine

2009-08-10 Thread dan nessett
For various reasons I have noticed that several files independently compute the 
value of $IP. For example, maintenance/commandLine.inc and includes/WebStart.php 
both calculate its value. One would expect this value to be computed in one 
place only and used globally. The logical place is LocalSettings.php.

Sprinkling the computation of $IP all over the place is just looking for 
trouble. At some point the code used to make this computation may diverge and 
bugs will creep in. My first reaction to this problem was to wonder 
why these files didn't just require LocalSettings.php. However, since it is a 
fairly complex file, doing so might not be desirable because: 1) there are 
values in LocalSettings.php that would interfere with values in these files, 2) 
there is some ordering problem that might occur, or 3) there are performance 
considerations.

If it isn't possible and/or desirable to replace the distributed computation of 
$IP with require_once('LocalSettings.php'), then I suggest breaking 
LocalSettings into two parts, say LocalSettingsCore.php and 
LocalSettingsNonCore.php (I am sure someone can come up with better names). 
LocalSettingsCore.php would contain only those calculations and definitions 
that do not interfere with the core MW files. LocalSettingsNonCore.php would 
contain everything else now in LocalSettings.php. Obviously, the first 
candidate for inclusion in LocalSettingsCore.php is the computation of $IP. 
Once such a separation is carried out, files like maintenance/commandLine.inc and 
includes/WebStart.php can require_once('LocalSettingsCore.php') instead of 
independently computing $IP.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Mon, 8/10/09, Tim Starling  wrote:

> No, the reason is because LocalSettings.php is in the
> directory
> pointed to by $IP, so you have to work out what $IP is
> before you can
> include it.
> 
> Web entry points need to locate WebStart.php, and command
> line scripts
> need to locate maintenance/commandLine.inc. Then either of
> those two
> entry scripts can locate the rest of MediaWiki.

Fair enough, but consider the following.

I did a global search over the phase3 directory and got these hits for the 
string "$IP =" :

.../phase3/config/index.php:30:  $IP = dirname( dirname( __FILE__ ) );
.../phase3/config/index.php:1876:   \$IP = MW_INSTALL_PATH;
.../phase3/config/index.php:1878:   \$IP = dirname( __FILE__ );
.../phase3/includes/WebStart.php:61:  $IP = getenv( 'MW_INSTALL_PATH' );
.../phase3/includes/WebStart.php:63:$IP = realpath( '.' );
.../phase3/js2/mwEmbed/php/noMediaWikiConfig.php:11:  $IP = realpath(dirname(__FILE__).'/../');
.../phase3/LocalSettings.php:17:$IP = MW_INSTALL_PATH;
.../phase3/LocalSettings.php:19:$IP = dirname( __FILE__ );
.../phase3/maintenance/language/validate.php:16:  $IP = dirname( __FILE__ ) . '/../..';
.../phase3/maintenance/Maintenance.php:336: $IP = strval( getenv('MW_INSTALL_PATH') ) !== ''

So, it appears that $IP computation is occurring in 6 files. In addition, $IP 
is adjusted by the relative place of the file in the MW source tree (e.g., in 
validate.php, $IP is set to dirname( __FILE__ ) . '/../..';) Adjusting paths 
according to where a file exists in a source tree is fraught with danger. If 
you ever move the file for some reason, the code breaks.

Why not isolate at least $IP computation in a single function? (Perhaps 
breaking up LocalSettings.php into two parts is overkill, but certainly 
cleaning up $IP computation isn't too radical an idea.) Of course, there is the 
problem of locating the file of the function that does this. One approach is to 
recognize that php.ini already requires potential modification for MW use. 
Specifically, the path to PEAR must occur in 'include_path'. It would be a 
simple matter to add another search directory for locating the initialization 
code.

Or maybe there is a better way of locating MW initialization code. How it's done 
is an open issue. I am simply arguing that computing the value of $IP by 
relying on the position of the PHP file in a source tree is not good software 
architecture. Experience shows that this kind of thing almost always leads to 
bugs.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Chad  wrote:

> 
> The problem with "putting it in a single function" is you
> still have
> to find where that function is to begin with (I'd assume
> either
> GlobalFunctions or install-utils would define this). At
> which point
> you're back to the original problem: defining $IP yourself
> so you
> can find this.
> 
> Yes, we should probably do this all a little more cleanly
> (at least
> one unified style would be nice), but constructing it
> manually is
> pretty much a given for anything trying to find an entry
> point, as
> Tim points out.

I'm probably missing something since I have only been programming in PHP for 
about 4 weeks, but if you set include_path in php.ini so it includes the root 
of the MW tree, put a PHP file at that level that has a function (or a method 
in a class) that returns the MW root path, wouldn't that work? For example, if 
you modified include_path in php.ini to include the MW root, added 
the file MWInit.php to that directory, and in MWInit.php put a function 
MWInit() that computes and returns $IP, wouldn't that eliminate the necessity 
of manually figuring out the value of $IP [each place where you now compute $IP 
could require_once('MWInit.php') and call MWInit()]?

Of course, it may be considered dangerous for the MW installation software to 
fool around with php.ini. But, even if you require setting the MW root manually 
in php.ini::include_path (abusing the php namespace disambiguation operator 
here) that would be an improvement. You should only have to do this once and 
could upgrade MW without disturbing this binding.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
I just tried the following two approaches. I created the file MWInit.php in my 
.../phase3 directory with the following code:
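
(What follows is a minimal sketch, assuming the file simply defines an MWInit() 
function that returns the directory it lives in:)

<?php
// MWInit.php at the MW root (.../phase3): MWInit() returns the root path.
function MWInit() {
	return dirname( __FILE__ );
}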



This worked. $IP was set to the correct value.

However, Dmitriy Sintsov makes the excellent point that not everyone can change 
php.ini. So, I then put MWInit.php into the PEAR directory and modified it as 
follows:

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
Brion Vibber  wrote:

> Unless there's some reason to do otherwise, I'd recommend dropping the 
> $IP from the autogen'd LocalSettings.php and pulling in 
> DefaultSettings.php from the level above. (Keeping in mind that we 
> should retain compat with existing LocalSettings.php files that are 
> still pulling it.)

Better, but what about /config/index.php, noMediaWikiConfig.php, validate.php 
and Maintenance.php? Having only two different places where $IP is computed is 
a definite improvement (assuming you fix the 4 files just mentioned), but it 
still means the code in WebStart.php and commandLine.inc is file position 
dependent. If this is the best that can be done, then that is that. However, it 
would really be better if all position dependent code could be eliminated.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Aryeh Gregor  wrote:

> Then you're doing almost exactly the same thing we're doing
> now,
> except with MWInit.php instead of LocalSettings.php. 
> $IP is normally
> set in LocalSettings.php for most page views.  Some
> places still must
> figure it out independently in either case, e.g.,
> config/index.php.
> 

I want to avoid seeming obsessed by this issue, but file position dependent 
code is a significant generator of bugs in other software. The difference 
between MWInit.php and LocalSettings.php is that if you get the former into a 
directory that PHP uses for includes, you have a way of getting the root path 
of MW without the caller knowing anything about the relative structure of the 
code distribution tree. As you pointed out previously, the reason you 
need to compute $IP before including/requiring LocalSettings.php is that you 
don't know where it is.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Brion Vibber  wrote:

> I'm not sure there's a compelling reason to even have $IP
> set in 
> LocalSettings.php anymore; the base include path should
> probably be 
> autodetected in all cases, which is already being done in
> WebStart.php 
> and commandLine.inc, the web and CLI initialization
> includes based on 
> their locations in the file tree.

I started this thread because two of the fixes in the patch for bug ticket 
20112 (those for Database.t and Global.t) move the require of LocalSettings.php 
before the require of AutoLoader.php. This is necessary because AutoLoader.php 
eventually executes: 
require_once("$IP/js2/mwEmbed/php/jsAutoloadLocalClasses.php").

This is a perfect example of how file position dependent code can introduce 
bugs. If $IP computation is eliminated from LocalSettings.php, then both of 
these tests will once again fail. The tests in phase3/t/inc are not executed as 
the result of a web request or through a command line execution path that 
includes maintenance/commandLine.inc. They normally are executed by typing at a 
terminal: "prove t/inc -r" or, e.g., "prove t/inc/Global.t". "prove" is a TAP 
protocol consumer that digests and displays the results of the tests, which are 
TAP protocol producers.

So, eliminating $IP computation from LocalSettings.php would require the 
development of new code for these tests. That would mean there would be 4 
places where $IP is computed: WebStart.php, commandLine.inc, /config/index.php and 
the t tests. Not good.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Brion Vibber  wrote:

> These scripts should simply be updated to initialize the
> framework 
> properly instead of trying to half-ass it and load
> individual classes.

I agree, which is why I am trying to figure out how to consolidate the tests in 
/tests/ and /t/. [The example I gave was to illustrate how bugs can pop up when 
you use code that depends on the position of files in a distribution tree, not 
because I think the tests are in good shape. The bug fixes are only intended to 
make these tests available again, not to declare them finished.]

I could use some help on test system architecture - you do wear the systems 
architect hat :-). It doesn't seem right to use WebStart.php to initialize the 
tests. For one thing, WebStart starts up profiling, which doesn't seem relevant 
for a test. That leaves Command.inc. However, the t tests stream TAP protocol 
text to "prove", a PERL script that normally runs them. I have no way of 
running these tests through prove because my IDE doesn't support PERL, so if I 
changed the tests to require Command.inc, it would be hard to debug any 
problems.

I researched other TAP consumers and didn't find anything other than prove. I 
was hoping that one written in PHP existed, but I haven't found anything. So, I 
am in kind of a bind. We could just dump the t tests, but at least one 
(Parser.t, which runs parserTests) is pretty useful. Furthermore, TAP has an 
IETF standardization effort and phpunit can produce TAP output. This suggests 
that TAP is a good candidate for test system infrastructure.

So, what are your thoughts on this?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, lee worden  wrote:

> Placing it in the include path could make it hard to run
> more than one version of the MW code on the same server,
> since both would probably find the same file and one of them
> would likely end up using the other one's $IP.

That is a good point. However, I don't think it is insurmountable. Callers to 
MWInit() could pass their path (which they can get by calling "realpath( '.' )"). 
In a multi-MW environment MWInit() could disambiguate the root path by 
searching the provided path against those of all installed root paths.

> Another way of putting it is, is it really better to
> hard-code the absolute position of the MW root rather than
> its position relative to the files in it?

Well, I think so. Hardcoding the absolute position of the MW root occurs at 
install time. Using file position dependent code is a development time 
dependency. Files are not moved around once installed, but could be moved 
around during the development process. So, bugs that depend on file position 
are normally not caused by installation activity, but rather by development 
activity.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Trevor Parscal  wrote:

Not to worry. I've given up on this issue, at least for the moment.

Dan

> What seems to be being discussed here are particular
> offensive areas of 
> MediaWiki - however if you really get to know MediaWiki you
> will likely 
> find tons of these things everywhere... So are we proposing
> a specific 
> change that will provide a solution to a problem or just
> nit-picking?
> 
> I ask cause I'm wondering if I should ignore this thread or
> not (an 
> others are probably wondering the same) - and I'm sort of
> feeling like 
> this is becoming one of those threads where the people
> discussing things 
> spend more time and effort battling each other than just
> fixing the code.



  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Chad  wrote:

> To be perfectly honest, I'm of the opinion that tests/ and
> t/
> should be scrapped and it should all be done over,
> properly.
> 
> What we need is an easy and straightforward way to write
> test cases, so people are encouraged to write them. Right
> now, nobody understands wtf is going on in tests/ and t/,
> so
> they get ignored and the /vast/ majority of the code isn't
> tested.
> 
> What we need is something similar to parser tests, where
> it's
> absurdly easy to pop new tests in with little to no coding
> required at all. Also, extensions having the ability to
> inject
> their own tests into the framework is a must IMHO.

There is a way to add tests easily, but it requires some community discipline. 
PHPUnit has a --skeleton option (actually two variations of it) that 
automatically generates unit tests (see 
http://www.phpunit.de/manual/current/en/skeleton-generator.html). All 
developers have to do is add @assert annotations (which look like ordinary 
comments) to their code and run phpunit with the --skeleton flag. If you want 
even more hand-holding, NetBeans will do it for you.
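
For example (adapted from the Calculator example in the PHPUnit manual; the 
class and the assertion values here are illustrative only):

class Calculator {
	/**
	 * @assert (0, 0) == 0
	 * @assert (1, 1) == 2
	 */
	public function add( $a, $b ) {
		return $a + $b;
	}
}

Running phpunit with the --skeleton-test variation against this class generates 
a CalculatorTest with one test method per @assert line.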

This is all wonderful, but there are problems:

* Who is going to go back and create all of the assertions for existing code? 
Not me (at least not alone). This is too big a job for one person. In order for 
this to work, you need buy in from at least a reasonable number of developers. 
So far, the number of developers expressing an interest in code quality and 
testing is pretty small.

* What motivation is there for those creating new code to do the work to add 
assertions with good code coverage? So far I haven't seen anything in the MW 
code development process that would encourage a developer to do this. Without 
some carrots (and maybe a few sticks) this approach has failure written all 
over it.

* Even if we get a bunch of Unit tests, how are they integrated into a useful 
whole? That requires some decisions on test infrastructure. This thread begins 
the discussion on that, but it really needs a lot more.

* MW has a culture problem. Up to this point people just sling code into trunk 
and think they are done. As far as I can tell, very few feel they have any 
responsibility for ensuring their code won't break the product. [Perhaps I am 
being unkind on this. Without any testing tools available, it is quite possible 
that developers want to ensure the quality of their code, but don't have the 
means of doing so.]

I realize these observations may make me unpopular. However, someone has to 
make them. If everyone just gets mad, it doesn't solve the problem. It just 
pushes it out to a time when it is even more serious.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Alexandre Emsenhuber  wrote:

> +1, we could maybe write our own test system that can be
> based on the  
> new Maintenance class, since we already some test scripts
> in / 
> maintenance/ (cdb-test.php, fuzz-tester.php,
> parserTests.php,  
> preprocessorFuzzTest.php and syntaxChecker.php). Porting
> tests such as  
> parser to PHPUnit is a pain, since it has no native way to
> write a  
> test suite that has a "unknow" number of tests to run.

Rewriting parserTests as PHPUnit tests would be a horrible waste of time. 
parserTests works and it provides a reasonable service. One problem, however, 
is how do we fix the parser? It is a pretty complex body of code (when 
I ran a MacGyver test on parserTests, 141 files were accessed, most of which 
are associated with the parser). I have been thinking about this, but those 
thoughts are not yet sufficiently clear to make public.

On the other hand, taking the parserTests route and doing all of our own test 
infrastructure would also be a good deal of work. There are tools out there 
(PHPUnit and prove) that are useful. In my view creating a test infrastructure 
from scratch would unnecessarily waste time and resources.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
Brion Vibber  wrote:
 
> Starting about a week ago, parser test results are now included in code 
> review listings for development trunk:
> 
> http://www.mediawiki.org/w/index.php?title=Special:Code/MediaWiki/path&path=%2Ftrunk%2Fphase3
> 
> Regressions are now quickly noted and fixed up within a few revisions -- 
> something which didn't happen when they were only being run manually by 
> a few folks here and there.
> 
> Is this the sort of thing you're thinking of?
> 
> -- brion

Yes, absolutely. Visibility is critical for action, and running parserTests on each 
revision in the development trunk is a good first step toward improving code 
quality.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Robert Leverington  wrote:

> Please can you properly break your lines in e-mail though,
> to 73(?)
> characters per a line - should be possible to specify this
> in your
> client.

I'm using the web interface provided by yahoo. If you can
point me in the right direction for setting up yahoo to do
this I'll be happy to (I've done this manually on this
message).

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Alexandre Emsenhuber  wrote:

> My idea is the move the "backend" of ParserTest
> (parserTests.txt file  
> processing, result reporting, ...) and the TestRecorder
> stuff to  
> something like a MediaWikiTests class that extends
> Maintenance and  
> move the rest in a file in /maintenance/tests/ (to be
> created) and re- 
> use the backend to have files that have the same format,
> but test's  
> input could be raw PHP code (a bit like PHP core's tests)
> with a new  
> config variable that's like $wgParserTestFiles but for
> these kind of  
> test. This mostly concerns the actual tests in /tests/ and
> /t/inc/).  
> We can also port cdb-test.php, fuzz-tester.php,  
> preprocessorFuzzTest.php and syntaxChecker.php to this new
> system and  
> then create a script in /maintenance/ that runs all the
> tests in / 
> maintenance/tests/. This allows to also upload all the
> tests to  
> CodeReview, not only the parser tests. A benefit is that we
> can get  
> ride of /tests/ and /t/.

One of the beauties of open source development is that whoever does the work 
wins the prize. Of course, I am sure senior developers have discretionary power 
over what goes into a release and what does not. But if you want to do the work, 
go for it (says the guy [me] who just joined the group).

However, I think you should consider the following:

* parserTests is designed to test parsing, which is predominantly a text 
manipulation task. Other parts of MW do not necessarily provide text processing 
markers that can be used to decide whether they are working correctly or not.

* Sometimes testing the action of a module requires determining whether a 
series of actions produces the correct behavior. As far as I am aware, 
parserTests has no facility to tie together a set of actions into a single test.

For example, consider two MW files in phase3/includes: 1) AutoLoader.php and 2) 
Hooks.php. In AutoLoader, the method loadAllExtensions() loads all extensions 
specified in $wgAutoloadClasses. It takes no parameters and has no return 
value. It simply walks through the entries specified in $wgAutoloadClasses and 
if the class specified as the key exists, executes a require of the file 
specified in the value. I don't see how you would specify a test of this method 
using the syntax of parserTests.txt.

In Hooks.php, there is a single function wfRunHooks(). It looks up hooks 
previously set and calls user code for them. It throws exceptions in certain 
error conditions and testing it requires setting a hook and seeing if it is 
called appropriately. I don't see how you could describe this behavior with 
parserTests.txt syntax.

Of course, you could create new syntax and behavior for the parserTests 
software components, but that would duplicate a lot of work that other test 
infrastructure has already done. For example, see the set of assertions for PHPUnit 
(http://www.phpunit.de/manual/2.3/en/api.html#api.assert.tables.assertions).
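
To make the contrast concrete, a PHPUnit-style test of wfRunHooks() might look 
roughly like the sketch below (the hook name and handler are made up for 
illustration, and it assumes MediaWiki has already been initialized):

$wgDummyHookCalled = false;

function wfDummyHookHandler() {
	global $wgDummyHookCalled;
	$wgDummyHookCalled = true;
	return true;   // returning true lets any later handlers for the hook run
}

class RunHooksTest extends PHPUnit_Framework_TestCase {
	public function testHandlerIsCalled() {
		global $wgHooks, $wgDummyHookCalled;
		$wgHooks['TestSuiteDummyHook'][] = 'wfDummyHookHandler';
		$this->assertTrue( wfRunHooks( 'TestSuiteDummyHook' ) );
		$this->assertTrue( $wgDummyHookCalled );
	}
}

Expressing the same behavior in parserTests.txt syntax would require inventing 
new machinery.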

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MW test infrastructure architecture

2009-08-11 Thread dan nessett
--- On Tue, 8/11/09, Chad  wrote:

> > Neither of these need to be tested directly.  If
> AutoLoader breaks,
> > then some other class won't load, and the tests for
> that class will
> > fail.  If wfRunHooks() fails, then some hook won't
> work, and any test
> > of that hook will fail.
> >

I will pass on commenting about these for the moment because:

> > I think what's needed for decent test usage for
> MediaWiki is:
> >
> > 1) Some test suite is picked.  PHPUnit is probably
> fine, if it runs
> > out of the box and doesn't need some extra module to
> be installed.
> >
> > 2) The test suite is integrated into CodeReview with
> nag e-mails for
> > broken tests.
> >
> > 3) A moderate number of tests are written for the test
> suite.
> > Existing parser tests could be autoconverted,
> possibly.  Maybe someone
> > paid could be assigned to spend a day or two on this.
> >
> > 4) A new policy requires that everyone write tests for
> all their bug
> > fixes and enhancements.  Commits that don't add
> enough tests will be
> > flagged as fixme, and reverted if not fixed.
> >
> > (4) is critical here.

All good stuff, especially (4) - applause :-D.

> > While we're at it, it would be nice if we instituted
> some other
> > iron-clad policies.  Here's a proposal:
> >
> > * All new functions (including private helper
> functions, functions in
> > JavaScript includes, whatever) must have
> function-level documentation
> > that explains the purpose of the function and
> describes its
> > parameters.  The documentation must be enough that no
> MediaWiki
> > developer should ever have to read the function's
> source code to be
> > able to use it correctly.  Exception: if a method is
> overridden which
> > is already documented in the base class, it doesn't
> need to be
> > documented again in the derived class, since the
> semantics should be
> > the same.
> > * All new classes must have high-level documentation
> that explains
> > their purpose and structure, and how you should use
> them.  The
> > documentation must be sufficient that any MediaWiki
> developer could
> > understand why they might want to use the class in
> another file, and
> > how they could do so, without reading any of the
> source code.  Of
> > course, developers would still have to read the
> function documentation
> > to learn how to use specific functions.  There are no
> exceptions, but
> > a derived class might only need very brief
> documentation.
> > * All new config variables must have documentation
> explaining what
> > they do in terms understandable to end-users.  They
> must describe what
> > values are accepted, and if the values are complicated
> (like
> > associative arrays), must provide at least one example
> that can be
> > copy-pasted.  There are no exceptions.
> > * If any code is changed so as to make a comment
> incorrect, the
> > comment must be updated to match.  There are no
> exceptions.
> >
> > Or whatever.  We have *way* too few high-level
> comments in our code.
> > We have entire files -- added quite recently, mind
> you, by established
> > developers -- that have no or almost no documentation
> on either the
> > class or function level.  We can really do better
> than this!  If we
> > had a clear list of requirements for comments in new
> code, we could
> > start fixme'ing commits that don't have adequate
> comments.  I think
> > that would be enough to get people to add sufficient
> comments, for the
> > most part.  Without clear rules, though, backed up by
> the threat of
> > reverting, I don't think the situation will improve
> here.

Wonderful stuff - more applause.

> > (Wow, this kind of turned into a thread hijack.  :D)
> >

Who cares. It needs to be said.

> 
> On the whole "new code" front. Can we all /please/ remember
> that
> we're writing PHP5 here. Visibility on all new functions
> and variables
> should also be a must.
> 

OK, I must admit I didn't understand that, probably because I'm new
to PHP. Can you make this more explicit?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Assumptions for development machines (w.r.t. to test infrastructure)

2009-08-12 Thread dan nessett
I'm starting a new thread because I noticed my news reader has glued together 
messages with the title "A potential land mine" and "MW test infrastructure 
architecture," which may confuse someone coming into the discussion late. Also, 
the previous thread has branched into several topics and I want to concentrate 
on only one, specifically what can we assume about the system environment for a 
test infrastructure? These assumptions have direct impact on what test harness 
we use. Let me start by stating what I think can be assumed. Then people can 
tell me I am full of beans, add to the assumptions, subtract from them, etc.

The first thing I would assume is that a development system is less constrained 
than a production system in what can and what cannot be installed. For example, 
people shot down my proposal to automatically discover the MW root directory 
because some production systems have administrators without root access, 
without the ability to load code into the PEAR directory, etc. Fair enough 
(although minimizing the number of places where $IP is computed is still 
important). However, if you are doing MW development, then I think that 
constraint is too stringent. You need to run the tests in /tests/PHPUnitTests, 
at least one of which requires the use of $wgDBadminuser and 
$wgDBadminpassword, something a non-privileged user would not be allowed to do.

If a developer has more system privileges than a production admin, to what 
extent? Can we assume he has root access? If not, can we assume he can get 
someone who does to do things like install PHPUnit? Can we assume the 
availability of Perl, or should we only assume PHP? Can we assume *AMP (e.g., 
LAMP, WAMP, MAMP, XAMPP)? Can we assume PEAR? Can the developer install into 
PEAR?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Assumptions for development machines (w.r.t. to test infrastructure)

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Chad  wrote:

> 
> Tests should run in a vanilla install, with minimal
> dependency on
> external stuff. PHPUnit
> (or whatever framework we use) would be considered an
> acceptable dependency for
> test suites. If PHPUnit isn't available (ie: already
> installed and in
> the include path), then
> we should bail out nicely.
> 
> In general, external dependencies should be used as
> seamlessly as
> possible, with minimal
> involvement from the end-user. A good example is
> wikidiff2...we load
> it if it exists, we fail
> back to PHP diff mode if we can't use it.

OK, graceful backout if an external dependency fails and minimal dependency on 
external stuff. So far we have two categories of proposals for test 
infrastructure: 1) build it all ourselves, and 2) use some external stuff. How 
do we decide which to do?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Problem with phpunit --skeleton

2009-08-12 Thread dan nessett
I have been playing around with phpunit, in particular its facility for 
generating tests from existing PHP code. You do this by processing a suitably 
annotated (using /* @assert ... */ comment lines) version of the file with 
phpunit --skeleton. Unfortunately, the --skeleton option assumes the file 
contains a class with the same name as the file.

For example, I tried annotating Hook.php and then processing it with phpunit 
--skeleton. It didn't work. phpunit reported:

Could not find class "Hook" in ".../phase3/tests/Hook.php"

(where I have replaced my path to MW with "...").

Since there are many MW files that do not contain classes of the same name as 
the file (or even classes at all), the --skeleton option is probably not very 
useful for MW PHPUnit test generation.

You can still use PHPUnit by inserting the appropriate test code manually, but 
that forfeits the goal of automatically generating tests with the same 
convenience as adding lines to parserTests.txt.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Assumptions for development machines (w.r.t. to test infrastructure)

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Brion Vibber  wrote:

> We *already* automatically discover the MW root directory.

Yes, you're right. I should have said automatically discover the MW root 
directory without using file position dependent code.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Assumptions for development machines (w.r.t. to test infrastructure)

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Brion Vibber  wrote:

> 
> The suggestions were for explicit manual configuration, not
> 
> autodiscovery. Autodiscovery means *not* having to set
> anything. :)

I am insane to keep this going, but the proposal I made did not require doing 
anything manually (other than running the install script, which you have to do 
anyway). The install script knows (or can find out) where the MW root is 
located. It could then either: 1) rewrite php.ini to concatenate the location 
of MWInit.php at the end of include_path, or 2) plop MWInit.php into a 
directory already searched by PHP for includes/requires (e.g., the PEAR 
directory).

I gave up on the proposal when people pointed out that MW admins may not have 
the privileges that would allow the install script to do either.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l



Re: [Wikitech-l] Assumptions for development machines (w.r.t. to test infrastructure)

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Roan Kattouw  wrote:

> On shared hosting, both are impossible. MediaWiki currently
> works with
> minimal write access requirements (only the config/
> directory for the
> installer and the images/ directory if you want uploads),
> and we'd
> like to keep it that way for people who are condemned to
> shared
> hosting.

Which is why I wrote in the message that is the subject of your reply:

"I gave up on the proposal when people pointed out that MW admins may not have 
the privileges that would allow the install script to do either."

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] A comprehensive parser regression test

2009-08-12 Thread dan nessett
I am investigating how to write a comprehensive parser regression test. What I 
mean by this is something you wouldn't normally run frequently, but rather 
something that we could use to get past the "known to fail" tests now disabled. 
The problem is no one understands the parser well enough to have confidence 
that if you fix one of these tests you will not break something else.

So, I thought, how about using the guts of DumpHTML to create a comprehensive 
parser regression test. The idea is to have two versions of phase3 + 
extensions, one without the change you make to the parser to fix a 
known-to-fail test (call this Base) and one with the change (call this 
Current). Modify DumpHTML to first visit a page through Base, saving the HTML 
then visit the same page through Current and compare the two results. Do this 
for every page in the database. If there are no differences, the change in 
Current works.

Sitting here I can see the eyeballs of various developers bulging from their 
faces. "What?" they say. "If you ran this test on, for example, Wikipedia, it 
could take days to complete." Well, that is one of the things I want to find 
out. The key to making this test useful is getting the code in the loop 
(rendering the page twice and testing the results for equality) very efficient. 
I may not have the skills to do this, but I can at least develop an upper bound 
on the time it would take to run such a test.
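
For reference, the loop itself is nothing more than the following sketch, where 
renderWithBase() and renderWithCurrent() are hypothetical stand-ins for the two 
DumpHTML passes and $titles holds every page title in the test database:

$failures = array();
foreach ( $titles as $title ) {
	$baseHtml    = renderWithBase( $title );
	$currentHtml = renderWithCurrent( $title );
	if ( $baseHtml !== $currentHtml ) {
		$failures[] = $title;
	}
}
printf( "%d of %d pages differ\n", count( $failures ), count( $titles ) );

The open question is how cheap each of those two render calls can be made.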

A comprehensive parser regression test would be valuable for:

* fixing the known-to-fail tests.
* testing any new parser that some courageous developer decides to code.
* testing major releases before they are released.
* catching bugs that aren't found by the current parserTest tests.
* other things I haven't thought of.

Of course, you wouldn't run this thing nightly or, perhaps, even weekly. Maybe 
once a month would be enough to ensure the parser hasn't regressed out of sight.



  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A comprehensive parser regression test

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, dan nessett  wrote:

> "If you ran this test on, for example, Wikipedia, 

Of course, what I meant is run the test on the Wikipedia database, not on the 
live system.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] More fun and games with file position relative code

2009-08-12 Thread dan nessett
So, I checked out a copy of phase3 and extensions to start investigating the 
feasibility of a comprehensive parser regression test. After 
getting the working copy downloaded, I do what I usually do - blow away the 
extensions directory stub that comes with phase3 and soft link the downloaded 
copy of extensions in its place. I then began familiarizing myself with 
DumpHTML by starting it up in a debugger. Guess what happened.

It fell over. Why? Because DumpHTML is yet another software module that 
computes the value of $IP. So what? Well, DumpHTML.php is located in 
../extensions/DumpHTML. At lines 57-59 it executes:

$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) {
	$IP = dirname(__FILE__).'/../..';
}

This works on a deployed version of MW, since the extensions directory is 
embedded in /phase3. But, in a development version, where /extensions is a 
separate subdirectory, "./../.." does not get you to phase3, it gets you to MW 
root. So, when you execute the next line:

require_once( $IP."/maintenance/commandLine.inc" );

DumpHTML fails.

Of course, since I am going to change DumpHTML anyway, I can move it to 
/phase3/maintenance and change the '/../..' to '/..' and get on with it. But, 
for someone attempting to fix bugs in DumpHTML, code that relies on knowledge 
of where DumpHTML.php sits in the distribution tree is an issue.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A comprehensive parser regression test

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Roan Kattouw  wrote:

> I read this paragraph first, then read the paragraph above
> and
> couldn't help saying "WHAT?!?". Using a huge set of pages
> is a poor
> replacement for decent tests.

I am not proposing that the CPRT be a substitute for "decent tests." We still 
need a good set of tests for the whole MW product (not just the parser). Nor 
would I recommend making a change to the parser and then immediately running 
the CPRT. Any developer who isn't masochistic would first run the existing 
parserTests and ensure it passes. Then, you probably want to run the modified 
DumpHTML against a small random selection of pages in the WP DB. Only if it 
passes those tests would you then run the CPRT for final assurance. 

The CPRT I am proposing is about as good a test of the parser as I can think 
of. If a change to the parser passes it using the Wikipedia database (currently 
5 GB), then I would say for all practical purposes the changes made to the 
parser do not regress it.

> Also, how would you handle
> intentional
> changes to the parser output, especially when they're
> non-trivial?

I don't understand this point. Would you elaborate?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] More fun and games with file position relative code

2009-08-12 Thread dan nessett
Chad  wrote:
 
> DumpHTML will not be moved back to maintenance in the repo, it was
> already removed from maintenance and made into an extension. Issues
> with it as an extension should be fixed, but it should not be encouraged
> to go back into core.

What I meant was that I can move the DumpHTML-based CPRT code to maintenance/ in 
my working copy and work on it there. Whether this code is simply a MacGyver test 
or something else is completely up in the air.

> Also, on a meta notecan you possibly confine all of your testing comments
> to a single thread? We don't need a new thread for each problem you find :)
> 

My apologies.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] More fun and games with file position relative code

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Brion Vibber  wrote:

> Your setup is incorrect: the extensions folder *always*
> goes inside the 
> MediaWiki root dir. Always.
> 

Sorry, my inexperience with Subversion led me in the wrong direction. I didn't 
realize I could check out phase3 then point Subversion to the extensions 
directory in phase3 to check out extensions. I thought Subversion would get 
confused, but I just tried it and it worked :-). At least Subversion performed 
the checkout. I haven't tried doing an update.

I just (case-sensitively) searched the extensions directory for the string "$IP 
=" and found 32 files that compute $IP on their own. How about creating a 
standard bit of code that extensions and other modules can copy and use to 
figure out the MW root? For example, it is very unlikely that the name of the 
first-level directory (i.e., phase3) will change. The code could call dirname( __FILE__ 
) and then search backward from the end of the pathname until it finds phase3. It 
then knows that the prefix up to and including phase3 is the MW root, as in the 
sketch below.
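
A sketch of that helper (the function name is made up; this is not existing 
code):

function wfGuessMWRoot() {
	$dir = dirname( __FILE__ );
	while ( $dir !== dirname( $dir ) ) {   // stop at the filesystem root
		if ( basename( $dir ) === 'phase3' ) {
			return $dir;                   // everything up to and including phase3
		}
		$dir = dirname( $dir );
	}
	return false;   // not found; fall back to MW_INSTALL_PATH or a relative guess
}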

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A comprehensive parser regression test

2009-08-12 Thread dan nessett
--- On Wed, 8/12/09, Tim Landscheidt  wrote:

> I think though that more people
> would read and embrace your
> thoughts if you would find
> a more concise way to put
>  them across :-).

Mea Culpa. I'll shut up for a while.


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] identifier collisions

2009-08-14 Thread dan nessett
One of the first problems to solve in developing the proposed CPRT is how to 
call a function with the same name in two different MW distributions. I can 
think of 3 ways: 1) use the namespace facility of PHP 5.3, 2) use threads, or 
3) use separate processes and IPC. Since MAMP supports none of these, I am off 
building an AMP installation from scratch.

Some questions:

* Are there other ways to solve the identifier collision problem?

* Are some of the options I mention unsuitable for a MW CPRT, e.g., currently 
MW only assumes PHP 5.0 and requiring 5.3 may unacceptably constrain the user 
base?

* Is MW thread safe?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] identifier collisions

2009-08-14 Thread dan nessett
--- On Fri, 8/14/09, Dmitriy Sintsov  wrote:

> I remember some time ago I was strongly discouraged to
> compile and run 
> PHP threaded MPM for apache because some functions or
> libraries of PHP 
> itself were not thread safe.

OK, this and Chad's comment suggest the option is multi-process/IPC. One more 
question:

* Can we assume PHP has pcntl support, or will the test require startup from a 
shell script?
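
If pcntl can be assumed, the plumbing would be something like the sketch below 
(just the fork-and-pipe skeleton; what each half actually renders is omitted, 
and the bail-out covers builds without pcntl):

if ( !function_exists( 'pcntl_fork' ) ) {
	fwrite( STDERR, "pcntl not available; start the two halves from a shell script\n" );
	exit( 1 );
}
$pair = stream_socket_pair( STREAM_PF_UNIX, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP );
$pid = pcntl_fork();
if ( $pid === 0 ) {                  // child: would load the "Base" install
	fclose( $pair[0] );
	fwrite( $pair[1], "HTML rendered by Base\n" );   // placeholder for real output
	exit( 0 );
}
fclose( $pair[1] );                  // parent: would load the "Current" install
$baseHtml = stream_get_contents( $pair[0] );
pcntl_waitpid( $pid, $status );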

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] identifier collisions

2009-08-14 Thread dan nessett
--- On Fri, 8/14/09, Dmitriy Sintsov  wrote:

> I remember some time ago I was strongly discouraged to
> compile and run 
> PHP threaded MPM for apache because some functions or
> libraries of PHP 
> itself were not thread safe.

While my machine was compiling AMP components, I thought about this a little 
bit. It seems weird that the implementation of a language intended to provide 
backend functionality for web servers isn't thread safe. Apache and other web 
server software must be heavily threaded in operation. How do they 
deal with using a non-thread-safe library? Each time a thread executes PHP code, 
does it have to grab a lock that protects the whole PHP library?

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] identifier collisions

2009-08-15 Thread dan nessett
--- On Fri, 8/14/09, Tim Starling  wrote:

> And please, spare us from your rant about how terrible this
> is. It's
> not PHP's fault that you don't know anything about it.

I'm sorry my questions make you angry. I don't recall ranting about PHP. 
Actually, I kind of like it. Lack of thread safety is an implementation problem 
not a problem with the language.

But, let's not dwell on your rant about my stupidity. Let's do something 
positive. You are an expert on the MW software and presumably PHP, Apache and 
MySQL. If you find it ridiculous that a newbie is driving the discussion about 
MW QA (I certainly do), pick up the ball and run with it. How would you fix the 
parser so all disabled tests in parserTests run? How would you build a test 
harness so developers can write unit tests for their bug fixes, feature 
additions and extensions? How would you integrate these unit tests into a good 
set of regression tests? How would you divide up the work so a set of 
developers can complete it in a reasonable amount of time? How do you propose 
achieving consensus on all of this?

On the other hand, maybe you would rather code than think strategically. Fine. 
Commit yourself to fixing the parser so all of the disabled tests run and also 
all or most of the pages on Wikipedia do not break and I will shut up about the 
CPRT. Commit yourself to creating a test harness that other developers can use 
to write unit tests and I will gladly stop writing emails about it. Commit 
yourself to developing the software that organizes the unit tests into a product 
regression test that developers can easily run and I will no longer bother you 
about MW QA.

My objective is a MW regression test suite that provides evidence that any 
extensions I write do not break the product. Once that objective is achieved, I 
will no longer infect your ears with dumb questions.

Dan 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] identifier collisions

2009-08-17 Thread dan nessett
--- On Mon, 8/17/09, Brion Vibber  wrote:

> Really though, this thread has gotten extremely unfocused;
> it's not  
> clear what's being proposed to begin with and we've
> wandered off to a  
> lot of confusion.

I'll take partial responsibility for the confusion. Like I said recently, I 
think it is pretty ridiculous that a newbie like me is pushing the discussion 
on MW QA. I am trying to learn the underlying technologies as fast as I can, 
but it is a steep learning curve. Also, let me reiterate. If someone more 
knowledgeable about these technologies than I is willing to step up and lead, I 
have no problem whatsoever following. On the other hand, I need a MW product 
regression test suite. If getting it means I have to expose my ignorance to an 
international audience, I'm willing to take that hit.

> We should probably start up a 'testing center' page on
> mediawiki.org  
> and begin by summarizing what we've already got -- then we
> can move on  
> to figuring out what else we need.
> 

I think this is a great idea.

> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> 


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] CPRT feasibility

2009-08-20 Thread dan nessett
I am looking into the feasibility of writing a comprehensive parser regression 
test (CPRT). Before writing code, I thought I would try to get some idea of how 
well such a tool would perform and what gotchas might pop up. An easy first 
step is to run dumpHTML and capture some data and statistics.

I tried to run the version of dumpHTML in r54724, but it failed. So, I went 
back to 1.14 and ran that version against a small personal wiki database I 
have. I did this to get an idea of what structures dumpHTML produces and also 
to get some performance data with which to do projections of runtime/resource 
usage.

I ran dumpHTML twice using the same MW version and same database. I then diff'd 
the two directories produced. One would expect no differences, but that 
expectation is wrong. I got a bunch of diffs of the following form (I have put 
a newline between the two file names to shorten the line length):

diff -r 
HTML_Dump/articles/d/n/e/User~Dnessett_Bref_Examples_Example1_Chapter_1_4083.html
 
HTML_Dump2/articles/d/n/e/User~Dnessett_Bref_Examples_Example1_Chapter_1_4083.html
77,78c77,78
< Post-expand include size: 16145/2097152 bytes
< Template argument size: 12139/2097152 bytes
---
> Post-expand include size: 16235/2097152 bytes
> Template argument size: 12151/2097152 bytes

I looked at one of the html files to see where these differences appear. They 
occur in an html comment:



Does anyone have an idea of what this is for? Is there any way to configure MW 
so it isn't produced?

I will post some performance data later.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] CPRT feasibility

2009-08-20 Thread dan nessett
--- On Thu, 8/20/09, Andrew Garrett  wrote:

> As the title implies, it is a performance limit report. You can remove
> it by changing the parser options passed to the parser. Look at the
> ParserOptions and Parser classes.

Thanks. It appears dumpHTML has no command-line option to turn off this report 
(the underlying parser option is mEnableLimitReport).
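
For what it's worth, here is a minimal sketch of how the report might be turned 
off in code, assuming the 1.14-era ParserOptions API (ParserOptions::newFromUser() 
and enableLimitReport()); I have not tried this inside dumpHTML itself:

// Illustrative sketch only -- not dumpHTML's actual code.
// $wikitext and $title stand in for whatever page is being rendered.
$popts = ParserOptions::newFromUser( $wgUser );
$popts->enableLimitReport( false );   // parser then omits the limit report comment
$out = $wgParser->parse( $wikitext, $title, $popts );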

A question to the developer community: is it better to change dumpHTML to 
accept a new option (to turn off the limit report), or to copy dumpHTML into a 
new CPRT extension and change the copy? I strongly feel that having two extensions with 
essentially the same functionality is bad practice. On the other hand, changing 
dumpHTML means it becomes dual purposed, which has the potential of making it 
big and ugly. One compromise position is to attempt to factor dumpHTML so that 
a core provides common functionality to two different upper layers. However, I 
don't know if that is acceptable practice for extensions.

A short-term fix is to pipe the output of dumpHTML through a filter that 
removes the limit report. That would allow developers to use dumpHTML (as a 
CPRT) fairly quickly to find and fix the known-to-fail parser bugs. The 
downside is that it may significantly degrade performance.
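
As a rough illustration, such a filter could be as simple as the following (the 
file name, usage, and matched strings are my own guesses, based on the diff I 
posted earlier):

<?php
// filterLimitReport.php -- sketch of a line filter that drops the limit
// report lines from a dumped page before diffing. Extend the pattern as
// other limit report fields turn up.
// Hypothetical usage: php filterLimitReport.php < page.html > page.clean.html
while ( ( $line = fgets( STDIN ) ) !== false ) {
	if ( preg_match( '/Post-expand include size|Template argument size/', $line ) ) {
		continue;   // skip limit report lines
	}
	echo $line;
}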

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-22 Thread dan nessett
--- On Fri, 8/21/09, Andrew Garrett  wrote:

> Yes, this is where we started because this is the status quo. What I
> was describing is how it's done now.

Is maintaining the status quo really desirable? Look at the extensions 
directory. It currently has ~400 extension sub-directories. If you wanted to 
reorganize it so there is some sort of logic to it (e.g., as a first cut, put 
command line extensions in one directory and hook-based extensions in another), 
you would have to change a lot of code so that the computation of $IP by some 
extensions remains correct.

How about this: when you run php from the command line, there is a -d flag 
defined as "foo[=bar] Define INI entry foo with value 'bar'". For those 
extensions that run from the command line, you could require callers to supply 
a value for the PHP "include_path". This value would be the include_path from 
php.ini with the path of a directory containing the simple MWInit.php file 
(provided in my message of Aug. 11, 9:57AM) appended. Command line extensions 
could then call MWInit() to get the MW root.

I just tried this and it worked. It would fix the problem for at least command 
line utilities.
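
Purely as illustration (I am not reproducing the MWInit.php from the Aug. 11 
message here, so the contents below are a reconstruction, and the paths and 
script name in the example command are made up), the helper could be as small 
as:

<?php
// MWInit.php -- sits in the MediaWiki root directory and is located through
// include_path. Reconstruction for illustration only.
function MWInit() {
	// This file lives in the MW root, so its directory is the root path.
	return dirname( __FILE__ );
}

A command line extension would then locate the root with something like:

require_once( 'MWInit.php' );   // found via -d include_path=...
$IP = MWInit();
require_once( "$IP/maintenance/commandLine.inc" );   // the usual CLI bootstrap in 1.14/1.15

and would be invoked as, e.g.:

php -d include_path=".:/usr/share/php:/path/to/mediawiki" myCliExtension.php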

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-22 Thread dan nessett
--- On Sat, 8/22/09, Platonides  wrote:

> How's that better than MW_CONFIG_PATH environment
> variable?

My understanding is that the administrators of certain installations cannot set 
environment variables (I am taking this on faith, because that seems like a 
very, very restricted site). What I suggested takes no work at all by installers 
or wiki administrators. MWInit.php (or whatever people want to name it) is part 
of the distribution. When you run a command line utility you have to be able to 
type something like "php <utility>.php". If you can do that, you should be 
able to type "php -d include_path=<value> <utility>.php".

If I am wrong and for some sites even that is not possible, then I am not sure 
how you would use command line utilities at all.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A potential land mine

2009-08-23 Thread dan nessett
--- On Sun, 8/23/09, Andrew Garrett  wrote:

> $ MW_INSTALL_PATH=/var/wiki/mediawiki php maintenance/update.php

I don't understand the point you are making. If an MW administrator can set 
environment variables, then, of course, what you suggest works. However, 
Brion mentions in his Tues, Aug 11, 10:09 email that not every MW installation 
admin can set environment variables, and Aryeh states in his Tues, Aug. 11, 
10:09am message that some MW administrators only have FTP access to the 
installations they manage. So, as I understand it, some administrators cannot 
use the tactic you describe.

An important issue is whether these admins have access to command line 
utilities at all. If not, then the use of file-position-dependent code in 
command line utilities can be eliminated by substituting:

$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) die();

for (taken from dumpHTML.php):

$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) {
	$IP = dirname(__FILE__).'/../..';
}

This works if only admins who can set environment variables can execute MW 
command line utilities.
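
If that substitution is made, a slightly friendlier variant might print a hint 
before exiting, e.g.:

$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) {
	die( "Please set MW_INSTALL_PATH to the MediaWiki root directory.\n" );
}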

If there are administrators who can execute command lines but cannot set 
environment variables (e.g., they are confined to a special shell), then 
what I suggested in the previous email eliminates the file position dependency. 
That is, the command line would be:

php -d include_path=<include_path from php.ini>:<directory containing MWInit.php> <utility>.php

If an admin can execute "php <utility>.php", he should be able to execute the 
prior command.

Dan


  

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

