Hoi,
Suppose that someone fixes a test that has always been failing... one of
those "known to fail". It makes no difference, right? Giving them the
status of pass is imho dead wrong because they should not fail in the first
place.. now a status of KNOWN TO FAIL makes sense.
Thanks,
GerardM
On Tue, Jul 21, 2009 at 3:33 AM, Kwan Ting Chan wrote:
> I know you want to avoid using the command line, but in this case it's really
> much simpler (and the only feasible choice) to search the internet or ask around
> for the right commands and issue them on the command line. It's only going to be
> one line
On Tue, Jul 21, 2009 at 7:19 AM, Gerard
Meijssen wrote:
> Suppose that someone fixes a test that has always been failing... one of
> those "known to fail". It makes no difference, right?
Difference in what sense? It means we have one less failing test
reported, presumably.
> Giving them the
> s
Hoi,
There is no point having a perfect score when it is actually a lie. It seems
to me that Brion is against the removal of these tests because he wants them
to pass. Having a third state of "known to fail" makes sense, just changing
them to pass makes it necessary to add a "citation needed" becau
On Tue, Jul 21, 2009 at 9:56 AM, Gerard
Meijssen wrote:
> There is no point having a perfect score when it is actually a lie. It seems
> to me that Brion is against the removal of these tests because he wants them
> to pass. Having a third state of "known to fail" makes sense, just changing
> them
Aryeh Gregor wrote:
On Tue, Jul 21, 2009 at 9:56 AM, Gerard
Meijssen wrote:
There is no point having a perfect score when it is actually a lie. It seems
to me that Brion is against the removal of these tests because he wants them
to pass. Having a third state of "known to fail" makes sense, just
On Tue, Jul 21, 2009 at 7:43 AM, Kwan Ting Chan wrote:
> Aryeh Gregor wrote:
>>
>> On Tue, Jul 21, 2009 at 9:56 AM, Gerard
>> Meijssen wrote:
>>>
>>> There is no point having a perfect score when it is actually a lie. It
>>> seems
>>> to me that Brion is against the removal of these tests because h
On Mon, Jul 20, 2009 at 11:33 PM, Kwan Ting Chan wrote:
> Chengbin Zheng wrote:
>
>>
>> Thank you for dropping by and sharing this information with us Tomasz!
>>
>> It is good just knowing that it is in the queue. Have you considered
>> making
>> a version of static HTML Wikipedia where there are
> Actually, I do have to learn everything. I know absolutely
> nothing about
> HTML and all the stuff (Maybe I will when I take the computer
> science course
> in grade 10). Think of it this way, you have a radioactive
> material decay
> problem, where you want to find out how much mass is left
Hoi,
That is exactly the problem. You report that they pass when in reality they
still fail. Someone should change them to pass, i.e. fix the software.
Thanks,
GerardM
2009/7/21 Aryeh Gregor
>
> On Tue, Jul 21, 2009 at 9:56 AM, Gerard
> Meijssen wrote:
> > There is no point having a perfect sco
On Tue, Jul 21, 2009 at 3:21 PM, Gerard
Meijssen wrote:
> Hoi,
> That is exactly the problem. You report that they pass when in reality they
> still fail. Someone should change them to pass, i.e. fix the software.
I think the appropriate English reply in this context would be "No
s**t, Sherlock"...
The change I made was to add a "flipresult" option that simply turns a success
into a failure and a failure into a success. This is what I understood I was
asked to do. On the plus side, this approach also allows the addition of parser
tests that are supposed to fail (not just have always faile
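For context, a minimal sketch of what such a flipresult check might look like
inside the test runner's result handling (the option and variable names here are
illustrative, not the actual patch):

// Hypothetical: invert the outcome when the test carries a "flipresult" option,
// so a test that is expected to fail counts as passing while it still fails.
$passed = ( $actualOutput === $expectedOutput );
if ( isset( $opts['flipresult'] ) ) {
    $passed = !$passed;
}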
On Tue, Jul 21, 2009 at 9:37 AM, Lane, Ryan
wrote:
> > Actually, I do have to learn everything. I know absolutely
> > nothing about
> > HTML and all the stuff (Maybe I will when I take the computer
> > science course
> > in grade 10). Think of it this way, you have a radioactive
> > material decay
On Tue, Jul 21, 2009 at 11:18 AM, Chengbin Zheng wrote:
>
>
> On Tue, Jul 21, 2009 at 9:37 AM, Lane, Ryan > wrote:
>
>> > Actually, I do have to learn everything. I know absolutely
>> > nothing about
>> > HTML and all the stuff (Maybe I will when I take the computer
>> > science course
>> > in gr
On Wed, Jul 22, 2009 at 12:45 AM, dan nessett wrote:
>
> I just looked at the code and it shouldn't be hard to add a knowntofail
> option that acts like flipresult and then add a new category of test result
> that specifies how many known to fail results occurred. However, one issue is
> whether
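A rough sketch of how a separate tally for that third category might be kept
alongside ordinary passes and failures (all names here are made up for
illustration, not taken from parserTests.php):

if ( isset( $opts['knowntofail'] ) ) {
    if ( $passed ) {
        $this->unexpectedPasses++;   // a known-to-fail test now passes: worth flagging
    } else {
        $this->knownFailures++;      // still failing, but reported in its own column
    }
} elseif ( $passed ) {
    $this->successes++;
} else {
    $this->failures++;
}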
Can someone please help? I have only been working on this code for 5 days and I
do not yet understand it. It turns out that the redirect happens in
InitializeSpecialCases. The reason I get into a redirect loop is that the code
continually falls into the else if statement that has the following condi
Hi!
What's up with the translation extension? The messages of the new
Vector skin were translated into Hungarian weeks ago on translatewiki,
but since then there has been no update and no translation extension to
transfer these strings :[ I don't want to create all these pages on the
Hungarian Wikipedia; it's unnecessary
On Tue, Jul 21, 2009 at 11:22 AM, Chengbin Zheng wrote:
> On a side note, if parsing the XML gets you the static HTML version of
> Wikipedia, why can't Wikimedia just parse it for us and save a lot of our
> time (parsing and learning), and use that as the static HTML dump version?
I'd assume it wa
On Tue, Jul 21, 2009 at 10:45 AM, dan nessett wrote:
> The change I made was to add a "flipresult" option that simply turns a
> success into a failure and a failure into a success. This is what I
> understood I was asked to do. On the plus side, this approach also allows the
> addition of parser
On Tue, Jul 21, 2009 at 10:51 AM, Aryeh Gregor <
simetrical+wikil...@gmail.com > wrote:
> But if we're going to keep the known-to-fail tests at all, it doesn't make
> a lot of sense to report
> them as passing when they're actually failing . . . if we do that we may as
> well just drop them.
Th
On Tue, Jul 21, 2009 at 12:47 PM, Aryeh Gregor <
simetrical+wikil...@gmail.com > wrote:
> On Tue, Jul 21, 2009 at 11:22 AM, Chengbin Zheng
> wrote:
> > On a side note, if parsing the XML gets you the static HTML version of
> > Wikipedia, why can't Wikimedia just parse it for us and save a lot of o
On Tue, Jul 21, 2009 at 12:34 PM, dan nessett wrote:
> else if( $action == 'view' && !$request->wasPosted() &&
> ( !isset($this->GET['title']) ||
> $title->getPrefixedDBKey() != $this->GET['title'] ) &&
> !count( array_diff( array_keys( $this->GET ), array( 'action', 'title' ) ) ) )
On Tue, Jul 21, 2009 at 12:57 PM, Brian wrote:
> The tests have never passed - they should be commented out for usability
> reasons.
Well, I have no objections, but apparently that's not acceptable. An
"expected fail" flag would be about as usable.
> And ideally there would be a post-commit hook
On Tue, Jul 21, 2009 at 1:08 PM, Chengbin Zheng wrote:
> Wouldn't parsing it be faster than actually creating that many HTMLs?
Parsing it *is* creating the HTML files. That's what "parsing" means
in MediaWiki, converting wikitext to HTML. It's kind of a misnomer,
admittedly.
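As a rough illustration of that conversion when driven from PHP (the Parser API
details vary between MediaWiki versions, so treat this purely as a sketch):

// Turn a snippet of wikitext into HTML with the core parser.
$parser  = new Parser();
$options = new ParserOptions();
$output  = $parser->parse( "'''Hello''' world", Title::newMainPage(), $options );
echo $output->getText();   // roughly: <p><b>Hello</b> world</p>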
On Tue, Jul 21, 2009 at 1:11 PM, Aryeh Gregor
> wrote:
> On Tue, Jul 21, 2009 at 1:08 PM, Chengbin Zheng
> wrote:
> > Wouldn't parsing it be faster than actually creating that many HTMLs?
>
> Parsing it *is* creating the HTML files. That's what "parsing" means
> in MediaWiki, converting wikitext
> No, I know what parsing means. Even if it takes 2 days to parse them,
> wouldn't it be faster than to actually create a static HTML dump the
> traditional way?
>
The content is wiki-text. It has to be parsed to be turned into HTML. There
isn't a more traditional way, because there is no other w
On Tue, Jul 21, 2009 at 1:17 PM, Chengbin Zheng wrote:
> No, I know what parsing means. Even if it takes 2 days to parse them,
> wouldn't it be faster than to actually create a static HTML dump the
> traditional way?
I don't know. I can only speculate. Whatever it is, it will take
some attention
Hoi,
TheDevilOnline and I have worked on the "LocalisationUpdate" extension. This
extension allows for a daily update of messages that are waiting in SVN to
be updated on the Wiki. It works. I have been testing it for a couple of
weeks. There are a few issues with the code .. minor issues.. I have
On Tue, Jul 21, 2009 at 7:17 PM, Chengbin Zheng wrote:
...
>
> No, I know what parsing means. Even if it takes 2 days to parse them,
> wouldn't it be faster than to actually create a static HTML dump the
> traditional way?
>
> If it is not, then what is the difficulty of making static HTML dumps? I
>> wouldn't it be faster than to actually create a static HTML dump the
>> traditional way?
> The content is wiki-text. It has to be parsed to be turned into HTML. There
> isn't a more traditional way, because there is no other way.
Wouldn't it be possible to dump the parser cache instead of dumpi
On Tue, Jul 21, 2009 at 1:42 PM, Tei wrote:
> On Tue, Jul 21, 2009 at 7:17 PM, Chengbin Zheng
> wrote:
> ...
>>
>> No, I know what parsing means. Even if it takes 2 days to parse them,
>> wouldn't it be faster than to actually create a static HTML dump the
>> traditional way?
>>
>> If it is not, t
Thanks! Here are the values that cause entry into the else-if statement:
$targetUrl === 'http://localhost/MediawikiTest/Latest Trunk
Version/phase3/index.php/Main_Page'
$action === 'view'
$request->data ===
$this->GET ===
$title->mDbkeyform === 'Main_Page'
_SERVER[REQUEST_METHOD] === GET
Not sure the post-commit hook running the parser is a good idea. The software
could have been broken by a previous committer. From that point on parserTests
will report errors until the problem is fixed, so committers will just learn to
ignore the message.
--- On Tue, 7/21/09, Brian wrote:
>
On Tue, Jul 21, 2009 at 12:05 PM, dan nessett wrote:
>
> Not sure the post-commit hook running the parser is a good idea. The
> software could have been broken by a previous committer. From that point on
> parserTests will report errors until the problem is fixed, so committers
> will just learn
On Tue, Jul 21, 2009 at 1:49 PM, Chad wrote:
> On Tue, Jul 21, 2009 at 1:42 PM, Tei wrote:
> > On Tue, Jul 21, 2009 at 7:17 PM, Chengbin Zheng
> wrote:
> > ...
> >>
> >> No, I know what parsing means. Even if it takes 2 days to parse them,
> >> wouldn't it be faster than to actually create a stat
Better ideas. Another possibility is to run parser tests (and any other
regression tests that might exist) every 24 hours against all revisions committed
into trunk since the last run. Post the results and keep track of the number of
bugs each committer has introduced into the code base for the pa
On Tue, Jul 21, 2009 at 2:20 PM, Chengbin Zheng wrote:
>
>
> On Tue, Jul 21, 2009 at 1:49 PM, Chad wrote:
>
>> On Tue, Jul 21, 2009 at 1:42 PM, Tei wrote:
>> > On Tue, Jul 21, 2009 at 7:17 PM, Chengbin Zheng
>> wrote:
>> > ...
>> >>
>> >> No, I know what parsing means. Even if it takes 2 days to
I think I have found the problem causing the continuous redirect on my test
wiki. However, since I am new at this, I want to run this past someone with a
better understanding of the code to make sure I have it right.
At line 145 of WebRequest::extractTitle() [r53551] is the following test:
if(
On Tue, Jul 21, 2009 at 8:20 PM, Chengbin Zheng wrote:
..
> Why would you download Wikipedia? Internet is so readily available, and the
> online version has images.
It obviously doesn't make much sense for end users.
It has been discussed before anyway..
http://www.mail-archive.com/search?q=wikitec
At a guess, you haven't set up short URLs properly. If you have no active
apache rewrite rules, then that is a URL that will trigger the internal
redirect. However, if your MW installation *thinks* that it's supposed to
serve short URLs in that format, then it will happily form the redirect UR
dan nessett wrote:
> The URL to my testwiki is:
>
> '/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page'
>
> This is the value in $path. However, the value in $base is:
>
> '/MediawikiTest/Latest Trunk Version/phase3/index.php/'
>
> So, the call to substr fails and the code that
--HM--
Thanks. I have never really gotten around to setting up short URLs.
But see my latest message. I think there is a bug in
WebRequest::extractTitle(). If this turns out to be correct, I will either need
to locally patch my installations or change the location of the test phase3
director
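To illustrate the mismatch in question, using the $path and $base values reported
earlier in the thread (a sketch, not a quote from WebRequest.php):

// The request path is URL-encoded but the configured base is not,
// so the prefix comparison can never match.
$path = '/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page';
$base = '/MediawikiTest/Latest Trunk Version/phase3/index.php/';
var_dump( substr( $path, 0, strlen( $base ) ) == $base );   // bool(false)
// One possible local workaround: compare against the encoded form,
// e.g. str_replace( ' ', '%20', $base ).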
On Tue, Jul 21, 2009 at 6:05 PM, dan nessett wrote:
> Not sure the post-commit hook running the parser is a good idea. The software
> could have been broken by a previous committer. From that point on
> parserTests will report errors until the problem is fixed, so committers will
> just learn to
If I am reading the report correctly, fixing this bug is going to require much
more than a change here or there. It may be better to simply require that URLs
to wikis contain no blanks. In any case, the quickest way for me to fix the
problem is to change the name of the directory where I store
dan nessett wrote:
> If I am reading the report correctly, fixing this bug is going to
> require much more than a change here or there. It may be better to
> simply require that URLs to wikis contain no blanks. In any case, the
> quickest way for me to fix the problem is to change the name of the
>
This raises the issue that started this going in the first place. Suppose I
make the necessary modifications so "$wgScriptPath gets properly escaped when
initialized." There could be all kinds of dependencies on this global scattered
all over the place. Just running parserTests and getting them
Aryeh Gregor wrote:
> I'm CCing wikitech-l here for broader input, since I do think
> Wikipedia would be interested in adopting this but I can't really
> speak for Wikipedia myself. The history of this discussion can be
> found in the archives:
>
> http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.