Hi,
While running your changed tests on Windows, I think I found new failures.
Being a bot and all I'm not very good at pattern recognition, so I might be
wrong, but could you please double-check?
Full results can be found at
http://testbot.winehq.org/JobDetails.pl?Key=15809
Your paranoid
Hi,
While running your changed tests on Windows, I think I found new failures.
Being a bot and all I'm not very good at pattern recognition, so I might be
wrong, but could you please double-check?
Full results can be found at
http://testbot.winehq.org/JobDetails.pl?Key=10676
Your paranoid
Hello,
looking at http://test.winehq.org, I noticed that the file
ddraw/tests/ddrawmodes does not exist, so it is never tested.
Does anyone have an explanation for this?
A+
David
On 10/24/2010 09:23 AM, paulo lesgaz wrote:
Hello,
looking at http://test.winehq.org, I noticed that the file
ddraw/tests/ddrawmodes does not exist, so it is never tested.
What about:
http://test.winehq.org/data/tests/ddraw:ddrawmodes.html
or am I missing something?
--
Cheers,
Paul.
but nothing at
http://test.winehq.org/data/8c5718ec9d0613be7208e1ceaecac0e7434c4cf5/index_2003.html
for instance.
A+
David
From: Paul Vriens paul.vriens.w...@gmail.com
To: paulo lesgaz jeremielapu...@yahoo.fr
Cc: wine-devel@winehq.org
Sent: Sun 24
On 10/24/2010 01:51 PM, paulo lesgaz wrote:
but nothing at
http://test.winehq.org/data/8c5718ec9d0613be7208e1ceaecac0e7434c4cf5/index_2003.html
for instance.
(And to list, I should remember to hit the correct button).
That's because there are no failures/skips for ddraw:ddrawmodes
one to more easily debug the program. You won't have to go back and rerun
the program yourself in Wine; you will already be given the output, so you
can see what failed and where.
3) I'd also like to implement a feature to have the output more in the
format of test.winehq.org. I am thinking that it would be easier to have it
split into two scripts, one client-side, one server
Yeah, I'm just about ready. I was going to wait until I had the initial
buildbot system ready to go, but there's no reason I couldn't put my
proposal on wine-devel tonight. Quick summary of my proposal:
- Buildbot system to automate testing of compilation and application
installation / execution
On Tue, Mar 9, 2010 at 6:53 AM, Seth Shelnutt shelnu...@gmail.com wrote:
It's harder than it sounds to actually get
a few useful app tests written. Plus there'll
be some work writing up and fixing the bugs
it uncovers.
So then do you think it's a worthy proposal to just work on wpkg?
That is why I think working on tying it into test.winehq.org is more
worthy.
I still think tying appinstall and patchwatcher together would be
beneficial and allow people submitting patches to see where
regressions pop up in software, and to see what new software works with
each submitted patch. Perhaps a project
Yeah, I noticed. That's why I was planning to include various build
parameters based on time constraints.
Alright, after reviewing everything, I think that tying appinstall into
test.winehq.org along with adding more (if not most) of the wpkg scripts to
appinstall would be most beneficial in terms of conformance and regression
testing. Converting the wpkg scripts to appinstall doesn't seem like
On Thu, Apr 9, 2009 at 3:41 AM, Paul Vriens paul.vriens.w...@gmail.com wrote:
2) we don't have a stable stable (sic) of machines
running the tests
So what constitutes a stable machine?
I wasn't complaining about unstable machines;
I was complaining that the set of machines reporting test
results varies.
- Dan
Ok, got it. So how do we come up with a stable set?
Or would you just like to have some kind of script that fetches the reports
for a fixed set of boxes (configurable) and generates pages like
test.winehq.org?
--
Cheers,
Paul.
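For illustration, a minimal sketch in Python of what such a fetch script could look like. The box names and the report URL layout below are assumptions made up for the example, not the real test.winehq.org structure:

import urllib.request

# Hypothetical sketch: pull winetest reports for a fixed, configurable
# set of boxes, so every displayed build covers the same machines.
BOXES = ["98se_JosephBooker", "wine_2000_0.9.43"]  # configurable set (names assumed)
BASE = "http://test.winehq.org/data"

def fetch_reports(build_id):
    """Return {box: report text or None} for one build."""
    reports = {}
    for box in BOXES:
        url = "%s/%s/%s" % (BASE, build_id, box)  # URL layout assumed
        try:
            with urllib.request.urlopen(url) as resp:
                reports[box] = resp.read().decode("utf-8", "replace")
        except OSError:
            reports[box] = None  # box didn't report: mark the gap, don't drop it
    return reports

Keeping the set fixed means a missing box shows up as an explicit gap instead of silently shifting the error counts.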
computers for which results are not
consistently available throughout the time range
being displayed
A quick and dirty way would be to take the same
(This was last discussed in February, e.g.
http://www.winehq.org/pipermail/wine-devel/2009-February/073060.html )
The results on test.winehq.org seem more variable than one
would expect, which makes it harder to gauge wine's progress.
I can think of two sources of noise:
1) 32 and 64 bit results
On Thu, Apr 9, 2009 at 5:09 AM, Rob Shearman robertshear...@gmail.com wrote:
Removing these two sources of noise might be as simple as
1) omit 64 bit results, and
Not a bad idea, but I would suggest classifying these machines as a
different category ('other' for the moment)
On Thu, Apr 9, 2009 at 10:17 AM, Alexandre Julliard julli...@winehq.org wrote:
Having the 64-bit results in the same platform group as the 32-bit ones
is actually very helpful, it makes comparing them a lot easier. I don't
think we want to split them.
But suppressing or splitting the results from machines
that don't have a full set of results shouldn't hurt...
On Thu, Apr 9, 2009 at 10:50 AM, Paul Vriens paul.vriens.w...@gmail.com wrote:
But suppressing or splitting the results from machines
that don't have a full set of results shouldn't hurt...
How many runs should a box have before it's considered for the 'stable'
list? We have 2 months
The problem with the current arrangement is that when machines pop in and
out, any failures that are more likely on those machines also pop in and
out, so error counts fluctuate, which obscures the smaller changes due to
wine fixes or regressions.
I like the current output, and don't want to
On Thu, Feb 12, 2009 at 8:27 AM, Paul Vriens paul.vriens.w...@gmail.com wrote:
I do it for my own boxes (see attachment). The spikes (up and down) are mainly
when I didn't run the tests on all my boxes. But you can see the overall
trend.
Nice. :) What are you using to generate that?
--John
The front page of test.winehq.org shows statistics about failed tests,
but it doesn't seem to take into account the number of individual
tests that passed and failed, but rather the number of files that had
any failures.
So, for example, about a week ago I got a fix committed for some
failing mapi32
On Wed, Feb 11, 2009 at 3:58 PM, Juan Lang juan.l...@gmail.com wrote:
My own feeling is that there are far fewer failing tests now than
there used to be, and I'd sure like to see that reflected somewhere at
a quick glance. Thoughts?
Maybe a test.winehq.org/trends page showing some nice
James Hawkins trui...@gmail.com writes:
We should leave the failing files percentage up (note the name change)
and add a failing tests percentage next to it. The failing tests
percentage should be total_test_failures / total_tests_run.
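A small Python sketch of the two metrics side by side; the per-file input format and the numbers are invented stand-ins, purely for illustration:

def failing_files_pct(results):
    # results: {test file: (tests_run, failures)}
    failing = sum(1 for run, fail in results.values() if fail)
    return 100.0 * failing / len(results)

def failing_tests_pct(results):
    # the proposed total_test_failures / total_tests_run
    total_run = sum(run for run, fail in results.values())
    total_fail = sum(fail for run, fail in results.values())
    return 100.0 * total_fail / total_run

# One failure among 10,000 tests barely moves the per-test number,
# but still counts the whole file as failing (numbers invented):
example = {"mapi32:prop": (10000, 1), "crypt:crypt": (37, 0)}
# failing_files_pct(example) -> 50.0
# failing_tests_pct(example) -> ~0.01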
That's not a useful number, many files run a lot of tests, of which a
huge majority always succeeds. Having a single failure among 10,000
tests means that the test failed, and it's something bad that should be
taken care of. Showing that as a
Hi Alexandre,
These changes introduce 15 errors on all platforms.
--
Cheers,
Paul.
Jeff Zaroyko wrote:
This should clear up the remaining dsound test failures shown for the
Win98 and WinME test results.
Hi Jeff,
It would be nice if you could add the platforms as comments to the
different
On Saturday 29 November 2008 15:40:33 Paul Vriens wrote:
This also means that Paul Millar's winetest is no longer available.
Yup, this is true. After providing winetest.exe for (I think) a little over 4
years, quisquiliae is falling silent and WineHQ is picking up the baton for
winetest.exe.
Hi,
Test.winehq.org is the new home of the cross-compiled winetest.exe. Alexandre
put in some magic to create the new winetest.exe on winehq.
The new link:
http://test.winehq.org/builds/winetest-latest.exe
The link can also be found at the bottom of the test.winehq.org index page. I
already
Hi,
I was just reading a Dutch site where somebody mentioned helping Wine and
referred to:
http://test.winehq.com/data/062d61a5a4ba2d7972a0011387ceda64c79dd4e3/
(with a remark about possible reboots of your system).
The current page however doesn't have a link to the winetest executable. This
Dimi Paun wrote:
On Tue, 2008-05-27 at 07:20 -0700, Dan Kegel wrote:
Alexandre did an awesome job of improving the index page.
It's very polished and useful now.
http://test.winehq.org/data/
This is indeed very cool! I think we should link this
from somewhere on the Wiki or even WineHQ
Hi,
Looking at the results from a test run (e.g.
http://test.winehq.org/data/200803181000/), it would be nice to have:
1. A summary of a dll's overall results (i.e. the summation of all
its unit test results), preferably before the individual tests, but
I could live with them at the bottom
On Wed, Mar 19, 2008 at 3:47 AM, Reece Dunn [EMAIL PROTECTED] wrote:
3. Possibly display the number of tests run in a given pass (e.g.
0/12 or 1+3/27). This is to make it easy to see how many tests have
been run (especially for the dll and overall summaries).
The idea is to get a
On Thu, 31 Jan 2008, Reece Dunn wrote:
[...]
Also, the security tests (the previous one) have two output lines, e.g.:
security: 18 tests executed (0 marked as todo, 0 failures), 0 skipped.
security: 944 tests executed (0 marked as todo, 0 failures), 1 skipped.
Yes, this typically happens
Hello,
I just noticed that the test results for the advapi32:service tests are not
available for the last build (http://test.winehq.org/data/200801301937/). It
looks like the parser did not create the files.
Can someone familiar with this stuff have a look?
Thanks, Stefan
Hi,
I am getting a network timeout when connecting to http://test.winehq.org.
- Reece
On 31/01/2008, Reece Dunn [EMAIL PROTECTED] wrote:
Hi,
I am getting a network timeout when connecting to http://test.winehq.org.
Ignore... it is working for me again.
- Reece
John Klehm wrote:
Bit off topic:
One thing that struck me was the difference in test results for the Wine runs.
This most likely has to do with old Wine installations running new tests. I will
add the Wine version to the infrastructure soon so that these outcomes make more
sense.
In the same vein of
With 'infrastructure' I meant 'test results'.
Hi,
Just to show you the opposite as well:
http://test.winehq.org/data/200708241000/wine_2000_0.9.43-431-g0a485b3/cabinet:extract.txt
Here you can see that winetest still has the old tests whereas my box was
updated to the latest GIT just an hour ago
Hi,
While I'm also busy with getting dll information on the page, I'm still
adding stuff.
This next iteration will add todo information on the pages.
Current situation:
http://test.winehq.org/data/200708221000/
There are several todos for the Wine test but this is not shown on the page
On Wed, 2007-03-14 at 09:07 -0700, Lei Zhang wrote:
BTW, are we ever going to put up a better front page for
test.winehq.org?
It still says 403 Forbidden.
Yes, please: http://bugs.winehq.org/show_bug.cgi?id=3187
--
By by ... Detlef
Hi,
currently all tests on test.winehq.org show the blue border to indicate the test
is running on a visible desktop. I think we can get rid of this one as winetest
will not run if we don't have a visible desktop.
My idea is to visualize the skipped tests by using the blue border
Paul Vriens [EMAIL PROTECTED] writes:
My idea is to visualize the skipped tests by using the blue border
at the single test level to indicate tests are skipped.
Hey, I like it! That blue border was already obsolete when I last
touched the code, but I found it so neat that I couldn't kill it
On Mon, 2007-03-05 at 12:07 +0100, Paul Vriens wrote:
My idea is to visualize the skipped tests by using the blue border at the
single test level to indicate tests are skipped (see attached picture).
Comments, ideas, thoughts?
Nice idea.
--
By by ... Detlef
Hi,
This patch adds the skip information to test.winehq.org. I'm not 100% sure what
will happen to the already existing reports when summary.js is patched. This
however is only relevant for someone opening the popup window for old reports.
There could be an issue when somebody is running
for the exception one is the test itself.
Should we consolidate the above ones as well? I think yes, because the shell32
example is reflected as 0 errors on test.winehq.org.
Cheers,
Paul.
Hi,
sorry for not replying earlier, I didn't check mail yesterday.
If the problem is only the prefix of the test result line, the following trivial
patch
Hi,
So finally we have a working winetest executable and the test.winehq.org scripts
should be able to handle the new reports.
Could someone have a look at what the issue currently is on the webserver, as
I've sent several reports (a few hours ago) and they are not handled yet?
Cheers,
Paul.
Paul Vriens [EMAIL PROTECTED] writes:
exception: 42 tests executed (0 marked as todo, 0 failures), 0 skipped.
exception: 279 tests executed (0 marked as todo, 5 failures), 0 skipped.
The first one triggers the 'end of this test' in dissect. The second
one is thus not accepted as dissect
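For illustration only (dissect itself is a Perl script whose internals aren't shown here), a parser that folds every summary line for a test into a running total, instead of treating the first one as the end of that test, might look like this Python sketch. The line format is taken from the examples above; treating it as the general format is an assumption:

import re

SUMMARY = re.compile(
    r"^(?P<test>[\w.]+): (?P<run>\d+) tests executed "
    r"\((?P<todo>\d+) marked as todo, (?P<fail>\d+) failures?\), "
    r"(?P<skip>\d+) skipped\.$")

def totals(lines):
    """Accumulate all summary lines per test unit instead of closing the
    test at the first one (the dissect problem described above)."""
    per_test = {}
    for line in lines:
        m = SUMMARY.match(line)
        if not m:
            continue
        t = per_test.setdefault(m.group("test"), [0, 0, 0, 0])
        for i, key in enumerate(("run", "todo", "fail", "skip")):
            t[i] += int(m.group(key))
    return per_test  # the two lines above give exception -> [321, 0, 5, 0]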
Hello,
during the weekend I ran winetest-200510121000-paul-mingw.exe and
winetest-200510131000-paul-mingw.exe on my NT4 box and clicked on the button
to submit the test results. The progress bar for submitting the test results
finished without error, but the test results did not show up on the
Hello,
looks like adding the dll name to the output of the tests broke the script
that creates the pages on test.winehq.org. Now all tests are listed as failed.
Output from winetest-200504301000-paul-mingw.exe:
crypt: 37 tests executed, 0 marked as todo, 0 failures.
Output from winetest
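Assuming the breakage really is just the newly added dll-name prefix, making the prefix optional lets both old and new reports parse. This Python sketch only illustrates the idea; it is not the actual fix that went into the script:

import re

# The 2005-era summary line, with the new "dllname:" prefix made optional.
SUMMARY = re.compile(
    r"^(?:(?P<test>[\w.]+): )?(?P<run>\d+) tests executed, "
    r"(?P<todo>\d+) marked as todo, (?P<fail>\d+) failures\.$")

for line in ("crypt: 37 tests executed, 0 marked as todo, 0 failures.",
             "37 tests executed, 0 marked as todo, 0 failures."):
    m = SUMMARY.match(line)
    print(m.group("test"), m.group("run"), m.group("fail"))
# -> crypt 37 0
# -> None 37 0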
On Friday 06 May 2005 21:14, Stefan Leichter wrote:
http://test.winehq.org/data/200505051000/
the summary page lists all tests as failed.
OK, my bad; sorry.
The cross-building and WRT shared a bit more common code than I
thought, specifically a Local-Patches directory.
Tomorrow's build
Hello,
I noticed something wrong in the summary of 200411201000.
The main summary shows that the test winspool.drv:info sometimes fails on the
win2k platform, but in the summary of win2k (2000 differences) the line
winspool.drv:info is not listed.
Bye Stefan
On Wed, 8 Sep 2004, Dmitry Timoshkov wrote:
Michael Kaufmann [EMAIL PROTECTED] wrote:
Have you checked that Windows 2003 still passes this test?
No, I haven't. For exactly that reason we have http://test.winehq.org
where the results of the current test suite are posted.
And where something
On Wed, Sep 08, 2004 at 06:32:31PM +0200, Saulius Krasuckas wrote:
When can we expect it to be up and running?
It is up and running, we're just missing an index page.
Until that gets created, use this link instead:
http://test.winehq.org/data/
Unfortunately, the testing process has
of the results isn't
occurring correctly?
Chris
Hello,
is it right that the test results in the directory 200407111000 are built
from CVS of the 11th of July 2004? If not, please delete the mail now ;-)
Otherwise please take a look at the failing kernel:profile test results
of e.g. win98
(http://test.winehq.org/data/200407111000/98se_JosephBooker/kernel32:profile.txt)
you will see lines like
Hello,
I noticed that not all dlls with unit tests are listed inside the Main summary
of test.winehq.org. Missing dlls are: iphlpapi, mapi32, msvcrtd, psapi,
version.
Is there any reason for this?
Bye Stefan
Stefan Leichter [EMAIL PROTECTED] writes:
I noticed that not all dlls with unit tests are listed
inside the Main summary of test.winehq.org. Missing dlls
are: iphlpapi, mapi32, msvcrtd, psapi, version.
Is there any reason for this?
Not that I know of. Submitting a patch, thanks for pointing
For example, in today's results:
http://test.winehq.org/data/200406171000/
If you look at the summary for the shlwapi:clist test,
it reports 2 errors (in red) in the Win98 column.
If you click on the 2, you are taken to the Win98
differences table (correctly), but there's
Dimitrie O. Paun [EMAIL PROTECTED] writes:
Speaking of the test results, I've noticed the following problems:
1. Some errors reported in the summary
don't get reported in the differences.
Good catch, fixed (*)
2. The differences tables are inconsistent.
How can you say that?! Do you think WineHQ can't reliably
run my program? :)
On Fri, Jun 18, 2004 at 02:33:31AM +0200, Ferenc Wagner wrote:
Dimitrie O. Paun [EMAIL PROTECTED] writes:
Good catch, fixed (*)
Nice, thanks for the quick fix.
Dimitrie O. Paun [EMAIL PROTECTED] writes:
3. We are running multiple tests _per_ build, but only
one is currently reported. Currently, it says:
Main summary for build 200406171000
where '200406171000' is a link to the test. But since
we have multiple downloads, it should