If you're already in there, go ahead.
Thanks,
Avi
On Wed, Sep 23, 2009 at 9:32 PM, Dirk Pranke dpra...@google.com wrote:
I think this plan sounds good, too.
I'm mucking with those scripts a bit at the moment for the LTTF
reporting, so I can make this change tomorrow, unless someone else
I had assumed that test_shell was also going to be modified, in order to
produce IMAGEFAIL or TEXTFAIL automatically in regressions.txt. So
it wouldn't require any intervention other than a first pass through
test_expectations.txt to mark the partial failures. I'm not sure how often
an IMAGEFAIL turns into a full FAIL, though.
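The change assumed above (test_shell classifying each failure as image-only, text-only, or both when writing regressions.txt) could be sketched roughly like this. The function and the string labels mirror the names used in this thread, but this is an illustrative sketch, not the actual test_shell code:

```python
def classify_failure(text_matches, image_matches):
    """Classify a layout-test result for regressions.txt.

    text_matches:  True if the render-tree text dump matched the baseline.
    image_matches: True if the pixel (png) comparison matched the baseline.

    Returns "TEXTFAIL", "IMAGEFAIL", "FAIL" (both comparisons failed),
    or None when the test passed. Names are hypothetical.
    """
    if text_matches and image_matches:
        return None            # test passed; nothing goes in regressions.txt
    if text_matches:
        return "IMAGEFAIL"     # only the pixel comparison failed
    if image_matches:
        return "TEXTFAIL"      # only the text dump failed
    return "FAIL"              # both comparisons failed
```

This is the classification that would let a first pass through test_expectations.txt mark the partial failures without any further manual intervention.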
I don't think this is just about ignoring image-only results on mac for the
short-term. My subjective sense is that we have many tests that start out as
failing only image comparisons (e.g. due to theming), but over time another
failure creeps in that causes a text failure that goes unnoticed.
Right--the question for the long term is whether this will be a useful feature
to keep around or whether we'll drop it once the Mac tests work. I'll take
the change either way, and I can appreciate the extra granularity that this
would provide.
Pam:
having a way to say oh, it's 'only' the image
On Thu, Sep 24, 2009 at 3:11 PM, Ojan Vafai o...@chromium.org wrote:
I don't think this is just about ignoring image-only results on mac for
the short-term. My subjective sense is that we have many tests that start
out as failing only image comparisons (e.g. due to theming), but over time
No, there's no way to do that but it would be easy enough to add.
-- Dirk
On Wed, Sep 23, 2009 at 12:16 PM, Avi Drissman a...@google.com wrote:
I've been looking into the pixel test situation on the Mac, and it isn't bad
at all. Of ~5300 tests that have png results, we're failing ~800, most
I'm new to the test runner (and to python in general). Can you give me a
pointer where I should start?
Avi
On Wed, Sep 23, 2009 at 3:22 PM, Dirk Pranke dpra...@google.com wrote:
No, there's no way to do that but it would be easy enough to add.
-- Dirk
There is not. But adding it would be easy. There's been mention of
doing this for a while, but no one has made the effort to make it work.
All you'd have to do is:
- modify a few lines in TestExpectationsFile in
src/webkit/tools/layout_tests/layout_package/test_expectations.py to
add support for
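The parser change Dirk is describing might look something like the sketch below. The class in the real test_expectations.py is more involved; the names, the keyword set, and the line format here are illustrative assumptions, not the actual Chromium code:

```python
# Illustrative sketch only; the real TestExpectationsFile in
# src/webkit/tools/layout_tests/layout_package/test_expectations.py differs.
# This just shows the shape of teaching the parser new outcome keywords.
KNOWN_OUTCOMES = {"PASS", "FAIL", "TIMEOUT", "CRASH", "IMAGE", "TEXT"}

def parse_expectation_line(line):
    """Parse a line like 'MAC DEBUG : fast/forms/foo.html = IMAGE TEXT'
    into (modifiers, test, outcomes); raise ValueError on unknown outcomes."""
    lhs, rhs = line.split("=", 1)
    modifiers, _, test = lhs.partition(":")
    outcomes = set(rhs.split())
    unknown = outcomes - KNOWN_OUTCOMES
    if unknown:
        raise ValueError("unknown outcome(s): %s" % ", ".join(sorted(unknown)))
    return (modifiers.split(), test.strip(), outcomes)
```

With something like this in place, adding IMAGE and TEXT is a one-line change to the keyword set, and every other consumer of the parsed outcomes picks it up for free.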
BTW, would we want this to be temporary? I was thinking so, but then again,
being able to suppress a pixel failure separately from the layout failure
might be useful.
Avi
On Wed, Sep 23, 2009 at 3:24 PM, Avi Drissman a...@google.com wrote:
I'm new to the test runner (and to python in general).
+pam, tc, darin in case they disagree with what I'm saying here.
Also a bunch of current expectations would need to be modified. All
the cases where there is currently FAIL would need to be changed to
either FAIL or IMAGE or both if it's a text and image failure. You
should be able to get most
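The bulk edit described here, narrowing blanket FAIL entries down to the specific failure type once each test's actual failure mode is known, is mechanical enough to script. A rough sketch, using the IMAGE/TEXT keyword names proposed later in this thread and a made-up classification mapping in place of real regressions.txt data:

```python
def narrow_fail(line, failure_modes):
    """Replace a blanket FAIL outcome with IMAGE and/or TEXT.

    failure_modes is a {test_path: set of "IMAGE"/"TEXT"} mapping; here it is
    supplied by the caller, though in practice it would come from a test run
    that records which comparison(s) actually failed. Hypothetical sketch,
    not the real migration tooling.
    """
    lhs, sep, rhs = line.partition("=")
    if not sep:
        return line  # comment or blank line; leave untouched
    test = lhs.split(":")[-1].strip()
    outcomes = rhs.split()
    if "FAIL" in outcomes and test in failure_modes:
        outcomes = [o for o in outcomes if o != "FAIL"]
        outcomes += sorted(failure_modes[test])
    return "%s= %s" % (lhs, " ".join(outcomes))
```

Run over test_expectations.txt line by line, this would get most of the conversion done in one pass, leaving only the ambiguous entries for hand review.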
Could we make them TEXTFAIL and IMAGEFAIL, just to be clear?
Stephen
(And then post them to failblog if they're really embarrassing... J/K ;)
On Wed, Sep 23, 2009 at 3:33 PM, Ojan Vafai o...@chromium.org wrote:
+pam, tc, darin in case they disagree with what I'm saying here.
I prefer IMAGE and TEXT since people maintaining these lists need to type
these all the time. Also, longer names make for more bloat in the file and
in the dashboard. Anyone who works with these lists even a small amount will
know that IMAGE and TEXT refer to failures.
We should really get rid of
Call me a wet blanket, but I don't think there's a strong need for more
divergence in the file. Anything not passing is failing and needs looking
at; having a way to say oh, it's 'only' the image that's bad will increase
maintenance burden and support ignoring problems. Situations where we're