On Tue, Apr 10, 2012 at 6:10 AM, Stephen Chenney schen...@chromium.org wrote:
There is a significant practical problem with turning the tree red and working with someone to rebaseline the tests. It takes multiple hours for some bots to build and test a given patch. That means that, at any moment, you will have maybe tens and in some cases hundreds of failing tests associated with some
On Tue, Apr 10, 2012 at 11:42 AM, Stephen Chenney schen...@chromium.org wrote:
On Tue, Apr 10, 2012 at 1:00 PM, Ryosuke Niwa rn...@webkit.org wrote:
On Tue, Apr 10, 2012 at 6:10 AM, Stephen Chenney schen...@chromium.org wrote:
There is a significant practical problem with turning the tree red and
On Tue, Apr 10, 2012 at 12:29 PM, Ojan Vafai o...@chromium.org wrote:
I don't think we can come up with a hard and fast rule given the current tooling. In a theoretical future world in which it's easy to get expected results off the EWS bots (or some other infrastructure), it would be reasonable to expect people to incorporate the correct expected results for any
I agree with Ojan. It's clear that there are arguments for both approaches, and my initial note did not address all the situations that come up. I will write up something further and put it on the wiki. I will also continue mulling over what sorts of changes to the tools we could make in the short
Hi all,
Recently I've noticed more people making changes and adding test failure suppressions to various ports' test_expectations.txt files. This is great!
However, I don't think we have agreement on what the best practices are here, so I thought I'd list out what I thought they were, and
On Mon, Apr 9, 2012 at 3:02 PM, James Robinson jam...@google.com wrote:
3) Don't use test_expectations.txt to suppress failures across a single cycle of the bot just so you can gather updated baselines without the tree going red. While it might seem that you're doing tree maintainers a favor,
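[Editorial note: for readers unfamiliar with the file, a suppression entry in the test_expectations.txt format of that era looked roughly like the line below. The bug number, platform modifier, and test path are invented for illustration:

    BUGWK12345 MAC : fast/css/example-test.html = TEXT

This marks the test as expected to produce a text mismatch on Mac, which keeps the bots green while the failure is outstanding.]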
On Monday 09 April 2012 22:42:33 Dirk Pranke wrote:
Hi all,
Recently I've noticed more people making changes and adding test failure suppressions to various ports' test_expectations.txt files. This is great!
Hi Dirk,
It would be good to have a page describing the right thing to do for each
In my ideal world, you would be able to get updated baselines *prior* to trying to land a patch. This is of course not really possible today for any test that fails on multiple ports with different results, as it's practically impossible to run more than a couple of ports by hand, and we
On Mon, Apr 9, 2012 at 3:50 PM, Julien Chaffraix jchaffr...@webkit.org wrote:
In my ideal world, you would be able to get updated baselines *prior* to trying to land a patch. This is of course not really possible today for any test that fails on multiple ports with different results, as it's
If there's consensus in the meantime that it is better on balance to check in suppressions, perhaps we can figure out a better way to do that. Maybe (shudder) a second test_expectations file? Or maybe it would be better to actually check in suppressions marked as REBASELINE (or something
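[Editorial note: as a sketch of what that proposal might look like — REBASELINE was a recognized modifier in the Chromium-style expectations format, though the bug number and test path below are made up:

    BUGWK12345 REBASELINE : fast/css/example-test.html = TEXT

A tool could then scan for REBASELINE entries, pull the fresh results off the bots once they cycle, check them in as the new baselines, and delete the line.]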