[ 
https://issues.apache.org/jira/browse/IMPALA-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16737439#comment-16737439
 ] 

Paul Rogers commented on IMPALA-8055:
-------------------------------------

Thanks [~philip] for taking a look at this one. Would do so myself but I'm 
knee-deep in other issues at the moment.

Agree that we don't need hundreds of test failures because something basic is 
broken. On the other hand, a limit of one failure might be a bit low; giving up 
only after somewhere around 10 failures would probably be more helpful.
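
To make the limit concrete, here's a minimal Python sketch (illustrative only, 
not how run-tests.py is actually wired) of capping a run with pytest's standard 
{{--maxfail}} option:

{noformat}
# Hedged sketch, not the actual run-tests.py code: pytest's standard
# --maxfail option stops a session after N failures rather than after the
# first one, which is roughly the behavior suggested above.
import sys
import pytest

MAX_FAILURES = 10  # hypothetical cap

exit_code = pytest.main([
    "metadata/test_explain.py",
    "--maxfail={0}".format(MAX_FAILURES),
])
sys.exit(exit_code)  # non-zero when any test failed
{noformat}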

Easy to reproduce. Find "explain-level1.test" and modify one of the expected 
lines so that the test fails. For example, change this:

{noformat}
'   runtime filters: RF000 -> l_orderkey'
{noformat}

To this:

{noformat}
'   runtime filters: RF000 -> bogus'
{noformat}

Do the same for "explain-level2.test". Then run the tests in your dev environment with:

{noformat}
${IMPALA_HOME}/tests/run-tests.py -s --update_results metadata/test_explain.py
{noformat}

You'll get the results shown in the description.

Now, modify one other test as well, say {{test_compute_stats.py}}. Upload a 
fake patch and run the pre-review tests. If the pattern holds, the output will 
show one of the failures, but not the other. Fix the failure and rerun. Now the 
other failure will break the build.
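
On the "plumb the failure through to the summary line" suggestion in the 
description below: the key is to keep pytest's failure status around until the 
final summary. A minimal sketch, assuming the runner launches pytest as one or 
more subprocesses (the real run-tests.py internals may differ, and the suite 
commands here are made up):

{noformat}
# Hedged sketch, not the actual run-tests.py code. pytest exits non-zero
# when any test fails; remembering that across invocations lets the final
# summary and the overall exit code reflect the failures.
import subprocess
import sys

suites = [
    ["pytest", "-m", "execute_serially", "metadata/test_explain.py"],
    ["pytest", "-n", "4", "metadata/test_explain.py"],  # illustrative commands
]

failed = False
for cmd in suites:
    if subprocess.call(cmd) != 0:
        failed = True

print("Overall result: %s" % ("FAILED" if failed else "PASSED"))
sys.exit(1 if failed else 0)
{noformat}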

> run-tests.py reports tests as passed even if they did not
> ---------------------------------------------------------
>
>                 Key: IMPALA-8055
>                 URL: https://issues.apache.org/jira/browse/IMPALA-8055
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Infrastructure
>    Affects Versions: Impala 3.1.0
>            Reporter: Paul Rogers
>            Priority: Minor
>
> Been mucking about with the EXPLAIN output format which required rebasing a 
> bunch of tests on the new format. PlannerTest is fine: it clearly fails when 
> the expected ".test" files don't match the new "actual" files.
> When run on Jenkins in "pre-review" mode, the build does fail if a Python 
> end-to-end test fails. But, the job seems to give up at that point, not 
> running other tests and finding more problems. (There were three separate 
> test cases that needed fixing; took multiple runs to find them.)
> When run on my dev box, I get the following (highly abbreviated) output:
> {noformat}
> '|  in pipelines: 00(GETNEXT)' != '|  row-size=402B cardinality=5.76M'
> ...
> [gw3] PASSED 
> metadata/test_explain.py::TestExplain::test_explain_level0[protocol: beeswax 
> | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: text/none] 
> ...
> ==== 6 passed in 68.63 seconds =====
> {noformat}
> I've learned that "passed" means "maybe failed" and to go back and inspect 
> the actual output to figure out if the test did, indeed, fail. I suspect 
> "passed" means "didn't crash" rather than "tests worked."
> Would be very helpful to plumb the failure through to the summary line so it 
> said "3 passed, 3 failed" or whatever. Would be a huge time-saver.


