Ok, there are a few cases where we can indeed make tests faster, but it will be 
work for us. And it won't really speed things up much, since we're adding piles 
more testcases at a pretty quick rate. And many of these new testcases are CRC 
based, so they inherently take some time to run.
[He, Shuang] OK, so in the usual case it takes at least n/60 to detect a 
result, plus additional execution time depending on how many rounds of testing. 
We will be absolutely happy to see more useful tests coming.
[Guang YANG] Besides these CRC cases, some stress cases may also cost a fair 
bit of time, especially on some old platforms. Maybe we can reduce the loop 
count in that kind of stress case?
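One way to do what Guang suggests, without hard-coding a smaller count, is to make the stress loop count tunable so old platforms can override the default. A minimal sketch — the `IGT_STRESS_ROUNDS` variable name is made up for illustration, not an existing igt knob:

```python
import os

# Default iteration count for a stress test; slow/old platforms can
# override it via an environment variable (hypothetical name).
DEFAULT_ROUNDS = 1000

def stress_rounds(default=DEFAULT_ROUNDS):
    """Return the number of stress iterations, honouring an override."""
    override = os.environ.get("IGT_STRESS_ROUNDS")  # made-up variable name
    if override is not None:
        return max(1, int(override))
    return default

def run_stress(workload, rounds=None):
    """Run the workload the configured number of times; returns the count."""
    rounds = stress_rounds() if rounds is None else rounds
    for _ in range(rounds):
        workload()
    return rounds
```

That way nightly runs on fast machines keep the full count, while a BYT box could export a lower value.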

So I think longer-term we simply need to throw more machines at the problem and 
run testcases in parallel on identical machines.
[He, Shuang] This would be the perfect way to go if all tests really do need to 
take a long time to run. If we get more identical test machines, then the 
problem is solved.
[Guang YANG] Shuang's PRTS can cover some of the work for i-g-t testing and 
catch some regressions. Most of the i-g-t bugs are from HSW+, so I hope we keep 
the focus on these new platforms. But right now we don't have enough free 
machine resources (such as BYT, BDW) to dedicate one machine to running only 
i-g-t nightly.
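Running testcases in parallel on identical machines mostly comes down to sharding the test list and dispatching the shards concurrently. A rough sketch of that idea — `run_on_machine` is a hypothetical callback standing in for whatever remote-execution mechanism the infrastructure already has:

```python
from concurrent.futures import ThreadPoolExecutor

def shard_tests(tests, machines):
    """Round-robin the test list across a set of identical machines."""
    shards = {m: [] for m in machines}
    for i, test in enumerate(tests):
        shards[machines[i % len(machines)]].append(test)
    return shards

def run_all(tests, machines, run_on_machine):
    """Dispatch each shard concurrently.

    run_on_machine(machine, tests) is a placeholder for the real remote
    runner; it should return that shard's results.
    """
    shards = shard_tests(tests, machines)
    with ThreadPoolExecutor(max_workers=len(machines)) as pool:
        futures = {m: pool.submit(run_on_machine, m, ts)
                   for m, ts in shards.items()}
        return {m: f.result() for m, f in futures.items()}
```

Round-robin only balances well if test runtimes are similar; for CRC-heavy shards a longest-first bin-packing split would be fairer.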


Wrt analyzing issues I think the right approach for moving forward is:
a) switch to piglit to run tests, not just enumerate them. This will allow QA 
and developers to share testcase analysis.
[He, Shuang] Yes, though this would not actually speed up testing. We could 
directly wrap piglit to run the tests (with another control process monitoring 
and collecting test results).
[Guang YANG] Yeah, what Shuang said is what we do. Piglit has grown more 
powerful, but our infrastructure has better remote control and result 
collection. If it would be comfortable for developers to see case results from 
running piglit, we can discuss how to fit these two frameworks together.
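Wrapping piglit as Shuang describes could look roughly like this — the exact piglit CLI flags should be checked against the installed version, and `summarize` assumes results have already been read into a plain name-to-outcome dict:

```python
import subprocess

def piglit_command(profile, results_dir, test_filter=None):
    """Build a piglit invocation (CLI shape assumed; verify locally)."""
    cmd = ["piglit", "run", profile, results_dir]
    if test_filter:
        cmd += ["-t", test_filter]  # piglit's regex test filter
    return cmd

def run_profile(profile, results_dir, timeout=None):
    """Control process: launch piglit as a child so we can monitor or
    kill it, then return its exit status."""
    proc = subprocess.Popen(piglit_command(profile, results_dir))
    return proc.wait(timeout=timeout)

def summarize(results):
    """Collapse per-test results ({name: 'pass'/'fail'/...}) into counts
    for the QA infrastructure to collect."""
    summary = {}
    for outcome in results.values():
        summary[outcome] = summary.get(outcome, 0) + 1
    return summary
```

The key point is that both QA and developers would then be looking at the same piglit result files, which is what makes shared testcase analysis possible.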

b) add automated analysis for time-consuming and error-prone cases like dmesg 
warnings and backtraces. Thomas & I just discussed a few ideas in this area in 
our 1:1 today.
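The automated dmesg analysis could start as simple pattern-matching over the log — the pattern set below is an assumption about what counts as suspicious, not the set Daniel and Thomas discussed:

```python
import re

# Lines that usually indicate a kernel problem worth flagging
# (illustrative pattern set, extend as needed).
_SUSPECT = re.compile(
    r"WARNING:|BUG:|Oops|general protection fault|Call Trace:|\*ERROR\*"
)

def scan_dmesg(lines):
    """Return the dmesg lines that look like warnings or backtraces,
    so a human only reviews the flagged runs."""
    return [line for line in lines if _SUSPECT.search(line)]
```

Anything this flags would still need eyeballing, but it turns "read every dmesg" into "read the runs with hits".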

Reducing the set of igt tests we run is imo pointless: The goal of igt is to 
hit corner-cases, arbitrarily selecting which kinds of corner-cases we test 
just means that we have a nice illusion about our test coverage.
[He, Shuang] I don't think selecting a subset of test cases to run is 
pointless. It's a trade-off between speed and correctness. For our nightly 
testing it's not so useful to run only a small set of tests. But for fast 
sanity testing it should be easier, since that run is supposed to catch 
regressions in major/critical functionality (so other developers and QA can 
continue their work).
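A fast sanity run along the lines Shuang describes could be as simple as filtering the full test list by name — the prefixes below are purely illustrative, not an agreed sanity set:

```python
# Hypothetical sanity selection: prefixes chosen only for illustration.
SANITY_PREFIXES = ("gem_basic", "kms_flip@basic", "core_")

def sanity_subset(all_tests, prefixes=SANITY_PREFIXES):
    """Pick the small, fast set for sanity runs; nightly still runs
    everything, so coverage is not permanently reduced."""
    return [t for t in all_tests if t.startswith(prefixes)]
```

This keeps Daniel's point intact: the subset is only a quick smoke test, while the full corner-case coverage stays in the nightly run.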


Adding more people to the discussion.

Cheers, Daniel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
