Re: [Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

2016-07-06 Thread Dylan Baker
Quoting Marek Olšák (2016-07-04 16:39:48)
> On Fri, May 27, 2016 at 7:53 PM, Dylan Baker  wrote:
> > Quoting Marek Olšák (2016-04-16 15:16:34)
> >> Hi,
> >>
>> This makes shader_runner very fast. The expected result is a 40%
> >> decrease in quick.py running time, or a 12x faster piglit run if you
> >> run shader tests alone.
> >>
> >> Branch:
> >> https://cgit.freedesktop.org/~mareko/piglit/log/?h=shader-runner
> >>
> >> Changes:
> >>
> >> 1) Any number of test files can be specified as command-line
> >> parameters. Those command lines can be insanely long.
> >>
> >> 2) shader_runner can re-create the window & GL context if test
> >> requirements demand different settings when going from one test to
> >> another.
> >>
> >> 3) all.py generates one shader_runner instance per group of tests
> >> (usually one or two directories - tests and generated_tests).
> >> Individual tests are reported as subtests.
> >>
>> The shader_runner part is done. The Python part needs more work.
> >>
> >>
> >> What's missing:
> >>
> >> Handling of crashes. If shader_runner crashes:
> >> - The crash is not shown in piglit results (other tests with subtests
> >> already have the same behavior)
> >> - The remaining tests will not be run.
> >>
> >> The ShaderTest python class has the list of all files and should be
> >> able to catch a crash, check how many test results have been written,
> >> and restart shader_runner with the remaining tests.
> >>
> >> shader_runner prints TEST %i: and then the subtest result. %i is the
> >> i-th file in the list. Python can parse that and re-run shader_runner
> >> with the first %i tests removed. (0..%i-1 -> parse subtest results; %i
> >> -> crash; %i+1.. -> run again)
> >>
> >>
>> I'm by no means a Python expert, so here's an alternative solution (for me):
> >> - Catch crash signals in shader_runner.
> >> - In the single handler, re-run shader_runner with the remaining tests.
> >>
> >> Opinions welcome,
> >>
> >> Marek
> >
> > Hey Marek,
> >
> > I'd picked this up and was finishing it; I have a branch on my GitHub
> > (https://github.com/dcbaker/piglit wip/multi-shader_runner). I'm just
> > trying to make sure we're not duplicating effort.
> 
> Hi,
> 
> What's the current state of this, please?
> 
> Do you need shader_runner to be able to recover from crashes or does
> the framework handle them already?
> 
> Marek

Sorry for the late reply; it was a holiday here in the US.

I've set up the framework to recover from them. My assumption was that
this would be more portable than handling it in C, since we would
probably need Windows-specific code and Linux-specific code.

I was planning to pick this up again today; I've had other things that
needed to be solved first, but basically I'm down to the bug-squashing
and patch-ordering stage, plus handling a few corner cases (there are a
couple of extensions that apply both to GLES and to desktop GL).

Dylan




Re: [Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

2016-07-04 Thread Marek Olšák
On Fri, May 27, 2016 at 7:53 PM, Dylan Baker  wrote:
> Quoting Marek Olšák (2016-04-16 15:16:34)
>> Hi,
>>
>> This makes shader_runner very fast. The expected result is a 40%
>> decrease in quick.py running time, or a 12x faster piglit run if you
>> run shader tests alone.
>>
>> Branch:
>> https://cgit.freedesktop.org/~mareko/piglit/log/?h=shader-runner
>>
>> Changes:
>>
>> 1) Any number of test files can be specified as command-line
>> parameters. Those command lines can be insanely long.
>>
>> 2) shader_runner can re-create the window & GL context if test
>> requirements demand different settings when going from one test to
>> another.
>>
>> 3) all.py generates one shader_runner instance per group of tests
>> (usually one or two directories - tests and generated_tests).
>> Individual tests are reported as subtests.
>>
>> The shader_runner part is done. The Python part needs more work.
>>
>>
>> What's missing:
>>
>> Handling of crashes. If shader_runner crashes:
>> - The crash is not shown in piglit results (other tests with subtests
>> already have the same behavior)
>> - The remaining tests will not be run.
>>
>> The ShaderTest python class has the list of all files and should be
>> able to catch a crash, check how many test results have been written,
>> and restart shader_runner with the remaining tests.
>>
>> shader_runner prints TEST %i: and then the subtest result. %i is the
>> i-th file in the list. Python can parse that and re-run shader_runner
>> with the first %i tests removed. (0..%i-1 -> parse subtest results; %i
>> -> crash; %i+1.. -> run again)
>>
>>
>> I'm by no means a Python expert, so here's an alternative solution (for me):
>> - Catch crash signals in shader_runner.
>> - In the single handler, re-run shader_runner with the remaining tests.
>>
>> Opinions welcome,
>>
>> Marek
>
> Hey Marek,
>
> I'd picked this up and was finishing it; I have a branch on my GitHub
> (https://github.com/dcbaker/piglit wip/multi-shader_runner). I'm just
> trying to make sure we're not duplicating effort.

Hi,

What's the current state of this, please?

Do you need shader_runner to be able to recover from crashes or does
the framework handle them already?

Marek


Re: [Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

2016-05-27 Thread Marek Olšák
On Fri, May 27, 2016 at 7:53 PM, Dylan Baker  wrote:
> Quoting Marek Olšák (2016-04-16 15:16:34)
>> Hi,
>>
>> This makes shader_runner very fast. The expected result is a 40%
>> decrease in quick.py running time, or a 12x faster piglit run if you
>> run shader tests alone.
>>
>> Branch:
>> https://cgit.freedesktop.org/~mareko/piglit/log/?h=shader-runner
>>
>> Changes:
>>
>> 1) Any number of test files can be specified as command-line
>> parameters. Those command lines can be insanely long.
>>
>> 2) shader_runner can re-create the window & GL context if test
>> requirements demand different settings when going from one test to
>> another.
>>
>> 3) all.py generates one shader_runner instance per group of tests
>> (usually one or two directories - tests and generated_tests).
>> Individual tests are reported as subtests.
>>
>> The shader_runner part is done. The Python part needs more work.
>>
>>
>> What's missing:
>>
>> Handling of crashes. If shader_runner crashes:
>> - The crash is not shown in piglit results (other tests with subtests
>> already have the same behavior)
>> - The remaining tests will not be run.
>>
>> The ShaderTest python class has the list of all files and should be
>> able to catch a crash, check how many test results have been written,
>> and restart shader_runner with the remaining tests.
>>
>> shader_runner prints TEST %i: and then the subtest result. %i is the
>> i-th file in the list. Python can parse that and re-run shader_runner
>> with the first %i tests removed. (0..%i-1 -> parse subtest results; %i
>> -> crash; %i+1.. -> run again)
>>
>>
>> I'm by no means a Python expert, so here's an alternative solution (for me):
>> - Catch crash signals in shader_runner.
>> - In the single handler, re-run shader_runner with the remaining tests.
>>
>> Opinions welcome,
>>
>> Marek
>
> Hey Marek,
>
> I'd picked this up and was finishing it; I have a branch on my GitHub
> (https://github.com/dcbaker/piglit wip/multi-shader_runner). I'm just
> trying to make sure we're not duplicating effort.

Thanks,

I've not looked at it since this thread was created.

Marek


Re: [Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

2016-05-27 Thread Mark Janes
Marek Olšák  writes:

> On Fri, May 27, 2016 at 3:18 AM, Mark Janes  wrote:
>> Marek Olšák  writes:
>>
>>> On Mon, Apr 18, 2016 at 6:45 PM, Dylan Baker  wrote:
 Quoting Marek Olšák (2016-04-16 15:16:34)
> Hi,
>
> This makes shader_runner very fast. The expected result is a 40%
> decrease in quick.py running time, or a 12x faster piglit run if you
> run shader tests alone.
>
> Branch:
> https://cgit.freedesktop.org/~mareko/piglit/log/?h=shader-runner
>
> Changes:
>
> 1) Any number of test files can be specified as command-line
> parameters. Those command lines can be insanely long.
>
> 2) shader_runner can re-create the window & GL context if test
> requirements demand different settings when going from one test to
> another.
>
> 3) all.py generates one shader_runner instance per group of tests
> (usually one or two directories - tests and generated_tests).
> Individual tests are reported as subtests.
>
> The shader_runner part is done. The Python part needs more work.
>
>
> What's missing:
>
> Handling of crashes. If shader_runner crashes:
> - The crash is not shown in piglit results (other tests with subtests
> already have the same behavior)
> - The remaining tests will not be run.
>
> The ShaderTest python class has the list of all files and should be
> able to catch a crash, check how many test results have been written,
> and restart shader_runner with the remaining tests.
>
> shader_runner prints TEST %i: and then the subtest result. %i is the
> i-th file in the list. Python can parse that and re-run shader_runner
> with the first %i tests removed. (0..%i-1 -> parse subtest results; %i
> -> crash; %i+1.. -> run again)
>
> I'm by no means a Python expert, so here's an alternative solution (for me):
> - Catch crash signals in shader_runner.
> - In the single handler, re-run shader_runner with the remaining tests.
>
> Opinions welcome,
>>
>> Per-test process isolation is a key feature of Piglit that the Intel CI
>> relies upon.  If non-crash errors bleed into separate tests, results
>> will be unusable.
>>
>> In fact, we wrap all other test suites in piglit primarily to provide
>> them with per-test process isolation.
>>
>> For limiting test run-time, we shard tests into groups and run them on
>> parallel systems.  Currently this is only supported by dEQP features,
>> but it can make test time arbitrarily low if you have adequate hardware.
>>
>> For test suites that don't support sharding, I think it would be useful
>> to generate suites from start/end times that can run the maximal set of
>> tests in the targeted duration.
>>
>> I would be worried by complex handling of crashes.  It would be
>> preferable if separate suites were available to run with/without shader
>> runner process isolation.
>>
>> Users desiring faster execution can spend the saved time figuring out
>> which test crashed.
>
> I would say that the majority of upstream users care more about piglit
> running time and less about process isolation.
>
> Process isolation can be an optional piglit flag.

WFM.

>>
> Marek

 Thanks for working on this, Marek,

 This has been discussed here several times among the Intel group, and
 the recurring problem to solve is crashing. I don't have a strong
 opinion on Python vs. catching a failure in the signal handler, except
 that handling it in Python might be more robust, but I'm not really
 familiar with what a C signal handler can recover from, so it may not.
>>>
>>> I can catch signals like exceptions and report 'crash'. Then I can
>>> open a new process from the handler to run the remaining tests, wait
>>> and exit.
>>
>> Will an intermittent crash be run again until it passes?
>
> No.
>
> Marek


Re: [Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

2016-05-27 Thread Marek Olšák
On Fri, May 27, 2016 at 3:18 AM, Mark Janes  wrote:
> Marek Olšák  writes:
>
>> On Mon, Apr 18, 2016 at 6:45 PM, Dylan Baker  wrote:
>>> Quoting Marek Olšák (2016-04-16 15:16:34)
 Hi,

 This makes shader_runner very fast. The expected result is a 40%
 decrease in quick.py running time, or a 12x faster piglit run if you
 run shader tests alone.

 Branch:
 https://cgit.freedesktop.org/~mareko/piglit/log/?h=shader-runner

 Changes:

 1) Any number of test files can be specified as command-line
 parameters. Those command lines can be insanely long.

 2) shader_runner can re-create the window & GL context if test
 requirements demand different settings when going from one test to
 another.

 3) all.py generates one shader_runner instance per group of tests
 (usually one or two directories - tests and generated_tests).
 Individual tests are reported as subtests.

 The shader_runner part is done. The Python part needs more work.


 What's missing:

 Handling of crashes. If shader_runner crashes:
 - The crash is not shown in piglit results (other tests with subtests
 already have the same behavior)
 - The remaining tests will not be run.

 The ShaderTest python class has the list of all files and should be
 able to catch a crash, check how many test results have been written,
 and restart shader_runner with the remaining tests.

 shader_runner prints TEST %i: and then the subtest result. %i is the
 i-th file in the list. Python can parse that and re-run shader_runner
 with the first %i tests removed. (0..%i-1 -> parse subtest results; %i
 -> crash; %i+1.. -> run again)

 I'm by no means a Python expert, so here's an alternative solution (for me):
 - Catch crash signals in shader_runner.
 - In the single handler, re-run shader_runner with the remaining tests.

 Opinions welcome,
>
> Per-test process isolation is a key feature of Piglit that the Intel CI
> relies upon.  If non-crash errors bleed into separate tests, results
> will be unusable.
>
> In fact, we wrap all other test suites in piglit primarily to provide
> them with per-test process isolation.
>
> For limiting test run-time, we shard tests into groups and run them on
> parallel systems.  Currently this is only supported by dEQP features,
> but it can make test time arbitrarily low if you have adequate hardware.
>
> For test suites that don't support sharding, I think it would be useful
> to generate suites from start/end times that can run the maximal set of
> tests in the targeted duration.
>
> I would be worried by complex handling of crashes.  It would be
> preferable if separate suites were available to run with/without shader
> runner process isolation.
>
> Users desiring faster execution can spend the saved time figuring out
> which test crashed.

I would say that the majority of upstream users care more about piglit
running time and less about process isolation.

Process isolation can be an optional piglit flag.
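
For what it's worth, a minimal sketch of what such a flag could look
like (plain argparse for illustration; the flag name and the wiring are
hypothetical, not actual piglit option-handling code):

    import argparse

    parser = argparse.ArgumentParser()
    # Hypothetical flag; piglit's real options live in its own framework.
    parser.add_argument('--process-isolation', action='store_true',
                        help='run each shader test in its own '
                             'shader_runner process (slower, but a '
                             'crash or state leak cannot affect other '
                             'tests)')
    args = parser.parse_args()

    # One file per process when isolated; otherwise batch a whole group
    # of test files into a single shader_runner invocation.
    files_per_process = 1 if args.process_isolation else None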

>
 Marek
>>>
>>> Thanks for working on this, Marek,
>>>
>>> This has been discussed here several times among the Intel group, and
>>> the recurring problem to solve is crashing. I don't have a strong
>>> opinion on Python vs. catching a failure in the signal handler, except
>>> that handling it in Python might be more robust, but I'm not really
>>> familiar with what a C signal handler can recover from, so it may not.
>>
>> I can catch signals like exceptions and report 'crash'. Then I can
>> open a new process from the handler to run the remaining tests, wait
>> and exit.
>
> Will an intermittent crash be run again until it passes?

No.

Marek


Re: [Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

2016-04-18 Thread Dylan Baker
Quoting Marek Olšák (2016-04-18 10:39:46)
> On Mon, Apr 18, 2016 at 6:45 PM, Dylan Baker  wrote:
[snip]
> >
> > Thanks for working on this, Marek,
> >
> > This has been discussed here several times among the Intel group, and
> > the recurring problem to solve is crashing. I don't have a strong
> > opinion on Python vs. catching a failure in the signal handler, except
> > that handling it in Python might be more robust, but I'm not really
> > familiar with what a C signal handler can recover from, so it may not.
> 
> I can catch signals like exceptions and report 'crash'. Then I can
> open a new process from the handler to run the remaining tests, wait
> and exit.
> 
> The signal catching won't work on Windows.
> 
> Also, there are piglit GL framework changes that have only been tested
> with Waffle and may break other backends.

It wouldn't be difficult to handle in the Python framework. I have some
half-baked patches that do exactly this sort of thing for piglit/deqp;
it shouldn't be too hard to generalize that code and handle it that way.
I think it would be better to handle it in shader_runner if we can, but
it can be done in Python as a fallback if we decide that one solution
that works everywhere is better than having one for Windows and one for
not-Windows.

> 
> >
> > The one concern I have is using subtests. There are a couple of
> > limitations to them: first, we'll lose all of the per-test stdout/stderr
> > data, and that seems less than optimal. I wonder if it would be better
> > to have shader_runner print some sort of scissor to stdout and stderr
> > when it starts a test and when it finishes one, and then report results
> > as normal without the subtest. That would maintain the output of each
> > test file, which seems like what we want; otherwise the output will be
> 
> That can be done easily in C.
> 
> > jumbled. The other problem with subtests is that the JUnit backend
> > doesn't have a way to represent subtests at the moment. That would be
> > problematic both for us and for VMware.
> 
> I can't help with anything related to Python.
> 
> The goal is to make piglit faster for general regression testing.
> Other use cases can be affected negatively, but the time savings are
> worth it.

Well, we either need to not use subtests, or the JUnit subtest-handling
deficiency needs to be solved before landing this; otherwise it's going
to be a huge problem, since we rely heavily on the CI and this would
hide a lot of specifics about regressions. What we'd end up with is
something like 'spec/ARB_ham_sandwich: fail', which is completely
insufficient, since most of piglit is shader_runner based.

Personally, I think the scissoring approach is better anyway, since it
also allows us to link the stdout/stderr to the specific test. With that
approach we don't need subtests either: the Python layer can just make
one test result per scissor, and the changes wouldn't be user-visible at
all (barring any bugs). There are a few changes to the Python that would
need to happen to make this work, but I don't think it's going to be
more than a couple of patches.

> 
> >
> > Looking at the last patch, the Python isn't all correct there; it will
> > run in some cases and fail in others. In particular, it will do
> > something odd if fast skipping is enabled, but I'm not sure exactly
> > what. I think it's worth measuring whether the fast-skipping path is
> > even an optimization with your enhancements; if it's not, we should
> > just disable it for shader_runner or remove it entirely, which would
> > remove a lot of complexity.
> 
> If the fast skipping is the only issue, I can remove it.

I'd be fine with just removing it from shader_runner for now; I could
run tests later to see whether it's actually an improvement, get it
working again at that point if it is, and rip it out if it isn't. I
could see it still being a win for some of the very old platforms we
support, since they tend to have slow CPUs and limited OpenGL support.

The most straightforward way to disable it would be to just remove or
comment out "self.__find_requirements" in ShaderTest.__init__, I think. 
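
In other words, roughly this (sketch only; the base class and the
__init__ body here are stand-ins, the commented-out call is the only
real point):

    class ShaderTest(PiglitBaseTest):
        def __init__(self, filename):
            super(ShaderTest, self).__init__(filename)
            # Fast-skip disabled for shader_runner for now; re-enable
            # later if it measures as a win on old platforms.
            # self.__find_requirements(filename)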

> 
> >
> > I'd be more than happy to help get the Python work done and running,
> > since this would be really useful for us in our CI system.
> 
> What else needs to be done in Python?
> 
> Marek

I guess that depends on what approach you want to take on things.

If you want to try the scissor-output approach, we'll need to write an
extended interpret_result method for ShaderTest. I don't think it'll be
that complicated, since it'll just be looking for the scissor marks and
the test name, and passing the rest up via super(). There are a few more
changes that would be needed, but I don't think they'd be too
complicated.
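
Very roughly, something like this (untested sketch; the scissor-marker
strings and the out_per_test plumbing are made up for illustration, and
PiglitBaseTest stands in for whatever base class we end up using):

    START_MARK = 'PIGLIT TEST BEGIN: '   # hypothetical scissor format
    END_MARK = 'PIGLIT TEST END'

    class ShaderTest(PiglitBaseTest):
        def interpret_result(self):
            # Split the combined stdout back into per-test-file chunks.
            per_test = {}
            name, chunk = None, []
            for line in self.result.out.splitlines():
                if line.startswith(START_MARK):
                    name = line[len(START_MARK):].strip()
                    chunk = []
                elif line.startswith(END_MARK) and name is not None:
                    per_test[name] = '\n'.join(chunk)
                    name = None
                else:
                    chunk.append(line)
            # Hypothetical attribute: the runner would turn each entry
            # into its own result instead of one subtest blob.
            self.result.out_per_test = per_test
            super(ShaderTest, self).interpret_result()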

If you want to have the crash handler/re-runner in Python, we'll need
to implement that; that's probably a bit more complicated, but shouldn't
be bad.

Dylan



Re: [Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

2016-04-18 Thread Marek Olšák
On Mon, Apr 18, 2016 at 6:45 PM, Dylan Baker  wrote:
> Quoting Marek Olšák (2016-04-16 15:16:34)
>> Hi,
>>
>> This makes shader_runner very fast. The expected result is a 40%
>> decrease in quick.py running time, or a 12x faster piglit run if you
>> run shader tests alone.
>>
>> Branch:
>> https://cgit.freedesktop.org/~mareko/piglit/log/?h=shader-runner
>>
>> Changes:
>>
>> 1) Any number of test files can be specified as command-line
>> parameters. Those command lines can be insanely long.
>>
>> 2) shader_runner can re-create the window & GL context if test
>> requirements demand different settings when going from one test to
>> another.
>>
>> 3) all.py generates one shader_runner instance per group of tests
>> (usually one or two directories - tests and generated_tests).
>> Individual tests are reported as subtests.
>>
>> The shader_runner part is done. The Python part needs more work.
>>
>>
>> What's missing:
>>
>> Handling of crashes. If shader_runner crashes:
>> - The crash is not shown in piglit results (other tests with subtests
>> already have the same behavior)
>> - The remaining tests will not be run.
>>
>> The ShaderTest python class has the list of all files and should be
>> able to catch a crash, check how many test results have been written,
>> and restart shader_runner with the remaining tests.
>>
>> shader_runner prints TEST %i: and then the subtest result. %i is the
>> i-th file in the list. Python can parse that and re-run shader_runner
>> with the first %i tests removed. (0..%i-1 -> parse subtest results; %i
>> -> crash; %i+1.. -> run again)
>>
>>
>> I'm by no means a Python expert, so here's an alternative solution (for me):
>> - Catch crash signals in shader_runner.
>> - In the single handler, re-run shader_runner with the remaining tests.
>>
>> Opinions welcome,
>>
>> Marek
>
> Thanks for working on this, Marek,
>
> This has been discussed here several times among the Intel group, and
> the recurring problem to solve is crashing. I don't have a strong
> opinion on Python vs. catching a failure in the signal handler, except
> that handling it in Python might be more robust, but I'm not really
> familiar with what a C signal handler can recover from, so it may not.

I can catch signals like exceptions and report 'crash'. Then I can
open a new process from the handler to run the remaining tests, wait
and exit.

The signal catching won't work on Windows.

Also, there are piglit GL framework changes that have only been tested
with Waffle and may break other backends.

>
> The one concern I have is using subtests. There are a couple of
> limitations to them: first, we'll lose all of the per-test stdout/stderr
> data, and that seems less than optimal. I wonder if it would be better
> to have shader_runner print some sort of scissor to stdout and stderr
> when it starts a test and when it finishes one, and then report results
> as normal without the subtest. That would maintain the output of each
> test file, which seems like what we want; otherwise the output will be

That can be done easily in C.

> jumbled. The other problem with subtests is that the JUnit backend
> doesn't have a way to represent subtests at the moment. That would be
> problematic both for us and for VMware.

I can't help with anything related to Python.

The goal is to make piglit faster for general regression testing.
Other use cases can be affected negatively, but the time savings are
worth it.

>
> Looking at the last patch, the Python isn't all correct there; it will
> run in some cases and fail in others. In particular, it will do
> something odd if fast skipping is enabled, but I'm not sure exactly
> what. I think it's worth measuring whether the fast-skipping path is
> even an optimization with your enhancements; if it's not, we should
> just disable it for shader_runner or remove it entirely, which would
> remove a lot of complexity.

If the fast skipping is the only issue, I can remove it.

>
> I'd be more than happy to help get the Python work done and running,
> since this would be really useful for us in our CI system.

What else needs to be done in Python?

Marek


Re: [Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

2016-04-18 Thread Dylan Baker
Quoting Marek Olšák (2016-04-16 15:16:34)
> Hi,
> 
> This makes shader_runner very fast. The expected result is a 40%
> decrease in quick.py running time, or a 12x faster piglit run if you
> run shader tests alone.
> 
> Branch:
> https://cgit.freedesktop.org/~mareko/piglit/log/?h=shader-runner
> 
> Changes:
> 
> 1) Any number of test files can be specified as command-line
> parameters. Those command lines can be insanely long.
> 
> 2) shader_runner can re-create the window & GL context if test
> requirements demand different settings when going from one test to
> another.
> 
> 3) all.py generates one shader_runner instance per group of tests
> (usually one or two directories - tests and generated_tests).
> Individual tests are reported as subtests.
> 
> The shader_runner part is done. The Python part needs more work.
> 
> 
> What's missing:
> 
> Handling of crashes. If shader_runner crashes:
> - The crash is not shown in piglit results (other tests with subtests
> already have the same behavior)
> - The remaining tests will not be run.
> 
> The ShaderTest python class has the list of all files and should be
> able to catch a crash, check how many test results have been written,
> and restart shader_runner with the remaining tests.
> 
> shader_runner prints TEST %i: and then the subtest result. %i is the
> i-th file in the list. Python can parse that and re-run shader_runner
> with the first %i tests removed. (0..%i-1 -> parse subtest results; %i
> -> crash; %i+1.. -> run again)
> 
> 
> I'm by no means a Python expert, so here's an alternative solution (for me):
> - Catch crash signals in shader_runner.
> - In the single handler, re-run shader_runner with the remaining tests.
> 
> Opinions welcome,
> 
> Marek

Thanks for working on this, Marek,

This has been discussed here several times among the Intel group, and
the recurring problem to solve is crashing. I don't have a strong
opinion on Python vs. catching a failure in the signal handler, except
that handling it in Python might be more robust, but I'm not really
familiar with what a C signal handler can recover from, so it may not.

The one concern I have is using subtests. There are a couple of
limitations to them: first, we'll lose all of the per-test stdout/stderr
data, and that seems less than optimal. I wonder if it would be better
to have shader_runner print some sort of scissor to stdout and stderr
when it starts a test and when it finishes one, and then report results
as normal without the subtest. That would maintain the output of each
test file, which seems like what we want; otherwise the output will be
jumbled. The other problem with subtests is that the JUnit backend
doesn't have a way to represent subtests at the moment. That would be
problematic both for us and for VMware.

Looking at the last patch, the Python isn't all correct there; it will
run in some cases and fail in others. In particular, it will do
something odd if fast skipping is enabled, but I'm not sure exactly
what. I think it's worth measuring whether the fast-skipping path is
even an optimization with your enhancements; if it's not, we should
just disable it for shader_runner or remove it entirely, which would
remove a lot of complexity.

I'd be more than happy to help get the Python work done and running,
since this would be really useful for us in our CI system.

Dylan




[Piglit] Nearly finished: shader_runner running THOUSANDS of tests per process

2016-04-16 Thread Marek Olšák
Hi,

This makes shader_runner very fast. The expected result is a 40%
decrease in quick.py running time, or a 12x faster piglit run if you
run shader tests alone.

Branch:
https://cgit.freedesktop.org/~mareko/piglit/log/?h=shader-runner

Changes:

1) Any number of test files can be specified as command-line
parameters. Those command lines can be insanely long.

2) shader_runner can re-create the window & GL context if test
requirements demand different settings when going from one test to
another.

3) all.py generates one shader_runner instance per group of tests
(usually one or two directories - tests and generated_tests).
Individual tests are reported as subtests.

The shader_runner part is done. The Python part needs more work.


What's missing:

Handling of crashes. If shader_runner crashes:
- The crash is not shown in piglit results (other tests with subtests
already have the same behavior)
- The remaining tests will not be run.

The ShaderTest python class has the list of all files and should be
able to catch a crash, check how many test results have been written,
and restart shader_runner with the remaining tests.

shader_runner prints TEST %i: and then the subtest result. %i is the
i-th file in the list. Python can parse that and re-run shader_runner
with the first %i tests removed. (0..%i-1 -> parse subtest results; %i
-> crash; %i+1.. -> run again)
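
For illustration, here's a rough Python-side sketch of that re-run loop
(untested; the function name is hypothetical, not actual piglit
framework code, and it assumes each result is printed as a single token
after "TEST %i:"):

    import re
    import subprocess

    TEST_LINE = re.compile(r'^TEST (\d+): (\S+)', re.MULTILINE)

    def run_with_crash_recovery(shader_runner, test_files):
        """Run shader_runner on many test files; after a crash, restart
        it with the files that have not produced a result yet."""
        results = {}
        remaining = list(test_files)
        while remaining:
            proc = subprocess.Popen([shader_runner] + remaining,
                                    stdout=subprocess.PIPE,
                                    universal_newlines=True)
            out, _ = proc.communicate()
            reported = TEST_LINE.findall(out)
            for i, status in reported:
                # TEST %i: refers to the i-th file of this invocation.
                results[remaining[int(i)]] = status
            if proc.returncode == 0:
                break
            # Results exist for 0..n-1, so index n is the crashing file;
            # everything from n+1 onward is run again.
            n = len(reported)
            if n < len(remaining):
                results[remaining[n]] = 'crash'
            remaining = remaining[n + 1:]
        return results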


I'm by no means a Python expert, so here's an alternative solution (for me):
- Catch crash signals in shader_runner.
- In the single handler, re-run shader_runner with the remaining tests.

Opinions welcome,

Marek