On Mon, Nov 23, 2015 at 10:49:13AM -0800, Eric Anholt wrote:
> [...]

Ilia Mirkin writes:
> [...]

On Sun, Nov 22, 2015 at 7:12 PM, Eric Anholt wrote:
> [...]

Ilia Mirkin writes:
> [...]

On Fri, Nov 20, 2015 at 11:19 PM, Ilia Mirkin wrote:
> [...]
Ah, okay. Never mind then.
On Fri, Nov 20, 2015 at 05:19:45PM -0500, Ilia Mirkin wrote:
> [...]
Right... so I'm looking for concrete things I can look for in tests to
determine whether run_concurrent=False is set incorrectly. I know the
*approximate* reasons, but I'd like to be certain and then go grep it
all and remove the run_concurrent flag from 75% of those that have it
(either by upd[...]
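The grep Ilia describes could be sketched roughly as below. This is a hypothetical helper, not a tool that ships with piglit, and the sample profile lines and test names in it are made up for illustration:

```python
import re

# Match tests that explicitly pass run_concurrent=False in a profile file.
PATTERN = re.compile(r'run_concurrent\s*=\s*False')

def find_non_concurrent(profile_source: str) -> list[int]:
    """Return 1-based line numbers where run_concurrent=False is set."""
    return [
        lineno
        for lineno, line in enumerate(profile_source.splitlines(), start=1)
        if PATTERN.search(line)
    ]

# Toy input standing in for a piglit profile file such as tests/all.py;
# the test names here are invented.
sample = """\
g(['some-front-buffer-test'], run_concurrent=False)
g(['plain-texture-test'])
g(['timer-query-test'], run_concurrent=False)
"""

print(find_non_concurrent(sample))  # prints [1, 3]
```

From there one could audit each flagged line by hand and drop the flag where it is not actually needed.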
I don't remember. I asked Ken about it when Marek updated a huge swath
of tests to run concurrent and I swapped the default flag from
non-concurrent to concurrent, but I don't remember all of the details.
Front buffer rendering and timer query were two cases where concurrent
definitely wasn't safe.
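The timer-query hazard can be illustrated with a toy CPU analogy (this is plain Python, not GL): an elapsed-time measurement picks up interference from concurrent work on a shared resource, the same way a GL_TIME_ELAPSED query on a shared GPU picks up work submitted by other contexts.

```python
import threading
import time

def busy_loop(n: int) -> int:
    # Pure-Python busywork that competes for the CPU/GIL.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(n: int) -> float:
    # Measure wall-clock time of the workload, like a timer query would.
    start = time.perf_counter()
    busy_loop(n)
    return time.perf_counter() - start

# Measure once on a quiet interpreter...
quiet = timed(300_000)

# ...and once while competing threads fight for the same resource.
workers = [threading.Thread(target=busy_loop, args=(3_000_000,))
           for _ in range(4)]
for w in workers:
    w.start()
contended = timed(300_000)
for w in workers:
    w.join()

# No asserted ratio: the skew depends on the machine, but `contended`
# is typically noticeably larger than `quiet`.
print(f"quiet={quiet:.4f}s contended={contended:.4f}s")
```

A timing-sensitive test run next to an unrelated workload can fail for reasons that have nothing to do with the driver under test, which is why such tests get marked non-concurrent.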
And how do you tell if a test is using front buffer rendering? Is that
the only situation, or are there others?
On Fri, Nov 20, 2015 at 3:41 PM, Dylan Baker wrote:
> [...]
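One mechanical answer to the "how do you tell?" question would be to scan a test's C source for API usage that is known-unsafe under concurrency. This is only a sketch; the marker list below is an assumption drawn from this thread (front buffer rendering, timer queries), not an exhaustive or official rule:

```python
# Known-unsafe API markers to look for in a test's source. Assumed list,
# based only on the cases named in this thread.
UNSAFE_MARKERS = (
    'glDrawBuffer(GL_FRONT',   # front buffer rendering
    'glReadBuffer(GL_FRONT',   # reading back the front buffer
    'GL_TIME_ELAPSED',         # timer queries measure shared GPU time
    'GL_TIMESTAMP',
)

def looks_non_concurrent(c_source: str) -> bool:
    """Heuristic: does this test source touch any known-unsafe API?"""
    return any(marker in c_source for marker in UNSAFE_MARKERS)

# Toy sources; real tests live under piglit's tests/ tree.
front_buffer_test = 'glDrawBuffer(GL_FRONT); glClear(GL_COLOR_BUFFER_BIT);'
plain_test = 'glDrawBuffer(GL_BACK); glClear(GL_COLOR_BUFFER_BIT);'

print(looks_non_concurrent(front_buffer_test))  # prints True
print(looks_non_concurrent(plain_test))         # prints False
```

A textual scan like this would only flag candidates for review; it cannot prove a test safe, since there may be other unsafe patterns not on the list.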
Any tests that use front buffer rendering cannot be run concurrently. I
think there are some other cases.

On Nov 20, 2015 12:32, "Ilia Mirkin" wrote:
> [...]
It looks like we're up to something like 1K non-concurrent piglit
tests... maybe more. Can someone who actually understands the issues
explain what makes a piglit test unreliable when run concurrently with
another test? Then we can go and enable concurrency on probably 75% of
the currently-marked-non-concurrent tests.