On 10/12/2017 12:11 PM, Jose Fonseca wrote:
On 12/10/17 17:51, Brian Paul wrote:
On 10/12/2017 08:04 AM, Jose Fonseca wrote:
The intent here was not so much to match the piglit MSVC build, but apps
built with MSVC in general.

After all, nothing ever prevented us from setting a huge stack size on
MinGW and MSVC alike, as both toolchains allow configuring the stack
size to whatever we want.


The key issue here is that OpenGL drivers don't get to pick the apps
they are loaded into, and real OpenGL applications will likely be built
with MSVC instead of MinGW, and therefore will likely only have the MSVC
default stack size.  And we should err on the side of caution when
testing.


Regardless of the compiler used, bumping the stack size in piglit just
increases the chance that wasteful stack allocations go undetected in
piglit and blow up on real applications.


Therefore I suggest we keep the 1MB default, and try to fix Mesa to be
less stack hungry.  If that's not practical here,

The ir_expression::constant_expression_value() function's local vars
are pretty minimal.  The two that stand out:

    ir_constant *op[ARRAY_SIZE(this->operands)] = { NULL, };
    ir_constant_data data;

are only 32 and 128 bytes, respectively (on a 32-bit build).  I'm not
sure what else accounts for the approximately 2KB activation record.
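
To make the recursion concrete, here's a rough sketch (not Mesa's actual
code; the layout and per-frame figure below are approximations) of how
the stack gets consumed:

    /* Simplified sketch of ir_expression::constant_expression_value().
     * Field names and sizes are approximations, not Mesa's real layout. */
    #include <cstddef>

    struct ir_constant;

    struct ir_constant_data {
       /* union of 16 scalar slots: ~128 bytes on a 32-bit build */
       union { unsigned u[16]; int i[16]; float f[16];
               bool b[16]; double d[16]; };
    };

    struct ir_expression {
       static const unsigned num_operands = 8;
       ir_expression *operands[num_operands]; /* sub-exprs, may be NULL */

       ir_constant *constant_expression_value()
       {
          ir_constant *op[num_operands] = { NULL, }; /* 32 bytes (32-bit) */
          ir_constant_data data;                     /* 128 bytes */

          /* One recursive call per non-NULL operand, so total stack use
           * is roughly frame size times expression-tree depth.  At ~2KB
           * per frame, a 1MB stack overflows around 500 nesting levels. */
          for (unsigned i = 0; i < num_operands; i++) {
             if (operands[i])
                op[i] = operands[i]->constant_expression_value();
          }

          /* ... fold op[] into data and return a new ir_constant ... */
          return NULL;
       }
    };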

I don't see an obvious way to fix the problem.  Even if we could
reduce per-call stack memory, someone could write an evil shader that
adds a thousand or more terms and we'd overflow the stack again.

I'm not following...

If the application is malicious, it can overflow the stack without the
OpenGL driver's help.  We have to trust the application.  After all, the
OpenGL driver is in the same process.

If the application is a browser, which needs to handle untrusted
shaders, then it's the application's responsibility to ensure it has
enough stack to cope with big shaders.

And for the sake of argument, if increasing stack size is the solution
for malicious shaders, where would one stop?  If increasing stack size
is not a solution to malicious shaders, then why is it relevant to the
discussion?

I'm just saying that even if we found a way to "fix Mesa to be less stack hungry", someone could generate a new "evil" Piglit test that'd still break things.  Any construct which generates a really deep IR tree and is evaluated recursively could run out of stack space.

I wasn't thinking about it, but now that you mention malicious shaders, we should probably consider the case of the GLSL compiler running in a driver loaded by the X server for indirect rendering. But that's a whole other topic.



then we should
try to bump the stack size of specific piglit tests (like those that
stress the compiler to the extreme, as CMake allows setting these
options per executable), and only those tests.  Or just mark the
affected tests as expected fail/skip.

The op-selection-bool-bvec4-bvec4.frag test is run with
shader_runner.exe.  I think most of the large/complex shaders in
Piglit are run with shader_runner, so I don't think it makes much
difference whether only shader_runner or all piglit executables are
built with a 2MB stack.

In any case, I've got a patch which only sets shader_runner's stack
size which I'll post next.

I guess another option is to change the
op-selection-bool-bvec4-bvec4.frag test to be simpler, but that's just
sweeping the root problem under the rug.  Someone could always craft a
complex shader which blows out the stack.

That's fine.  I'm not worried about uber shaders at all.  Malicious or not.

What I'm worried about is that once you set the stack to 2MB, some
apparently innocuous code somewhere in the Mesa source tree (probably
completely unrelated to GLSL, for example) will wastefully use up 1MB of
stack, and piglit will not catch the issue because all tests have a 2MB
stack.

And this is not hypothetical: we had several functions in Mesa related
to writing/reading/converting pixels to/from textures that ate up
wasteful amounts of stack.  And more than once, real apps hit stack
overflows because of it.  I'd hate to see us increase the stack sizes of
all our tests, only to have the same issues creep back in with real
applications.

The only way I see to avoid this sort of regression is to continue
using the default stack size, and use big stacks only when necessary, IMO.

So, is increasing the stack size for shader_runner alone to 2MB acceptable to you?

Otherwise, the question is: can we change the stack size at runtime in shader_runner for particular shader tests?  I guess setrlimit(RLIMIT_STACK) is an option on Linux.  I'm not sure about Windows.
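
Or, if a runtime mechanism turns out to be the way to go, shader_runner could run a flagged test's body on a worker thread created with an explicit stack size, which works on both Linux and Windows.  Something along these lines (the wrapper names below are made up, not existing piglit code):

    /* Sketch only, not piglit code: run a callback on a thread with an
     * explicit stack size, so only tests that need a big stack get one. */
    #include <stddef.h>

    struct call { void (*fn)(void *); void *arg; };

    #ifdef _WIN32
    #include <windows.h>

    static DWORD WINAPI trampoline(LPVOID p)
    {
       struct call *c = (struct call *) p;
       c->fn(c->arg);
       return 0;
    }

    static void call_with_stack(void (*fn)(void *), void *arg, size_t size)
    {
       struct call c = { fn, arg };
       HANDLE t = CreateThread(NULL, size, trampoline, &c,
                               STACK_SIZE_PARAM_IS_A_RESERVATION, NULL);
       WaitForSingleObject(t, INFINITE);
       CloseHandle(t);
    }

    #else
    #include <pthread.h>

    static void *trampoline(void *p)
    {
       struct call *c = (struct call *) p;
       c->fn(c->arg);
       return NULL;
    }

    static void call_with_stack(void (*fn)(void *), void *arg, size_t size)
    {
       struct call c = { fn, arg };
       pthread_attr_t attr;
       pthread_t t;

       pthread_attr_init(&attr);
       pthread_attr_setstacksize(&attr, size);
       pthread_create(&t, &attr, trampoline, &c);
       pthread_join(t, NULL);
       pthread_attr_destroy(&attr);
    }
    #endif

    /* e.g. call_with_stack(run_shader_test, state, 2 * 1024 * 1024);
     * where run_shader_test/state stand in for the existing test body. */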

-Brian
