Re: [Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading opengl32.dll

2014-11-17 Thread Jose Fonseca
Thanks Emil, makes sense. I totally missed the stable tag.

Jose 


From: Emil Velikov 
Sent: 16 November 2014 21:12
To: Jose Fonseca; Tom Stellard
Cc: emil.l.veli...@gmail.com; mesa-dev@lists.freedesktop.org
Subject: Re: [Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading 
opengl32.dll

On 13/11/14 11:10, Jose Fonseca wrote:
> Hi Tom,
>
> That's peculiar. It looks like pthreads got into a weird state somehow.  
> I don't precisely understand how, though.  Maybe there's a race inside 
> pipe_semaphore_signal() with the destruction of the semaphore.
>
> I think the best thing for now is to revert to old behavior for non-Windows 
> platforms:
>
Hi Jose,

Seems like the patch was missing the mesa-stable tag, unlike the commit
that caused the regression. I've gone ahead, squashed the two and
committed them to 10.3.
Please let me know if I've missed anything :)

Thanks
Emil

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev


Re: [Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading opengl32.dll

2014-11-16 Thread Emil Velikov
On 13/11/14 11:10, Jose Fonseca wrote:
> Hi Tom,
> 
> That's peculiar. It looks like pthreads got into a weird state somehow.  
> I don't precisely understand how, though.  Maybe there's a race inside 
> pipe_semaphore_signal() with the destruction of the semaphore.
> 
> I think the best thing for now is to revert to old behavior for non-Windows 
> platforms:
> 
Hi Jose,

Seems like the patch was missing the mesa-stable tag, unlike the commit
that caused the regression. I've gone ahead, squashed the two and
committed them to 10.3.
Please let me know if I've missed anything :)

Thanks
Emil

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev


Re: [Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading opengl32.dll

2014-11-13 Thread Tom Stellard
On Thu, Nov 13, 2014 at 11:10:39AM +, Jose Fonseca wrote:
> Hi Tom,
> 
> That's peculiar. It looks like pthreads got into a weird state somehow.  
> I don't precisely understand how, though.  Maybe there's a race inside 
> pipe_semaphore_signal() with the destruction of the semaphore.
> 
> I think the best thing for now is to revert to old behavior for non-Windows 
> platforms:
> 
> diff --git a/src/gallium/drivers/llvmpipe/lp_rast.c 
> b/src/gallium/drivers/llvmpipe/lp_rast.c
> index 6b54d43..e168766 100644
> --- a/src/gallium/drivers/llvmpipe/lp_rast.c
> +++ b/src/gallium/drivers/llvmpipe/lp_rast.c
> @@ -800,7 +800,9 @@ static PIPE_THREAD_ROUTINE( thread_function, init_data )
>pipe_semaphore_signal(&task->work_done);
> }
>  
> +#ifdef _WIN32
> pipe_semaphore_signal(&task->work_done);
> +#endif
>  
> return 0;
>  }
> @@ -891,7 +893,11 @@ void lp_rast_destroy( struct lp_rasterizer *rast )
>  * We don't actually call pipe_thread_wait to avoid dead lock on Windows
>  * per https://bugs.freedesktop.org/show_bug.cgi?id=76252 */
> for (i = 0; i < rast->num_threads; i++) {
> +#ifdef _WIN32
>pipe_semaphore_wait(&rast->tasks[i].work_done);
> +#else
> +  pipe_thread_wait(rast->threads[i]);
> +#endif
> }
>  
> /* Clean up per-thread data */
> 
> 
> Because I don't think that the Windows deadlock ever happens on Linux.
> 

This solution works for me.  Feel free to commit.

I wonder if the problem may be that the pipe-loader is unloading pipe_swrast.so
before all the threads have finished.
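
A contrived standalone illustration of that hazard (the module path and symbol
name below are made up, nothing to do with the real pipe-loader code):

   /* If a shared object spawns a worker thread and is then dlclose()d before
    * that thread exits, the thread's code pages get unmapped and it crashes
    * at addresses the debugger can no longer resolve -- much like the bare
    * "?? ()" frames in thread 1 above. */
   #include <dlfcn.h>
   #include <stdio.h>

   int main(void)
   {
      void *handle = dlopen("./pipe_swrast.so", RTLD_NOW);  /* hypothetical path */
      if (!handle) {
         fprintf(stderr, "dlopen: %s\n", dlerror());
         return 1;
      }

      /* hypothetical entry point that spawns worker threads inside the module */
      void (*probe)(void) = (void (*)(void)) dlsym(handle, "probe_screens");
      if (probe)
         probe();

      /* If a worker thread created by the module is still running, its code
       * disappears here and it crashes in unmapped memory. */
      dlclose(handle);
      return 0;
   }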

-Tom


> Jose
> 
> 
> ____________________
> From: Tom Stellard 
> Sent: 13 November 2014 01:45
> To: Jose Fonseca
> Cc: mesa-dev@lists.freedesktop.org; Roland Scheidegger
> Subject: Re: [Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading 
> opengl32.dll
> 
> On Fri, Nov 07, 2014 at 04:52:25PM +, jfons...@vmware.com wrote:
> > From: José Fonseca 
> >
> 
> Hi Jose,
> 
> This patch is causing random segfaults with OpenCL programs on radeonsi.
> I haven't been able to figure out exactly what is happening, so I was
> hoping you could help.
> 
> I think the problem has something to do with the fact that when clover
> probes the hardware for OpenCL devices, the pipe_loader creates an
> llvmpipe screen, checks the value of PIPE_CAP_COMPUTE, and then destroys
> the screen since PIPE_CAP_COMPUTE is 0.
> 
> The only way I can reproduce this bug is by running the piglit OpenCL
> tests concurrently.  If it helps, here are the stack traces
> from one of the core dumps I captured from a piglit run:
> 
> (gdb) thread 1
> [Switching to thread 1 (Thread 0x7f6d53cdf700 (LWP 18653))]
> #0  0x7f6d53e56d2d in ?? ()
> (gdb) bt
> #0  0x7f6d53e56d2d in ?? ()
> #1  0x in ?? ()
> (gdb) thread 2
> [Switching to thread 2 (Thread 0x7f6d5495f700 (LWP 18652))]
> #0  0x7f6d5aacd44c in pthread_cond_wait () from /lib64/libpthread.so.0
> (gdb) bt
> #0  0x7f6d5aacd44c in pthread_cond_wait () from /lib64/libpthread.so.0
> #1  0x7f6d54c71dbb in mtx_init (mtx=0x7f6d54c71dbb ,type=0) 
> at ../../../../../include/c11/threads_posix.h:182
> #2  0x7f6d54c72157 in radeon_set_fd_access 
> (applier=0x61e828,owner=0x61e800, mutex=0x7f6d54c71dbb , 
> request=0,request_name=0x0, enable=238 '\356') at radeon_drm_winsys.c:70
> #3  0x7f6d54c7ad30 in radeon_drm_cs_emit_ioctl (param=0x61e4f0) at 
> radeon_drm_winsys.c:598
> #4  0x7f6d54c71ce0 in cnd_wait (cond=0x61e4f0, mtx=0x7f6d54c7ad07 
> ) at 
> ../../../../../include/c11/threads_posix.h:152
> #5  0x7f6d5aac91da in start_thread () from /lib64/libpthread.so.0
> #6  0x7f6d5afd5d7d in clone () from /lib64/libc.so.6
> (gdb) thread 3
> [Switching to thread 3 (Thread 0x7f6d5c20c740 (LWP 18649))]
> #0  0x7f6d5afae73e in re_node_set_insert_last () from /lib64/libc.so.6
> (gdb) bt
> #0  0x7f6d5afae73e in re_node_set_insert_last () from /lib64/libc.so.6
> #1  0x7f6d5afae7fe in register_state () from /lib64/libc.so.6
> #2  0x7f6d5afb1d39 in re_acquire_state_context () from /lib64/libc.so.6
> #3  0x7f6d5afbaa95 in re_compile_internal () from /lib64/libc.so.6
> #4  0x7f6d5afbb603 in regcomp () from /lib64/libc.so.6
> #5  0x00403e9b in regex_get_matches (src=0x63e6c0 "float", 
> pattern=0x40b940 "^ulong|ulong2|ulong3|ulong4|ulong8|ulong16$", pmatch=0x0, 
> size=0, cflags=4) at 
> /home/tstellar/piglit/tests/cl/program/program-tester.c:476
> #6  0x004040e2 in regex_match (src=0x63e6c0 "float", pattern=0x40b940 
> "^ulong|ulong2|ulong3|ulong4|ulo

Re: [Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading opengl32.dll

2014-11-13 Thread Jose Fonseca
Hi Tom,

That's peculiar. It looks like pthreads got into a weird state somehow.  I 
don't precisely understand how, though.  Maybe there's a race inside 
pipe_semaphore_signal() with the destruction of the semaphore.
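
Roughly the kind of ordering that would cause it, sketched with a hypothetical
mutex+condvar semaphore (this is a simplification, not the actual
pipe_semaphore code):

   #include <pthread.h>
   #include <stdlib.h>

   struct sem {
      pthread_mutex_t mutex;
      pthread_cond_t  cond;
      int             counter;
   };

   static void sem_signal(struct sem *s)
   {
      pthread_mutex_lock(&s->mutex);
      s->counter++;
      pthread_cond_signal(&s->cond);
      /* The waiter can be released as soon as the lock is dropped, yet this
       * call may still be touching *s on its way out. */
      pthread_mutex_unlock(&s->mutex);
   }

   static void sem_wait_one(struct sem *s)
   {
      pthread_mutex_lock(&s->mutex);
      while (s->counter == 0)
         pthread_cond_wait(&s->cond, &s->mutex);
      s->counter--;
      pthread_mutex_unlock(&s->mutex);
   }

   static void sem_destroy_and_free(struct sem *s)
   {
      /* If this runs as soon as sem_wait_one() returns in the destroying
       * thread, the signalling thread may not have left sem_signal() yet
       * and would then be touching freed memory. */
      pthread_cond_destroy(&s->cond);
      pthread_mutex_destroy(&s->mutex);
      free(s);
   }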

I think the best thing for now is to revert to old behavior for non-Windows 
platforms:

diff --git a/src/gallium/drivers/llvmpipe/lp_rast.c 
b/src/gallium/drivers/llvmpipe/lp_rast.c
index 6b54d43..e168766 100644
--- a/src/gallium/drivers/llvmpipe/lp_rast.c
+++ b/src/gallium/drivers/llvmpipe/lp_rast.c
@@ -800,7 +800,9 @@ static PIPE_THREAD_ROUTINE( thread_function, init_data )
   pipe_semaphore_signal(&task->work_done);
}
 
+#ifdef _WIN32
pipe_semaphore_signal(&task->work_done);
+#endif
 
return 0;
 }
@@ -891,7 +893,11 @@ void lp_rast_destroy( struct lp_rasterizer *rast )
 * We don't actually call pipe_thread_wait to avoid dead lock on Windows
 * per https://bugs.freedesktop.org/show_bug.cgi?id=76252 */
for (i = 0; i < rast->num_threads; i++) {
+#ifdef _WIN32
   pipe_semaphore_wait(&rast->tasks[i].work_done);
+#else
+  pipe_thread_wait(rast->threads[i]);
+#endif
}
 
/* Clean up per-thread data */


Because I don't think that the Windows deadlock ever happens on Linux.

Jose



From: Tom Stellard 
Sent: 13 November 2014 01:45
To: Jose Fonseca
Cc: mesa-dev@lists.freedesktop.org; Roland Scheidegger
Subject: Re: [Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading 
opengl32.dll

On Fri, Nov 07, 2014 at 04:52:25PM +, jfons...@vmware.com wrote:
> From: José Fonseca 
>

Hi Jose,

This patch is causing random segfaults with OpenCL programs on radeonsi.
I haven't been able to figure out exactly what is happening, so I was
hoping you could help.

I think the problem has something to do with the fact that when clover
probes the hardware for OpenCL devices, the pipe_loader creates an
llvmpipe screen, checks the value of PIPE_CAP_COMPUTE, and then destroys
the screen since PIPE_CAP_COMPUTE is 0.

The only way I can reproduce this bug is by running the piglit OpenCL
tests concurrently.  If it helps, here are the stack traces
from one of the core dumps I captured from a piglit run:

(gdb) thread 1
[Switching to thread 1 (Thread 0x7f6d53cdf700 (LWP 18653))]
#0  0x7f6d53e56d2d in ?? ()
(gdb) bt
#0  0x7f6d53e56d2d in ?? ()
#1  0x in ?? ()
(gdb) thread 2
[Switching to thread 2 (Thread 0x7f6d5495f700 (LWP 18652))]
#0  0x7f6d5aacd44c in pthread_cond_wait () from /lib64/libpthread.so.0
(gdb) bt
#0  0x7f6d5aacd44c in pthread_cond_wait () from /lib64/libpthread.so.0
#1  0x7f6d54c71dbb in mtx_init (mtx=0x7f6d54c71dbb ,type=0) at 
../../../../../include/c11/threads_posix.h:182
#2  0x7f6d54c72157 in radeon_set_fd_access 
(applier=0x61e828,owner=0x61e800, mutex=0x7f6d54c71dbb , 
request=0,request_name=0x0, enable=238 '\356') at radeon_drm_winsys.c:70
#3  0x7f6d54c7ad30 in radeon_drm_cs_emit_ioctl (param=0x61e4f0) at 
radeon_drm_winsys.c:598
#4  0x7f6d54c71ce0 in cnd_wait (cond=0x61e4f0, mtx=0x7f6d54c7ad07 
) at 
../../../../../include/c11/threads_posix.h:152
#5  0x7f6d5aac91da in start_thread () from /lib64/libpthread.so.0
#6  0x7f6d5afd5d7d in clone () from /lib64/libc.so.6
(gdb) thread 3
[Switching to thread 3 (Thread 0x7f6d5c20c740 (LWP 18649))]
#0  0x7f6d5afae73e in re_node_set_insert_last () from /lib64/libc.so.6
(gdb) bt
#0  0x7f6d5afae73e in re_node_set_insert_last () from /lib64/libc.so.6
#1  0x7f6d5afae7fe in register_state () from /lib64/libc.so.6
#2  0x7f6d5afb1d39 in re_acquire_state_context () from /lib64/libc.so.6
#3  0x7f6d5afbaa95 in re_compile_internal () from /lib64/libc.so.6
#4  0x7f6d5afbb603 in regcomp () from /lib64/libc.so.6
#5  0x00403e9b in regex_get_matches (src=0x63e6c0 "float", 
pattern=0x40b940 "^ulong|ulong2|ulong3|ulong4|ulong8|ulong16$", pmatch=0x0, 
size=0, cflags=4) at /home/tstellar/piglit/tests/cl/program/program-tester.c:476
#6  0x004040e2 in regex_match (src=0x63e6c0 "float", pattern=0x40b940 
"^ulong|ulong2|ulong3|ulong4|ulong8|ulong16$") at 
/home/tstellar/piglit/tests/cl/program/program-tester.c:532
#7  0x004059c6 in get_test_arg (src=0x63de70 "1 buffer float[7] 0.5 
-0.5 0.0 -0.0 nan -3.99 1.5", test=0x645710, arg_in=true) at 
/home/tstellar/piglit/tests/cl/program/program-tester.c:1016
#8  0x00406f4a in parse_config ( config_str=0x63fe30 
"\n[config]\nname: Test float trunc built-in on CL 1.1\nclc_version_min: 
10\ndimensions: 1\n\n[test]\nname: trunc float1\nkernel_name: 
test_1_trunc_float\nglobal_size: 7 0 0\n\narg_out: 0 buffer float[7] 0.0 
-0.0"..., config=0x60e260 ) at 
/home/tstellar/piglit/tests/cl/program/program-tester.c:1410
#9  0x004074a7 in init (argc=2, argv=0x7fff46612d88, config=0x60e260 
) at /home/tstellar/piglit

Re: [Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading opengl32.dll

2014-11-12 Thread Tom Stellard
On Fri, Nov 07, 2014 at 04:52:25PM +, jfons...@vmware.com wrote:
> From: José Fonseca 
> 

Hi Jose,

This patch is causing random segfaults with OpenCL programs on radeonsi.
I haven't been able to figure out exactly what is happening, so I was
hoping you could help.

I think the problem has something to do with the fact that when clover
probes the hardware for OpenCL devices, the pipe_loader creates an
llvmpipe screen, checks the value of PIPE_CAP_COMPUTE, and then destroys
the screen since PIPE_CAP_COMPUTE is 0.
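
Roughly the sequence I mean (the helper names here are hypothetical, not the
actual clover/pipe-loader code, though get_param()/destroy() and
PIPE_CAP_COMPUTE are the real pipe_screen interface):

   #include <stdbool.h>
   #include "pipe/p_defines.h"
   #include "pipe/p_screen.h"

   /* Probe one candidate device: create its screen, ask whether it can do
    * compute, and tear the screen down again if it cannot. */
   static bool
   device_supports_compute(struct pipe_screen *(*create_screen)(void))
   {
      struct pipe_screen *screen = create_screen();   /* e.g. llvmpipe */

      if (!screen)
         return false;

      if (!screen->get_param(screen, PIPE_CAP_COMPUTE)) {
         /* llvmpipe reports 0 here, so its screen is destroyed almost
          * immediately after being created -- possibly while its rasterizer
          * threads are still winding down. */
         screen->destroy(screen);
         return false;
      }

      /* In the real flow the screen would be kept and handed to clover. */
      return true;
   }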

The only way I can reproduce this bug is by running the piglit OpenCL
tests concurrently.  If it helps, here are the stack traces
from one of the core dumps I captured from a piglit run:

(gdb) thread 1
[Switching to thread 1 (Thread 0x7f6d53cdf700 (LWP 18653))]
#0  0x7f6d53e56d2d in ?? ()
(gdb) bt
#0  0x7f6d53e56d2d in ?? ()
#1  0x in ?? ()
(gdb) thread 2
[Switching to thread 2 (Thread 0x7f6d5495f700 (LWP 18652))]
#0  0x7f6d5aacd44c in pthread_cond_wait () from /lib64/libpthread.so.0
(gdb) bt
#0  0x7f6d5aacd44c in pthread_cond_wait () from /lib64/libpthread.so.0
#1  0x7f6d54c71dbb in mtx_init (mtx=0x7f6d54c71dbb ,type=0) at 
../../../../../include/c11/threads_posix.h:182
#2  0x7f6d54c72157 in radeon_set_fd_access 
(applier=0x61e828,owner=0x61e800, mutex=0x7f6d54c71dbb , 
request=0,request_name=0x0, enable=238 '\356') at radeon_drm_winsys.c:70
#3  0x7f6d54c7ad30 in radeon_drm_cs_emit_ioctl (param=0x61e4f0) at 
radeon_drm_winsys.c:598
#4  0x7f6d54c71ce0 in cnd_wait (cond=0x61e4f0, mtx=0x7f6d54c7ad07 
) at 
../../../../../include/c11/threads_posix.h:152
#5  0x7f6d5aac91da in start_thread () from /lib64/libpthread.so.0
#6  0x7f6d5afd5d7d in clone () from /lib64/libc.so.6
(gdb) thread 3
[Switching to thread 3 (Thread 0x7f6d5c20c740 (LWP 18649))]
#0  0x7f6d5afae73e in re_node_set_insert_last () from /lib64/libc.so.6
(gdb) bt
#0  0x7f6d5afae73e in re_node_set_insert_last () from /lib64/libc.so.6
#1  0x7f6d5afae7fe in register_state () from /lib64/libc.so.6
#2  0x7f6d5afb1d39 in re_acquire_state_context () from /lib64/libc.so.6
#3  0x7f6d5afbaa95 in re_compile_internal () from /lib64/libc.so.6
#4  0x7f6d5afbb603 in regcomp () from /lib64/libc.so.6
#5  0x00403e9b in regex_get_matches (src=0x63e6c0 "float", 
pattern=0x40b940 "^ulong|ulong2|ulong3|ulong4|ulong8|ulong16$", pmatch=0x0, 
size=0, cflags=4) at /home/tstellar/piglit/tests/cl/program/program-tester.c:476
#6  0x004040e2 in regex_match (src=0x63e6c0 "float", pattern=0x40b940 
"^ulong|ulong2|ulong3|ulong4|ulong8|ulong16$") at 
/home/tstellar/piglit/tests/cl/program/program-tester.c:532
#7  0x004059c6 in get_test_arg (src=0x63de70 "1 buffer float[7] 0.5 
-0.5 0.0 -0.0 nan -3.99 1.5", test=0x645710, arg_in=true) at 
/home/tstellar/piglit/tests/cl/program/program-tester.c:1016
#8  0x00406f4a in parse_config ( config_str=0x63fe30 
"\n[config]\nname: Test float trunc built-in on CL 1.1\nclc_version_min: 
10\ndimensions: 1\n\n[test]\nname: trunc float1\nkernel_name: 
test_1_trunc_float\nglobal_size: 7 0 0\n\narg_out: 0 buffer float[7] 0.0 
-0.0"..., config=0x60e260 ) at 
/home/tstellar/piglit/tests/cl/program/program-tester.c:1410
#9  0x004074a7 in init (argc=2, argv=0x7fff46612d88, config=0x60e260 
) at /home/tstellar/piglit/tests/cl/program/program-tester.c:1555
#10 0x7f6d5be0232c in piglit_cl_program_test_init (argc=2, 
argv=0x7fff46612d88, void_config=0x60e260 ) at 
/home/tstellar/piglit/tests/util/piglit-framework-cl-program.c:60
#11 0x7f6d5be00f33 in piglit_cl_framework_run (argc=2, argv=0x7fff46612d88) 
at /home/tstellar/piglit/tests/util/piglit-framework-cl.c:154
#12 0x00403535 in main (argc=2, argv=0x7fff46612d88) at 
/home/tstellar/piglit/tests/cl/program/program-tester.c:164


Thanks,
Tom

> On Windows, DllMain calls and thread creation/destruction are
> serialized, so when llvmpipe is destroyed from DllMain waiting for the
> rasterizer threads to finish will deadlock.
> 
> So, instead of waiting for rasterizer threads to have finished, simply wait 
> for the
> rasterizer threads to notify they are just about to finish.
> 
> Verified with this very simple program:
> 
>#include <windows.h>
>int main() {
>   HMODULE hModule = LoadLibraryA("opengl32.dll");
>   FreeLibrary(hModule);
>}
> 
> Fixes https://bugs.freedesktop.org/show_bug.cgi?id=76252
> 
> Cc: 10.2 10.3 
> ---
>  src/gallium/drivers/llvmpipe/lp_rast.c | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/src/gallium/drivers/llvmpipe/lp_rast.c 
> b/src/gallium/drivers/llvmpipe/lp_rast.c
> index a3420a2..6b54d43 100644
> --- a/src/gallium/drivers/llvmpipe/lp_rast.c
> +++ b/src/gallium/drivers/llvmpipe/lp_rast.c
> @@ -800,6 +800,8 @@ static PIPE_THREAD_ROUTINE( thread_function, init_data )
>pipe_semaphore_signal(&task->work_done);
> }
>  
> +   pipe_semaphore_signal(&task->work_done);
> +
> retur

Re: [Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading opengl32.dll

2014-11-07 Thread Roland Scheidegger
Reviewed-by: Roland Scheidegger 

On 07.11.2014 17:52, jfons...@vmware.com wrote:
> From: José Fonseca 
> 
> On Windows, DllMain calls and thread creation/destruction are
> serialized, so when llvmpipe is destroyed from DllMain waiting for the
> rasterizer threads to finish will deadlock.
> 
> So, instead of waiting for rasterizer threads to have finished, simply wait 
> for the
> rasterizer threads to notify they are just about to finish.
> 
> Verified with this very simple program:
> 
>#include <windows.h>
>int main() {
>   HMODULE hModule = LoadLibraryA("opengl32.dll");
>   FreeLibrary(hModule);
>}
> 
> Fixes https://bugs.freedesktop.org/show_bug.cgi?id=76252
> 
> Cc: 10.2 10.3 
> ---
>  src/gallium/drivers/llvmpipe/lp_rast.c | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/src/gallium/drivers/llvmpipe/lp_rast.c 
> b/src/gallium/drivers/llvmpipe/lp_rast.c
> index a3420a2..6b54d43 100644
> --- a/src/gallium/drivers/llvmpipe/lp_rast.c
> +++ b/src/gallium/drivers/llvmpipe/lp_rast.c
> @@ -800,6 +800,8 @@ static PIPE_THREAD_ROUTINE( thread_function, init_data )
>pipe_semaphore_signal(&task->work_done);
> }
>  
> +   pipe_semaphore_signal(&task->work_done);
> +
> return 0;
>  }
>  
> @@ -885,9 +887,11 @@ void lp_rast_destroy( struct lp_rasterizer *rast )
>pipe_semaphore_signal(&rast->tasks[i].work_ready);
> }
>  
> -   /* Wait for threads to terminate before cleaning up per-thread data */
> +   /* Wait for threads to terminate before cleaning up per-thread data.
> +* We don't actually call pipe_thread_wait to avoid dead lock on Windows
> +* per https://bugs.freedesktop.org/show_bug.cgi?id=76252 */
> for (i = 0; i < rast->num_threads; i++) {
> -  pipe_thread_wait(rast->threads[i]);
> +  pipe_semaphore_wait(&rast->tasks[i].work_done);
> }
>  
> /* Clean up per-thread data */
> 

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev


[Mesa-dev] [PATCH] llvmpipe: Avoid deadlock when unloading opengl32.dll

2014-11-07 Thread jfonseca
From: José Fonseca 

On Windows, DllMain calls and thread creation/destruction are
serialized, so when llvmpipe is destroyed from DllMain, waiting for the
rasterizer threads to finish will deadlock.
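
(For illustration only, not part of the patch: a minimal standalone DLL showing
the pattern that deadlocks. DllMain runs with the loader lock held, and a
thread cannot finish terminating without acquiring it for its
DLL_THREAD_DETACH notifications.)

   #include <windows.h>

   static HANDLE worker;

   static DWORD WINAPI worker_main(LPVOID arg)
   {
      (void)arg;
      /* Thread termination needs the loader lock to deliver
       * DLL_THREAD_DETACH notifications. */
      return 0;
   }

   BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
   {
      (void)inst;
      (void)reserved;
      switch (reason) {
      case DLL_PROCESS_ATTACH:
         worker = CreateThread(NULL, 0, worker_main, NULL, 0, NULL);
         break;
      case DLL_PROCESS_DETACH:
         /* The loader lock is held here; if the worker has not fully
          * terminated yet, this wait never returns -> deadlock. */
         WaitForSingleObject(worker, INFINITE);
         CloseHandle(worker);
         break;
      }
      return TRUE;
   }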

So, instead of waiting for the rasterizer threads to have finished, simply
wait for them to notify that they are just about to finish.

Verified with this very simple program:

   #include <windows.h>
   int main() {
  HMODULE hModule = LoadLibraryA("opengl32.dll");
  FreeLibrary(hModule);
   }

Fixes https://bugs.freedesktop.org/show_bug.cgi?id=76252

Cc: 10.2 10.3 
---
 src/gallium/drivers/llvmpipe/lp_rast.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/gallium/drivers/llvmpipe/lp_rast.c 
b/src/gallium/drivers/llvmpipe/lp_rast.c
index a3420a2..6b54d43 100644
--- a/src/gallium/drivers/llvmpipe/lp_rast.c
+++ b/src/gallium/drivers/llvmpipe/lp_rast.c
@@ -800,6 +800,8 @@ static PIPE_THREAD_ROUTINE( thread_function, init_data )
   pipe_semaphore_signal(&task->work_done);
}
 
+   pipe_semaphore_signal(&task->work_done);
+
return 0;
 }
 
@@ -885,9 +887,11 @@ void lp_rast_destroy( struct lp_rasterizer *rast )
   pipe_semaphore_signal(&rast->tasks[i].work_ready);
}
 
-   /* Wait for threads to terminate before cleaning up per-thread data */
+   /* Wait for threads to terminate before cleaning up per-thread data.
+* We don't actually call pipe_thread_wait to avoid dead lock on Windows
+* per https://bugs.freedesktop.org/show_bug.cgi?id=76252 */
for (i = 0; i < rast->num_threads; i++) {
-  pipe_thread_wait(rast->threads[i]);
+  pipe_semaphore_wait(&rast->tasks[i].work_done);
}
 
/* Clean up per-thread data */
-- 
1.9.1

___
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev