[ https://issues.apache.org/jira/browse/OPENJPA-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12864025#action_12864025 ]

Simon So commented on OPENJPA-1648:
-----------------------------------

Hi Pinaki

I tried wrapping the block in a try/finally and calling threadPool.shutdown() in the finally block.

That seems to work.  100k transactions, no problem now.
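
For reference, here is roughly what the workaround looks like.  This is only a minimal sketch, not the actual Slice store manager code; the class, method and variable names (FlushSketch, flush(), sliceTasks, threadPool) are just illustrative:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FlushSketch {
    public void flush(List<Runnable> sliceTasks) throws Exception {
        // still one pool per flush(), as in the current code, but shut down in finally
        ExecutorService threadPool =
            Executors.newFixedThreadPool(Math.max(1, sliceTasks.size()));
        try {
            List<Future<?>> futures = new ArrayList<Future<?>>();
            for (Runnable task : sliceTasks) {
                futures.add(threadPool.submit(task));
            }
            for (Future<?> f : futures) {
                f.get(); // wait for each slice to finish flushing
            }
        } finally {
            threadPool.shutdown(); // let the worker threads die instead of piling up
        }
    }
}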

You nailed it: the excessive pool creation in every flush() is the culprit.

I am not sure a CachedThreadPool alone would have solved the problem.  You would 
still have more flush() calls coming along the way, and then we would need to 
define a RejectedExecutionHandler.  We can't abort and we can't discard.

Since the pool goes out of scope by the time flush() is done, we probably need 
to shut it down before it goes out of scope (so that expired threads no longer 
hang around).
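
If we did want a cached pool, I think it would have to be a single pool shared 
across flush() calls and shut down only when the broker closes.  Something like 
the sketch below -- the names (SliceExecutor, getPool, close) are made up, not 
the real Slice classes:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SliceExecutor {
    // one pool reused by every flush(); idle threads expire on their own
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public ExecutorService getPool() {
        return pool;
    }

    // called once, when the broker / persistence unit shuts down
    public void close() {
        pool.shutdown();
    }
}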

I will keep stress-testing the stack and see if there are more problems.

Cheers,
Simon

http://openjpa.208410.n2.nabble.com/Spring-3-0-2-OpenJPA-2-0-Slice-OutOfMemoryError-shortly-after-pounding-1000-threads-to-the-system-td5000822.html#a5000822


> Slice thread pool breaks down under high concurrency 
> -----------------------------------------------------
>
>                 Key: OPENJPA-1648
>                 URL: https://issues.apache.org/jira/browse/OPENJPA-1648
>             Project: OpenJPA
>          Issue Type: Bug
>          Components: slice
>            Reporter: Pinaki Poddar
>             Fix For: 2.1.0
>
>
> Slice thread pool breaks down under heavy usage [1].
> This is due to a poor choice of thread pool.
> Also, creating a thread pool for every flush() is inefficient.
> A simple solution would be to use a cached thread pool, which will be upper-
> bounded by the system's capacity for concurrent native threads.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
