[ https://issues.apache.org/jira/browse/DERBY-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13106042#comment-13106042 ]

Ramin Baradari commented on DERBY-5416:
---------------------------------------

Patch "compress_test_5416.patch" adds a new test method to 
CompressTableTest.java that reproduces the crash. The test reproduced the crash 
on my systems each time I ran it but the bug depends a bit on the proper timing 
so it might actually succeed on systems. The test is a bit slow takes about 
10minutes to complete.

The test creates a table with an index whose data size is larger than the 
512 MB maximum heap size of the test runner (it should be about 600 MB or 
more). It then fills the heap in chunks of several megabytes until an OOM 
occurs, which is caught and ignored. The reference to that dummy data is then 
released and the table compression is called.
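
For reference, the rough shape of the reproduction is sketched below. This is 
not the attached patch; the JDBC URL, table and index names, column sizes and 
row counts are placeholders I picked for illustration only:

    // Rough, self-contained sketch of the reproduction steps; not the attached
    // patch. Connection URL, names and row counts are illustrative.
    import java.sql.*;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class CompressOOMRepro {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(
                    "jdbc:derby:reprodb;create=true")) {
                c.setAutoCommit(false);
                Statement s = c.createStatement();
                s.execute("CREATE TABLE big (id INT, pad VARCHAR(1000))");
                s.execute("CREATE INDEX big_idx ON big (pad)");

                // 1. Insert enough rows that the index alone is larger than
                //    the 512 MB heap the test runner is started with.
                char[] chars = new char[990];
                Arrays.fill(chars, 'x');
                PreparedStatement ins =
                        c.prepareStatement("INSERT INTO big VALUES (?, ?)");
                for (int i = 0; i < 800_000; i++) {
                    ins.setInt(1, i);
                    ins.setString(2, i + new String(chars)); // unique keys
                    ins.executeUpdate();
                    if (i % 10_000 == 0) c.commit();
                }
                c.commit();

                // 2. Fill the heap with dummy data in multi-megabyte chunks
                //    until an OutOfMemoryError occurs; catch and ignore it.
                List<byte[]> filler = new ArrayList<>();
                try {
                    while (true) filler.add(new byte[8 * 1024 * 1024]);
                } catch (OutOfMemoryError ignored) { }

                // 3. Release the reference to the dummy data so the next GC
                //    run frees most of the heap.
                filler = null;

                // 4. Compress the table; recreating the oversized index now
                //    triggers the runaway sort buffer growth and the OOM.
                try (CallableStatement cs = c.prepareCall(
                        "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'BIG', 1)")) {
                    cs.execute();
                }
                c.commit();
            }
        }
    }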

> SYSCS_COMPRESS_TABLE causes an OutOfMemoryError when the heap is full at call 
> time and then gets mostly garbage collected later on
> ----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-5416
>                 URL: https://issues.apache.org/jira/browse/DERBY-5416
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.6.2.1, 10.7.1.1, 10.8.1.2
>            Reporter: Ramin Baradari
>            Priority: Critical
>         Attachments: compress_test_5416.patch
>
>
> When compressing a table with an index that is larger than the maximum heap 
> size, and which therefore cannot be held in memory as a whole, an 
> OutOfMemoryError can occur. 
> For this to happen, the heap usage must be close to the maximum heap size at 
> the start of the index recreation, and then, while the entries are sorted, a 
> garbage collection run must clean out most of the heap. This can happen 
> because a concurrent process releases a huge chunk of memory, or simply 
> because the buffer of a previous table compression has not yet been garbage 
> collected. 
> The internal heuristic that guesses when more memory can be used by the 
> merge inserter then estimates that more memory is available, and the sort 
> buffer gets doubled. The buffer size keeps getting doubled until the heap 
> usage is back to the level measured when the merge inserter was first 
> initialized, or until the OOM occurs.
> The problem lies in MergeInserter.insert(...). The check that decides whether 
> the buffer can be doubled contains the expression "estimatedMemoryUsed < 0", 
> where estimatedMemoryUsed is the difference between the current heap usage 
> and the heap usage at initialization. Unfortunately, in the scenario 
> described above this expression stays true until the heap usage gets close 
> to the maximum heap size again, and only then does the doubling of the 
> buffer size stop.
> I've tested it with 10.6.2.1, 10.7.1.1 and 10.8.1.2, but the actual bug most 
> likely exists in prior versions too.
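
To make the runaway doubling described in the quoted report easier to see, 
here is a small toy program that mimics the heuristic. It is not the Derby 
source; usedHeap(), the buffer variable and the iteration cap are stand-ins I 
made up, and only the "estimatedMemoryUsed < 0" comparison comes from the 
report:

    // Toy illustration of the flawed check, not Derby code.
    public class RunawayDoublingDemo {

        // Stand-in for "current heap usage" as the heuristic would measure it.
        static long usedHeap() {
            Runtime r = Runtime.getRuntime();
            return r.totalMemory() - r.freeMemory();
        }

        public static void main(String[] args) throws Exception {
            // The heap is nearly full when the merge inserter is initialized:
            // a large block of garbage-to-be is still live at snapshot time.
            byte[] filler = new byte[128 * 1024 * 1024];
            filler[0] = 1;                      // keep the allocation live here
            long beginMemory = usedHeap();      // baseline taken on a full heap

            // Later, a GC run frees that block, so the current usage drops
            // far below the baseline.
            filler = null;
            System.gc();
            Thread.sleep(100);

            int sortBufferCapacity = 1024;
            // The heuristic: while estimatedMemoryUsed < 0, assume there is
            // room to grow and double the sort buffer. Because the baseline
            // was taken on a full heap, the difference stays negative until
            // the heap fills up again, so the doubling does not stop in time.
            for (int i = 0; i < 20; i++) {      // capped here; Derby instead
                                                // runs into an OutOfMemoryError
                long estimatedMemoryUsed = usedHeap() - beginMemory;
                if (estimatedMemoryUsed >= 0) break;
                sortBufferCapacity *= 2;
                System.out.println("doubling sort buffer capacity to "
                        + sortBufferCapacity);
            }
        }
    }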

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
