[ 
https://issues.apache.org/jira/browse/DERBY-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12694084#action_12694084
 ] 

Knut Anders Hatlen commented on DERBY-4119:
-------------------------------------------

Thanks again for looking at the patch.

It's true that catching the OOME means that we end up with an array that's a 
lot smaller than Integer.MAX_VALUE (it doesn't have to be a lot smaller, but 
with the current value of GROWTH_MULTIPLIER (2) it will be about half the 
size). But even if it is smaller than ideal, it will still be better than the 
current situation, where it'll crash. And it will also fix problems that may 
arise because the amount of available memory has become smaller than it was 
when we checked it in MergeInserter.insert() and decided to increase the 
maximum size (which was the problem the original check for newArray==null was 
supposed to fix).
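
The fallback described above can be sketched roughly like this (hypothetical class and method names, not Derby's actual MergeSort code; GROWTH_MULTIPLIER and the OOME catch are from the patch discussion):

```java
// Hypothetical sketch of the patch's approach: try to grow the sort
// buffer by GROWTH_MULTIPLIER, capping at Integer.MAX_VALUE, and keep
// the old array if the JVM cannot satisfy the allocation.
public class SortBufferGrowth {
    private static final int GROWTH_MULTIPLIER = 2;

    // Returns a larger copy of 'current', or 'current' itself when the
    // allocation fails with OutOfMemoryError.
    static Object[] tryGrow(Object[] current) {
        long wanted = (long) current.length * GROWTH_MULTIPLIER;
        int capped = (int) Math.min(wanted, Integer.MAX_VALUE);
        try {
            Object[] bigger = new Object[capped];
            System.arraycopy(current, 0, bigger, 0, current.length);
            return bigger;
        } catch (OutOfMemoryError oome) {
            // Allocation failed; fall back to the existing array. The
            // sort then runs with more merge stages instead of crashing.
            return current;
        }
    }
}
```

Because the doubling attempt is the one that fails, the array we end up with is about half of what the heap could theoretically hold, which is the "about half the size" caveat above.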

If we had a way to find the exact maximum array size supported by the JVM, I 
agree that we should use that limit instead of MAX_VALUE, but it doesn't sound 
that appealing to guess a limit that may be suboptimal for JVMs without the 
limitation, and possibly too large for JVMs we haven't tested. If it turns out 
to be a problem, we could improve this further at a later point by successively 
trying to allocate a slightly smaller array on OOME until we are able to 
allocate it. Hopefully most JVMs will have removed this limitation before it 
becomes common for Derby to run into it.
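
The possible future refinement mentioned above (successively retrying with a slightly smaller size on OOME) could look something like this; it is only a sketch with made-up names and an arbitrary ~12.5% back-off step, not committed code:

```java
// Hypothetical back-off allocator: on OutOfMemoryError, shrink the
// requested size a little and retry, until an allocation succeeds or
// we give up and allocate the minimum size.
public class BackoffAllocator {
    static Object[] allocateWithBackoff(int desired, int minimum) {
        int size = desired;
        while (size > minimum) {
            try {
                return new Object[size];
            } catch (OutOfMemoryError oome) {
                // Reduce the request by roughly 1/8 (at least 1) and retry.
                int step = Math.max(1, size >>> 3);
                size -= step;
            }
        }
        // Last resort: the caller's minimum acceptable capacity.
        return new Object[minimum];
    }
}
```

The trade-off is extra failed allocation attempts (and possibly GC pressure) in exchange for a buffer closer to the true per-JVM array-size limit.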

As far as I can see, your assumption is correct that failing to increase the 
size of the array will lead to more stages in the sort, and not cause the sort 
to fail.

> Compress on a large table fails with IllegalArgumentException - Illegal 
> Capacity
> --------------------------------------------------------------------------------
>
>                 Key: DERBY-4119
>                 URL: https://issues.apache.org/jira/browse/DERBY-4119
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.5.1.0
>            Reporter: Kristian Waagan
>            Assignee: Knut Anders Hatlen
>         Attachments: overflow.diff, overflow2.diff, overflow3.diff
>
>
> When compressing a large table, Derby failed with the following exception:
> IllegalArgumentException; Illegal Capacity: -X
> I was able to access the database afterwards, but haven't yet checked if all 
> the data is still available.
> The compress was started with CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('schema', 
> 'table', 1) from ij.
> The data in the table was inserted with 25 concurrent threads. This seems to 
> cause excessive table growth, as the data inserted should weigh in at around 
> 2 GB. The table size after the insert is ten times bigger, 20 GB.
> I have been able to generate the table and do a compress earlier, but then I 
> used fewer insert threads.
> I have also been able to successfully compress the table when retrying after 
> the failure occurred (shut down the database, then booted again and 
> compressed).
> I'm trying to reproduce, and will post more information (like the stack 
> trace) later.
> So far my attempts at reproducing have failed. Normally the data is generated 
> and the compress is started without shutting down the database. My attempts 
> thus far have consisted of doing compress on the existing database (where the 
> failure was first seen).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
