[
https://issues.apache.org/jira/browse/FLINK-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14056159#comment-14056159
]
Till Rohrmann commented on FLINK-1013:
--------------------------------------
That should do the trick. However, it did not work when I set the lower bound
of bucketCount to 1: the system then complained that the recursive descent of
the hash table construction process was too deep. Thus, we should try the new
formula out and see whether it works.
> ArithmeticException: / by zero in MutableHashTable
> --------------------------------------------------
>
> Key: FLINK-1013
> URL: https://issues.apache.org/jira/browse/FLINK-1013
> Project: Flink
> Issue Type: Bug
> Reporter: Till Rohrmann
>
> I encountered a division by zero exception in the MutableHashTable. It
> happened when I joined two datasets of relatively big records (approx. 40-50
> MB I think). When joining them the buildTableFromSpilledPartition method of
> the MutableHashTable is called. In case that the available buffers are
> smaller than the needed number of buffers, the mutable hash table will
> calculate the bucket count
> {code}
> bucketCount = (int) (((long) totalBuffersAvailable) * RECORD_TABLE_BYTES /
> (avgRecordLenPartition + RECORD_OVERHEAD_BYTES));
> {code}
> If the average record length is sufficiently large, the computed bucket count
> will be 0. Initializing the hash table with a bucket count of 0 then causes
> the division-by-zero exception. I don't know whether this problem can be
> mitigated, but it should at least throw a meaningful exception instead of the
> ArithmeticException.
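A minimal sketch of the failure mode described above, with made-up constants (the values of RECORD_TABLE_BYTES and RECORD_OVERHEAD_BYTES here are illustrative, not Flink's actual ones). It also shows the naive lower-bound guard that, per the comment above, turned out to be insufficient in practice because it led to an overly deep recursive descent:

```java
public class BucketCountSketch {
    // Hypothetical constants for illustration only.
    static final long RECORD_TABLE_BYTES = 24;
    static final long RECORD_OVERHEAD_BYTES = 8;

    // Mirrors the integer arithmetic of the quoted formula: when the
    // average record length dominates the numerator, the result is 0.
    static int bucketCount(long totalBuffersAvailable, long avgRecordLenPartition) {
        return (int) (totalBuffersAvailable * RECORD_TABLE_BYTES
                / (avgRecordLenPartition + RECORD_OVERHEAD_BYTES));
    }

    // Naive guard: clamp to at least 1 bucket so the later division by
    // bucketCount cannot be by zero. (Reported above as not enough on
    // its own, since it can make the build recursion too deep.)
    static int guardedBucketCount(long totalBuffersAvailable, long avgRecordLenPartition) {
        return Math.max(1, bucketCount(totalBuffersAvailable, avgRecordLenPartition));
    }

    public static void main(String[] args) {
        long avgLen = 45L * 1024 * 1024; // ~45 MB records, as in the report
        System.out.println(bucketCount(32, avgLen));        // prints 0
        System.out.println(guardedBucketCount(32, avgLen)); // prints 1
    }
}
```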
--
This message was sent by Atlassian JIRA
(v6.2#6252)