[ https://issues.apache.org/jira/browse/FLINK-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14716324#comment-14716324 ]

ASF GitHub Bot commented on FLINK-2545:
---------------------------------------

Github user StephanEwen commented on the pull request:

    https://github.com/apache/flink/pull/1067#issuecomment-135354418
  
    @ChengXiangLi Do you know what caused the problem initially? I was puzzled,
    because the count in the bucket should never be negative, and a zero-sized
    bucket should work with your original code.
    
    It would be great to capture that error, to see whether the root bug was
    actually somewhere else: not in the bloom filters, but in other parts of
    the hash table structure.
    
    Hopefully Greg can help us to reproduce this...
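For context, a minimal sketch of the failure mode under discussion. The names and
header layout here are hypothetical (this is not the actual MutableHashTable code):
it only illustrates how a negative per-bucket count, read from a corrupted bucket
header, would surface as the NegativeArraySizeException in the trace below, and
where a sanity check could capture the corrupted state before the allocation
masks its origin.

    import java.nio.ByteBuffer;

    public class BucketBloomFilterSketch {

        // Hypothetical header layout: the element count is a short at offset 0.
        static int getBucketCount(ByteBuffer bucketHeader) {
            return bucketHeader.getShort(0); // a corrupted header can yield a negative value
        }

        static void buildBloomFilterForBucket(ByteBuffer bucketHeader) {
            int count = getBucketCount(bucketHeader);
            // The kind of check suggested above: fail loudly here, so the corrupted
            // header is reported instead of an opaque NegativeArraySizeException.
            if (count < 0) {
                throw new IllegalStateException("bucket header reports negative count: " + count);
            }
            int[] hashCodes = new int[count]; // with count < 0: NegativeArraySizeException
            // ... read the stored hash codes and set the corresponding filter bits ...
        }

        public static void main(String[] args) {
            ByteBuffer corrupted = ByteBuffer.allocate(16);
            corrupted.putShort(0, (short) -1);     // simulate a corrupted header
            buildBloomFilterForBucket(corrupted);  // throws IllegalStateException
        }
    }

Such a check would distinguish a bug in the bloom filter code itself from
corruption introduced elsewhere in the hash table structure.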


> NegativeArraySizeException while creating hash table bloom filters
> ------------------------------------------------------------------
>
>                 Key: FLINK-2545
>                 URL: https://issues.apache.org/jira/browse/FLINK-2545
>             Project: Flink
>          Issue Type: Bug
>          Components: Distributed Runtime
>    Affects Versions: master
>            Reporter: Greg Hogan
>            Assignee: Chengxiang Li
>
> The following exception occurred a second time when I immediately re-ran my 
> application, though after recompiling and restarting Flink the subsequent 
> execution ran without error.
> java.lang.Exception: The data preparation for task '...' , caused an error: null
>       at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:465)
>       at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:354)
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NegativeArraySizeException
>       at org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucket(MutableHashTable.java:1160)
>       at org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucketsInPartition(MutableHashTable.java:1143)
>       at org.apache.flink.runtime.operators.hash.MutableHashTable.spillPartition(MutableHashTable.java:1117)
>       at org.apache.flink.runtime.operators.hash.MutableHashTable.insertBucketEntry(MutableHashTable.java:946)
>       at org.apache.flink.runtime.operators.hash.MutableHashTable.insertIntoTable(MutableHashTable.java:868)
>       at org.apache.flink.runtime.operators.hash.MutableHashTable.buildInitialTable(MutableHashTable.java:692)
>       at org.apache.flink.runtime.operators.hash.MutableHashTable.open(MutableHashTable.java:455)
>       at org.apache.flink.runtime.operators.hash.ReusingBuildSecondHashMatchIterator.open(ReusingBuildSecondHashMatchIterator.java:93)
>       at org.apache.flink.runtime.operators.JoinDriver.prepare(JoinDriver.java:195)
>       at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:459)
>       ... 3 more



