[ https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479719#comment-16479719 ]

Sergey Shelukhin commented on HIVE-19258:
-----------------------------------------

I was able to reproduce the bucketizedhiveinputformat failure on the same query, and then realized I had forgotten to actually apply the patch. The failure, assuming it's the same one as here, is an OOM in one of the stdout logs:
{noformat}
[DEBUG] 2018-05-17 14:00:24.209 [IPC Client (928988232) connection to localhost/127.0.0.1:50278 from sergey] Client - IPC Client (928988232) connection to localhost/127.0.0.1:50278 from sergey: closed
[DEBUG] 2018-05-17 14:00:24.209 [IPC Client (928988232) connection to localhost/127.0.0.1:50278 from sergey] Client - IPC Client (928988232) connection to localhost/127.0.0.1:50278 from sergey: stopped, remaining connections 0
#
# java.lang.OutOfMemoryError: GC overhead limit exceeded
# -XX:OnOutOfMemoryError="kill %p"
#   Executing "kill 5333"...
[ERROR] 2018-05-17 14:01:39.613 [SIGTERM handler] CoarseGrainedExecutorBackend - RECEIVED SIGNAL TERM
[DEBUG] 2018-05-17 14:01:39.613 [Executor task launch worker for task 18] TaskMemoryManager - unreleased 268.0 MB memory from org.apache.spark.util.collection.ExternalAppendOnlyMap@141a89ea
[ERROR] 2018-05-17 14:01:39.616 [Executor task launch worker for task 18] Executor - Exception in task 0.0 in stage 2.2 (TID 18)
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.apache.spark.serializer.DeserializationStream$$anon$2.getNext(Serializer.scala:188) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.serializer.DeserializationStream$$anon$2.getNext(Serializer.scala:185) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:438) ~[scala-library-2.11.8.jar:?]
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) ~[scala-library-2.11.8.jar:?]
        at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.util.collection.ExternalAppendOnlyMap.insertAll(ExternalAppendOnlyMap.scala:153) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.Aggregator.combineValuesByKey(Aggregator.scala:41) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:90) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:105) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.scheduler.Task.run(Task.scala:109) ~[spark-core_2.11-2.3.0.jar:2.3.0]
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) [spark-core_2.11-2.3.0.jar:2.3.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_45]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_45]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
[INFO ] 2018-05-17 14:01:39.617 [pool-8-thread-1] DiskBlockManager - Shutdown hook called
{noformat}
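The interesting part of the trace is the shuffle-read combine path, BlockStoreShuffleReader.read -> Aggregator.combineValuesByKey -> ExternalAppendOnlyMap.insertAll, with TaskMemoryManager reporting 268 MB still held by the map when the executor died. So the reduce side is merging shuffled values into an in-memory map and dying before (or instead of) spilling. Roughly this pattern, as a simplified sketch; NaiveAppendOnlyMap is a made-up name, and the entry-count spill check stands in for Spark's byte-size estimation and memory-manager negotiation, so don't read it as the actual implementation:
{code:scala}
import scala.collection.mutable

// Simplified sketch (not Spark's actual code) of the combine-by-key
// buffering that ExternalAppendOnlyMap.insertAll does on the shuffle-read
// side: merge each incoming value into an in-memory map, spill to disk when
// the map is judged too big. If the size estimate lags real heap usage, GC
// starts thrashing before a spill triggers and the JVM reports
// "GC overhead limit exceeded".
class NaiveAppendOnlyMap[K, V, C](
    createCombiner: V => C,
    mergeValue: (C, V) => C,
    spillThresholdEntries: Int) {

  private val currentMap = mutable.HashMap.empty[K, C]
  private var spills = 0

  def insertAll(records: Iterator[(K, V)]): Unit = {
    for ((k, v) <- records) {
      val combined = currentMap.get(k) match {
        case Some(c) => mergeValue(c, v)   // existing key: fold the value in
        case None    => createCombiner(v)  // new key: start a combiner
      }
      currentMap.update(k, combined)
      // Spark estimates the map's size in bytes and asks the memory manager
      // for more before spilling; a raw entry count is a crude stand-in.
      if (currentMap.size >= spillThresholdEntries) spill()
    }
  }

  private def spill(): Unit = {
    // The real code sorts the map and writes it out for a later merge pass;
    // the sketch just drops the contents.
    spills += 1
    currentMap.clear()
  }
}
{code}
E.g. a word-count-style combine would be {{new NaiveAppendOnlyMap[String, Int, Long](_.toLong, (c, v) => c + v, 1 << 20)}}; the failure above is what it looks like when the spill condition (or Spark's real equivalent) is effectively never reached before the heap fills up.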
cc [~xuefuz] 
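
If this turns out to be plain heap pressure in the test executor rather than a leak, giving the executor a bigger heap (or making it spill earlier) should be enough to unblock the qtest. A minimal sketch of what I mean, using standard Spark config keys; the values are illustrative, not tuned for this test:
{code:scala}
import org.apache.spark.SparkConf

// Illustrative only: enlarge the executor heap and keep the kill-on-OOM
// behavior seen in the log above. Both keys are standard Spark configs;
// 4g is a guess, not a recommendation.
val conf = new SparkConf()
  .setAppName("hive-19258-repro")
  .set("spark.executor.memory", "4g")
  .set("spark.executor.extraJavaOptions", "-XX:OnOutOfMemoryError='kill %p'")
{code}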

> add originals support to MM tables (and make the conversion a metadata only 
> operation)
> --------------------------------------------------------------------------------------
>
>                 Key: HIVE-19258
>                 URL: https://issues.apache.org/jira/browse/HIVE-19258
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>            Priority: Major
>         Attachments: HIVE-19258.01.patch, HIVE-19258.02.patch, 
> HIVE-19258.03.patch, HIVE-19258.04.patch, HIVE-19258.05.patch, 
> HIVE-19258.06.patch, HIVE-19258.07.patch, HIVE-19258.08.patch, 
> HIVE-19258.08.patch, HIVE-19258.09.patch, HIVE-19258.10.patch, 
> HIVE-19258.patch
>
>




